

Evaluation of Web Vulnerability Scanners Based on OWASP Benchmark

Balume Mburano and Weisheng Si
School of Computing, Engineering and Mathematics, Western Sydney University, Australia
b.mburano, [email protected]

Conference paper, December 2018. DOI: 10.1109/ICSENG.2018.8638176

Abstract— The widespread adoption of web vulnerability scanners and their differences in effectiveness make it necessary to benchmark these scanners. Moreover, the literature lacks a comparison of scanner effectiveness results obtained from different benchmarks. In this paper, we first compare the performance of some carefully chosen open source web vulnerability scanners by running them against the OWASP Benchmark, which is developed by the Open Web Application Security Project (OWASP), a well-known non-profit web security organization. Furthermore, we compare our results from the OWASP Benchmark with the existing results from the Web Application Vulnerability Scanner Evaluation Project (WAVSEP) benchmark, another popular benchmark used to evaluate scanner effectiveness. We are the first in the literature to make a comparison between these two benchmarks. Our evaluation results allow us to make some valuable recommendations for the practice of benchmarking web scanners.

Keywords — security measures, penetration testing, web vulnerability scanner, benchmarking.

I. INTRODUCTION

Web applications have become an integral part of everyday life, but many of these applications are deployed with critical vulnerabilities that can be fatally exploited. As the technology used to develop these applications becomes more sophisticated, so do the attackers' techniques. Therefore, web vulnerability scanners have been heavily used to assess the security of web applications [1].

With the popularity of web vulnerability scanners, there is a need to measure their effectiveness. Benchmarking is one of the techniques used to do so [2]. Several benchmarks have been used to evaluate the effectiveness of web vulnerability scanners. These include the Web Input Vector Extractor Teaser (WIVET) [3], IBM Application Security Insider [4], the Web Application Vulnerability Scanner Evaluation Project (WAVSEP) benchmark [5] and the Open Web Application Security Project (OWASP) Benchmark [6]. Although these benchmarks are considered effective in evaluating scanners, there still exist two research gaps: (1) not all popular scanners have been evaluated by these benchmarks; (2) the results from these benchmarks have never been compared before.

To fill these research gaps, we first use the OWASP Benchmark to evaluate and compare two popular web scanners, Arachni and OWASP ZAP, in their latest versions (v1.5.1 and v2.7, respectively). Before our work, no version of Arachni had been evaluated with the OWASP Benchmark, nor had ZAP v2.7. We note here that ZAP v2.6 has been evaluated before on the OWASP Benchmark, and in our evaluation section we will point out the performance differences between v2.6 and v2.7. Further, we compare the performance results of these two scanners from the OWASP Benchmark with their existing results from the WAVSEP benchmark, showing the different capabilities of these two benchmarks.

The remainder of this paper is structured as follows. In Section II, we review the benchmarking and vulnerability scanner literature. In Section III, we describe the experimental environment and the selection of the benchmark and scanners. In Section IV, we detail our experimental results. Then in Section V, we give conclusions drawn from the experiments and make recommendations.

II. RELATED WORK

To form the basis of this study, we reviewed the benchmark and vulnerability scanner literature to make an informed selection of an appropriate benchmark and web vulnerability scanners for evaluation. We focused on the scanners' vulnerability detection performance and the benchmarks' capability to reflect it.

A. Benchmarking Literature

Benchmarks are used to evaluate the performance of web vulnerability scanners. We examined three such tools: the Web Input Vector Extractor Teaser (WIVET) [3], the Web Application Vulnerability Scanner Evaluation Project (WAVSEP) benchmark, and the Open Web Application Security Project (OWASP) Benchmark [6]. Of these, the WAVSEP benchmark has been the most used to evaluate a wide variety of commercial and open source web vulnerability scanners [7-9]. WIVET, on the other hand, has been used to assess vulnerability scanners' crawling coverage [10]. However, the OWASP Benchmark has not been used to evaluate many popular web vulnerability scanners, although it is developed by a well-known organization and is actively maintained. Moreover, no study has compared results from these benchmarks before. Therefore, we use the OWASP Benchmark to evaluate Arachni and ZAP and then compare the results with existing WAVSEP benchmark results, to show these benchmarks' capabilities in reflecting the effectiveness of web vulnerability scanners.

B. Web Vulnerability Scanner Literature

Although web vulnerability scanners have been established to be effective in unveiling flaws in web applications [11], previous studies have also shown that there are variations between the results reported by different scanners [8, 9, 11, 12]. Specifically, these scanners' performance varies across vulnerability categories and crawler coverage [11]. This increases the necessity for better evaluation of these scanners' effectiveness. Although there is an abundance of open source web vulnerability scanners, we reviewed six such tools, including Wapiti, Watabo, W3af, Arachni and OWASP ZAP.
Of these, Arachni and ZAP were the most popular and well-maintained, considering the results of previous studies [7-9, 11].

1) Arachni: a high-performance, free, open source Ruby-based framework that aims to help administrators and penetration testers evaluate the security of web applications. Arachni supports multiple platforms, including Windows, Linux, and Mac OS X, and can be instantly deployed using its portable packages [13]. Its deployment options include a Command Line Interface (CLI) for quick scans, a Web User Interface (WebUI) for multi-user, multi-scan and multi-dispatcher management, and a distributed system with remote agents [13].

2) OWASP Zed Attack Proxy (ZAP): an easy-to-use open source scanner for finding vulnerabilities in web applications. It is one of the OWASP flagship projects and is recommended by OWASP for web application vulnerability testing. ZAP is widely used by people ranging from security professionals to developers and functional testers for automated security tests that can be incorporated into a continuous development environment. Additionally, ZAP is a free, open source, cross-platform scanner that is becoming a framework for advanced web application vulnerability testing [14].
III. EVALUATION APPROACH

An appropriate methodology was required to evaluate the chosen web vulnerability scanners for this study. The following subsections explain the decisions and steps taken throughout the study to select and evaluate the scanners.

A. Benchmark Selection

The first phase of our study involved conducting an initial survey of the existing popular open source benchmarks. While the OWASP Benchmark is a free open source program, it remains state-of-the-art, as it has a significant number of contributors and is regularly updated. Therefore, the OWASP Benchmark is considered one of the benchmark choices for measuring the effectiveness of vulnerability scanners [6, 15]. Besides, the OWASP Benchmark has not been used to evaluate Arachni and the latest version of ZAP. It gives the score of a tested scanner based on the True Positive Rate (TPR), False Positive Rate (FPR), True Negative Rate (TNR) and False Negative Rate (FNR) [6, 16]. This is particularly important because the time and effort needed to discover a scanner's true and false detection metrics make them incredibly valuable, and a clear understanding of these metrics is required when choosing a web vulnerability scanner. We, therefore, used the OWASP Benchmark for this study.

B. Metric Selection

The benchmark metrics used for scanner evaluation are presented as follows:

• True Positives (TP), False Positives (FP), True Negatives (TN), False Negatives (FN).

• True Positive Rate (TPR) [17, 18]:
TPR = TP / (TP + FN)

• False Positive Rate (FPR) [17]:
FPR = FP / (FP + TN)

• True Negative Rate (TNR) [17, 18]:
TNR = TN / (TN + FP)

• False Negative Rate (FNR) [17, 18]:
FNR = FN / (FN + TP)

• Accuracy: the proximity of measurement results to the true value [19]:
Accuracy = (TP + TN) / (TP + TN + FP + FN)

• Youden Index: the score produced by the OWASP Benchmark is the Youden index, which is a standard method that summarises test set accuracy [6]. The OWASP Benchmark accuracy score is the normalized distance from the random-guess line, which is the difference between a scanner's TPR and FPR. Since the WAVSEP benchmark uses accuracy and the OWASP Benchmark uses the Youden index, we converted the benchmark results to this score for comparison:

Score = TPR − FPR
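To make the use of these metrics concrete, the following minimal Python sketch (our illustration, not part of the paper or of the OWASP Benchmark code; the function name and the example counts are assumed) computes the four rates, the accuracy and the Youden-based score from the raw TP, FP, TN and FN counts of one vulnerability category.

```python
# Illustrative sketch: derive the rate metrics and the OWASP-style score
# (Youden index, TPR - FPR) from raw counts for one vulnerability category.

def scorecard_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Return TPR, FPR, TNR, FNR, accuracy and the Youden-based score."""
    tpr = tp / (tp + fn)                      # True Positive Rate
    fpr = fp / (fp + tn)                      # False Positive Rate
    tnr = tn / (tn + fp)                      # True Negative Rate
    fnr = fn / (fn + tp)                      # False Negative Rate
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    score = tpr - fpr                         # score reported by the OWASP Benchmark
    return {"TPR": tpr, "FPR": fpr, "TNR": tnr, "FNR": fnr,
            "Accuracy": accuracy, "Score": score}

# Hypothetical counts for one category (e.g. SQL Injection test cases).
print(scorecard_metrics(tp=80, fp=30, tn=70, fn=20))
```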
C. Scanners Selection

Our review of open source web vulnerability scanners revealed that Arachni and ZAP were best suited for this study, considering their regular updates, number of contributors and their popularity, as confirmed in [20]. Although they are not considered as good as commercial scanners, some open source scanner features, such as the Crystal Report (RPT) interface, have the potential of making these (particularly Arachni) a must-have scanner in a Software as a Service (SaaS) multi-product environment [20]. Additionally, regardless of the size of the application scanned, the number of threads, and even against an easy target, Arachni appears to produce consistent results [20]. ZAP, on the other hand, has also proven to be ideal for developers and functional testers who are both experienced and new to penetration testing [21].

D. Testing Environment Design

We created a testing environment consisting of a local area network of two computers, with one acting as the target and the other as the attacking computer. All applications necessary to perform the benchmarking, which include the OWASP Benchmark, OWASP ZAP, and Arachni, were installed.

Figure 1: Experiment environment and scanners evaluation process

We obtained the benchmarking results by first executing the scanners against the OWASP Benchmark. The scanner results were then used to generate an XML file that the OWASP Benchmark scorecard generator used to create scorecards. The generated scorecards were examined to draw conclusions on the performance of the scanners (see Table 1). The benchmarking results of each scanner were discussed and compared to each other. Then, both scanners' results were compared to results from a previous study that used the WAVSEP benchmark to evaluate these scanners.

The benchmarking process comprised three significant steps:

Step 1. Setting the chosen scanner (Arachni or ZAP) to attack the OWASP Benchmark. This was an essential step, as it subjected the scanner to the vulnerability test cases within the benchmark and generated the reports that we used to measure the scanners' performance using the true positive, false positive, true negative and false negative metrics.

For ZAP, we first ran a spider on the target (OWASP Benchmark) to discover all the resources (URLs) available in the target before launching the attack, then launched the attack using the 'Active Scan' command.

For Arachni, using the command line interface, we navigated to the bin folder and then executed the Arachni scan command while specifying the target URL, the checks to be executed and the report name.

Step 2. For Arachni, the command line interface generated an .afr (Arachni Framework Report) report. This report was then used to produce other reports in different formats, including HTML and XML. For this study, the XML report was the one needed to generate benchmark scorecards. On the other hand, when the Arachni or ZAP web interface was used, at the end of a successful scan the scanners automatically generated reports in different formats that could be downloaded from the provided links.

Step 3. The XML report was then copied back into the results folder of the OWASP Benchmark, and the command createScoreCards.bat (for Windows) or createscorecards.sh (for Linux) was executed to generate the benchmark results, known as scorecards.
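As an illustration of how these three steps can be chained together, the sketch below drives an Arachni command-line scan of the Benchmark and then triggers scorecard generation. It is our own hedged example rather than the authors' actual procedure: the installation paths, report file name, check list and command-line flags are assumptions that would need to be adapted to the local Arachni and OWASP Benchmark installations (ZAP, by contrast, was driven through its spider and 'Active Scan' interface).

```python
# Hypothetical driver for the three benchmarking steps (paths and flags are assumptions).
import shutil
import subprocess

BENCHMARK_URL = "https://localhost:8443/benchmark/"  # assumed local OWASP Benchmark instance
ARACHNI_BIN = "/opt/arachni/bin"                   # assumed Arachni install location
BENCHMARK_DIR = "/opt/owasp-benchmark"             # assumed OWASP Benchmark checkout

# Step 1: scan the Benchmark with the Arachni CLI, naming the checks and the report file.
subprocess.run(
    [f"{ARACHNI_BIN}/arachni", BENCHMARK_URL,
     "--checks=sql_injection*,xss*",
     "--report-save-path=benchmark_scan.afr"],
    check=True)

# Step 2: convert the .afr framework report into the XML report used for scoring.
subprocess.run(
    [f"{ARACHNI_BIN}/arachni_reporter", "benchmark_scan.afr",
     "--reporter=xml:outfile=Benchmark_Arachni.xml"],
    check=True)

# Step 3: copy the XML report into the Benchmark results folder and build the scorecards.
shutil.copy("Benchmark_Arachni.xml", f"{BENCHMARK_DIR}/results/")
subprocess.run(["./createscorecards.sh"], cwd=BENCHMARK_DIR, check=True)
```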
For the accuracy of the results, multiple scans were run, with one scan targeting all categories and then each category scanned separately. This method was applied mostly to obtain the Arachni results.

Five vulnerability categories were considered for evaluating the effectiveness of the scanners: Command Injection, LDAP Injection, SQL Injection, Cross-Site Scripting and Path Traversal [22-25]. These categories were chosen in consideration of their criticality.

For each of the categories mentioned above and each scanner, the OWASP Benchmark applied metrics including TP, FN, TN, FP, TPR, and FNR [17, 26] to obtain the most appropriate measures for scoring each scanner, to promote a reasonable interpretation of the results and to draw sound conclusions. We, therefore, executed the createScoreCards command to instruct the OWASP Benchmark to produce scorecards that highlight the overall performance of each scanner in the selected categories.

IV. EXPERIMENTAL RESULTS

We have organized our results into two subsections. In Subsection A, we compare the OWASP Benchmark results of the two scanners in the chosen vulnerability categories. Then, in Subsection B, these OWASP Benchmark results are compared with the existing WAVSEP results.

A. Scanners Comparative Evaluation Results

After using Arachni and ZAP to scan the OWASP Benchmark test cases, we observed the scanning behavior and their detection accuracy. We then compared these scanners accordingly. In this section, we present and discuss the scanners' benchmarking results comparatively.

1. Speed. After scanning the targeted OWASP Benchmark test cases and recording the elapsed scanning time, we observed that Arachni took around 10 hours and 30 minutes to scan each category, whereas ZAP took around six and a half hours per category.

Table 1: Scanners' detection score results for CMDI, LDAPI, SQLI, XSS and Path Traversal

2. Quantitative Measures. Based on the OWASP Benchmark test cases, six metrics were calculated to evaluate the effectiveness of the two scanners in detecting Command Injection, LDAP Injection, XSS, SQL Injection, and Path Traversal attacks. Table 1 shows the detailed summary of the scanners' results in the calculated metrics.

The obtained experimental results (in Table 1) allowed us to get an overview of the performance of Arachni and ZAP for Command Injection, LDAP Injection, SQL Injection, Cross-Site Scripting and Path Traversal. We then made a close comparison of the two scanners' performance in the five chosen categories (see Figure 2).

Figure 2: Side-by-side comparison of OWASP Benchmark scores of Arachni and ZAP in CMDI, LDAPI, SQLI, XSS and Path Traversal

After producing the OWASP Benchmark scorecard for each scanner in the five categories, we compared their performance results. Based on the OWASP Benchmark reported scores, Arachni had the highest score of 74% in LDAP Injection, whereas ZAP scored 30%. However, in the Command Injection, SQL Injection and XSS categories ZAP outperformed Arachni, with scores of 33%, 55%, and 76% respectively. Although each scanner outperformed the other in some categories, we considered the difference in the scores for a better evaluation of their performance in each category. This difference, therefore, shows that although there is a need for both scanners to lift their performance in their losing categories, it is evident that more work is needed in raising both scanners' performance in Path Traversal and ZAP's in LDAP Injection.
B. Comparison of OWASP Benchmark with WAVSEP Benchmark

Arachni and ZAP have been evaluated before, and the WAVSEP benchmark was used for benchmarking in those studies. In contrast, our study has evaluated these scanners based on the OWASP Benchmark. To highlight the importance of using a variety of benchmarks to reach an overall conclusion in evaluating the effectiveness of web application vulnerability scanners, we compared the obtained OWASP Benchmark results of Arachni and ZAP to a previous study that evaluated these scanners based on the WAVSEP benchmark. We, therefore, chose the latest study by Shay Chen [8] for this purpose. Our choice of Shay Chen's study was based on the accuracy of his results and his reputation as the author of the WAVSEP benchmark. Additionally, his benchmarking results have never been contrasted with results based on other benchmarks.

The categories SQLI, XSS, CMDI and Path Traversal were considered for the benchmark comparison. Although the LDAP category was examined in our experiments, it was not included in the comparison because it was not examined in Chen's study.

Table 2: Arachni and ZAP OWASP Benchmark and WAVSEP benchmark comparison
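Since Table 2 itself is not reproduced in this text version, the following small Python sketch (our own reconstruction, not the original table) restates the per-category scores quoted in the discussion below and computes, for each scanner, the gap between its WAVSEP and OWASP Benchmark scores.

```python
# Scores (in %) as quoted in the comparison discussion below; the dictionary
# layout is our own reconstruction of Table 2, not the original table.
scores = {
    # category:          (WAVSEP ZAP, WAVSEP Arachni, OWASP ZAP, OWASP Arachni)
    "Command Injection": (93, 100, 33, 31),
    "SQL Injection":     (96, 100, 58, 50),
    "XSS":               (100, 91, 76, 64),
    "Path Traversal":    (100, 100, 0, 0),
}

for category, (wavsep_zap, wavsep_arachni, owasp_zap, owasp_arachni) in scores.items():
    gap_zap = wavsep_zap - owasp_zap
    gap_arachni = wavsep_arachni - owasp_arachni
    print(f"{category:18s} WAVSEP-OWASP gap: ZAP {gap_zap:+d} pts, Arachni {gap_arachni:+d} pts")
```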
Our examination of the results in Table 2 demonstrated that there was some similarity in the performance pattern of the scanners in some categories, such as XSS and Path Traversal. However, there were significant variations in detection rate and dissimilarities in scanner performance in some other categories, such as SQLI. This variation was verifiable by examining the WAVSEP benchmark results for the XSS category, which showed that ZAP had a 100% accuracy score and Arachni 91%, whereas the OWASP Benchmark results indicated that ZAP scored 76% and Arachni 64% in the same category. While both Arachni and ZAP had a similar score of 100% detection rate in the Path Traversal WAVSEP benchmark results, they scored 0% in the same category in the OWASP Benchmark results. In SQLI, on the other hand, the OWASP Benchmark results indicated that ZAP performed better than Arachni, with 58% and 50% respectively, whereas the WAVSEP benchmark results showed the opposite (ZAP 96% and Arachni 100%). This difference in results, however, was explained by the fact that the OWASP Benchmark results were obtained from the latest version of ZAP (2.7), while Chen's study examined the previous version of ZAP (2.6).

Moreover, our OWASP Benchmark results have demonstrated that the current version of ZAP has improved compared to its predecessor in some categories (e.g., in XSS and LDAP, v2.7 scored 76% and 30%, whereas v2.6 scored 29% and 0% respectively). Nevertheless, there was still much difference in the SQLI score between the OWASP Benchmark results and the WAVSEP benchmark results, with scores of 100% for Arachni and 96% for ZAP in the WAVSEP results, while in the OWASP Benchmark results ZAP scored 58% and Arachni scored 50%. The interesting part, however, was that the differences in the scanners' performance scores in the OWASP Benchmark results and the WAVSEP results were both averaging 3.5%.

In other categories, ZAP outperformed Arachni with 100% and 76% scores, as compared to 91% and 64%, for XSS in the WAVSEP benchmark results and the OWASP Benchmark results respectively. Additionally, in SQLI there was an 8% and 4% performance difference in the OWASP Benchmark and WAVSEP benchmark results respectively.

After a general discussion of the scanner results from the OWASP Benchmark and the WAVSEP benchmark, we compared these in each of the chosen categories side by side.

1) Command Injection

Figure 3: Comparison of Arachni and ZAP CMDI score results in the OWASP Benchmark and the WAVSEP benchmark

Based on the comparison results in Figure 3, Arachni outperformed ZAP in the WAVSEP benchmark results with 100% and 93% detection rates respectively, whereas the opposite occurred in the OWASP Benchmark results, with ZAP scoring 33% and Arachni 31%. Although the scanners' performance differences are not significant for either the WAVSEP or OWASP benchmark (with a difference of 7% and 2% respectively), the WAVSEP benchmark detection rate for both scanners is three times higher than the OWASP Benchmark, with averages of 96.5% and 32% respectively.

2) SQL Injection

The comparison results of the SQLI category indicated a contrast in the detection rate between the OWASP Benchmark and WAVSEP benchmark results.

Figure 4: Comparison of Arachni and ZAP SQLI score results in the OWASP Benchmark and the WAVSEP benchmark

Arachni outperformed ZAP by 4% in the WAVSEP benchmark results, whereas the OWASP Benchmark results indicated that ZAP outperformed Arachni by 8% in this category. Although it was clear that Arachni outperformed ZAP in the existing WAVSEP benchmark results, with scores of 100% and 96% respectively, we considered the OWASP Benchmark results. This was because the OWASP Benchmark examined the latest version of ZAP, whereas the existing WAVSEP benchmark study examined an older version of ZAP. Furthermore, our discussion of the OWASP Benchmark results (see Section IV.B) has confirmed that there has been a significant improvement in the examined version of ZAP as compared to previous versions.

3) Cross Site Scripting

Figure 5: Comparison of Arachni and ZAP XSS score results in the OWASP Benchmark and the WAVSEP benchmark

In this category, ZAP performed better than Arachni in both the WAVSEP benchmark results and the OWASP Benchmark results. However, the differences in detection rates between the two benchmarks were apparent for both vulnerability scanners, with ZAP scoring 100% and 76% in the WAVSEP benchmark and the OWASP Benchmark respectively. The same occurred with the Arachni results, which were 91% in the WAVSEP benchmark results and 64% in the OWASP Benchmark results.

4) Path Traversal

Figure 6: Comparison of Arachni and ZAP Path Traversal score results in the OWASP Benchmark and the WAVSEP benchmark

The strictness of the benchmarks' test cases was apparent in the comparison results of this category. As can be seen, although both scanners scored a 100% detection rate in the WAVSEP benchmark results, the opposite occurred in the OWASP Benchmark results, with both scanners scoring a 0% detection rate in the same category, as shown in Figure 6.
V. CONCLUSIONS AND RECOMMENDATIONS

The results of our comparative evaluation of the scanners confirmed again that scanners perform differently in different categories. Therefore, no scanner can be considered an all-rounder in scanning web vulnerabilities. However, combining the performances of these two scanners in both benchmarks, we concluded that ZAP performed better than Arachni in the SQLI, XSS and CMDI categories. Arachni, on the other hand, performed much better in the LDAP category.

There were considerable variations in the performances of these two scanners between the OWASP Benchmark and the WAVSEP benchmark. Specifically, our benchmark comparison revealed that, for both scanners and all four vulnerability categories compared, the scores under the WAVSEP benchmark were much higher than those under the OWASP Benchmark. This reflects that the OWASP Benchmark is more challenging than the WAVSEP benchmark in these four vulnerability categories. Therefore, we recommend that, if a scanner is to be evaluated on these four vulnerability categories, the OWASP Benchmark should be chosen as the main target, while the WAVSEP benchmark can be used as a secondary target to complement the evaluation results.

REFERENCES

[1] CoreSecurity. What is Penetration Testing? Available: https://www.coresecurity.com/content/penetration-testing, 2018.
[2] K. Reintjes, "A benchmark approach to analyse the security of web frameworks," Master's thesis, Computer Science, Radboud University Nijmegen, Nijmegen, Netherlands, 2014.
[3] E. Tatlı and B. Urgun, "WIVET—Benchmarking Coverage Qualities of Web Crawlers," vol. 60, 2016.
[4] IBM. The Most Comprehensive Web Application Security Scanner Comparison Available Marks AppScan Standard as the Leader (Again). Available: http://blog.watchfire.com/wfblog/2012/08/the-most-comprehensive-web-application-security-scanner-comparison-available-marks-appscan-standard-as-the-leader.html, 2012.
[5] Darknet. WAVSEP (Web Application Vulnerability Scanner Evaluation Project). Available: https://www.darknet.org.uk/2011/09/wavsep-web-application-vulnerability-scanner-evaluation-project/, 2017.
[6] OWASP. OWASP Benchmark. Available: https://www.owasp.org/index.php/Benchmark, 2017.
[7] M. El, E. McMahon, S. Samtani, M. Patton, and H. Chen, "Benchmarking vulnerability scanners: An experiment on SCADA devices and scientific instruments," in 2017 IEEE International Conference on Intelligence and Security Informatics (ISI), 2017, pp. 83-88.
[8] S. Chen. Evaluation of Web Application Vulnerability Scanners in Modern Pentest/SSDLC Usage Scenarios. Available: http://sectooladdict.blogspot.com/2017/11/wavsep-2017-evaluating-dast-against.html, 2017.
[9] S. E. Idrissi, N. Berbiche, F. G., and M. Sbihi, "Performance Evaluation of Web Application Security Scanners for Prevention and Protection against Vulnerabilities," International Journal of Applied Engineering Research, vol. 12, pp. 11068-11076, 2017.
[10] Y. Smeets, "Improving the adoption of dynamic web security vulnerability scanners," Computer Science, In Dei Nomine Feliciter, 2015.
[11] M. Alsaleh, N. Alomar, M. Alshreef, A. Alarifi, and A. Al-Salman, "Performance-Based Comparative Assessment of Open Source Web Vulnerability Scanners," Security and Communication Networks, vol. 2017, pp. 1-14, 2017.
[12] Y. Makino and V. Klyuev, "Evaluation of web vulnerability scanners," in 2015 IEEE 8th International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS), 2015, pp. 399-402.
[13] Sarosys LLC. Arachni Web Application Security Scanner Framework. Available: http://www.arachni-scanner.com/, 2017.
[14] OWASP. OWASP Zed Attack Proxy Project. Available: https://www.owasp.org/index.php/OWASP_Zed_Attack_Proxy_Project, 2018.
[15] P. J. Fleming and J. J. Wallace, "How not to lie with statistics: the correct way to summarize benchmark results," Communications of the ACM, vol. 29, pp. 218-221, 1986.
[16] A. Baratloo, M. Hosseini, A. Negida, and G. El Ashal, "Part 1: Simple Definition and Calculation of Accuracy, Sensitivity and Specificity," Emergency, vol. 3, pp. 48-49, 2015.
[17] N. Antunes and M. Vieira, "On the Metrics for Benchmarking Vulnerability Detection Tools," in 2015 45th Annual IEEE/IFIP International Conference on Dependable Systems and Networks, 2015, pp. 505-516.
[18] J. S. Akosa, "Predictive Accuracy: A Misleading Performance Measure for Highly Imbalanced Data," 2017.
[19] Exsilio Solutions. Accuracy, Precision, Recall & F1 Score: Interpretation of Performance Measures. Available: https://blog.exsilio.com/all/accuracy-precision-recall-f1-score-interpretation-of-performance-measures/, 2018.
[20] S. Chen. Price and Feature Comparison of Web Application Scanners. Available: http://sectoolmarket.com/price-and-feature-comparison-of-web-application-scanners-unified-list.html, 2017.
[21] ToolsWatch. 2016 Top Security Tools as Voted by ToolsWatch.org Readers. Available: http://www.toolswatch.org/2018/01/black-hat-arsenal-top-10-security-tools/.
[22] OWASP. Cross Site Scripting. Available: https://www.owasp.org/index.php/Cross-site_Scripting_(XSS), 2016.
[23] Microsoft. Establishing an LDAP Session. Available: https://msdn.microsoft.com/en-us/library/aa366102(v=vs.85).aspx, 2018.
[24] OWASP. SQL Injection. Available: https://www.owasp.org/index.php/SQL_Injection, 2016.
[25] PortSwigger Ltd. SQL injection. Available: https://portswigger.net/kb/issues/00100200_sql-injection, 2018.
[26] GitHub Inc. OWASP Benchmark. Available: https://github.com/OWASP/Benchmark/compare/1.2beta...master, 2018.
