Comparing SAST Tools
The aim of this section is to compare the findings of the various SAST tools used in the previous section and rank them, providing a solution to the first sub-segment of the 8th point of the problem statement.
The different tools found varying numbers of vulnerabilities, some more than others. I went through all the generated reports to extract the content relevant to potential security vulnerabilities, and also reviewed the methodologies the tools used to identify vulnerable dependencies. Listed below is a summary of my findings: the complete reports generated by the tools, the methodology each used to identify vulnerabilities, and a concise list of the vulnerabilities found.
A more detailed explanation of the reports the various tools generated is given in the upcoming sections of this chapter. Below is a short summary table of the findings the tools produced. After reading through the reports generated by the different tools and consolidating the types of vulnerabilities found, together with any additional information each tool provided, I ranked the tools as follows:
| Rank | Tool | No. of Dependency-based vulnerabilities | No. of Web-based vulnerabilities |
|------|------|-----------------------------------------|----------------------------------|
SonarQube's report can only be accessed via the scanner's web-based interface, but the file generated as part of the scan can be found here.
According to NPM's documentation, when one runs `npm audit`, it checks the project's direct dependencies, devDependencies, bundledDependencies, and optionalDependencies. It does not check peerDependencies.
Running NPM Audit on DVNA found a total of 5 security vulnerabilities. The modules associated with those vulnerabilities are:
| Module Name | No. of Vulnerabilities | Severity |
|-------------|------------------------|----------|
The full report generated by NPM Audit can be found here.
NodeJsScan comes with a set of security rules defined in a file named `rules.xml`, which contains the various kinds of tags that identify different types of vulnerabilities, as well as rules to match vulnerabilities in the project's codebase. The rules fall into six categories:
- String Comparison: The string comparison rules look for an exact match of the string specified in the rule.
- Regex Comparison: The regex comparison rules match a pattern of potentially vulnerable code, as specified by the regex signature in the rule.
- Template Comparison: The template comparison rules look for vulnerable (potentially unsanitized) variables being used in a template.
- Multi-Match Regex Comparison: The multi-match regex rules are a two-stage regex match: after the first signature matches a potentially vulnerable entry point for remote OS command execution, NodeJsScan checks whether the second signature matches vulnerable parameters within the content of that code block.
- Dynamic Regex Comparison: The dynamic regex rules have a two-part regex pattern, where the first half is fixed and the second half is a dynamic signature.
- Missing Security Code: NodeJsScan also looks for web-based vulnerabilities such as missing headers and information disclosure.
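To illustrate the first two rule types, here is a minimal sketch of how string- and regex-based matching could work. The rule definitions below are illustrative examples I made up; they are not the actual contents of NodeJsScan's `rules.xml`.

```python
import re

# Illustrative rules, loosely modelled on NodeJsScan's rule types.
# The signatures below are examples, not NodeJsScan's real signatures.
RULES = [
    # String comparison: exact substring match
    {"type": "string", "signature": "eval(", "title": "Use of eval()"},
    # Regex comparison: pattern match for potentially vulnerable code
    {"type": "regex", "signature": r"child_process\.exec\s*\(",
     "title": "Possible OS command execution"},
]

def scan_source(source):
    """Return the titles of all rules that match the given source code."""
    findings = []
    for rule in RULES:
        if rule["type"] == "string" and rule["signature"] in source:
            findings.append(rule["title"])
        elif rule["type"] == "regex" and re.search(rule["signature"], source):
            findings.append(rule["title"])
    return findings
```

For example, `scan_source("child_process.exec(req.query.cmd)")` would flag the command-execution rule, while clean code yields an empty list.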
Scanning DVNA with NodeJsScan exposed 34 dependency-based vulnerabilities:
| Type of Vulnerability | No. of Vulnerabilities |
|-----------------------|------------------------|
| Deserialization with Remote Code Execution | 8 |
| Server Side Injection | 1 |
| Weak Hash Used | 11 |
Additionally, NodeJsScan found 5 web-based vulnerabilities:
| Type of Vulnerability | Description |
|-----------------------|-------------|
| Missing Header | Strict-Transport-Security (HSTS) |
| Missing Header | Public-Key-Pins (HPKP) |
The full report generated by NodeJsScan can be found here.
Retire.js maintains a database of known vulnerabilities, stored in JSON format in the tool's repository and listed here under the 'Vulnerabilities' sub-heading. Retire.js matches the dependencies of the target project being scanned against the entries in this database, which is maintained by Retire.js' author. Matched modules are added to the report with a severity rating based on the type of vulnerabilities listed for that particular module.
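The matching described above can be sketched as a lookup of each dependency's name and version against a vulnerability database. The database contents and version logic below are simplified assumptions for illustration, not Retire.js' actual data or code.

```python
# Illustrative vulnerability database: module name -> list of entries,
# each flagging versions below a given threshold. Example data only.
VULN_DB = {
    "jquery": [{"below": "3.0.0", "severity": "medium"}],
}

def version_tuple(version):
    """Turn a dotted version string into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split("."))

def match_dependencies(dependencies):
    """dependencies: dict of module name -> installed version.

    Returns (name, version, severity) for every matched entry."""
    report = []
    for name, version in dependencies.items():
        for entry in VULN_DB.get(name, []):
            if version_tuple(version) < version_tuple(entry["below"]):
                report.append((name, version, entry["severity"]))
    return report
```

A real matcher also has to handle pre-release tags and version ranges with lower bounds, which this sketch omits.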
Based on the scan, Retire.js identified 3 vulnerabilities within the following vulnerable modules:
The full report generated by Retire.js can be found here.
OWASP Dependency Check
According to Dependency Check's author's site, Dependency Check works by collecting information (called evidence) about the project's files through Analyzers, programs that catalog project information specific to the technology being used, and categorizing it into vendor, product, and version. Dependency Check then queries the NVD (National Vulnerability Database), the U.S. government's repository of standards-based vulnerability management data, to find matching CPEs (Common Platform Enumerations). When a match is found, the related CVEs (Common Vulnerabilities and Exposures), entries that each contain an identification number, a description, and at least one public reference for a publicly known cyber-security vulnerability, are added to the report generated by Dependency Check.
Each piece of evidence that Dependency Check collects is assigned a confidence level: low, medium, high, or highest. This is a measure of how confident Dependency Check is that it has identified a module correctly, based on collating data about the same module from various sources within the project. The confidence level reported for a module is derived from the confidence of the sources used to identify it; by default, Dependency Check assigns the lowest confidence to a module.
Note: Dependency Check mentions explicitly that because of the way it works, the report might contain both false-positives and false-negatives.
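The confidence handling described above can be sketched as picking the strongest piece of evidence collected for a module, falling back to the lowest level when nothing better is available. The ordering and logic here are my own simplification for illustration, not Dependency Check's implementation.

```python
# Confidence levels in increasing order of strength, as described
# in the text above; the aggregation rule is an assumption.
LEVELS = {"low": 0, "medium": 1, "high": 2, "highest": 3}

def overall_confidence(evidence):
    """Return the highest confidence among the collected evidence,
    defaulting to 'low' when no evidence raises it."""
    best = "low"
    for item in evidence:
        if LEVELS[item["confidence"]] > LEVELS[best]:
            best = item["confidence"]
    return best
```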
The report generated by Dependency Check was quite large, so I ended up writing a small Python script to filter out the relevant information. The code, wrapped into a function, can be found below:
```python
def dependency_check_report():
    import json

    # Load the JSON report generated by Dependency Check
    with open('dependency-check-report') as file_handler:
        json_data = json.loads(file_handler.read())

    # Print the file name and severity for every vulnerable dependency;
    # 'vulnerabilities' is a list, so each entry is printed in turn
    for dependency in json_data['dependencies']:
        if 'vulnerabilities' in dependency:
            print('\n==============================================\n')
            for vulnerability in dependency['vulnerabilities']:
                print(dependency['fileName'] + ' : ' + vulnerability['severity'])
```
Dependency Check identified 7 vulnerabilities in total, within the following modules:
The full report generated by Dependency Check can be found here.
Auditjs uses the REST API of OSS Index, a public index of known vulnerabilities in dependencies across various tech stacks, to identify known vulnerabilities and outdated package versions. Once a match is found, the module is added to the report along with the number of associated vulnerabilities found.
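OSS Index's v3 REST API accepts a batch of package-URL coordinates and returns a vulnerability report per component. The sketch below builds such a request body for npm modules; it is an illustration of the kind of query a client like Auditjs could make, not Auditjs' own code.

```python
import json

# OSS Index's public v3 component-report endpoint; the report for the
# listed coordinates would be obtained by POSTing the body below to it
# with a Content-Type of application/json.
OSS_INDEX_URL = "https://ossindex.sonatype.org/api/v3/component-report"

def to_coordinate(name, version):
    """Build the package-URL coordinate OSS Index expects for an npm module."""
    return f"pkg:npm/{name}@{version}"

def build_request_body(dependencies):
    """dependencies: dict of module name -> version."""
    coordinates = [to_coordinate(name, version)
                   for name, version in dependencies.items()]
    return json.dumps({"coordinates": coordinates})
```

For example, `to_coordinate("jquery", "1.12.4")` yields `pkg:npm/jquery@1.12.4`, the coordinate format OSS Index matches against its index.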
Running Auditjs exposed 22 security vulnerabilities in the 5 vulnerable modules identified:
| Module Name | Version | No. of Vulnerabilities |
|-------------|---------|------------------------|
The full report generated by Auditjs can be found here.
Snyk maintains a database of known vulnerabilities sourced from various origins: other databases (such as the NVD), issues and pull requests created on GitHub, and manual research into previously unknown vulnerabilities. When Snyk scans a project, it queries this database to find matches. The matched modules, along with the type of vulnerability associated with them, are collated into a report.
As with Dependency Check, I wrote a small Python script to filter the relevant information from the generated report. The code, wrapped into a function, can be found below:
```python
def snyk_report():
    import json

    # Load the JSON report generated by Snyk
    with open('snyk-report') as file_handler:
        json_data = json.loads(file_handler.read())

    # Print the module name, severity and title of each vulnerability
    for vuln in json_data['vulnerabilities']:
        print('\n==============================================\n')
        print('Module/Package Name: ' + vuln['moduleName'])
        print('Severity: ' + vuln['severity'])
        print('Title: ' + vuln['title'])
```
Snyk exposed 8 security vulnerabilities in the modules listed below, along with the type of vulnerability, the number of occurrences, and the severity identified:
| Module Name | Type of Vulnerability | No. of Vulnerabilities | Severity |
|-------------|-----------------------|------------------------|----------|
| mathjs | Arbitrary Code Execution | 3 | High |
| node-serialize | Arbitrary Code Execution | 1 | High |
| typed-function | Arbitrary Code Execution | 1 | High |
| express-fileupload | Denial of Service | 1 | High |
| mathjs | Arbitrary Code Execution | 2 | Medium |
The full report generated by Snyk can be found here.
In conclusion, two tools stood out, NodeJsScan and Auditjs, because of the number of vulnerabilities they found. NodeJsScan went a step further and identified a few web-based vulnerabilities, which no other tool did. All the other tools I used were quite close in the vulnerabilities they found, except for SonarQube, which did not find any vulnerabilities at all.