One of the biggest risks in software security is the opaque nature of verification tools and processes, and the potential for false negatives that individual testing technologies (such as automated dynamic testing) cannot rule out.
Although many best practices exist for a secure software development lifecycle (SDLC), most organizations still tend to rely primarily on testing to build secure software. One of the most serious side effects of this approach is that organizations do not know what their solution has been tested for and, more importantly, what has not been tested. Our research shows that any single automated assurance mechanism can verify at most 44% of security requirements. In the NIST Static Analysis Tool Exposition (SATE), Tomcat had 26 known vulnerabilities, yet all of the participating static analysis tools combined reported only four of them. Because relying on opaque processes is a common habit, one that has even become an industry standard, many organizations remain satisfied with testing as the primary means of building secure software.
For example, suppose you hire a consulting firm to perform a penetration test of your software. Many people call this a "black box" test, after the quality assurance technique of the same name: testers have no detailed knowledge of the system's internals, such as its source code. After the test is executed, you receive a report summarizing the vulnerabilities found in your application. You fix the vulnerabilities and resubmit the application for regression testing. The next report comes back "clean", that is, no vulnerabilities were found. Or, at best, it tells you that your application could not be cracked by the same testers, in the same way, within the same time frame. What it does not tell you is:
- What are the potential threats to your application?
- Which threats to your application were judged "not easy to attack"?
- Which threats did the testers not evaluate at all? Which threats cannot be tested from a runtime perspective?
- How do time and other constraints on the test affect the reliability of the results? For example, if the testers had five more days, what other security tests would they perform?
- How skilled are the testers? Would you get the same set of results from different testers, or from another consulting firm?
In our experience, organizations cannot answer most of these questions. The black box has two sides: on one side, testers lack visibility into the internals of the application; on the other, the organization commissioning the test lacks visibility into the security posture of its own software. We are not the only ones who see this problem: Haroon Meer discussed the challenges of penetration testing at 44Con. Most of these issues apply to verification in any form: automated dynamic testing, automated static testing, manual penetration testing, and manual code review. In fact, a recent paper described similar challenges in source code review.
Example requirements
To illustrate the problem, let's look at some security requirements that are common to high-risk software and see how the common verification methods apply to them.
Requirement: Hash user passwords using a secure hashing algorithm (such as SHA-2) and a unique salt, iterating the algorithm multiple times.
Over the past year, LinkedIn, Last.fm, and Twitter have all suffered well-publicized password breaches. This requirement addresses that class of defect in a specific, prescriptive way.
How the common verification methods apply:
- Automated runtime testing: it has no access to the stored passwords, so it cannot be used to verify this requirement.
- Manual runtime testing: this method can verify the requirement only if some other weakness causes a dump of the stored passwords. Since you cannot count on that happening, you cannot rely on runtime testing to verify this requirement.
- Automated static analysis: this method can verify the requirement only when the following conditions are met:
- The tool understands how authentication works in the application (for example, it uses standard components such as Java Realms).
- The tool can identify the specific hashing algorithm the application uses.
- If the application uses a unique salt for each hash, the tool must also understand the salting algorithm and where the salt value comes from.
In practice, authentication can be implemented in many ways, and it is unrealistic to expect static analysis to fully verify the original requirement. A more practical approach is to use the tool simply to confirm that the authentication routine applies secure hashing and salting. Another approach is to create custom rules that identify the algorithm and hash values and check whether they comply with your own policies, although in our experience this practice is rare.
- Manual code review: this is the most reliable of the common verification methods for this requirement. A manual reviewer can work out which piece of code performs authentication and verify that the hashing and salting follow best practices (a sketch of what the reviewer is looking for follows this list).
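To make the review target concrete, here is a minimal sketch of salted, iterated password hashing using the JDK's built-in PBKDF2 implementation (PBKDF2WithHmacSHA256, an HMAC built on a SHA-2 digest). The class name, iteration count, and storage format are illustrative assumptions, not something prescribed by this article.

```java
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class PasswordHasher {
    private static final int ITERATIONS = 100_000; // iterate the algorithm many times
    private static final int SALT_BYTES = 16;      // unique random salt per password
    private static final int KEY_BITS = 256;

    /** Returns "iterations:salt:hash" so a verifier can recompute the hash later. */
    public static String hash(char[] password) throws Exception {
        byte[] salt = new byte[SALT_BYTES];
        new SecureRandom().nextBytes(salt);
        SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
        byte[] derived = factory.generateSecret(
                new PBEKeySpec(password, salt, ITERATIONS, KEY_BITS)).getEncoded();
        Base64.Encoder b64 = Base64.getEncoder();
        return ITERATIONS + ":" + b64.encodeToString(salt) + ":" + b64.encodeToString(derived);
    }
}
```

A reviewer verifying the requirement looks for exactly these elements: a per-user random salt, an iteration count high enough to slow brute-force attacks, and an iterated SHA-2-based construction rather than a single unsalted digest.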
Requirement: Bind variables in SQL statements to prevent SQL injection.
SQL injection is one of the most damaging application vulnerabilities. A flaw recently discovered in Ruby on Rails left applications built on that stack open to SQL injection attacks.
How the common verification methods apply:
- Automated runtime testing: although behavioral analysis may detect the presence of SQL injection, it cannot prove its absence, so automated runtime testing cannot fully verify this requirement.
- Manual runtime testing: subject to the same limitations as automated runtime testing.
- Automated static analysis: this requirement can usually be verified, especially when you use a standard library to access the SQL database. Tools can generally tell whether you splice user input into SQL statements dynamically or use proper variable binding. There are risks, however: static analysis may miss SQL injection vulnerabilities in the following scenarios:
- You use stored procedures in the database and the database code cannot be scanned. In some cases, stored procedures are themselves vulnerable to SQL injection.
- You use an object-relational mapping (ORM) library that your static analysis tool does not support. ORM code can also be vulnerable to injection.
- You connect to the database through a non-standard driver or library that does not correctly implement common security controls such as prepared statements.
- Manual code review: like static analysis, manual code review can confirm the absence of SQL injection vulnerabilities. In practice, however, a production application may contain hundreds or thousands of SQL statements, and reviewing every one by hand is both time-consuming and error-prone (the pattern being checked for is sketched after this list).
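The contrast that both static analysis tools and reviewers look for is shown in this minimal JDBC sketch; the table and column names are hypothetical.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class AccountQueries {

    // Vulnerable: user input is concatenated into the SQL text, so an
    // accountId such as "x' OR '1'='1" changes the meaning of the query.
    ResultSet findAccountUnsafe(Connection conn, String accountId) throws SQLException {
        String sql = "SELECT * FROM accounts WHERE account_id = '" + accountId + "'";
        return conn.createStatement().executeQuery(sql);
    }

    // Safe: the value is bound to a placeholder, so the driver treats it
    // strictly as data and it can never alter the structure of the statement.
    ResultSet findAccount(Connection conn, String accountId) throws SQLException {
        PreparedStatement stmt = conn.prepareStatement(
                "SELECT * FROM accounts WHERE account_id = ?");
        stmt.setString(1, accountId);
        return stmt.executeQuery();
    }
}
```

Static analysis tools recognize this distinction reliably for standard JDBC code, which is why the requirement is one of the easier ones to automate; the exceptions listed above (stored procedures, unsupported ORMs, non-standard drivers) are exactly the places where the pattern becomes invisible to the tool.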
Requirement: Enforce authorization checks to ensure that users cannot view other users' data.
Every year we hear of new examples of this kind of vulnerability.
How the common verification methods apply:
- Automated runtime testing: by creating data under two different user accounts and then trying to access one user's data from the other user's account, automated tools can test this requirement to a limited extent. However, these tools cannot identify which data in an account is sensitive, nor can they understand that changing a parameter from "data=account1" to "data=account2" constitutes an authorization violation.
- Manual runtime testing: this is generally the most effective way to find such vulnerabilities, because a human tester can bring the domain knowledge needed to know where to attack. Even so, testers may not have all the information necessary to uncover some of these defects: for example, appending a hidden parameter such as "admin=true" might grant access to other users' data without any authorization check, and a tester may never think to try it.
- Automated static analysis: without custom rules, automated tools generally cannot find this kind of vulnerability, because doing so requires domain understanding. For example, a static analysis tool has no way of knowing what the "data" parameter refers to or that access to it requires an authorization check.
- Manual code review: manual code review can expose code paths that are missing authorization checks and that runtime testing is unlikely to find, such as the hidden "admin=true" parameter. However, verifying that authorization checks are actually performed takes time and effort: a single authorization check may appear in many different parts of the code, so a manual reviewer may need to trace several execution paths from start to finish to confirm that authorization is enforced (the kind of check being traced is sketched after this list).
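For reference, this is roughly the per-object ownership check a reviewer would trace along every data-access path; all class, method, and field names here are hypothetical.

```java
// Hypothetical service layer; the names are illustrative, not from this article.
public class StatementService {

    private final StatementRepository repository;

    public StatementService(StatementRepository repository) {
        this.repository = repository;
    }

    /** Returns the statement only if it belongs to the authenticated user. */
    public Statement loadStatement(String authenticatedUserId, String statementId) {
        Statement statement = repository.findById(statementId);
        if (statement == null || !statement.getOwnerId().equals(authenticatedUserId)) {
            // Deny by default and avoid revealing whether the record exists.
            throw new SecurityException("Access denied");
        }
        return statement;
    }
}

interface StatementRepository {
    Statement findById(String id);
}

class Statement {
    private final String ownerId;
    private final String body;

    Statement(String ownerId, String body) {
        this.ownerId = ownerId;
        this.body = body;
    }

    String getOwnerId() { return ownerId; }
    String getBody()    { return body; }
}
```

The check itself is one line; the hard part described above is proving that an equivalent check guards every path that reaches the data, which is why reviewers end up tracing multiple execution paths.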
Impact on you
The opaque nature of verification makes managing an explicit set of software security requirements essential. With an enumerated list of requirements, testers can identify exactly which requirements they assessed and which techniques they used. Critics argue that penetration testing should not follow an "audit-style checklist", because no checklist can cover ill-defined scopes and domain-specific vulnerabilities. However, the flexibility to hunt for unique problems does not remove the need to cover the known requirements thoroughly. The situation closely parallels standard software quality assurance (QA): a good QA tester verifies the functional requirements, then thinks beyond them to find ways to break the functionality. QA would be far less effective if testers simply tested blindly and reported a handful of defects unrelated to the functional requirements. So why should we accept a lower standard for security testing?
Before you undertake your next security verification activity, make sure you have software security requirements that can serve as a yardstick, and specify which requirements fall within the scope of the verification. If you hire manual penetration testers or source code reviewers, they can readily tell you which requirements they tested. If you use an automated tool or service, insist that the vendor indicate which requirements their tool or service cannot reliably test. No tester, product, or service can guarantee the absence of false negatives (for example, guarantee that your application cannot be attacked through SQL injection), but testing against explicit requirements can greatly increase your confidence that your code does not contain known, preventable security vulnerabilities.
About the author
Rohit Sethi (@rksethi on Twitter) works with a great team at SD Elements on the problem of application security requirements. He has helped improve software security at companies in security-sensitive industries, including financial services, software, e-commerce, healthcare, and telecommunications. Rohit created and teaches the SANS course on secure J2EE development. He has spoken at FS-ISAC, RSA, OWASP, the Secure Development Conference, Shmoocon, CSI National, SecTor, Infosecurity, CFI-CIRT, and other events. He has published articles for InfoQ, Dr. Dobb's, TechTarget, Security Focus, and the Web Application Security Consortium (WASC), has appeared on Fox News Live, and has been quoted as an application security expert by Discovery News and Computer World. He also created the OWASP Design Patterns Security Analysis project.