0x01 Discussion
I have wanted to write an article like this for a long time, to discuss some of the problems that I think web application security testing has run into in the web 2.0 (and even 3.0) era, along with the solutions I know of. Take it as a starting point for discussion.
I won't ramble; let's get into the topic. What is automated web security testing? It is actually a broad concept, because web security covers many areas: code auditing, black-box testing, even gray-box testing, plus performance testing, stress testing, and so on. Code auditing is not only about auditing security but also about auditing code quality; most of the code-audit software on the market today seems to ship code-quality checks as a commercial feature. Today we are mainly talking about fuzz testing the security of web applications, that is, black-box web application security testing as I understand it. It mainly covers how to use tools to discover SQL injection, XSS, command execution, file inclusion, code injection, and similar issues.
For a long time there have been two ways to detect a security vulnerability in a web application. The first is manual: append a probe such as and 1=1 to a parameter, or perform some arithmetic on numeric parameters such as -1 or -2, and then judge from the changes in the page content whether the parameter was passed through to the backend database and the operation was executed. There are also variants such as |1=1; any expression works as long as it conforms to the SQL standard and executes correctly.

The second is tools, and here I mainly mean crawler-based scanners such as WebInspect, AppScan, and AWVS. They crawl the directory structure of the entire website and then test the dynamic parameters found in the crawled links.

Both approaches have defects. The first is not suitable for attacking targets in large batches, because the scope of manual testing is obviously limited. The second fixes that and can quickly discover vulnerabilities across a large application, but it still has many problems. For example, large applications today commonly adopt an architecture with a reverse proxy in front and dynamic resolution in the back, which brings with it a large number of ajax calls, interactive requests, JSON interfaces, and so on. Crawlers cannot identify these, so a tool like WebInspect will not get you anywhere. A while ago AWVS claimed it could perform a web 2.0 scan and identify ajax requests, but when I did a comparative test with the latest version I found it still did not work. There are many proposed solutions to this class of problem. In China, aiscanner claims to solve it with a web 2.0 / JS engine, adding sandbox technology to a JavaScript virtual engine. I looked up some material, including a few introductions on developerWorks; my guess is that it is something like running the pages with node.js and imitating XHR to pull out the interactive and ajax links. I would welcome corrections.
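To make the first method concrete, here is a minimal sketch of automating the and 1=1 / and 1=2 comparison described above. The target URL and parameter name are made-up examples, and real tools use far more probe pairs and smarter response comparison than a plain string match.

```python
# A minimal sketch of the manual "and 1=1 / and 1=2" check, automated with Python.
# The URL and parameter below are hypothetical and only for illustration.
import requests

def looks_injectable(url, param, value):
    """Compare responses for a TRUE probe and a FALSE probe on one parameter."""
    base = requests.get(url, params={param: value}, timeout=10).text
    true_probe = requests.get(url, params={param: f"{value} and 1=1"}, timeout=10).text
    false_probe = requests.get(url, params={param: f"{value} and 1=2"}, timeout=10).text

    # If the TRUE probe matches the original page but the FALSE probe changes it,
    # the parameter is probably reaching the SQL query.
    return true_probe == base and false_probe != base

if __name__ == "__main__":
    # Hypothetical target used only for illustration.
    print(looks_injectable("http://example.com/item.php", "id", "1"))
```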
0x02 Proxy-based web fuzz tools
Of course, as technology has developed, some good approaches have been around for a while, such as using a proxy in the middle to capture the requests, with tools like Fiddler and Burp Suite. A good Fiddler plug-in at the moment is Ammonite.
In its results pane you can see the requests being sent and the responses collected.
First, about this plug-in: in my testing so far its accuracy is fairly poor. One reason for the poor accuracy and high false-positive rate is that its detection methods and vulnerability types are fairly simplistic. Most software built around the intermediate-proxy idea follows the same pattern: when detecting SQL injection, it either looks for SQL error keywords in the page content or decides based on the HTTP status code. This is actually an old detection approach; beyond that you also need to take the page length into account and check whether the page redirects. I will leave a pitfall here and come back to it when I talk about browser-based automated fuzz tools.
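As an illustration of those older heuristics (error keywords in the body, the status code, page length, and redirects), here is a rough sketch. The keyword list and thresholds are assumptions for demonstration, not what Ammonite actually does.

```python
# A rough sketch of the older SQL injection detection heuristics described above.
# The error-signature list and the length threshold are illustrative choices only.
import requests

SQL_ERROR_SIGNS = [
    "You have an error in your SQL syntax",
    "Warning: mysql_",
    "ORA-01756",
    "Unclosed quotation mark",
]

def crude_sqli_check(url, params, baseline_len):
    resp = requests.get(url, params=params, timeout=10, allow_redirects=False)

    if any(sign in resp.text for sign in SQL_ERROR_SIGNS):
        return "error keyword in body"
    if resp.status_code >= 500:
        return "server error status"
    if resp.is_redirect:
        return "redirected (needs manual review)"
    if abs(len(resp.text) - baseline_len) > 100:   # arbitrary threshold
        return "page length changed noticeably"
    return None
```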
In addition, there are many similar plug-ins for Fiddler, such as one written by a Japanese developer (www.2cto.com).
This plug-in achieves roughly the same effect as Burp Suite plus fuzzdb. The Burp Suite + fuzzdb workflow goes like this:
First, intercept the request.
Send the intercepted request to the attack module (Intruder).
Load the fuzzdb payloads into Burp Suite for testing.
Select the fuzz (attack) mode.
You can also add payloads of your own.
Set the related test parameters.
Click start attack to launch the attack.
Run the attack and review the results afterwards.
Quite a bit more involved than Fiddler, isn't it? Haha. In fact there are many adjustable options; try them for yourself. Once the relevant parameters are set, testing becomes much easier.
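For anyone who would rather script this than click through the GUI, the sketch below reproduces the core of that workflow: substitute payloads from a fuzzdb-style wordlist into one parameter of a captured request and flag responses that deviate from a baseline. The wordlist path and the threshold are placeholders, not part of Burp Suite or fuzzdb.

```python
# A bare-bones version of the Intruder + fuzzdb workflow: replay one request
# with each payload from a wordlist and report anything that looks interesting.
import requests

def fuzz_parameter(url, params, target_param, payload_file):
    with open(payload_file, encoding="utf-8", errors="ignore") as f:
        payloads = [line.strip() for line in f if line.strip()]

    baseline = requests.get(url, params=params, timeout=10)
    for payload in payloads:
        mutated = dict(params, **{target_param: payload})
        resp = requests.get(url, params=mutated, timeout=10)
        # Flag large deviations from the baseline for manual review.
        if resp.status_code != baseline.status_code or \
           abs(len(resp.text) - len(baseline.text)) > 200:
            print(f"[!] {payload!r} -> {resp.status_code}, {len(resp.text)} bytes")

# Hypothetical usage; the URL and wordlist path are examples only.
# fuzz_parameter("http://example.com/search", {"q": "test"}, "q", "wordlists/xss.txt")
```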
I would like to say a bit more about active versus passive scanning. Whether it is Burp Suite or Fiddler, one weakness of these proxy-based interception tools is that their active scanning is too weak. Burp Suite has made some improvements and can do some proactive scanning, but it is still too weak.
Burp Suite and Fiddler work on the same principle: they intercept all HTTP or HTTPS requests through a proxy and then replay those requests for testing. Fiddler is slightly more powerful; it can listen on the network interface directly and intercept all network requests, whereas Burp Suite still falls short in some scenarios. I will leave the detailed features for everyone to test and dig into themselves and will not elaborate on them here.
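As a minimal sketch of that intercept-and-replay principle (using mitmproxy rather than Burp Suite or Fiddler, purely for illustration), the addon below logs every request passing through the proxy to a file so it can be replayed later with test payloads. The output filename is an arbitrary choice.

```python
# Save as record.py and run with: mitmdump -s record.py
# Each request seen by the proxy is appended to a JSONL file for later replay.
import json

def request(flow):
    entry = {
        "method": flow.request.method,
        "url": flow.request.pretty_url,
        "headers": dict(flow.request.headers),
        "body": flow.request.get_text(),
    }
    with open("captured_requests.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```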
Original: http://www.unshadow.com/?p=430