I will not spend much time on the concept of domain names here. As for why searching for and scanning second-level domains (subdomains) matters, I think it is essential for anyone doing penetration testing. One point: when you cannot get anywhere on the main site, you can start from its CIDR blocks and IP-based sites, but the probability of finding vulnerabilities on second-level domain sites is also very high. At the same time, enumerating them is crucial for understanding the target's organizational structure.
The following describes four common methods:
1. nslookup Query Method
nslookup lets you specify the query type, check a DNS record's time to live (TTL), and choose which DNS server handles the resolution. The command is available on any machine with TCP/IP installed and is mainly used to diagnose the basic infrastructure of the Domain Name System (DNS).
I will not detail the specific commands and parameters here; those interested can consult: http://support.microsoft.com/kb/200525
Take www.2cto.com as an example; I ran a quick test.
Unfortunately, my DNS server could not be queried.
Advantage: It is very intuitive. By querying a DNS server's records and CNAMEs, you can obtain the relevant information accurately and completely.
Disadvantage: Many DNS servers refuse these queries, so this method is not recommended.
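If you want to script this step rather than type the commands by hand, here is a minimal Python sketch that simply shells out to nslookup. The target domain 2cto.com and the public resolver 8.8.8.8 are only placeholder examples, and, as noted above, many servers will simply refuse the query.

```python
# Minimal sketch: shell out to nslookup to pull specific record types.
# "2cto.com" and "8.8.8.8" are placeholder values, not part of the original post.
import subprocess

def nslookup(domain, record_type="any", dns_server=None):
    """Run nslookup for one record type, optionally against a specific DNS server."""
    cmd = ["nslookup", "-type={}".format(record_type), domain]
    if dns_server:
        cmd.append(dns_server)          # query a server other than the system default
    out = subprocess.run(cmd, capture_output=True, text=True, timeout=10)
    return out.stdout

if __name__ == "__main__":
    # Query NS and CNAME records; many servers refuse such queries (see the
    # disadvantage above), so expect empty or "REFUSED" answers for some targets.
    print(nslookup("2cto.com", "ns"))
    print(nslookup("www.2cto.com", "cname", dns_server="8.8.8.8"))
```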
2. Webpage Search Method:
http://alexa.chinaz.com/?domain=
Example: http://alexa.chinaz.com/?domain=2cto.com
Only two sites are returned (so the result is not accurate).
In addition, you can use Google and Bing.
Everyone should already be familiar with Google hacking.
Example: site:pediy.com
This method is very good and comprehensive. The specific advantages and disadvantages will be explained later.
Search engines are very efficient, but duplicate entries and endless page turning are a headache; it would be better to have a data collector (a sketch follows below).
Advantage: quick retrieval and comprehensive content.
Disadvantage: different sites and engines use different indexing mechanisms, so the entries they return differ.
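As a rough illustration of such a "data collector", here is a minimal Python sketch that pages through a Bing site: search and extracts anything that looks like a host under the target domain. The query-string parameters (q, first) and the regex are assumptions about Bing's current page format, which changes frequently, so treat this as a sketch of the idea rather than a finished tool.

```python
# Minimal "data collector" sketch: page through a Bing "site:" search and collect
# hostnames ending in the target domain. The URL parameters and regex are assumptions
# about the current page layout and may need adjustment.
import re
import urllib.request

def collect_subdomains(domain, pages=5):
    found = set()
    headers = {"User-Agent": "Mozilla/5.0"}            # some engines block empty UAs
    for page in range(pages):
        url = ("https://www.bing.com/search?q=site%3A{}&first={}"
               .format(domain, page * 10 + 1))
        req = urllib.request.Request(url, headers=headers)
        html = urllib.request.urlopen(req, timeout=10).read().decode("utf-8", "ignore")
        # grab anything that looks like host.domain; the set folds duplicate entries
        for host in re.findall(r"([\w.-]+\.{})".format(re.escape(domain)), html):
            found.add(host.lower())
    return sorted(found)

if __name__ == "__main__":
    for host in collect_subdomains("2cto.com"):
        print(host)
```

Deduplicating into a set is what removes the "repeated items" problem mentioned above; the page loop replaces the endless manual page turning.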
3. Brute Force Scanning
Ready-made tools, written in C++ or Python, are available online. Let's first look at the principle:
One approach is to take a dictionary of candidate names, send a page request for each, receive the response, and analyze its status code (such as 200 or 404) to decide whether the page exists. This approach is quite error-prone; redirects in particular cause false positives. An example is the tool written by Zwell.
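Here is a minimal sketch of this dictionary approach using only the Python standard library; the wordlist and target domain are placeholder examples. Note that a site which redirects every unknown host to a default page will still produce the false positives mentioned above.

```python
# Sketch of the dictionary approach: prepend each word to the target domain, request
# the page, and judge existence from the HTTP status code.
import urllib.request
import urllib.error

WORDLIST = ["www", "mail", "bbs", "blog", "admin", "dev"]   # example dictionary

def check_http(domain):
    for word in WORDLIST:
        url = "http://{}.{}/".format(word, domain)
        try:
            resp = urllib.request.urlopen(url, timeout=5)
            print(url, resp.getcode())   # final code after redirects; a catch-all
                                         # redirect is the false-positive case above
        except urllib.error.HTTPError as e:
            print(url, e.code)           # 404 and friends -> probably missing
        except (urllib.error.URLError, OSError):
            pass                         # no DNS record or connection refused

if __name__ == "__main__":
    check_http("2cto.com")
```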
The other approach is pure brute force: generate 3- or 4-character combinations of a-z and 0-9, then decide whether each candidate exists by checking whether it resolves to an IP address or responds to a ping.
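And a minimal sketch of the pure brute-force variant: it enumerates every 3- and 4-character label over a-z and 0-9 and keeps those that resolve. With 36^3 + 36^4 (about 1.7 million) candidates it is very slow, which is exactly the disadvantage noted below.

```python
# Sketch of the brute-force variant: generate every 3- and 4-character label from
# a-z and 0-9 and test whether the name resolves at all.
import itertools
import socket
import string

CHARSET = string.ascii_lowercase + string.digits

def brute_force(domain, lengths=(3, 4)):
    for length in lengths:
        for combo in itertools.product(CHARSET, repeat=length):
            host = "{}.{}".format("".join(combo), domain)
            try:
                ip = socket.gethostbyname(host)   # resolves -> the subdomain exists
                print(host, ip)
            except socket.gaierror:
                pass                              # no DNS record for this candidate

if __name__ == "__main__":
    brute_force("2cto.com")
```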
Advantage: it works extremely well against sites whose subdomains follow regular naming patterns.
Disadvantage: the speed is very slow and the false positive rate is very high.
4. Query + Scanning
This is a method I came up with myself; I do not know whether anyone has done it before. I collect subdomains through Bing and then make quick checks against a list of common second-level prefixes. This avoids the drawbacks of both approaches above, although the collection is still incomplete. This is just a demonstration (the software contains other functions, so no download is provided).
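The author's tool is not provided, so purely as an illustration of the combined idea, here is a rough Python sketch that merges the search-engine harvest from the collector sketch above with a quick DNS check of a short list of common second-level prefixes; the prefix list is only an example.

```python
# Not the author's tool; just a rough sketch of the combined idea: merge the
# search-engine results with a quick DNS check of common prefixes, then deduplicate.
# Reuses collect_subdomains() from the earlier collector sketch.
import socket

COMMON_PREFIXES = ["www", "mail", "bbs", "blog", "news", "m", "wap", "admin"]  # example list

def query_plus_scan(domain):
    hosts = set(collect_subdomains(domain))          # search-engine harvest
    for prefix in COMMON_PREFIXES:                   # quick check of common prefixes
        host = "{}.{}".format(prefix, domain)
        try:
            socket.gethostbyname(host)
            hosts.add(host)
        except socket.gaierror:
            pass
    return sorted(hosts)
```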