SRC getpath user documentation (first version)

Source: Internet
Author: User


 

Part 1: Command Line

 

path (directory retrieval command):

-t <threadnum> number of threads used for scanning.

-sgdic <none> uses a self-generated social-engineering dictionary.

-reco <none> enables automatic retry.

-f <FILENAME> uses a specific dictionary. If this parameter is not given, the webaddr.dic dictionary in the dics directory is used by default.

-p <portnum> specifies the port used to connect to the target host. You can also append the port to the host, as in "www.t00ls.net:80"; in that form port 80 is used.

-co <timeout (ms)> connection timeout. Note that this is different from the send timeout and the receive timeout; those two are changed in the system settings.

-spider <depth> sets the crawling depth of the spider. By default (no input, or "-spider 0") only the directory currently being scanned is crawled, so if you only need one directory you do not need this parameter. When crawling a whole site, do not set the depth above 3, or the scan takes a very long time.

-hf <none> reserved for system host-list scanning. Do not set it.
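The document never shows a complete invocation. Assuming the command is entered at the tool's console as the command name followed by a target and flags (the host and all values below are illustrative, not from the source), a directory scan might look like:

```
path www.example.com -t 20 -f dics/webaddr.dic -p 80 -co 3000 -spider 1
```

This sketch would scan with 20 threads, the default dictionary named explicitly, port 80, a 3000 ms connection timeout, and a spider depth of 1.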

 

csacn (same-server site query command):

-t <threadnum> number of threads used for scanning.

-co <timeout (ms)> connection timeout.

-single <none> scans only the sites on the current host; otherwise the sites on every host in the class C range are scanned.
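Assuming the same command-then-target-then-flags shape as the other console commands (the host and values are illustrative), a same-server query restricted to one host might look like:

```
csacn www.example.com -t 10 -co 3000 -single
```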

 

hostlist (host-list task command):

Note: the host list has been moved to the right sidebar of the dictionary task tab.

-t <threadnum> number of threads used for scanning.

-f <FILENAME> uses a specific dictionary. If this parameter is not given, the webaddr.dic dictionary in the dics directory is used by default.

-spider <depth> sets the crawling depth of the spider.

-slient <none> sets the task to silent mode: the scanner hands any exceptions it meets to the analyzer for automatic processing, with no manual intervention, so you can go watch a movie or do something else time-consuming.
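Since this command runs against the host list itself, it presumably takes no target argument; assuming that (the flag values are illustrative), an unattended run over the whole list might look like:

```
hostlist -t 20 -f dics/webaddr.dic -spider 0 -slient
```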

 

port (simple port scan command):

-t <threadnum> number of threads used for scanning.

-p <portbegin> [portend] the port, or range of ports, to scan.

-f <FILENAME> reads the list of ports to scan from a file.

-co <timeout (ms)> connection timeout.
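Assuming the usual target-then-flags shape and that a port range is given as two numbers after -p (the host and values are illustrative), a scan of the low ports might look like:

```
port www.example.com -t 50 -p 1 1024 -co 2000
```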

 

ddos (CC attack command):

-t <threadnum> number of threads used by the attack.

-p <portnum> specifies the port used to connect to the target host.

-addr <webpath> sets the page address to attack.

 

ping (obtain the remote host name command):

<none>

 

fport (multi-host port scan from a file list) [rapid scan] [not very accurate]:

-t <threadnum> number of threads used for scanning.

-f <FILENAME> reads the list of ports to scan from a file. The default list is exts/port.ext.

-co <timeout (ms)> connection timeout.

-sync <synchostnum> number of hosts scanned simultaneously.
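Since fport works from a file list of hosts, it presumably takes no single target; assuming that (the values are illustrative), a fast sweep might look like:

```
fport -t 50 -f exts/port.ext -co 1000 -sync 10
```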

 

dispatchport (distributed load-balanced rapid port scan) [accurate]: (module being upgraded)

 

reconn (advanced retry command):

Note:

1. Use this advanced command on any page whose "flag" is "NONE", or on pages marked 240 in the spider manager below.

2. This command is governed by the maximum retry count, the retry timeout (default), and the retry thread count (default) in the system settings. You can also override these from the command line.

-t <threadnum> number of threads used for retrying.

-co <timeout (ms)> connection timeout.

-auto <none> automatic retry.

-fpath <none> reserved for internal use.

-spider <depth> manual spider depth for pages marked 240.
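Assuming reconn operates on the already-collected results and so takes only flags (the values are illustrative), a retry pass might look like:

```
reconn -t 5 -co 5000 -auto
```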

 

del (advanced filter command):

-i <column> the column to match against, counting from 0.

-s <content> the content to delete.
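For example, assuming column 3 of the result list holds the status code (the column index and match string are illustrative), deleting every row whose column 3 contains "404" might look like:

```
del -i 3 -s 404
```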

 

logf (log saving command):

<none>

 

newdic (generates a new dictionary based on the directory structure):

<none>

 

reshow (re-displays the host server version):

<none>

 

reload:

<none>

 

setting (opens the system settings dialog box):

<none>

 

filter (opens the filter dialog box):

<none>

 

hostinfo (displays the current host information):

<none>

 

blog (opens my blog to view the latest software update information):

<none>

 

help (a help menu written a long time ago):

<none>

 

result (opens the result-processing dialog box):

<none>

 

taskfile (imports hosts from a file into the task list):

<FILENAME>

 

clstask (clears the host task list):

<none>

You can also double-click an existing task in the list to delete just that one task.

set (sets system information):

If you don't want to type the values, you can also use the UI.

 

More complex and advanced usage will be added later. I am exhausted and hate typing...

================

Important notes on the 404 question

Recently I have seen many people talking about 404 analysis errors. Let me state here that the software has no bug on this issue; the problem is that the 404 filter was not being used correctly.

Let me explain why the 404 detection failed.

The keyword "sorry, the page you want to access does not exist" appears on this Baidu error page.

But you may ask: that string was added to the filter, so why wasn't the page judged to be a 404?


The answer is:

Submitted content: (screenshot omitted)

Returned content: (screenshot omitted)

The above is the non-existent address "/DSSS" that I submitted manually.

The returned page does not contain the keyword "sorry, the page you want to access does not exist." So what should go into the filter? The answer: any string that actually appears on the returned page, such as "/search/error.html", will identify the error correctly 100% of the time. But what if I insist on filling in "sorry, the page you want to access does not exist."?

What should you do then?

You need to:

Enable "follow 302". getpath will then follow the redirect to the "/search/error.html" page, and the "sorry, the page you want to access does not exist." keyword you set earlier takes effect. Below was a screenshot of the software following the 302. (screenshot omitted)
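The mechanism above can be sketched in a few lines of Python. This is a simulation of the logic only, not getpath's internals: the response table, function names, and the assumption that the 302 body carries no keyword are all illustrative.

```python
# Why keyword-based 404 detection fails unless the scanner follows 302s.
# The target replies to a bogus path with a 302 redirect whose body is
# empty; only the redirect target (/search/error.html) contains the
# error keyword the user put into the filter.

ERROR_KEYWORD = "sorry, the page you want to access does not exist"

# Simulated responses: path -> (status, body, redirect location)
RESPONSES = {
    "/DSSS": (302, "", "/search/error.html"),
    "/search/error.html": (200, "sorry, the page you want to access does not exist.", None),
}

def looks_like_404(path, follow_302):
    """Return True if the keyword filter flags this path as a 404 page."""
    status, body, location = RESPONSES[path]
    if status == 302 and follow_302 and location:
        # Follow the redirect and match the keyword against the final body.
        status, body, location = RESPONSES[location]
    return ERROR_KEYWORD in body

print(looks_like_404("/DSSS", follow_302=False))  # False: keyword never seen
print(looks_like_404("/DSSS", follow_302=True))   # True: filter works
```

Without following the 302 the filter only ever sees the empty redirect body, so the bogus path is wrongly kept as a hit; with 302-following enabled the keyword matches and the page is filtered out.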
