Using Google to Crawl Any Website: Spreadsheets as a DDoS Weapon


All you need is a laptop and a few open browser tabs: copy in some links pointing to a 10 MB file, and Google will crawl that same file at more than 700 Mbps.

Reminder: the following content is for security testing and educational reference only; any illegal use is prohibited.

Google's FeedFetcher crawler fetches and caches any link placed inside a spreadsheet's =image("link") formula.

For example:

If we enter =image("http://example.com/image.jpg") into any Google spreadsheet, Google sends its FeedFetcher crawler to fetch the image and save it to the cache so it can be displayed.

However, we can append a random parameter to the file name so that FeedFetcher fetches the same file multiple times. That is, if a website hosts a 10 MB file and the following list is entered into a Google spreadsheet, Google's crawler will fetch that file 1,000 times:

=image("http://targetname/file.pdf?r=0")=image("http://targetname/file.pdf?r=1")=image("http://targetname/file.pdf?r=2")=image("http://targetname/file.pdf?r=3")...=image("http://targetname/file.pdf?r=1000")

Once the random parameter is appended, each link is treated as a distinct URL, so Google's crawler fetches the file over and over, generating a large amount of outbound traffic from the target. As a result, anyone with a browser and a few open tabs can launch a high-volume HTTP GET flood against a web server.

Meanwhile, the attacker needs very little bandwidth of their own: they only submit the =image() formulas to the spreadsheet, and it is Google that pulls the 10 MB file from the server. Because the address points to a PDF (a non-image file), the spreadsheet simply displays N/A, but the fetch has already happened. Obviously this kind of traffic is amplified, and the consequences can be disastrous.
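
A back-of-the-envelope calculation (my own arithmetic, not a figure from the original report) shows how lopsided the exchange is: the attacker submits a roughly 40-byte formula, while Google pulls 10 MB from the target:

    # Illustrative amplification estimate, not a measured figure.
    formula = '=image("http://targetname/file.pdf?r=0")'
    attacker_bytes = len(formula)       # about 40 bytes per formula
    target_bytes = 10 * 1024 * 1024     # the 10 MB file the target serves

    print(f"amplification ~ {target_bytes // attacker_bytes:,}x per formula")
    # With 1,000 formulas the attacker pastes ~40 KB of text while the
    # target serves ~10 GB.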

All I needed was a laptop with a few browser tabs open: after copying and pasting some links pointing to a 10 MB file, Google was crawling that same file at more than 700 Mbps. This 700 Mbps of crawling traffic lasted for about 30-45 minutes, at which point I shut the server down. Had it continued error-free, roughly 240 GB of traffic would have been consumed in those 45 minutes (700 Mbps sustained for 45 minutes works out to about that much).

My friends and I were stunned by such high outbound traffic. With a larger file, I think the outbound traffic could easily reach the Gbps level, with inbound traffic of 50 Mbps or more. Imagine the traffic if multiple attackers used this method against one website at the same time. And because Google crawls from many different IP addresses, this type of GET flood is difficult to block and can easily be sustained for hours; the attack is simply too easy to carry out.
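
Blocking by IP is therefore impractical, but the crawler does identify itself in its User-Agent header (historically a string containing "FeedFetcher"), so a webmaster can at least spot an attack in progress. Here is a rough detection sketch of my own, assuming an Apache/nginx combined log format and a log file named access.log:

    import re
    from collections import Counter

    # Count FeedFetcher hits per path with the query string stripped,
    # assuming the request line is the first quoted field in the log.
    request_re = re.compile(r'"(?:GET|HEAD) ([^ ?"]+)')

    counts = Counter()
    with open("access.log") as log:          # log path is an assumption
        for line in log:
            if "FeedFetcher" not in line:    # crude User-Agent check
                continue
            match = request_re.search(line)
            if match:
                counts[match.group(1)] += 1

    # One path fetched thousands of times suggests cache-busting abuse.
    for path, hits in counts.most_common(10):
        print(f"{hits:8d}  {path}")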

After discovering this bug, I began searching for real-world cases it had produced, and I actually found two:

The first case describes how a blogger accidentally attacked himself and received a huge traffic bill. The second article, "Using Spreadsheets as a DDoS Weapon" (http://blog.radware.com/security/2012/05/spreadsheets-as-ddos-weapon/), describes a similar attack, but points out that the attacker must first crawl the entire target site and store the links in spreadsheets across multiple accounts.

Strangely, though, no one seems to have tried appending random request parameters. Even though only a single file on the target site is involved, random parameters make it possible to issue thousands of requests for that one file. The consequences are quite scary, and the attack is trivial to carry out: all anyone has to do is copy and paste a few links.

I submitted the bug to Google yesterday and received their feedback today: this is not considered a security vulnerability. They regard it as a brute-force denial-of-service issue, which falls outside the scope of the bug bounty program.

Perhaps they already knew about the problem and simply don't consider it a bug?

Still, even without a bounty, I hope they fix the problem. Because the barrier to entry is so low, anyone can use Google's crawlers to launch this kind of attack. There is a simple fix: Google could crawl only links that carry no request parameters. I hope Google patches this bug as soon as possible to protect webmasters from the threat.
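
Until then, webmasters can apply the same idea on their own servers. Below is a minimal WSGI middleware sketch (my own suggestion, again assuming the crawler's User-Agent contains "FeedFetcher"): since a legitimate one-time cache fetch has no need for cache-busting query strings, crawler requests that carry them are refused.

    # Mitigation sketch, assuming the crawler's User-Agent contains
    # "FeedFetcher". Requests from it that carry a query string are
    # refused; parameter-free fetches are served normally.
    def block_feedfetcher_querystrings(app):
        def middleware(environ, start_response):
            user_agent = environ.get("HTTP_USER_AGENT", "")
            query = environ.get("QUERY_STRING", "")
            if "FeedFetcher" in user_agent and query:
                start_response("403 Forbidden",
                               [("Content-Type", "text/plain")])
                return [b"Forbidden"]
            return app(environ, start_response)
        return middleware

Wrapping an existing WSGI application as block_feedfetcher_querystrings(app) cuts off the amplified requests while leaving normal =image() caching untouched.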
