The purpose of this article is to multiply the number of URLs in your super list by 10.
The two types of lists mentioned in the last chapter should be processed separately; do not mix them. (One is the auto-approve list, the other is the high-PR list, PR 3, 4 and above.)
Here is how I do it:
1. Export the list as root domain URLs
Load your previously accumulated list into Scrapebox -> first run Remove Duplicate Domains -> then Trim to Root.
Then export the results. Next, I will dig out the other article pages on these domains, which is what multiplies the list.
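If you are curious what those two Scrapebox operations actually do, or want to reproduce them outside the tool, here is a minimal Python sketch. The file names list.txt and roots.txt are just examples, not anything Scrapebox requires:

```python
from urllib.parse import urlparse

seen = set()
roots = []
with open("list.txt", encoding="utf-8") as f:
    for line in f:
        url = line.strip()
        if not url:
            continue
        # urlparse only fills netloc when the URL carries a scheme
        if "://" not in url:
            url = "http://" + url
        domain = urlparse(url).netloc.lower()
        # keep one entry per domain, trimmed down to the root URL
        if domain and domain not in seen:
            seen.add(domain)
            roots.append("http://" + domain + "/")

with open("roots.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(roots) + "\n")
```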
2. Process the root domain URLs locally
There are two things to do next (a sketch after this list shows one way to make both edits in batch):
(1) Prepend site: to every line of the exported root domain list and save it as List1.txt (the goal is to dig out the pages Google has indexed for these domains).
(2) Append /sitemap.xml to every line of the exported root domain list and save it as List2.txt (the goal is to mine pages with the Sitemap Scraper tool).
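Any text editor with column editing can do this, but a short script is faster for large lists. This is a sketch under the assumption that roots.txt is the file exported in step 1; stripping the scheme before site: is my own choice, since Google's site: operator expects a bare domain:

```python
# roots.txt is the root domain list exported in step 1
with open("roots.txt", encoding="utf-8") as f:
    roots = [line.strip().rstrip("/") for line in f if line.strip()]

# (1) site: queries, for harvesting the pages Google has indexed
with open("List1.txt", "w", encoding="utf-8") as f:
    for r in roots:
        f.write("site:" + r.split("://")[-1] + "\n")  # site: takes the bare domain

# (2) sitemap URLs, for the Sitemap Scraper addon
with open("List2.txt", "w", encoding="utf-8") as f:
    for r in roots:
        f.write(r + "/sitemap.xml\n")
```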
3. Process List1.txt with Scrapebox
In Scrapebox's Harvester module, select Custom Footprint, load List1.txt into the keyword list, check only Google as the search engine, load your proxies, and start harvesting.
4. Process List2.txt with Scrapebox's Sitemap Scraper
In Scrapebox's Addons there is a tool called Sitemap Scraper. Install it, set the threads to 100, import List2.txt, and start scraping.
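The addon handles this in bulk, but the core of what any sitemap scraper does is simple: fetch sitemap.xml and pull out the <loc> entries. A simplified sketch follows; real sitemaps may be gzipped or split into index files, which this ignores:

```python
import urllib.request
import xml.etree.ElementTree as ET

def scrape_sitemap(url, timeout=10):
    """Return the page URLs listed in a sitemap.xml file."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        tree = ET.fromstring(resp.read())
    # sitemap tags are namespaced, so match on the tag suffix
    return [el.text.strip() for el in tree.iter()
            if el.tag.endswith("loc") and el.text]

# example.com is a placeholder domain
pages = scrape_sitemap("http://example.com/sitemap.xml")
```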
5. Save the results
After harvesting, load the results of steps 3 and 4 into Scrapebox and remove the duplicate URLs; the list should now be many times larger. But it will contain many pages that are not article pages, and those need to be cleaned out.
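The cleanup can also be scripted. This sketch merges the two harvests, dedupes, and drops URLs matching common WordPress non-article paths; the pattern is only a guess on my part, so adjust it to whatever junk actually shows up in your list:

```python
import re

# merge both harvests and dedupe; file names are examples
urls = set()
for name in ("harvest_google.txt", "harvest_sitemaps.txt"):
    with open(name, encoding="utf-8") as f:
        urls.update(line.strip() for line in f if line.strip())

# drop category/tag/author/feed/pagination pages, which take no comments
junk = re.compile(r"/(category|tag|author|feed|page)(/|$)", re.I)
articles = sorted(u for u in urls if not junk.search(u))

with open("super_list.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(articles) + "\n")
```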
You can post to this list 2-3 times, and save the URLs that post successfully for reuse.
I usually use this method on the auto-approve list, because on most WordPress sites, once one article auto-approves comments, every article page on the site will auto-approve them.
Author: Catop
Article URL: http://www.xrumer.cn/67