Scrapebox Mass Posting, Part 2: The Universal Net

Keywords: Scrapebox, Universal Net

Everyone builds backlinks to their own home page and a few individual target pages for their main keywords; that much is expected, and the workload is relatively small. But beyond those pages, how do you operate mass backlink posting?

Remember the method Q taught: build backlinks first to the pages Google has already indexed!

So we need to collect the pages of our own site that Google has indexed, assign keywords to them, and generate a format Scrapebox can mass-post from. Doing that by hand, though, is no small amount of work...

Here's how I do it:

1. Collect the article pages of your own website that are already indexed

I generally use the harvesting function built into Scrapebox to extract all of a site's indexed pages (this applies only to WordPress-type sites).

Specifically: in the Harvester module, select Custom Footprint.

In the Keywords input box, enter site:yourdomain. You can enter many domain names and harvest them all together; run it through proxies with multithreaded harvesting enabled.
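If you are harvesting many domains at once, a short script can generate the footprint lines for you. A minimal sketch in Python, assuming a hypothetical domains.txt with one domain per line:

# Build a site: footprint list for the Scrapebox Harvester keyword box.
# domains.txt (one domain per line) and keywords.txt are assumed file names.
with open("domains.txt", encoding="utf-8") as f:
    domains = [line.strip() for line in f if line.strip()]

with open("keywords.txt", "w", encoding="utf-8") as out:
    for domain in domains:
        out.write(f"site:{domain}\n")

Paste the resulting keywords.txt content into the Keywords box and harvest everything in one pass.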

After harvesting, remove duplicate URLs and export the list.

These are the site's indexed pages. But besides the home page and the article pages, there will be some other pages we need to filter out.

The URLs of non-article pages generally share uniform patterns: /about/, wp-login.php, /tag/, /page/, and so on. You need to delete every line whose URL contains one of them.

The method is to use an editor that supports regular expressions, such as EditPlus or Dreamweaver, to match these URLs and batch-replace (delete) them.
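If you'd rather not click through an editor, the same filtering (plus the duplicate removal from the previous step) can be scripted. A minimal sketch, assuming the exported list sits in a hypothetical harvested_urls.txt and that the patterns below cover your non-article pages:

import re

# Patterns that mark non-article pages on a typical WordPress site;
# extend this list to match your own site's structure.
NON_ARTICLE = re.compile(r"/about/|wp-login\.php|/tag/|/page/")

seen = set()
with open("harvested_urls.txt", encoding="utf-8") as src, \
     open("article_urls.txt", "w", encoding="utf-8") as dst:
    for line in src:
        url = line.strip()
        # Skip blanks, duplicates, and anything matching a non-article pattern.
        if not url or url in seen or NON_ARTICLE.search(url):
            continue
        seen.add(url)
        dst.write(url + "\n")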

The advantage of using Scrapebox to harvest the indexed pages this way: it supports mining multiple sites together, and the harvesting is extremely fast.

The disadvantage: you will also pick up some non-article pages, which you then need to filter out.

If anyone has a better way, please let me know.

2. Assign keywords to the collected URLs and generate the Scrapebox websites.txt

Once the URLs are collected, they need keywords assigned to them, but writing those by hand would wear anyone out...

Lazy person's method: use the Locomotive collector (LocoySpider) to crawl these URL lists and generate the Scrapebox websites format.

The operation in brief: import the URL list into Locomotive, and have it collect two fields, the URL and the keyword.

For the keyword field, depending on your site, you can have it collect the <meta name="keywords"> content from each URL's source code, or collect the tags (these become the anchor-text keywords for the URL).
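If you want to reproduce that extraction outside of Locomotive, here is a minimal sketch using only the Python standard library. It naively regex-matches the meta tag (assuming name comes before content, as typical WordPress output does), and the file names are assumptions:

import re
import urllib.request

# Matches <meta name="keywords" content="..."> with name before content.
META_KEYWORDS = re.compile(
    r'<meta\s+name=["\']keywords["\']\s+content=["\']([^"\']*)["\']',
    re.IGNORECASE,
)

def fetch_keywords(url: str) -> str:
    """Return the page's meta-keywords string, or '' if none is found."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            html = resp.read().decode("utf-8", errors="ignore")
    except Exception:
        return ""
    match = META_KEYWORDS.search(html)
    return match.group(1).strip() if match else ""

with open("article_urls.txt", encoding="utf-8") as src, \
     open("url_keywords.csv", "w", encoding="utf-8") as dst:
    for line in src:
        url = line.strip()
        if url:
            dst.write(f"{url},{fetch_keywords(url)}\n")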

After the results are collected, run a few string-replacement operations on them to generate a format Scrapebox can post directly, whether comment posting or trackback. Remember the lazy man's mixed mass-posting method mentioned last time, and follow the format set up there.

After collection, export to CSV, give it a rough check, then copy the content and save it as a text file that can be fed to Scrapebox for posting.
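The exact files Scrapebox expects depend on which poster mode and which format from the previous article you use, so I won't reproduce that here. As a rough sketch, assuming the CSV has url,keyword columns like the ones generated above, this splits it into a link-target list and a matching anchor-text list; websites.txt and names.txt are assumed file names, so check them against your poster's settings before running a blast:

import csv

# Split the exported CSV (columns assumed: url, keyword) into two aligned
# lists -- one of link targets, one of anchor-text keywords.
with open("url_keywords.csv", encoding="utf-8") as src:
    rows = [row for row in csv.reader(src) if len(row) >= 2 and row[0].strip()]

with open("websites.txt", "w", encoding="utf-8") as sites, \
     open("names.txt", "w", encoding="utf-8") as names:
    for url, keyword in ((r[0].strip(), r[1].strip()) for r in rows):
        sites.write(url + "\n")
        names.write((keyword or "untitled") + "\n")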

PS: If the target site has no keywords or tags, you can also just take the site's main and long-tail keywords and randomly assign them to the URLs as keywords; everything is on the same theme anyway.
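A sketch of that fallback: keep a hand-maintained pool of the site's main and long-tail keywords (keyword_pool.txt is a hypothetical file name) and assign one at random to each URL:

import random

# Fallback when pages expose no keywords or tags: give each URL a random
# keyword from a non-empty pool of the site's main and long-tail terms.
with open("keyword_pool.txt", encoding="utf-8") as f:
    pool = [line.strip() for line in f if line.strip()]

with open("article_urls.txt", encoding="utf-8") as src, \
     open("url_keywords.csv", "w", encoding="utf-8") as dst:
    for line in src:
        url = line.strip()
        if url:
            dst.write(f"{url},{random.choice(pool)}\n")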

Careful readers may notice that this operation can produce keywords that don't match their URLs very well, since it all depends on the keywords and tags.

But the theme of this article is using Scrapebox to build backlinks to the bulk of the already-indexed URLs... so my feeling about this way of operating is: not a big problem, haha!

For a new site, especially an autoblog with hundreds of pages, this ensures that in the early SEO stage the backlinks are spread fairly evenly across the indexed pages...

Once the site has been up for a while and has accumulated some backlinks, SEO data will slowly surface, and then we move to the second action plan.

How do we do that? Haha, please stay tuned for the follow-up.

The author of this article: Catop

Article address: HTTP://WWW.XRUMER.CN/32
