Diagnosing a Missing or Mis-Indexed Page in 10 Minutes


I would never suggest you can solve all of your SEO problems in just 10 minutes, but it is amazing what you can do when you are forced to really make your time count. I would like to share my 10-minute (give or take) process for a common SEO problem: tracking down "missing" pages. You can actually apply it to a range of related questions, including:

Finding out why a page isn't being indexed

Figuring out why a page isn't ranking

Determining whether a page has been penalized

Pinning down duplicate-content problems

I will break the 10 minutes down, minute by minute (give or take). Here is what you can do in each step.

0:00-0:30 – Confirm the site is indexed

Always start from square one: is your page really missing? Although it sometimes gets a bad reputation for accuracy (mainly on total page counts), Google's site: operator is still the best tool for this job. It is great for digging deeper because you can combine it with search keywords, quoted "exact match" phrases, and other operators (intitle:, inurl:, etc.). The most basic format, of course, is:

  site:example.com

For this first check, always use the root domain. You never know when Google is indexing multiple subdomains (or the wrong subdomain), and that information can be useful later. For now, you just want to see whether Google knows you exist.

0:30-1:00 – Confirm the page is indexed

Assuming Google knows your site exists, it's time to check on the specific page in question. You can enter the full path after the site: operator, or combine site: with inurl:

  site:example.com/folder/page.html
  site:example.com inurl:page

If the page doesn't seem to be on Google's radar, narrow the problem down by checking just the "/folder" level to see whether anything alongside the page is indexed. If the page isn't indexed at all, you can skip the next step.

1:00-1:30 – Confirm whether the page ranks

If the page is indexed but you can't seem to find it in the SERPs, pull a fragment of the title tag and run it as an exact-match query (in quotes) on Google. If you still can't find it, combine site:example.com with the page title or part of it. If the page is indexed but just not ranking, you may be able to skip the next few steps (jump to the 4:00 mark).

1:30-2:00 – Check robots.txt

Now let's assume that your site is at least partially indexed, but the problem page is missing from the index. Serious robots.txt blunders are, thankfully, getting rarer, but it's still worth a quick peek to make sure you aren't accidentally blocking the search bots. Luckily, the file is almost always found at:

http://www.example.com/robots.txt

What you're looking for is a block of directives that looks like this:

  User-agent: *
  Disallow: /

A block like this could apply to all user agents (the asterisk) or just to Googlebot. Likewise, check for any Disallow rules that block the specific folder or page in question.
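If you'd rather check this programmatically, Python's standard library ships a robots.txt parser that can answer the same question. A minimal sketch, assuming placeholder example.com URLs:

  from urllib import robotparser

  # Point the parser at the site's robots.txt (placeholder domain).
  rp = robotparser.RobotFileParser()
  rp.set_url("http://www.example.com/robots.txt")
  rp.read()  # fetch and parse the file

  # Ask whether Googlebot is allowed to crawl the "missing" page.
  page = "http://www.example.com/folder/page.html"
  if rp.can_fetch("Googlebot", page):
      print("robots.txt allows Googlebot to crawl", page)
  else:
      print("robots.txt BLOCKS Googlebot from crawling", page)

Swapping in "*" for "Googlebot" tells you whether the rule hits all bots or just Google's.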

2:00-2:30 – Check for noindex

Another way to accidentally block a page is with a bad noindex directive. In the HTML source (between the <head> and </head> tags), you are looking for something like this:

  <meta name="robots" content="noindex, nofollow">

Although it sounds strange that someone would block a page they obviously want indexed, a bad CMS setting or plugin can easily generate broken meta tags and rel=canonical tags (see below).
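To spot-check several pages at once, you can fetch each one and look for a robots meta tag containing noindex. A rough sketch using only the standard library; the regex is a heuristic (not a full HTML parse, and it assumes the name attribute comes before content), and the URL is a placeholder:

  import re
  import urllib.request

  def has_noindex(url):
      """Return True if the page's HTML carries a robots meta tag with noindex."""
      html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")
      # Heuristic: matches <meta name="robots" ... content="...noindex...">.
      pattern = re.compile(
          r'<meta[^>]+name=["\']robots["\'][^>]+content=["\'][^"\']*noindex',
          re.IGNORECASE,
      )
      return bool(pattern.search(html))

  print(has_noindex("http://www.example.com/folder/page.html"))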

2:30-3:00 – Check rel=canonical

This one is a little trickier. The rel=canonical tag itself is usually a good thing, helping to canonicalize pages effectively and remove duplicate content. The tag itself looks like this:

  <link rel="canonical" href="http://www.example.com/" />

The problem comes when you canonicalize too aggressively. If, for instance, every page on a site carries a canonical tag pointing to "www.example.com", Google may take that instruction literally and fold the entire search index down to just one page.

Why would you do this? You probably wouldn't, intentionally, but a misconfigured CMS or plugin can easily cause it. Even when it isn't site-wide, it's easy to canonicalize too aggressively and wipe out important pages. This is a problem that seems to be on the rise.
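You can spot-check for this the same way as for noindex: pull the canonical href out of the page and compare it to the URL you actually fetched. A minimal sketch under the same assumptions (placeholder URL, regex heuristic):

  import re
  import urllib.request

  def canonical_mismatch(url):
      """Return the canonical href if it points away from the fetched URL, else None."""
      html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")
      match = re.search(
          r'<link[^>]+rel=["\']canonical["\'][^>]+href=["\']([^"\']+)["\']',
          html,
          re.IGNORECASE,
      )
      if match and match.group(1).rstrip("/") != url.rstrip("/"):
          return match.group(1)  # the page canonicalizes to some other URL
      return None

  print(canonical_mismatch("http://www.example.com/folder/page.html"))

A mismatch isn't automatically wrong (it may be deliberate), but a deep page that canonicalizes to the home page is exactly the failure mode described above.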

3:00-4:00 – Check headers/redirects

In some cases, a page may return a bad header, such as a 404 error code or a poorly implemented redirect (301/302), and that can be enough to prevent indexation. You need to check the headers; there are plenty of free online tools for this (try an HTTP sniffer). You're looking for a status line with a 200 code. If you get a redirect chain, a 404, or any other error code (4xx or 5xx series), you may have a problem. If you get a redirect (301 or 302), your "missing" page is being sent to another page, in which case it isn't really missing at all.
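If you'd rather not rely on an online tool, the same check is easy to script. A small standard-library sketch that follows redirects hop by hop and prints each status code; the URL is a placeholder, and since some servers refuse HEAD requests, treat this as a rough probe rather than a definitive check:

  import http.client
  from urllib.parse import urlparse, urljoin

  def trace_status(url, max_hops=5):
      """Print the HTTP status of each hop and return the final status code."""
      for _ in range(max_hops):
          parts = urlparse(url)
          conn_cls = (http.client.HTTPSConnection
                      if parts.scheme == "https" else http.client.HTTPConnection)
          conn = conn_cls(parts.netloc)
          conn.request("HEAD", parts.path or "/")
          resp = conn.getresponse()
          print(resp.status, url)
          if resp.status in (301, 302, 303, 307, 308):
              # Follow the redirect; the Location header may be relative.
              url = urljoin(url, resp.getheader("Location"))
              conn.close()
          else:
              return resp.status  # 200 is what you want to see

  trace_status("http://www.example.com/folder/page.html")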

4:00-5:00 – Check cross-site duplication

There are basically two buckets of potential duplicate content: duplication between pages on your own site, and duplication across sites. The cross-site kind can happen when you share content with attribution, legitimately re-use content (as affiliate marketers often do), or get flat-out scraped. The problem is that once Google detects these duplicates, it may pick one version and ignore the rest.

If you suspect that your "missing" page's content also lives on another site (or was taken from one), grab a unique-sounding sentence from it and search Google for that sentence in quotes (an exact match). If other sites pop up, your page may have been flagged as a duplicate.
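If you run this check often, you can at least script building the exact-match query and opening it in your browser. A trivial sketch (the sentence is placeholder text):

  import webbrowser
  from urllib.parse import quote_plus

  # A unique-sounding sentence copied from the "missing" page (placeholder text).
  sentence = "this exact sentence only appears on my missing page"

  # Wrap it in quotes for an exact match and open the results.
  query = quote_plus(f'"{sentence}"')
  webbrowser.open("https://www.google.com/search?q=" + query)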

5:00-7:00 – Check internal duplication

Internal duplication usually occurs when Google crawls multiple URL variations of the same page, such as URLs with different CGI parameters. If Google reaches two identical pages via different URL paths, it sees two separate pages, and one of them will probably be ignored. Sometimes that's fine, but other times Google ignores the wrong one.

For internal duplication, use a combination of focused site: queries and quoted fragments of unique page titles (either on their own or with intitle:). Duplicate URLs naturally carry duplicate titles and meta data, so the page title is one of the easiest places to spot the problem.
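If you already have a list of suspect URLs (for example, parameter variations of the same page), a few lines of Python can surface duplicate titles directly. The URLs below are placeholders:

  import re
  import urllib.request
  from collections import defaultdict

  # Placeholder URL variations that might all render the same page.
  urls = [
      "http://www.example.com/page.html",
      "http://www.example.com/page.html?sort=asc",
      "http://www.example.com/page.html?sessionid=123",
  ]

  titles = defaultdict(list)
  for url in urls:
      html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")
      match = re.search(r"<title[^>]*>(.*?)</title>", html, re.IGNORECASE | re.DOTALL)
      titles[match.group(1).strip() if match else "(no title)"].append(url)

  # Any title that maps to more than one URL is a likely internal duplicate.
  for title, dupes in titles.items():
      if len(dupes) > 1:
          print(f"Duplicate title {title!r} appears on {len(dupes)} URLs")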

7:00-8:00 – Review anchor-text quality

The last two checks are tougher and more subjective, but I'd like to offer a few simple tips on where to start if you suspect a page-specific penalty or devaluation. One easy problem to spot is a suspicious anchor-text pattern, usually an unusual keyword combination that dominates your inbound links. This may come from very aggressive (and usually low-quality) link building, or from something like a widget that dominates your link profile.

Open Site Explorer makes it easy to view your anchor text in broad strokes. Just enter your URL, click over to the anchor text distribution view (the fourth tab), and select phrases:

[Screenshot: anchor text distribution view in Open Site Explorer]

What you are looking for is an unnatural pattern of repetition. Some repetition is fine; you will naturally have anchor text containing your core keywords and exact brand name, for example. If, however, 70% of the links back to SEOmoz used the anchor text "Danny is awesome," that would be unnatural. If Google reads that as a sign of link-building manipulation, the target page may be penalized.
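If your link tool can export anchor texts, a quick frequency count will surface dominant phrases without any eyeballing. A minimal sketch over made-up data (the anchors list is hypothetical, and the 50% threshold is arbitrary):

  from collections import Counter

  # Hypothetical anchor texts exported from a link-analysis tool.
  anchors = [
      "example brand", "example brand", "www.example.com",
      "cheap blue widgets", "cheap blue widgets", "cheap blue widgets",
      "cheap blue widgets", "cheap blue widgets", "click here",
  ]

  total = len(anchors)
  for phrase, n in Counter(anchors).most_common(5):
      share = 100 * n / total
      flag = "  <-- suspiciously dominant" if share > 50 else ""
      print(f"{share:5.1f}%  {phrase}{flag}")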

8:00-10:00 – Review link-profile quality

Link-profile quality is very subjective, and this isn't a task you can do justice to in two minutes, but if you've been hit with a penalty, shady links are sometimes easy to spot quickly. Again, I'll use Open Site Explorer, this time selecting the following options: followed + 301, only external pages, and all pages on the root domain:

[Screenshot: link filter options in Open Site Explorer]

You can export the links to Excel if you want (good for deeper analysis later), but for now, just spot-check. If the first few pages of results look fishy, the odds are good that the link profile is a mess. Click through to a few of the linking pages and look for problems such as:

Suspicious anchor text (irrelevant, spammy, etc.)

Sites wildly unrelated to your topic

Links obviously embedded in paid or swapped link blocks

Links buried in multi-link footers

Advertising links that should carry rel="nofollow" but don't

Also look for over-reliance on any single low-quality link type (blog comments, article marketing, etc.). A full link-profile analysis may take hours, but spammy link building is usually easy to spot in just a few minutes. If you can find it that fast, chances are pretty good that Google can too.
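And if you did export the links to a spreadsheet as suggested above, a short script can pre-flag rows worth a manual look. The file name, column names, and phrase list below are all assumptions; adjust them to whatever your tool actually exports:

  import csv

  # Phrases that often signal paid or spammy links (illustrative, not exhaustive).
  SUSPICIOUS = ("casino", "payday", "pills", "cheap", "click here")

  # Assumed export format: a CSV with "url" and "anchor" columns (hypothetical names).
  with open("links_export.csv", newline="", encoding="utf-8") as f:
      for row in csv.DictReader(f):
          anchor = row.get("anchor", "").lower()
          if any(term in anchor for term in SUSPICIOUS):
              print("Check manually:", row.get("url"), "| anchor:", row.get("anchor"))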

(10:00) – Time's up!

Ten minutes may not seem like much (it may have taken you that long just to read this article), but once you have an established process, you can learn a lot about a site in a few minutes. Of course, finding a problem and solving it are two completely different things, but I hope this at least gives you a starting point for diagnosing your own SEO problems and refining a process of your own.

This article is from: http://www.soonseo.com (when reprinting, please keep the original link)
