Tips for adding your webpages to search engines

Source: Internet
Author: User
Tags: microsoft frontpage

Solemn declaration: This article describes and shares some correct methods and techniques for adding websites to search engines, in the hope that more content-rich websites can be properly listed in the various search engines and be discovered and appreciated by everyone. I will never introduce, and I resolutely oppose, opportunistic "tricks" such as hidden ghost pages.

1. How to Determine keywords
2. Use the META Value
3. Guide Web robot to serve you
4. Tips for improving rankings
5. Introduction to world-renowned search engines

1. How to Determine keywords

Keywords are the basis on which search engines classify websites, and they are also the words we enter when searching for information. Keywords are therefore crucial when registering with a search engine. So how do we select the right ones?

Method 1: select several major search engines (such as AltaVista, Lycos, and Excite), then:
1) Enter your site's keywords and search. In general, you will get a long list of results;
2) Open the top 10 sites and view each one's META tags (open the source file; the META tags are in the <HEAD> section);
3) Check their KEYWORDS values and learn from them words you had not thought of;
4) Finally, compile your own list of keywords.
You can repeat this process with several different words; a hypothetical example of what it might turn up follows.
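
For instance, viewing the source of a top-ranked gardening site might reveal a tag like the following (a purely invented illustration, not taken from any real site):

<Meta name="KEYWORDS" content="garden, gardening, plants, flowers, seeds, landscaping, garden supplies">

from which you might borrow terms such as "landscaping" or "seeds" that you had not considered for your own list.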

Method 2: start from the most commonly used search terms (that is, likely site keywords) and select from them. There are many such resources on the Internet:
Take a look at http://www.searchterms.com/; it publishes a monthly ranking of the most popular search terms on the Internet.
Want to know the 200 most popular YAHOO keywords? Go to http://eyescream.com/yahootop200.htm
...... These resources are worth looking at, but they also have great limitations. For example, 60% of YAHOO's top 20 KEYWORDS are about "SEX"; it seems that what interests people most is that mysterious "human nature". If your website is about computers, such terms are completely irrelevant to it. What then? For broader statistics, the GOTO search engine provides a "search term usage frequency" service:
* Go to the GOTO site
* Click "Get Listed on GoTo" in the lower left corner.
* Click the "Client Tool Kit" link above.
* Select "Search Term Suggestion List" under "Tools".
* Enter the keyword to be queried in the new window that appears.
* Click "Find It" to run the query.

2. Use the META Value
                         
The META tag is placed in the <HEAD> section of an HTML document. Common examples:

<Meta name = "GENERATOR" content = "Microsoft FrontPage 3.0"> describes the editing tool;
<Meta name = "KEYWORDS" content = "..."> description KEYWORDS;
<Meta name = "DESCRIPTION" content = "..."> DESCRIPTION of the home page;

<Meta http-equiv = "Content-Type" content = "text/html; charset = gb_2312-80"> and
<Meta http-equiv = "Content-Language" content = "zh-CN"> describes the Language and text used.

There are two types of META: name and http-equiv.

The name type is mainly used to describe the webpage (via the corresponding content attribute), which allows search engine robots to find and classify pages; currently, almost all search engines use robots to collect META values automatically. The most important of these are DESCRIPTION (the engine's description of your site) and KEYWORDS (the keywords by which the engine classifies your site); you should insert these two META values on every page (a concrete sketch of the pair appears after the list below). Of course, if you prefer that search engines not index a page, you can use:
<Meta name = "ROBOTS" content = "all | none | index | noindex | follow | nofollow"> to determine:
When it is set to "all", the file will be retrieved and the link on the page can be queried;
If it is set to "none", the file is not retrieved and the link on the page is not queried;
If it is set to "index", the file will be retrieved;
If it is set to "follow", you can query the links on the page;
If it is set to "noindex", the file is not retrieved, but the link can be queried;
If it is set to "nofollow", the file is not retrieved, but the link on the page can be queried.

http-equiv, as its name implies, is equivalent to an HTTP header and can directly affect how a webpage is transmitted. Some common examples:

A. Automatically refresh and redirect to a new webpage
<Meta http-equiv="Refresh" content="10; url=http://newlink"> refreshes (and redirects) after 10 seconds.
B. Transition effects when entering or leaving a webpage
<Meta http-equiv="Page-Enter" content="revealTrans(duration=10, transition=50)">
<Meta http-equiv="Page-Exit" content="revealTrans(duration=20, transition=6)">
These add special effects to a webpage as it is entered and exited. This is the Format / Page Transition function of FrontPage 98. Note, however, that the page they are added to cannot be a Frame page;
C. Force the webpage not to be stored in the cache
<Meta http-equiv="pragma" content="no-cache">
<Meta http-equiv="expires" content="Wed, 26 Feb 1997 08:21:57 GMT">
Take a look at http://www.internet.com/: when you are offline, its home page cannot be called up from the cache. (It is a great site, by the way.)
D. Define the target window
<Meta http-equiv="window-target" content="_top">
This prevents the webpage from being loaded inside someone else's Frame.

The following are some useful META value settings:
<Meta name="robots" content="ALL"> tells ROBOTS to search all content on the site;
<Meta name="revisit-after" content="7 days"> ROBOTS will search again 7 days later, which is useful for regularly updated sites;
<Meta http-equiv="pragma" content="no-cache"> the page cannot be viewed offline from the CACHE, and every visit forces a refresh;
......
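
Putting these together, a minimal <HEAD> section using the settings above might look like this (the title, description, and keywords are placeholders, not a definitive template):

<head>
<title>Your Site Title</title>
<Meta name="DESCRIPTION" content="A one- or two-sentence description of this page.">
<Meta name="KEYWORDS" content="keyword1, keyword2, keyword3">
<Meta name="robots" content="ALL">
<Meta name="revisit-after" content="7 days">
</head>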

3. Guide Web robot to serve you

Sometimes you may find that your homepage has been indexed by a search engine even though you never contacted it. This is the work of a Web Robot. A Web Robot is a program that can traverse the hypertext structure of a large number of Internet URLs, recursively retrieving all the content of a website. These programs are sometimes called "spiders", "online wanderers", "Web worms", or "Web crawlers". Well-known Internet search engine sites have specialized Web Robot programs to collect information, for example Lycos, WebCrawler, and AltaVista, as well as Chinese search engine sites such as Polaris, NetEase, and GOYOYO.
A Web Robot is like an uninvited guest: whether you welcome it or not, it will loyally carry out the duties given by its master, working tirelessly across the World Wide Web. It will, of course, also visit your home page, retrieve its content, and generate the record format it needs. Perhaps you are happy for some of your content to become known, but reluctant to have other content inspected and indexed. You can use the following methods to lay out a "road map" that tells the Web Robot how to retrieve your home page: what may be searched and what may not be accessed.

A. Robots Exclusion Protocol

The administrator of a website can create a specially formatted file on the site to indicate which parts of the site may be accessed by robots. This file is placed in the root directory of the site, i.e. http://.../robots.txt. When a robot visits a website such as http://www.sti.net.cn/, it first checks for the file http://www.sti.net.cn/robots.txt. If the file exists, the robot analyzes it according to this record format:

User-agent: *
Disallow: /cgi-bin/
Disallow: /tmp/
Disallow: /~joe/

to determine whether it should retrieve the site's files. A website can have only one "/robots.txt" file, and every letter in the file name must be lowercase. In the record format, each "Disallow" line names a URL you do not want robots to visit. Each URL must be on its own line; you cannot combine them, as in "Disallow: /cgi-bin/ /tmp/". Also, blank lines must not appear within a record, because blank lines are used to separate multiple records.
The User-agent line gives the name of the robot or other agent. In the User-agent line, "*" has a special meaning: all robots.

The following are examples of robots.txt:

Deny all robots access to the server:
User-agent: *
Disallow: /

Allow all robots to access the entire site:
User-agent: *
Disallow:
Or create an empty "/robots.txt" file.

Allow all robots to access everything except part of the server:
User-agent: *
Disallow: /cgi-bin/
Disallow: /tmp/
Disallow: /private/

Deny a specific robot:
User-agent: BadBot
Disallow: /

Allow only one robot to visit:
User-agent: WebCrawler
Disallow:

User-agent: *
Disallow: /

B. Robots META tag

A webpage author can use a special HTML META tag to indicate whether a page may be indexed, analyzed, or have its links followed. These methods are honored by most Web Robots, but whether they are implemented in software depends on the robot's developer; compliance is not guaranteed by any robot. If you urgently need to protect your content, consider other protection methods as well, such as passwords.
The Robots META tag commands are separated by commas. The available commands are [NO]INDEX and [NO]FOLLOW. The INDEX command specifies whether an indexing robot may index this page; the FOLLOW command specifies whether the robot may follow the links on this page. The defaults are INDEX and FOLLOW. For example:
<Meta name = "robots" content = "index, follow">
<Meta name = "robots" content = "noindex, follow">
<Meta name = "robots" content = "index, nofollow">
<Meta name = "robots" content = "noindex, nofollow">

4. Tips for improving rankings

Use the plural form of each keyword (for example, use "books" instead of "book"; then your site will match queries for both book and books).

Keywords can be spelled in uppercase and lowercase (such as books, Books, and BOOKS). More than three spellings of the same word add little, although including a common misspelling can work.

Use combinations of your chosen keywords. People often search with phrases of two or more words (for example, "storage facilities", "STORAGE FACILITIES"). To really reach your target market, add words such as "self", "SELF", and your city/state. Extra visitors who have no need for your products or services are of no value.
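
A hypothetical KEYWORDS tag combining these three tips (plurals, case variants, and multi-word phrases; all terms, including the city, are invented for illustration) might be:

<Meta name="KEYWORDS" content="storage, self storage, SELF STORAGE, storage facility, storage facilities, STORAGE FACILITIES, Houston self storage">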

You must use the META values. Many search engines index your site based on them. The META values are located between the <HEAD> and </HEAD> tags of your page.

Use a combination of your 10 to 20 best keywords. Keyword-rich META content is often the decisive factor in your site's ranking.
Tip: where possible, put your most representative keywords at the beginning of each section and at the front of each search phrase.

Fill the ALT attribute of your image links with keywords.
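
A hypothetical image tag (the file name and ALT text are invented for illustration):

<img src="books.gif" alt="used books, rare books, book reviews">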

You should register every page of your site with the search engines, not just the homepage.
Tip: many search engines revisit your site on a regular basis. If the site has not changed, your ranking will drop, so keep your site updated.

Create or customize an independent page for each of your major keywords and design it separately for each major search engine. This takes some time, but once done, it will noticeably increase your ranking.
Tip: make sure each of these pages links directly to the home page and to other related pages.
Warning! In the past, many people tried to game the system by abusing keywords to get a higher ranking: repeating the same keywords over and over and setting the text color to match the page background. If they find that you have done this, most search engines will take punitive action.
