Search Engine Learning Resource Collection [Repost]

Source: Internet
Author: User
Tags: domain name registration, Perl script

Original post: http://www.zhihere.com/bbs/dispbbs.asp?boardid=8&id=630

 

I. Search Engine Technology / News and Resources

<1> General

1. Lu Liang's search engine research: http://www.wespoke.com/

Lu Liang is an expert in search engine development. He once developed the search engine "Booso" (http://booso.com/), which appears to have stopped development; he is now working on blog-related projects. His blog offers a great deal of search engine development technology and experience and deserves continued attention.

2. Laolu's blog

Plenty of information on foreign search engines, with an emphasis on data and figures.

3. Haslog: http://www.loverty.org/

Covers the latest developments of several major search engines at home and abroad; worth a look to follow the state of search.

4. Beijing Yitian Ruixin Technology Co., Ltd.: http://www.21cnbj.com/

News on the search engine, SEO, SEM, and related industries.

5. Chinese Search Engine Guide: http://www.sowang.com/

The latest search engine developments, plus assorted search techniques and methods.

6. Chinese Full-Text Search Network: http://www.fullsearcher.com/

Fullsearcher.com was founded by two young people passionate about search. Their goal is to bring the Chinese internet into the age of search, to make search ubiquitous, and to change people's lives through search.
Fullsearcher provides knowledge on full-text search and vertical search engines, search-related news, and other search-related content.

<2> Google updates

Google's blog: the Google China Blackboard: http://googlechinablog.com/

Google's official blog in China, offering a closer look at its products, technology, and culture.

 

1. Gfans: http://gfans.org/

A group of Google fans

No PageRank, no Hilltop, no SEO. If Google is Longjing tea, may this place be the Hupao spring that brings out its orchid-like fragrance. Having seen the sea, no other water compares: Google is not just a culture, it is my belief.

Three ground rules for this site:

No discussing SEO and related topics;

No idle posts;

Insulting Baidu is strictly forbidden.
2. Zombie Microphone: http://www.kenwong.cn/

Google world

3. Google Observation: http://blog.donews.com/googleview/

<3> Other search engine updates

1. Yahoo! Search Blog: http://ysearchblog.cn/

Tracks the news, products, and technology of the Yahoo search engine.

II. Search Engine Code Resources

1> Search Engine/web spider program code

Related Programs developed abroad

1. Nutch

Official website: http://www.nutch.org/
Chinese site: http://www.nutchchina.com/
Latest version: Nutch 0.7.2

Nutch is an open-source search engine implemented in Java. It provides all the tools needed to run your own search engine: you can build one for an intranet or for the entire web, and it is free and open.

2. Lucene


Official website: http://lucene.apache.org/
Chinese site: http://www.lucene.com.cn/

Lucene is a sub-project of the Apache Software Foundation's Jakarta project: an open-source full-text search engine toolkit written in Java. It is not a complete full-text search engine but a full-text search engine architecture, providing a complete query engine and indexing engine together with some text analysis engines (English and German). Lucene's aim is to give software developers a simple, easy-to-use toolkit for adding full-text retrieval to a target system, or for building a complete full-text retrieval engine on top of it.
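
As a quick illustration of what the toolkit provides (this example is not from the original post), the sketch below indexes one document and then queries it, using the classic IndexWriter/IndexSearcher API of the Lucene 1.x/2.x era this post dates from; class names and method signatures differ in later Lucene versions.

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;

public class LuceneSketch {
    public static void main(String[] args) throws Exception {
        // Build an index in the local directory "index".
        IndexWriter writer = new IndexWriter("index", new StandardAnalyzer(), true);
        Document doc = new Document();
        doc.add(new Field("title", "Nutch and Lucene", Field.Store.YES, Field.Index.TOKENIZED));
        doc.add(new Field("body", "Lucene is a full-text search toolkit written in Java.",
                          Field.Store.YES, Field.Index.TOKENIZED));
        writer.addDocument(doc);
        writer.optimize();
        writer.close();

        // Query the index for documents whose body mentions "search".
        IndexSearcher searcher = new IndexSearcher("index");
        Query query = new QueryParser("body", new StandardAnalyzer()).parse("search");
        Hits hits = searcher.search(query);
        for (int i = 0; i < hits.length(); i++) {
            System.out.println(hits.doc(i).get("title"));
        }
        searcher.close();
    }
}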

3. Larbin

Larbin is an open-source web crawler / web spider, developed independently by the young French programmer Sébastien Ailleret. Larbin aims to follow the URLs on crawled pages for extended crawling, ultimately providing a broad data source for search engines.

Related Programs developed in China

1. SQLET, an open-source Chinese search engine

Official website: http://www.sqlet.com/

SQLET stands for Search & Query & Link; the suffix "-let" suggests something small. The plan is to build a topic-oriented Chinese search engine capable of searching hundreds of millions of web pages. Three indexing methods are supported: mysql_table_index, e_e_index, and sqlet_index. Crawled web pages can be stored in the file system or in a database, and a web server is included.

2. findu vertical search engine code 

Faydu (http://www.faydu.net/) is a demonstration version of a vertical web search engine. It mainly crawls and organizes a number of domestic shopping sites.

The code of the open-source beta version is available for discussion. Notes on the download:

1. The program was written to run on a multi-processor server, so limit the number of threads when running it on an ordinary PC.

2. Restore the supplied database (with data) into SQL Server.

3. After crawling completes, the inverted index files generated by Lucene are placed in the bin directory by default.


Source: http://blog.csdn.net/faydu/archive/2006/04/18/667997.aspx
Language: VB.NET (C#)

2> Chinese word segmentation program code

1. ICTCLAS, the Chinese lexical analysis system from the Institute of Computing Technology

Drawing on years of research, the Institute of Computing Technology of the Chinese Academy of Sciences developed ICTCLAS (Institute of Computing Technology, Chinese Lexical Analysis System), which is based on a multi-layer hidden Markov model. The system performs Chinese word segmentation, part-of-speech tagging, and recognition of out-of-vocabulary words. Its segmentation accuracy reaches 97.58% (the most recent evaluation by the 973 expert group), role-based recognition of out-of-vocabulary words achieves a recall above 90% (recall for Chinese person names approaches 98%), and segmentation plus part-of-speech tagging runs at 31.5 KB/s. The free release of ICTCLAS was widely reported by Chinese and foreign media, and many free Chinese word segmentation modules in China have borrowed from the ICTCLAS code to some degree.

Download page: http://www.nlp.org.cn/project/project.php?proj_id=6

Since ICTCLAS is written in C, it is not convenient to use with mainstream development tools, so some enthusiastic programmers have ported ICTCLAS to other languages such as Java and C#.

(1) fenci, a Java port of ICTCLAS. Download page: http://www.xml.org.cn/printpage.asp?boardid=2&id=11502

(2) autosplit, another Java port of ICTCLAS. The download page can no longer be found; the original post offered a local download.

(3) Xiao Dingdong's Chinese word segmentation: its download page was once available but can no longer be found. According to the author, it comes in three versions: Java, C#, and C++. Introduction page: http://www.donews.net/accesine
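
ICTCLAS itself is built on a multi-layer hidden Markov model; purely as a simplified illustration of what a Chinese word segmenter does (this sketch is not from the original post and is not the ICTCLAS algorithm), here is a forward-maximum-matching segmenter in Java with a tiny made-up dictionary.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class FmmSegmenter {
    private final Set<String> dict;
    private final int maxLen;

    public FmmSegmenter(Set<String> dict, int maxLen) {
        this.dict = dict;
        this.maxLen = maxLen;
    }

    // Greedily match the longest dictionary word starting at each position.
    public List<String> segment(String text) {
        List<String> out = new ArrayList<String>();
        int i = 0;
        while (i < text.length()) {
            int end = Math.min(i + maxLen, text.length());
            String word = null;
            for (int j = end; j > i; j--) {                  // try the longest span first
                String cand = text.substring(i, j);
                if (dict.contains(cand)) { word = cand; break; }
            }
            if (word == null) word = text.substring(i, i + 1); // fall back to a single character
            out.add(word);
            i += word.length();
        }
        return out;
    }

    public static void main(String[] args) {
        Set<String> dict = new HashSet<String>(
            Arrays.asList("中文", "分词", "搜索", "引擎", "搜索引擎"));
        FmmSegmenter seg = new FmmSegmenter(dict, 4);
        System.out.println(seg.segment("搜索引擎中文分词")); // [搜索引擎, 中文, 分词]
    }
}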

2. Hylanda intelligent word segmentation, research edition

The Hylanda Intelligent Computing Technology Research Center released this research edition of its intelligent word segmenter so that researchers in Chinese information processing can share the center's results and jointly raise the level of the field; it is offered for study by experts, scholars, and enthusiasts.

Download page: http://www.hylanda.com/cgi-bin/download/download.asp?id=8

3. Others

(1) CSW Chinese Intelligent Word Segmentation component

Runtime environment: Windows NT/2000/XP or later; the component can be called from Microsoft development languages such as ASP and VB.

Introduction: The CSW intelligent Chinese word segmentation DLL component automatically splits a piece of text into regular Chinese phrases, separating them in a specified way; the resulting phrases can be tagged with semantics and word frequency. It is widely used for information retrieval and analysis in various industries.

Download page: http://www.vgoogle.net/

(2) A Chinese word segmentation component written in C#

According to the author, it is a single DLL that can be used as a Chinese/English word segmentation component, written entirely in managed C# code and developed independently.

Download page: http://www.rainsts.net/article.asp?id=48

3> Overview of open-source spiders

The spider is an indispensable module of a search engine, and the data it collects directly affects a search engine's quality metrics.

The first spider program was run by MIT's Matthew Gray to count the number of hosts on the Internet.

Definition of a spider (there are two senses, narrow and broad):

Narrow sense: a software program that uses the standard HTTP protocol to traverse the World Wide Web information space by following hyperlinks and retrieving web documents.
Broad sense: any software that can retrieve web documents over HTTP is called a spider.
A protocol closely related to spiders is the robots exclusion protocol ("Protocol Gives Sites Way to Keep Out the 'Bots", Jeremy Carl, Web Week, Volume 1, Issue 7, November 1995); for details see robotstxt.org.
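
To make the narrow-sense definition above concrete (this example is not from the original post), the Java sketch below fetches a single page over HTTP and extracts the hyperlinks a crawler would enqueue next. The seed URL and the naive href regex are illustrative only; a real spider would also use a proper HTML parser and honor robots.txt.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TinySpider {
    public static void main(String[] args) throws Exception {
        URL seed = new URL("http://www.example.com/");      // hypothetical seed page
        StringBuilder html = new StringBuilder();
        BufferedReader in = new BufferedReader(
                new InputStreamReader(seed.openStream(), "UTF-8"));
        String line;
        while ((line = in.readLine()) != null) {
            html.append(line).append('\n');
        }
        in.close();

        // Very naive link extraction; real spiders normalize URLs,
        // de-duplicate, and respect the robots exclusion protocol.
        List<String> frontier = new ArrayList<String>();
        Matcher m = Pattern.compile("href\\s*=\\s*\"([^\"]+)\"",
                                    Pattern.CASE_INSENSITIVE).matcher(html);
        while (m.find()) {
            frontier.add(new URL(seed, m.group(1)).toString()); // resolve relative links
        }
        System.out.println("Links to crawl next: " + frontier);
    }
}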

Heritrix

Heritrix is the Internet Archive's open-source, extensible, web-scale, archival-quality web crawler project.

Heritrix (sometimes spelled Heretrix, or misspelled or mis-said as Heratrix/Heritix/Heretix/Heratix) is an archaic word for heiress (a woman who inherits). Since the crawler seeks to collect and preserve the digital artifacts of our culture for the benefit of future researchers and generations, the name seemed apt.

Language: Java

WebLech URL Spider

WebLech is a fully featured website download/mirror tool in Java, which supports many features required to download websites and emulate standard web-browser behaviour as much as possible. WebLech is multithreaded and comes with a GUI console.

Language: Java

JSpider

A Java implementation of a flexible and extensible web spider engine. Optional modules allow functionality to be added (searching for dead links, testing the performance and scalability of a site, creating a sitemap, etc.).

Language: Java

WebSPHINX

WebSPHINX is a Java class library for web crawlers (robots, spiders), originally developed by Robert Miller of Carnegie Mellon University. It offers multithreading, tolerant HTML parsing, URL filtering, page classification, pattern matching, grouping, and more.

Language: Java

PySolitaire

PySolitaire is a fork of PySol Solitaire that runs correctly on Windows and has a nice clean installer. PySol (Python Solitaire) is a collection of more than 300 solitaire and mahjongg games, including Klondike and Spider.

Language: Python

The Spider Web Network XOOPS Mod Team

The Spider Web Network XOOPS module team provides modules for the XOOPS community, written in PHP. They develop mods and/or take existing PHP scripts and port them into the XOOPS format; high-quality mods are the goal.

Language: PHP

Fetchgals

A multi-threaded web spider that finds free porn thumbnail galleries by visiting a list of known TGPs (thumbnail gallery posts). It optionally downloads the located pictures and movies; a TGP list is included. A public-domain Perl script running on Linux.

Language: Perl

Where Spider

The purpose of the Where Spider software is to provide a database system for storing URL addresses. The software is used both for ripping links and for browsing them offline. It uses a pure XML database which is easy to export and import.

Language: XML

Sperowider

The Sperowider Website Archiving Suite is a set of Java applications whose primary purpose is to spider dynamic websites and create static, distributable archives with a full-text search index usable by an associated Java applet.

Language: Java

Spiderpy

SpiderPy is a web-crawling spider program written in Python that allows users to collect files and search websites through a configurable interface.

Language: Python

Spidered Data Retrieval

Spider is a complete standalone Java application designed to easily integrate varied data sources. XML-driven framework, scheduled pulling, highly extensible, and provides hooks for custom post-processing and configuration.

Language: Java

WebLoupe

WebLoupe is a Java-based tool for the analysis, interactive visualization (sitemap), and isolation of the information architecture and specific properties of local or publicly accessible websites, based on web spider (web crawler) technology.

Language: Java

Aspider

A robust, featureful, multi-threaded CLI web spider using Apache Commons HttpClient v3.0, written in Java. Aspider downloads any files matching your given MIME types from a website, tries to regexp-match emails by default, and logs all results using log4j.

Language: Java

Larbin

Larbin is an HTTP web crawler with an easy interface that runs under Linux. It can fetch more than 5 million pages a day on a standard PC (with a good network connection).

Language: C++

III. SEO-related resources

1. Domain Name Information Query

★ Information on international top-level domains (.aero, .arpa, .biz, .com, .coop, .edu, .info, .int, .museum, .net, etc.) can be queried through any ICANN-accredited domain registrar, or directly at the InterNIC site:

http://www.internic.com/whois.html

http://www.iwhois.com/

★ To check whether a global top-level domain name (including the domestic .cn domain) has been registered, use:

http://www.uwhois.com/cgi/domains.cgi?user=noads

★ Query the registration status of domestic domain names.

★ HiChina domain name registration information query:

http://www.net.cn/

★ IP address lookup and domain registration (whois) query (a minimal sketch of the whois protocol itself follows this list):

http://ip.zahuopu.com/
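
Behind the query pages above sits the plain whois protocol: open a TCP connection to a whois server on port 43, send the domain name followed by CRLF, and read back a text report. The Java sketch below (not from the original post) uses whois.internic.net, the registry whois server for .com/.net; other TLDs are served by other hosts.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class WhoisLookup {
    public static void main(String[] args) throws Exception {
        String domain = args.length > 0 ? args[0] : "example.com";
        Socket socket = new Socket("whois.internic.net", 43);   // whois runs on TCP port 43
        PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
        BufferedReader in = new BufferedReader(
                new InputStreamReader(socket.getInputStream()));
        out.print(domain + "\r\n");                             // protocol: query text + CRLF
        out.flush();
        String line;
        while ((line = in.readLine()) != null) {                // read the plain-text reply
            System.out.println(line);
        }
        socket.close();
    }
}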

2. Alexa-related and search rankings

★ Alexa top 500 Chinese-language sites:

http://www.alexa.com/site/ds/top_sites?ts_mode=lang&lang=zh_gb2312

★ Google Zeitgeist: Google search rankings

★ Baidu Chinese search rankings:

http://top.baidu.com/

★ Yahoo search rankings

★ Sogou search index:

http://www.sogou.com/top/

3. Search keyword queries

★ Google keyword query: https://adwords.google.com/select/KeywordSandbox
★ Baidu keyword query
★ Sohu keyword query

4. SEO projects/tools

★ Webpage quality check
★ Keyword density (a minimal sketch of the calculation follows this list)
★ Search engine spider simulator

★ Google Dance query tool: http://www.google-dance-tool.com/
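
As noted in the list above, here is a miniature version of what a keyword-density tool computes (not from the original post): occurrences of the keyword divided by the total word count of the page text. Real SEO tools additionally strip HTML, handle phrases, stemming, and so on.

public class KeywordDensity {
    public static double density(String text, String keyword) {
        String[] words = text.toLowerCase().split("\\W+");  // crude tokenization
        int total = 0, hits = 0;
        for (String w : words) {
            if (w.isEmpty()) continue;
            total++;
            if (w.equals(keyword.toLowerCase())) hits++;
        }
        return total == 0 ? 0.0 : (double) hits / total;
    }

    public static void main(String[] args) {
        String page = "search engine optimization makes a page easier for a search engine to rank";
        System.out.printf("keyword density of 'search': %.1f%%%n",
                          100 * density(page, "search"));   // 2 of 13 words, about 15.4%
    }
}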

5. SEO websites

English websites:

Search Engine Watch: http://www.searchenginewatch.com/
SEO Chat: http://www.seochat.com/

Chinese websites:

1> Zunch China: http://www.zunch.cn

Liu Huanbin heads the China operation of this website design and search engine optimization service company.

Blog: blog.zunch.cn


The latest SEO industry information can be obtained here

2> Search Engine Optimization Exchange Center: http://www.seoonline.cn

SEO practitioners' websites:

1> Liu Huanbin, head of Shangqi China

2> SEO professional Bianyue: http://www.bianyue.com/

IV. Search engine companies

1. Contact Information

Google 

Headquarters
1600 Amphitheatre Parkway
Mountain View, CA
94043 USA
Phone: (650) 253-0000
Fax: (650) 253-0001
Email: chinese_s@google.com


Baidu

Tel: (010) 82621188
Fax: (010) 82607007 / 82607008
E-mail: webmaster@baidu.com
Address: 12/F, Ideal International Building, No. 58 Beisihuan West Road, Beijing
Zip code 100080


Yahoo / Yisou search

Switchboard: 010-65811221
Address: Yahoo China Search Business Department, 5/F, Block B, Heqiao Building, Guanghua East Road, Chaoyang District, Beijing
Zip code: 100026
Fax: 010-65812440
Online question submission form


China Search

Address: Rooms 15-16, Block A, Huaxing Building, No. 42 Xizhimen North Street, Beijing
Zip code: 100088
Switchboard: 010-62266296
Fax: 010-82211302


Sohu search

Address: 10/F, Weixin International Building, No. 9 Tsinghua Science Park, No. 1 Zhongguancun East Road, Haidian District, Beijing
Zip code: 100084
Tel: 86-10-62726666
Fax: 86-10-62728300


Sina search

http://ads.sina.com.cn/contact.html
20/F, Ideal International Building, No. 58 Beisihuan West Road, Beijing
Zip code: 100080
Tel: (86-10) 82628888
Fax: (86-10) 82607166
Search engine consultation: 010-82628888, ext. 6688
Search engine contact email: searchcn@staff.sina.com.cn


Netease search

Address: Room 1901, Office Building East 3, Oriental Plaza, No. 1 East Chang'an Street, Dongcheng District, Beijing
Zip code: 100738
Netease search engine customer service hotline:
Tel: 010-82110163, ext. 8350 / 8121 / 8136
E-mail: adp_complaint@service.netease.com
