Using Sphider + SCWS to Create a Chinese PHP Search Engine

Today we needed to build a full-text search engine for several websites, so we looked at a few PHP open-source projects. We tried Sphinx first; unfortunately it is database-backed, which makes it little more than an extension of database search. Sphider is good, but its Chinese word segmentation is poor: it can only split on spaces and punctuation. Lucene is only available for Java and .NET, with no PHP version, so if you want to stay with PHP you have to modify Sphider. Fortunately, we found SCWS, a good Chinese word segmentation system; we only need to wire its functions into Sphider.


First deploy Sphider and SCWS according to their installation documents. SCWS 1.1.6 is used here, and its PHP extension must be built and enabled. On Linux, pay attention to the dictionary file permissions; otherwise the segmenter splits every Chinese character out individually. For Sphider, we used Ding Tingchen's fully localized Simplified Chinese version of the Sphider search engine.
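Before going further, it may be worth a quick sanity check that the extension and dictionary are actually usable. Below is a minimal sketch (the dictionary path is the default from the SCWS install docs and may differ on your system):

<?php
// Quick post-deployment check: is the scws extension loaded, and can
// the PHP process read the dictionary? (A sketch; the path below is
// the default from the SCWS install docs.)
if (!extension_loaded('scws')) {
    die("scws extension is not loaded, check php.ini\n");
}
$dict = '/usr/local/scws/etc/dict.xdb';
if (!is_readable($dict)) {
    // An unreadable dictionary is exactly what makes SCWS fall back to
    // splitting every Chinese character out on its own.
    die("dictionary $dict is not readable\n");
}
echo "SCWS looks ready\n";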


After the deployment is correct, modify Sphider: open the spider file in the admin folder and first add the code that initializes the word segmenter.

$cws = scws_new();
$cws->set_charset('gbk');
$cws->set_rule('/usr/local/scws/etc/rules.ini'); // note the path
$cws->set_dict('/usr/local/scws/etc/dict.xdb');
$cws->set_ignore(true); // ignore punctuation and symbols

Note the gbk charset used here. If your web pages are UTF-8 encoded, you need to change this setting, as well as the dictionary and rule files.
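It can also help to test segmentation outside Sphider first. The sketch below assumes the UTF-8 dictionary and rule files shipped with the SCWS distribution (dict.utf8.xdb and rules.utf8.ini); adjust the charset and paths to match your setup:

<?php
// Stand-alone segmentation test (a sketch using the assumed UTF-8
// dictionary and rules from the SCWS distribution).
$cws = scws_new();
$cws->set_charset('utf8');
$cws->set_dict('/usr/local/scws/etc/dict.utf8.xdb');
$cws->set_rule('/usr/local/scws/etc/rules.utf8.ini');
$cws->set_ignore(true);
$cws->send_text('我是一个中文分词测试');
// get_result() returns one batch of words per call, false when done
while ($words = $cws->get_result()) {
    foreach ($words as $w) {
        echo $w['word'], ' ';
    }
}
echo "\n";
$cws->close();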

In the index_url function, replace the original English word segmentation, $wordarray = unique_array(explode(" ", $data['content'])); with the following:

$cws->send_text($data['content']);
// take up to 1000 top words; $xattr filters by part-of-speech attribute
$list = $cws->get_tops(1000, $xattr);
settype($list, 'array');
$wordarray = array();
$i = 0;
// build Sphider's word array: [1] => word, [2] => occurrence count
foreach ($list as $tmp) {
    $wordarray[$i][1] = $tmp['word'];
    $wordarray[$i][2] = $tmp['times'];
    $i++;
}

Then delete

$wordarray = unique_array(explode(" ", $data['content']));

and

$wordarray = calc_weights($wordarray, $title, $host, $path, $data['keywords']);

These two statements can go because Sphider's original English word segmentation is completely unnecessary here. We can also limit and optimize $wordarray on our own; what I wrote here is very simple.
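For example, one possible pass (my own sketch, not part of Sphider or the modification above) skips bare numbers and caps the list; since get_tops already returns the highest-ranked words first, the cap keeps the most useful ones:

// Simple cleanup of $wordarray before indexing (a sketch under my own
// assumptions, not Sphider code).
$maxwords = 500; // hypothetical cap, tune as needed
$filtered = array();
$i = 0;
foreach ($wordarray as $entry) {
    if (is_numeric($entry[1])) {
        continue; // bare numbers are rarely useful search terms
    }
    $filtered[$i][1] = $entry[1];
    $filtered[$i][2] = $entry[2];
    $i++;
    if ($i >= $maxwords) {
        break;
    }
}
$wordarray = $filtered;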

After the modification is complete, the crawler segments Chinese properly, and the results are good. Note that if garbled characters appear, check whether the web page and the dictionary use matching encodings (UTF-8 versus GB2312/GBK).
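If the page encoding cannot be guaranteed, one option (again a sketch, not from the original modification) is to normalize the text to the dictionary's encoding before sending it to SCWS:

// Convert page text to the dictionary's encoding before segmentation
// (assumes the GBK dictionary configured above; requires ext/mbstring
// and ext/iconv).
$content = $data['content'];
if (mb_detect_encoding($content, 'UTF-8', true) !== false) {
    // Page looks like valid UTF-8 but the dictionary is GBK: convert,
    // dropping any characters with no GBK equivalent.
    $content = iconv('UTF-8', 'GBK//IGNORE', $content);
}
$cws->send_text($content);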
