Using Word2vec on the Wikipedia corpus to find synonyms and similarity (Windows, Python 3.5)


Let's start by listing everything you need to download.

1. Corpus: https://dumps.wikimedia.org/zhwiki/latest/zhwiki-latest-pages-articles.xml.bz2, or browse https://dumps.wikimedia.org/zhwiki/. This file contains only the titles and article text, without the link information between entries; it is about 1.3 GB.

2. WikiExtractor: used to extract the titles and body text from the original XML file. Address: https://github.com/attardi/wikiextractor/blob/master/WikiExtractor.py. Because this file does not depend on any other files or libraries, just create a new script locally and copy the source of WikiExtractor.py into it.

3. Traditional-to-simplified conversion tool: since the original article text may mix traditional and simplified Chinese, it must be converted to simplified. On Linux you can wget it and use it directly from the terminal; the Windows version can only be downloaded manually, address: https://code.google.com/archive/p/opencc/downloads. After unzipping it is ready to use (a usage sketch follows the file list below).

At this point there are three items: zhwiki-latest-pages-articles.xml.bz2, WikiExtractor.py, and the folder opencc-0.4.2 (link: https://bintray.com/byvoid/opencc/OpenCC).
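For the conversion step, this OpenCC version is invoked from the command line roughly as follows (a hedged sketch: the file names are placeholders, and zht2zhs.ini is the traditional-to-simplified configuration shipped with OpenCC 0.4.x):

    opencc -i wiki.zh.text -o wiki.zh.simplified.text -c zht2zhs.ini

On Windows, run this from inside the unzipped opencc-0.4.2 folder, or add that folder to PATH.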

1. First we need to get Wikipedia's Chinese corpus. This file is very large, so the download takes a while;

the dump index is: https://dumps.wikimedia.org/zhwiki/

2. From https://dumps.wikimedia.org/zhwiki/latest/zhwiki-latest-pages-articles.xml.bz2 we get the 1.45 GB Chinese corpus zhwiki-latest-pages-articles.xml.bz2.

3. The content is stored in XML format, so we still need to process it (convert it into a plain text document).

There are two ways to extract this:

(1) Extraction with the process_wiki.py script (I tried it several times without success: after parsing, opening the resulting text file got no response, and it produced a pile of files that I did not know what to do with).

To run it: in the file's directory, execute python process_wiki.py zhwiki-latest-pages-articles.xml.bz2 wiki.zh.text
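For reference, the core of a script like process_wiki.py is gensim's WikiCorpus class, which streams articles out of the compressed XML dump. A minimal sketch, assuming gensim is installed (this is an approximation of the idea, not necessarily the original script):

    import logging
    import sys
    from gensim.corpora import WikiCorpus

    if __name__ == '__main__':
        logging.basicConfig(format='%(asctime)s: %(levelname)s: %(message)s',
                            level=logging.INFO)
        inp, outp = sys.argv[1], sys.argv[2]
        # Parse the .xml.bz2 dump lazily; dictionary={} skips building a vocabulary
        wiki = WikiCorpus(inp, dictionary={})
        with open(outp, 'w', encoding='utf-8') as out:
            for i, tokens in enumerate(wiki.get_texts()):
                out.write(' '.join(tokens) + '\n')  # one article per line
                if (i + 1) % 10000 == 0:
                    logging.info('Saved %d articles', i + 1)

Each article comes out as one whitespace-joined line, which is the format the later word2vec step expects.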

(2) Extraction with WikiExtractor.py: https://github.com/attardi/wikiextractor/blob/master/WikiExtractor.py

On the command line, change into the current folder and run: python WikiExtractor.py -b 500M -o extracted zhwiki-latest-pages-articles.xml.bz2
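WikiExtractor writes its output under the extracted/ directory (the -o option) as chunk files of up to 500 MB each (the -b option), typically named like extracted/AA/wiki_00, with every article wrapped in <doc ...> ... </doc> tags. A small sketch, with assumed file names, for stripping those tags and merging everything into one plain-text file:

    import glob
    import re

    doc_tag = re.compile(r'</?doc[^>]*>')  # matches the <doc ...> and </doc> markers

    with open('wiki.zh.text', 'w', encoding='utf-8') as out:
        # WikiExtractor names its chunk files like extracted/AA/wiki_00
        for path in sorted(glob.glob('extracted/*/wiki_*')):
            with open(path, encoding='utf-8') as f:
                for line in f:
                    if not doc_tag.match(line):
                        out.write(line)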

The extraction takes about an hour in total, and the later part runs very slowly.
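From here, the step the title is building toward is training word2vec on the cleaned corpus and querying it for similar words. A hedged sketch with gensim, assuming the text has already been converted to simplified Chinese and word-segmented into space-separated tokens (file names are placeholders; gensim versions before 4.0 call the vector_size parameter size):

    from gensim.models import Word2Vec
    from gensim.models.word2vec import LineSentence

    # wiki.zh.seg.text: simplified, word-segmented corpus, one article per line
    model = Word2Vec(LineSentence('wiki.zh.seg.text'),
                     vector_size=200, window=5, min_count=5, workers=4)
    model.save('wiki.zh.model')

    # Query the nearest neighbours of a word, i.e. its synonym-like terms
    for word, score in model.wv.most_similar('足球', topn=10):
        print(word, score)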

 
