Let's start by listing everything you need to download.
1. Corpus: https://dumps.wikimedia.org/zhwiki/latest/zhwiki-latest-pages-articles.xml.bz2, or browse the listing here: https://dumps.wikimedia.org/zhwiki/. This file contains only the titles and article text; it does not contain the link information between entries. The size is about 1.3 GB.
2. WikiExtractor: used to extract the titles and bodies from the original XML file. Address: https://github.com/attardi/wikiextractor/blob/master/WikiExtractor.py. Because this script does not depend on any other files or libraries, you can simply create a new script locally and paste the source code into WikiExtractor.py.
3. Traditional-to-simplified conversion tool: since the original text of an entry may mix traditional and simplified Chinese, it needs to be converted to simplified. On Linux you can fetch it with wget directly from the terminal and use it right away; the Windows version has to be downloaded manually from https://code.google.com/archive/p/opencc/downloads. After downloading, simply unpack it (a sample conversion command is shown after this list).
At this point you have three things: zhwiki-latest-pages-articles.xml.bz2, WikiExtractor.py, and the unpacked folder opencc-0.4.2 (link: https://bintray.com/byvoid/opencc/OpenCC).
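For the traditional-to-simplified step, opencc is driven from the command line. A minimal example, assuming the opencc 0.4.2 binary is on your PATH and using the zht2zhs.ini (traditional-to-simplified) profile bundled with that release; the file names here are only placeholders for whatever text file you want to convert:
opencc -i wiki.zh.text -o wiki.zh.text.jian -c zht2zhs.ini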
1. First we need to get Wikipedia's Chinese corpus. The file is very large, so the download takes a while; the listing page is: https://dumps.wikimedia.org/zhwiki/
2. From https://dumps.wikimedia.org/zhwiki/latest/zhwiki-latest-pages-articles.xml.bz2 we obtain the 1.45 GB Chinese corpus zhwiki-latest-pages-articles.xml.bz2.
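On Linux you can fetch the dump directly from the terminal, for example:
wget https://dumps.wikimedia.org/zhwiki/latest/zhwiki-latest-pages-articles.xml.bz2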
3. The content is stored in XML format, so we still need to process it (convert it to a plain-text document).
There are two ways to extract this:
(1) Extraction with the process_wiki.py script (I tried this several times without success: after running it, the parsed text file never seemed to appear, it just output a bunch of files, and I could not figure out what was going wrong).
How to run: in the directory containing the file, run python process_wiki.py zhwiki-latest-pages-articles.xml.bz2 wiki.zh.text
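For reference, process_wiki.py in tutorials of this kind is usually a short gensim script along the following lines. This is only a sketch built on gensim's WikiCorpus API, not necessarily the exact script referred to here; the input and output names come from the command above.

import logging
import sys
from gensim.corpora import WikiCorpus

if __name__ == '__main__':
    logging.basicConfig(format='%(asctime)s %(levelname)s %(message)s', level=logging.INFO)
    inp, outp = sys.argv[1], sys.argv[2]  # dump .xml.bz2 in, plain text out
    wiki = WikiCorpus(inp, dictionary={})  # dictionary={} skips building a vocabulary
    with open(outp, 'w', encoding='utf-8') as out:
        for i, tokens in enumerate(wiki.get_texts()):
            # get_texts() yields one token list per article (very old gensim
            # versions yield bytes, which would need .decode('utf-8') here)
            out.write(' '.join(tokens) + '\n')
            if (i + 1) % 10000 == 0:
                logging.info('Saved %d articles', i + 1)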
(2) Extraction with the WikiExtractor.py script: https://github.com/attardi/wikiextractor/blob/master/WikiExtractor.py
From the command line, change into the current folder and run: python WikiExtractor.py -b 500M -o extracted zhwiki-latest-pages-articles.xml.bz2
Extraction results: the run takes about an hour, and it gets very slow toward the end.
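Once the extraction finishes, the articles sit in shard files such as extracted/AA/wiki_00, wiki_01, ..., with each article wrapped in <doc ...> ... </doc> markers. Before the simplification and training steps you typically merge them into one plain-text file; here is a minimal sketch (this helper is not part of WikiExtractor, the output file name is made up, and the paths assume the -o extracted option used above):

import glob
import io

# Concatenate all extracted shards into a single file, dropping the
# <doc ...> / </doc> lines that WikiExtractor wraps around each article.
with io.open('wiki.zh.raw.txt', 'w', encoding='utf-8') as out:
    for path in sorted(glob.glob('extracted/*/wiki_*')):
        with io.open(path, encoding='utf-8') as f:
            for line in f:
                if line.startswith('<doc') or line.startswith('</doc'):
                    continue
                out.write(line)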
Word2vec on the Wikipedia corpus to find synonym similarity (Windows, Python 3.5)