Previously I wrote my own word-segmentation code; the results were good, but it was quite a hassle. Recently I started using the Python "jieba" (结巴, "stutter") module for word segmentation, and it is very convenient. Here I share a small program, which I hope will be helpful.
The following program, test.py, segments the contents of a text file:
#!/usr/bin/python
# -*- encoding: utf-8 -*-
import jieba  # import the jieba module

def splitSentence(inputFile, outputFile):
    fin = open(inputFile, 'r')    # open the input file for reading
    fout = open(outputFile, 'w')  # open the output file for writing
    for eachLine in fin:
        # strip leading/trailing whitespace and decode to Unicode for processing
        line = eachLine.strip().decode('utf-8', 'ignore')
        wordList = list(jieba.cut(line))  # segment the line with jieba
        outStr = ''
        for word in wordList:
            outStr += word
            outStr += '/'
        # write the segmented result to the output file
        fout.write(outStr.strip().encode('utf-8') + '\n')
    fin.close()
    fout.close()

splitSentence('myInput.txt', 'myOutput.txt')
After saving the program, type python test.py at a Linux terminal to run it and perform the segmentation.
The contents of the input file are as follows:
After segmenting with jieba, the output is as follows:
Note: jieba.cut() returns a generator (an iterable), so it is wrapped in list(jieba.cut(...)) above to convert the result to a list.
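For a quick interactive check of this behavior (Python 3 syntax; the sample sentence is the one used in the jieba documentation, and the exact token boundaries may vary with the dictionary version):

import jieba

sentence = "我来到北京清华大学"  # sample sentence from the jieba documentation

gen = jieba.cut(sentence)   # a generator: it can only be consumed once
print(list(gen))            # materialized, typically: ['我', '来到', '北京', '清华大学']

# Current versions of jieba also provide lcut(), which returns a list directly.
print(jieba.lcut(sentence))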
"Python" uses Python's "stutter" Module for Word segmentation