Inspired by this article: http://yixuan.cos.name/cn/2011/03/text-mining-of-song-poems/
I figured Python should be well suited to this kind of text processing: run a word-frequency analysis on a chat log and you can see everyone's speaking habits.
The approach is brute force: there is no semantic analysis, it simply lists every word that appears and how often.
I think the hard part is the Chinese encoding; the encoding conversions involved in handling Chinese in Python really take some thought.
First, save the data file in UTF-8 format.
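If the raw export is not already UTF-8, it can be converted first. A minimal sketch (not from the original post; the input name 'chat_export.txt' and the source codec 'gbk' are assumptions, adjust them to the actual file):

import codecs
src = codecs.open('chat_export.txt', 'r', 'gbk')              # hypothetical source file
dst = codecs.open('blog.csdn.net.boksic.txt', 'w', 'utf-8')   # UTF-8 copy used by the script below
dst.write(src.read())
src.close()
dst.close()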
Key code for displaying Chinese in Python:
import sys
reload(sys)
sys.setdefaultencoding('utf8')
txt.encode('gb18030')
(here txt is a Chinese string)
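For example (an illustrative sketch, not from the post), with the default encoding set as above this is enough to print a UTF-8 byte string on a GB18030 console such as the Windows command window:

txt = '\xe4\xb8\xad\xe6\x96\x87'     # UTF-8 bytes of two Chinese characters
print txt.encode('gb18030')          # implicit utf8 decode, then gb18030 encode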
Searching and matching the Chinese with a regular expression:
r = re.compile('[\x80-\xff]+')
m = r.findall(txt)
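For instance (an illustrative example, not from the post), on a UTF-8 byte string mixed with ASCII the pattern pulls out the contiguous non-ASCII runs:

import re
s = 'abc \xe4\xb8\xad\xe6\x96\x87 123'         # ASCII plus the UTF-8 bytes of two Chinese characters
print re.compile('[\x80-\xff]+').findall(s)    # ['\xe4\xb8\xad\xe6\x96\x87']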
Sorting the dictionary by value; the code is very concise:
dict = sorted(dict.items(), key=lambda d: d[1])
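A quick illustration with made-up counts (not from the post):

freq = {'wind': 230, 'spring': 157, 'flower': 179}
print sorted(freq.items(), key=lambda d: d[1])
# [('spring', 157), ('flower', 179), ('wind', 230)]  -- ascending by count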
Code:
#coding=utf-8
#Author: http://blog.csdn.net/boksic
import sys, re
reload(sys)
sys.setdefaultencoding('utf8')

txt = open('blog.csdn.net.boksic.txt', 'r').read()
wfile = open('result.txt', 'w')

r = re.compile('[\x80-\xff]+')      # runs of non-ASCII (Chinese) bytes
m = r.findall(txt)
dict = {}
z1 = re.compile('[\x80-\xff]{2}')   # one GB18030 character (2 bytes)
z2 = re.compile('[\x80-\xff]{4}')   # two characters
z3 = re.compile('[\x80-\xff]{6}')   # three characters
z4 = re.compile('[\x80-\xff]{8}')   # four characters
for i in m:
    x = i.encode('gb18030')         # re-encode the UTF-8 run as GB18030
    i = z1.findall(x)               # single characters
    #i += z2.findall(x)             # two-character candidates
    #i += z2.findall(x[2:])
    #i += z3.findall(x)             # three-character candidates
    #i += z3.findall(x[2:])
    #i += z3.findall(x[4:])
    #i += z4.findall(x)             # four-character candidates
    #i += z4.findall(x[2:])
    #i += z4.findall(x[4:])
    #i += z4.findall(x[6:])
    for j in i:
        if j in dict:
            dict[j] += 1
        else:
            dict[j] = 1
dict = sorted(dict.items(), key=lambda d: d[1])
for a, b in dict:
    if b > 0:
        wfile.write(a + ',' + str(b) + '\n')
I felt this matching code wasn't very elegant, so I changed it to search directly on the UTF-8 encoded text:
for l in range(len(i)/3):
    x += [i[3*l:3*l+3]]     # single characters (3 bytes each in UTF-8)
for l in range(len(i)/3-1):
    x += [i[3*l:3*l+6]]     # two-character substrings
for l in range(len(i)/3-2):
    x += [i[3*l:3*l+9]]     # three-character substrings
But in practice this turned out to be too slow, and it sometimes threw errors; I'd welcome pointers from more experienced readers on this part.
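One possible fix (a sketch under my own assumptions, not the code the post ends up using) is to decode to unicode first and slice characters rather than bytes, which avoids assuming every character is exactly 3 bytes in UTF-8:

# -*- coding: utf-8 -*-
# Sketch: count 1- to 3-character substrings over decoded unicode text.
import re
text = open('blog.csdn.net.boksic.txt', 'r').read().decode('utf-8')
freq = {}
for chunk in re.findall(u'[\u4e00-\u9fa5]+', text):   # runs of Chinese characters
    for n in (1, 2, 3):                               # substring lengths
        for k in range(len(chunk) - n + 1):
            w = chunk[k:k+n]
            freq[w] = freq.get(w, 0) + 1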
In the end I kept the regular-expression version: although the code is more verbose, it finishes the statistics for a 500,000-character file in under a second.
(I don't quite understand why Python's regex search here is so much faster than the array/slicing access.)
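A rough way to compare the two approaches on the same file (illustrative only; the timings will vary by machine and file):

import re, time
data = open('blog.csdn.net.boksic.txt', 'r').read()

t0 = time.time()
a = re.findall('[\x80-\xff]{3}', data)                   # regex scan
t1 = time.time()
b = [data[3*k:3*k+3] for k in range(len(data)/3)]        # slicing scan
t2 = time.time()

print 'regex %.3fs, slicing %.3fs' % (t1 - t0, t2 - t1)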
Since this method does no semantic analysis, the results need some manual filtering.
Statistics for the chat log:
Low-frequency words
High-frequency words
Multi-character words
Analysis of the Tang and Song ci (lyric poems):
Single-character words
Incense, 106
HO, 107
There are, 109
Night, 109
Day, 111
Thousand, 114
Years, 114
Yes, 114
When, 115
Phase, 117
Rain, 118
Months, 121
Office, 128
Cloud, 133
Mountain, 141
Spring, 157
Come, 160
Days, 163
Flowers, 179
One, 184
No, 189
None, 193
Wind, 230
Man, 276
Multi-character words
Return, 14
Moon, 14
West Wind, 15
Yingying, 15
Not seen, 16
Miles, 17
How much, 17
Acacia, 18
Merry, 18
That year, 18
Raccoon Creek, 19
Looking back, 19
Teen, 20
No one, 20
A thousand li, 22
World, 24
Where, 31