A Simple Example of Chinese Word Frequency Statistics in Python


This article walks through a simple example of Chinese word frequency statistics implemented in Python.

Task

Which Chinese characters occur most frequently in a novel?

Knowledge Points

1. File Operations
2. Dictionary
3. Sorting
4. lambda

Code

import codecs
import matplotlib.pyplot as plt
from pylab import mpl

mpl.rcParams['font.sans-serif'] = ['SimHei']  # specify a default font that can render Chinese (use any Chinese font installed on your system)
mpl.rcParams['axes.unicode_minus'] = False    # keep the minus sign '-' from being drawn as a square

word = []
counter = {}
with codecs.open('data.txt') as fr:
    for line in fr:
        line = line.strip()
        if len(line) == 0:
            continue
        for w in line:
            if w not in word:
                word.append(w)
            if w not in counter:
                counter[w] = 1
            else:
                counter[w] += 1

counter_list = sorted(counter.items(), key=lambda x: x[1], reverse=True)
print(counter_list[:50])

label = list(map(lambda x: x[0], counter_list[:50]))
value = list(map(lambda y: y[1], counter_list[:50]))
plt.bar(range(len(value)), value, tick_label=label)
plt.show()
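For comparison, the same character counting can be done more compactly with collections.Counter from the standard library. The snippet below is only a minimal sketch, assuming data.txt is UTF-8 encoded; the plotting code works the same way on the result of counter.most_common(50).

from collections import Counter

counter = Counter()
with open('data.txt', encoding='utf-8') as fr:  # assumes the novel is stored as UTF-8
    for line in fr:
        line = line.strip()
        if line:
            counter.update(line)  # Counter.update on a string counts each character

print(counter.most_common(50))  # the 50 most frequent (character, count) pairs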

Below is the result for an 11 MB novel:

(The printed list contains the 50 most frequent characters and their counts. Punctuation tops the list, with the comma at 288,508 occurrences and the full stop at 261,584; the remaining entries are common characters, with counts falling to roughly 20,000 by the 50th entry.)

1. First, use jieba to segment the Chinese text

# -*- coding: utf-8 -*-
# Python 2 code
import sys
reload(sys)
sys.setdefaultencoding("utf-8")
import jieba.analyse

wf = open('clean_title.txt', 'w+')
for line in open('/root/clean_data/clean_data.csv'):
    item = line.strip('\n\r').split('\t')       # split the line on tabs
    # print item[1]
    tags = jieba.analyse.extract_tags(item[1])  # jieba keyword extraction
    tagsw = ",".join(tags)                      # join the extracted words with commas
    wf.write(tagsw)
wf.close()
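To see what extract_tags returns before running it over the whole CSV, it can be tried on a single sentence. This is only an illustrative sketch; the sample sentence and the topK value are made up, and it is written in Python 3 style.

# -*- coding: utf-8 -*-
import jieba.analyse

text = u"我们计划七天游览罗马、雅典和爱琴海"      # illustrative sample sentence
tags = jieba.analyse.extract_tags(text, topK=5)  # keep the 5 highest-weighted keywords
print(u",".join(tags))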

Sample contents of clean_title.txt:
cruise ship, Mediterranean, Rome, Berlin, visa, Schengen, application, wine, scenery, valley, country, aquarium, Palau, seven days, Santa Claus, ancient civilization, Aegean Sea, charm, Greece, ...

2. Count Word Frequency

#!/usr/bin/python
# -*- coding: utf-8 -*-
# Python 2 code
word_lst = []
word_dict = {}
with open('/root/clean_data/clean_title.txt') as wf, open("word.txt", 'w') as wf2:  # open the input and output files
    for word in wf:
        word_lst.append(word.split(','))  # split each line on commas
    for item in word_lst:
        for item2 in item:
            if item2 not in word_dict:    # count the words
                word_dict[item2] = 1
            else:
                word_dict[item2] += 1
    for key in word_dict:
        print key, word_dict[key]
        wf2.write(key + ' ' + str(word_dict[key]) + '\n')  # write each word and its count

Result:

Last 4
Europe Blue 1
Jimei 1
French 1 (Portugal)
Site 1
Know Lake light and mountain colors 1
Holy 7
European girly Swiss Canada Game 1

Sort the words by count:

cat word.txt | sort -nr -k 2 | more

Holy 7
Last 4
Europe Blue 1
Jimei 1
French 1 (Portugal)
Site 1
Know Lake light and mountain colors 1
European girly Swiss Canada Game 1
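If you prefer to stay in Python instead of shelling out to sort, the same ordering can be produced with sorted() and a lambda key, just as in the first example. A minimal Python 3 sketch, assuming each line of word.txt is a word followed by a space and its count:

# print the words in word.txt ordered by descending count
pairs = []
with open('word.txt') as f:
    for line in f:
        line = line.strip()
        if line:
            pairs.append(line.rsplit(' ', 1))  # split off the trailing count

pairs.sort(key=lambda p: int(p[1]), reverse=True)  # sort by count, largest first
for w, n in pairs[:20]:                            # show the top 20
    print(w, n)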

That is all for this article. I hope it is helpful for your learning.
