As mentioned in the previous article, I scraped big-data-related job postings from http://www.17bigdata.com/jobs/.
# -*- coding: utf-8 -*-
"""
Created on Thu 07:57:56 2017

@author: lenovo
"""
import os

import jieba
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from wordcloud import WordCloud


def cloud(root, name, stopwords):
    # Read one job-posting text file.
    filepath = root + '\\' + name
    f = open(filepath, 'r', encoding='utf-8')
    txt = f.read()
    f.close()
    # Segment the text with jieba and count word frequencies.
    cut = jieba.cut(txt)
    words = []
    for i in cut:
        words.append(i)
    df = pd.DataFrame({'words': words})
    s = (df.groupby(df['words'])['words']
           .agg([('size', np.size)])
           .sort_values(by='size', ascending=False))
    # Drop stop words, then keep the word -> frequency mapping.
    s = s[~s.index.isin(stopwords['stopword'])].to_dict()
    # Draw the word cloud from the frequency dict and save it as a PNG.
    wordcloud = WordCloud(font_path=r'E:\Python\machine learning\simhei.ttf',
                          background_color='black')
    wordcloud.fit_words(s['size'])
    plt.imshow(wordcloud)
    pngfile = root + '\\' + name.split('.')[0] + '.png'
    wordcloud.to_file(pngfile)


jieba.load_userdict(r'E:\Python\machine learning\nlpstopwords.txt')
stopwords = pd.read_csv(r'E:\Python\machine learning\stopwordscn.txt',
                        encoding='utf-8', index_col=False)
# Walk the folder of scraped postings and build one word cloud per .txt file.
for root, dirs, files in os.walk(r'e:\job information'):
    for name in files:
        if name.split('.')[-1] == 'txt':
            print(name)
            cloud(root, name, stopwords)
Word Cloud:
As the result shows, some noise words were not removed, such as "related" and other uninformative words like the education requirements above. I had intended to use document frequency (DF) to identify stop words, but I did not keep that information when scraping, and since the number of records is small, I can no longer recover which posting each word came from. Even so, it is clear that algorithms and experience are what matter. Keep going!
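The DF idea mentioned above can be sketched as follows. This is a minimal, hypothetical example (the `docs` data and the threshold are illustrative, not from the scraped dataset): it counts, for each word, how many postings it appears in, and treats words that occur in every posting as candidate stop words, since they carry no discriminating information.

```python
import pandas as pd

# Hypothetical segmented postings; in practice each inner list would be
# the jieba.cut output for one job description.
docs = [
    ['big', 'data', 'experience', 'required'],
    ['data', 'algorithm', 'experience'],
    ['data', 'related', 'experience', 'related'],
]

# Document frequency: in how many postings does each word appear?
# set(doc) ensures a word is counted at most once per posting.
df_counts = pd.Series([w for doc in docs for w in set(doc)]).value_counts()

# Words present in (almost) every posting are candidate stop words;
# the threshold here (all postings) is an illustrative choice.
n_docs = len(docs)
stop = set(df_counts[df_counts >= n_docs].index)
print(sorted(stop))  # -> ['data', 'experience']
```

Making this work requires keeping per-posting word lists at scraping time, which is exactly what the script above does not do (it only sees one merged text file per folder entry).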
Python generates career requirements word cloud