We used two kinds of feature-extraction methods:
1. Word-frequency statistics
2. Keyword extraction (TF-IDF)
Keyword extraction turned out to work better.
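Before the walkthrough, the difference between the two schemes can be seen in a minimal pure-Python sketch. The toy sentences and the `tfidf` helper here are illustrative only, not part of the pipeline below: word-frequency statistics keep raw counts, while TF-IDF down-weights words that appear in every document.

```python
import math
from collections import Counter

# Toy corpus (made-up sentences) to contrast the two feature schemes.
docs = [
    "the car engine roared".split(),
    "the stock market fell".split(),
    "the car market grew".split(),
]

# 1) Word-frequency statistics: raw counts per document.
freq = [Counter(d) for d in docs]

# 2) TF-IDF keyword weighting: words frequent in every document get a low score.
def tfidf(word, doc, corpus):
    tf = doc.count(word) / len(doc)
    df = sum(1 for d in corpus if word in d)
    idf = math.log(len(corpus) / df)
    return tf * idf

# "the" appears in all three documents, so its idf (and tf-idf) is zero,
# while a discriminative word like "engine" keeps a positive weight.
print(tfidf("the", docs[0], docs))        # 0.0
print(tfidf("engine", docs[0], docs) > 0)  # True
```

This is why keyword extraction tends to help classification: uninformative words stop dominating the feature vectors.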
Step one: read the data
import pandas as pd

# Read the data; the columns are ['category', 'theme', 'URL', 'content']
df_new = pd.read_table('./data/val.txt',
                       names=['category', 'theme', 'URL', 'content'],
                       encoding='utf-8')
df_new = df_new.dropna()  # remove rows with missing values
print(df_new.head())
Step two: preprocess the data by splitting the content of each line into words
import jieba

# Convert the 'content' column of df_new to a list
content = df_new.content.values.tolist()

# Split each line into words with jieba, skipping empty lines
content_S = []
for line in content:
    current_segment = jieba.lcut(line)
    if len(current_segment) > 1 and current_segment != '\r\n':
        content_S.append(current_segment)
print(content_S[1000])
Step three: compare the content against the stopword list and remove any stopwords
# Put the tokenised content into a DataFrame
df_content = pd.DataFrame({'content_S': content_S})
print(df_content.head())

# Load the stopword list
stopwords = pd.read_csv('stopwords.txt', index_col=False, sep='\t',
                        quoting=3, names=['stopword'], encoding='utf-8')

# Compare against the stopword list and drop any word that appears in it
def drop_stopwords(contents, stopwords):
    contents_clean = []
    all_words = []  # kept for word-frequency statistics
    for line in contents:
        line_clean = []
        for word in line:
            if word in stopwords:
                continue
            line_clean.append(word)
            all_words.append(str(word))
        contents_clean.append(line_clean)
    return contents_clean, all_words

# Convert the DataFrame columns to lists
contents = df_content.content_S.values.tolist()
stopwords = stopwords.stopword.values.tolist()
contents_clean, all_words = drop_stopwords(contents, stopwords)

# Build a new DataFrame holding the content with stopwords removed
df_content = pd.DataFrame({'contents_clean': contents_clean})
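The filtering logic of this step can be exercised on its own with a made-up stopword list and a couple of hand-tokenised lines (the data below is illustrative only):

```python
def drop_stopwords(contents, stopwords):
    """Remove stopwords from each tokenised line; also collect all kept words."""
    contents_clean, all_words = [], []
    for line in contents:
        line_clean = [w for w in line if w not in stopwords]
        contents_clean.append(line_clean)
        all_words.extend(str(w) for w in line_clean)
    return contents_clean, all_words

# Made-up tokenised lines and stopword list for illustration.
lines = [["this", "is", "a", "car"], ["the", "market", "is", "up"]]
stops = {"this", "is", "a", "the"}

clean, kept = drop_stopwords(lines, stops)
print(clean)  # [['car'], ['market', 'up']]
print(kept)   # ['car', 'market', 'up']
```

Note that `all_words` is flat across all lines, which is what makes it usable for corpus-wide frequency statistics later.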
Step four: build the model. The tokenised data needs one more transformation: ' '.join to reconnect each token list into a space-separated string, and the category labels need to be converted to a numeric type.
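These two transformations can be sketched on toy data first (the tokens and labels below are made up for illustration):

```python
import pandas as pd

# Hypothetical tokenised documents and their category labels.
tokens = [["engine", "oil", "change"], ["stock", "price", "rise"]]
labels = pd.Series(["Automotive", "Finance"])

# ' '.join turns a token list back into one space-separated string,
# which is the input format the vectorisers expect.
joined = [" ".join(t) for t in tokens]
print(joined[0])  # "engine oil change"

# map() converts string labels to the numeric codes the model trains on.
label_mapping = {"Automotive": 1, "Finance": 2}
print(labels.map(label_mapping).tolist())  # [1, 2]
```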
# Text classification based on Bayes
# Build a DataFrame: x is the content, y is the category
df_train = pd.DataFrame({'contents_clean': contents_clean, 'label': df_new['category']})

# See how many categories y has
print(df_train.label.unique())

# To simplify computation, map each label string to a number
label_mapping = {"Automotive": 1, "Finance": 2, "Technology": 3, "Health": 4,
                 "Sports": 5, "Education": 6, "Culture": 7, "Military": 8,
                 "Entertainment": 9, "Fashion": 0}
df_train['label'] = df_train['label'].map(label_mapping)

# Split the data into training and test sets
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(
    df_train['contents_clean'].values, df_train['label'].values, random_state=1)

# Join each sample's tokens into a space-separated string,
# e.g. [['dog', 'cat'], ['fish', 'bird']] -> ['dog cat', 'fish bird']
words = []
for line in x_train:
    try:
        words.append(' '.join(line))
    except:
        print(line)
print(words[0])

# Build the vocabulary statistics (word-frequency features)
from sklearn.feature_extraction.text import CountVectorizer
vec = CountVectorizer(analyzer='word', max_features=4000, lowercase=False)
vec.fit(words)

# Train a naive Bayes model
from sklearn.naive_bayes import MultinomialNB
classifier = MultinomialNB()
classifier.fit(vec.transform(words), y_train)

# Build the test_words strings the same way
test_words = []
for line_index in range(len(x_test)):
    try:
        test_words.append(' '.join(x_test[line_index]))
    except:
        print(line_index)

# The final score on the test set
print(classifier.score(vec.transform(test_words), y_test))
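The count-vectoriser plus naive-Bayes pipeline of this step can be run end to end on a tiny made-up corpus; the documents and labels below are hypothetical, and on data this cleanly separable the toy model classifies both test strings correctly.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny made-up corpus: two classes, already space-joined "documents".
train_words = ["engine oil car", "car wheel engine", "stock bank price", "price market bank"]
y_train = [1, 1, 2, 2]
test_words = ["car engine", "bank market"]
y_test = [1, 2]

# Word-frequency features over the training vocabulary
vec = CountVectorizer(analyzer='word', lowercase=False)
vec.fit(train_words)

# Fit naive Bayes on the counts, then score on the held-out strings
clf = MultinomialNB()
clf.fit(vec.transform(train_words), y_train)
score = clf.score(vec.transform(test_words), y_test)
print(score)  # 1.0 on this separable toy data
```

`score` here is plain accuracy, the same quantity `classifier.score(...)` reports on the real data above.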
Step six: use keyword extraction (TF-IDF features) and check the classification score; compared with the word-frequency model above, the result is better.
# Build TF-IDF features instead of raw counts
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer(analyzer='word', max_features=4000, lowercase=False)
vectorizer.fit(words)

# Train and score the same naive Bayes model on the TF-IDF features
from sklearn.naive_bayes import MultinomialNB
classifier = MultinomialNB()
classifier.fit(vectorizer.transform(words), y_train)
print(classifier.score(vectorizer.transform(test_words), y_test))
Learn algorithms with me: a Bayesian text classifier