Basic steps:
1. Organize the training material by category:
Here I follow the directory structure that scikit-learn's load_files expects:
one sub-directory per category, with one txt file per article inside it, like this:
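The layout can be sketched concretely. Category and file names below are made up for illustration; the point is that each sub-directory name becomes a class label when the directory is read with load_files:

```python
import os
import tempfile

# Build a toy corpus layout: data/<category>/<article>.txt
# (category names and file names are hypothetical)
root = os.path.join(tempfile.mkdtemp(), 'data')
corpus = {
    'positive': {'article_001.txt': '这个 产品 很 好'},
    'negative': {'article_002.txt': '这个 产品 很 差'},
}
for category, articles in corpus.items():
    os.makedirs(os.path.join(root, category))
    for name, text in articles.items():
        with open(os.path.join(root, category, name), 'w', encoding='utf-8') as f:
            f.write(text)

# Each sub-directory name becomes one class label
print(sorted(os.listdir(root)))  # ['negative', 'positive']
```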
Note that the number of articles in each category should be kept roughly balanced (adjust according to the training results). If the classes are badly imbalanced, the model tends to overfit: in plain terms, it will assign most new articles to whichever category has the most training material.
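A quick way to check the balance is to count the labels that load_files exposes as target; the label list below is made up for illustration:

```python
from collections import Counter

# Hypothetical labels as produced by load_files('./data').target;
# each integer indexes into target_names
target = [0, 0, 0, 0, 0, 0, 1, 1]
target_names = ['negative', 'positive']

counts = Counter(target_names[t] for t in target)
print(counts)  # Counter({'negative': 6, 'positive': 2})
# A 3:1 skew like this tends to bias naive Bayes toward 'negative'
```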
Enough talk; straight to the code (it is rough test code, so bear with it).
One small dependency is needed: pip install chinese-tokenizer
This is the trainer:
import jieba
import json
from chinese_tokenizer.tokenizer import Tokenizer
from sklearn.datasets import load_files
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.externals import joblib  # on newer scikit-learn, use `import joblib` instead

jie_ba_tokenizer = Tokenizer().jie_ba_tokenizer

# Load the dataset (one sub-directory per category)
training_data = load_files('./data', encoding='utf-8')

# x_train is the txt content, y_train is the category label;
# the held-out split is discarded here
x_train, _, y_train, _ = train_test_split(training_data.data, training_data.target)
print('start modeling .....')

# Save the category names so the predictor can map labels back to names
with open('training_data.target', 'w', encoding='utf-8') as f:
    f.write(json.dumps(training_data.target_names))

# The tokenizer parameter is the function used to segment the text
# (the jieba-based tokenizer created above)
count_vect = CountVectorizer(tokenizer=jie_ba_tokenizer)
tfidf_transformer = TfidfTransformer()
x_train_counts = count_vect.fit_transform(x_train)
x_train_tfidf = tfidf_transformer.fit_transform(x_train_counts)

print('training classifier .....')
# Train the multinomial naive Bayes classifier
clf = MultinomialNB().fit(x_train_tfidf, y_train)

# Save the classifier (to be used in other programs)
joblib.dump(clf, 'model.pkl')
# Save the vectorizer -- pitfall here!! You must reuse the very same
# fitted vectorizer when predicting, or you will get
# "ValueError: dimension mismatch"
joblib.dump(count_vect, 'count_vect')

print('information about the classifier:')
print(clf)
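As an aside (not from the original post): scikit-learn's Pipeline bundles the vectorizer, the TF-IDF transformer, and the classifier into a single object, so one dump/load call saves everything and the dimension-mismatch pitfall disappears. A minimal sketch on a toy pre-segmented corpus (words already space-separated, so no jieba is needed here):

```python
import pickle
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

# Toy pre-segmented corpus (hypothetical two-character words)
docs = ['推荐 满意 喜欢', '满意 喜欢 推荐', '失望 退货 糟糕', '糟糕 失望 退货']
labels = [1, 1, 0, 0]  # 1 = positive, 0 = negative

pipe = Pipeline([
    ('vect', CountVectorizer()),    # text -> token counts
    ('tfidf', TfidfTransformer()),  # counts -> TF-IDF weights
    ('clf', MultinomialNB()),       # multinomial naive Bayes
])
pipe.fit(docs, labels)

# One object to serialize: vectorizer, transformer and classifier together,
# so prediction code cannot accidentally use a mismatched vectorizer
restored = pickle.loads(pickle.dumps(pipe))
print(restored.predict(['喜欢 推荐']))  # -> [1]
```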
Here is how to classify new articles with the trained classifier:
Put the articles to be classified in the predict_data directory, again one article per txt file:
# -*- coding: utf-8 -*-
# Bayesian classifier (prediction side)

import json
from sklearn.datasets import load_files
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.externals import joblib  # on newer scikit-learn, use `import joblib` instead

# Load the trained classifier and the fitted vectorizer saved by the trainer
clf = joblib.load('model.pkl')
count_vect = joblib.load('count_vect')

testing_data = load_files('./predict_data', encoding='utf-8')
with open('training_data.target', 'r', encoding='utf-8') as f:
    target_names = json.loads(f.read())

# Vectorize the new articles with the SAME vectorizer used for training
x_new_counts = count_vect.transform(testing_data.data)
# Note: strictly speaking the TfidfTransformer fitted during training should
# also be saved and reused; fit_transform here recomputes IDF on the new
# documents, as in the original code
tfidf_transformer = TfidfTransformer()
x_new_tfidf = tfidf_transformer.fit_transform(x_new_counts)

# Make predictions
predicted = clf.predict(x_new_tfidf)
for title, category in zip(testing_data.filenames, predicted):
    print('%r => %s' % (title, target_names[category]))
With this, the trained classifier can be reused in a new program without hitting "ValueError: dimension mismatch".
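To make the pitfall concrete, here is a minimal reproduction on toy data (not from the post): fitting a second CountVectorizer on the new documents produces a different vocabulary, so the matrix handed to the classifier has the wrong width. Older scikit-learn reports this literally as "dimension mismatch"; newer versions phrase it as a feature-count mismatch, but either way it is a ValueError.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy pre-segmented documents (hypothetical words)
train_docs = ['推荐 满意 喜欢', '失望 退货 糟糕']
new_docs = ['喜欢 推荐 好评 给力']  # contains words unseen in training

# Correct: reuse the vectorizer fitted on the training data
vect = CountVectorizer()
clf = MultinomialNB().fit(vect.fit_transform(train_docs), [1, 0])
print(clf.predict(vect.transform(new_docs)))  # same vocabulary width: works

# Wrong: fit a *new* vectorizer on the new documents
bad_vect = CountVectorizer()
try:
    clf.predict(bad_vect.fit_transform(new_docs))
except ValueError as e:
    print('ValueError:', e)  # vocabulary sizes differ -> dimension mismatch
```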