Sesame HTTP: Pitfalls to remember in Bayesian text classification with scikit-learn
Basic steps:
1. Organize the training material by class:
I follow the directory structure that scikit-learn's load_files expects:
one subdirectory per class, and inside each directory one txt file per article, like the following:
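A minimal sketch of the layout (the class names positive and negative are placeholders carried over from the comments in the training code; use your own classes):

data/
    positive/
        article_0001.txt
        article_0002.txt
        ...
    negative/
        article_0003.txt
        article_0004.txt
        ...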
Please note that the classes should contribute roughly equal numbers of articles (adjust according to the training results as appropriate). If the ratio is badly skewed, overfitting comes easily; put simply, the classifier just learns to answer whichever class most of your material belongs to. A quick way to check the distribution is shown below.
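This check is a minimal sketch that only assumes the ./data layout above:

from collections import Counter

from sklearn.datasets import load_files

# load_files treats each subdirectory of ./data as one class
data = load_files('./data', encoding='utf-8')

# Count how many articles each class contributes
print(Counter(data.target_names[i] for i in data.target))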
Now let's look at the code. (It is quick-and-dirty test code, so forgive the style.)
A small tool is required: pip install chinese-tokenizer
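chinese-tokenizer appears to be a thin wrapper around jieba word segmentation. If you would rather skip the extra dependency, jieba's own lcut can serve as the tokenizer callable for CountVectorizer; this is an alternative I am sketching, not what the code below uses:

import jieba
from sklearn.feature_extraction.text import CountVectorizer

# CountVectorizer accepts any callable that turns a string into a token list;
# jieba.lcut segments Chinese text and returns exactly that
count_vect = CountVectorizer(tokenizer=jieba.lcut)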
Here is the trainer:
# -*- coding: utf-8 -*-
import json

import joblib  # the original used the deprecated sklearn.externals.joblib
from chinese_tokenizer.tokenizer import Tokenizer
from sklearn.datasets import load_files
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB

jie_ba_tokenizer = Tokenizer().jie_ba_tokenizer

# Load the dataset: each subdirectory of ./data is one class
training_data = load_files('./data', encoding='utf-8')

# x_train is the txt content, y_train is the class (positive / negative)
x_train, _, y_train, _ = train_test_split(training_data.data, training_data.target)
print('start modeling.....')

# Save the class names so the prediction script can map labels back to names
with open('training_data.target', 'w', encoding='utf-8') as f:
    f.write(json.dumps(training_data.target_names))

# The tokenizer parameter is a function used to segment the text
# (the jieba word segmentation set up above)
count_vect = CountVectorizer(tokenizer=jie_ba_tokenizer)
tfidf_transformer = TfidfTransformer()
X_train_counts = count_vect.fit_transform(x_train)
X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)

print('training classifier.....')
# Train a multinomial Naive Bayes classifier
clf = MultinomialNB().fit(X_train_tfidf, y_train)

# Save the classifier (used in other programs)
joblib.dump(clf, 'model.pkl')
# Save the vectorizer!! Prediction must use the SAME vectorizer as training,
# or you get "ValueError: dimension mismatch"
joblib.dump(count_vect, 'count_vect')

print('classifier information:')
print(clf)
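One caveat before moving on: the fitted TfidfTransformer is never saved, so the prediction script below has to fit TF-IDF again on the new documents, which silently uses different IDF weights. A more robust variant, sketched here under the same assumptions (same ./data layout, same tokenizer; the file name pipeline.pkl is my own choice), bundles vectorizer and classifier into one scikit-learn Pipeline and pickles a single object:

import joblib
from chinese_tokenizer.tokenizer import Tokenizer
from sklearn.datasets import load_files
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

training_data = load_files('./data', encoding='utf-8')

# TfidfVectorizer is CountVectorizer + TfidfTransformer in a single step
pipeline = Pipeline([
    ('tfidf', TfidfVectorizer(tokenizer=Tokenizer().jie_ba_tokenizer)),
    ('clf', MultinomialNB()),
])
pipeline.fit(training_data.data, training_data.target)

# One pickle now holds the vocabulary, the IDF weights and the classifier,
# so training and prediction can never disagree on dimensions
joblib.dump(pipeline, 'pipeline.pkl')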
The following script classifies new articles with the trained classifier.
The articles to classify are stored in the predict_data directory, again one txt file per article.
# -*- coding: utf-8 -*-
# @File     : bayesian_classifier.py
# @Software : PyCharm
import json

import joblib  # sklearn.externals.joblib in older scikit-learn versions
from sklearn.datasets import load_files
from sklearn.feature_extraction.text import TfidfTransformer

# Load the classifier and the SAME vectorizer that was fitted during training
clf = joblib.load('model.pkl')
count_vect = joblib.load('count_vect')

testing_data = load_files('./predict_data', encoding='utf-8')
target_names = json.loads(open('training_data.target', 'r', encoding='utf-8').read())

# Map the new documents into the training vocabulary's count space, then
# apply TF-IDF (note: this refits IDF on the new documents; see the
# Pipeline variant above for a way around that)
tfidf_transformer = TfidfTransformer()
X_new_counts = count_vect.transform(testing_data.data)
X_new_tfidf = tfidf_transformer.fit_transform(X_new_counts)

# Predict a class for each document
predicted = clf.predict(X_new_tfidf)
for title, category in zip(testing_data.filenames, predicted):
    print('%r => %s' % (title, target_names[category]))
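For comparison, the prediction side of the Pipeline variant sketched earlier shrinks to a few lines (pipeline.pkl is the hypothetical file name from that sketch; the predicted category is still the integer label, which you can map through the saved target_names as before):

import joblib
from sklearn.datasets import load_files

pipeline = joblib.load('pipeline.pkl')
testing_data = load_files('./predict_data', encoding='utf-8')

# The pipeline re-applies the training vocabulary and IDF weights by itself
predicted = pipeline.predict(testing_data.data)
for title, category in zip(testing_data.filenames, predicted):
    print('%r => %s' % (title, category))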
Done this way, using the trained classifier from a new program no longer raises ValueError: dimension mismatch.
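If you are curious why that error appears at all, here is a minimal, self-contained sketch of the failure mode (the tiny English corpora are made up purely for illustration): a vectorizer fitted on different text produces a different number of feature columns than the classifier was trained on.

from sklearn.feature_extraction.text import CountVectorizer

train_docs = ['good movie', 'bad movie']
new_docs = ['a surprisingly good and touching movie']

# Fitted on the training corpus, the vocabulary size is fixed
count_vect = CountVectorizer().fit(train_docs)
print(count_vect.transform(new_docs).shape)  # (1, 3) -- matches training

# Refitting on the new corpus yields a different vocabulary, hence a
# different column count and "ValueError: dimension mismatch" downstream
wrong_vect = CountVectorizer().fit(new_docs)
print(wrong_vect.transform(new_docs).shape)  # (1, 5)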