OpenCV-Python Learning Notes (8): Character Recognition with Classifiers (SVM, KNearest, RTrees, Boost, MLP)

OpenCV provides several classifiers; each is introduced below through a character-recognition example.

1. Support Vector Machine (SVM): given the training samples, a support vector machine builds a hyperplane as the decision surface, so that the margin between the positive and negative examples is maximized.

Function prototype: cv2.SVM.train(trainData, responses[, varIdx[, sampleIdx[, params]]])

where trainData is the training data and responses holds the corresponding labels.
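
As a quick illustration, here is a minimal sketch of training and using cv2.SVM on toy 2-D data (this assumes the old OpenCV 2.4 Python bindings used throughout this note; in OpenCV 3 and later the ML classes moved to the cv2.ml module and these exact calls no longer exist):

import numpy as np
import cv2

# Two toy classes in 2-D: points near (0, 0) labeled 0, points near (5, 5) labeled 1.
samples = np.float32(np.vstack([np.random.randn(20, 2),
                                np.random.randn(20, 2) + 5]))
responses = np.float32([0] * 20 + [1] * 20)

model = cv2.SVM()
params = dict(kernel_type=cv2.SVM_LINEAR, svm_type=cv2.SVM_C_SVC, C=1)
model.train(samples, responses, params=params)

print model.predict(np.float32([4.8, 5.2]))  # expect 1.0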

2. K-Nearest Neighbors (KNearest): k-nearest neighbors is a lazy learning method, and with a large data set the algorithm is computationally intensive. The method is based on learning by analogy: a given test tuple is compared with the training tuples similar to it. Each training tuple is described by n attributes. Given an unknown tuple, k-nearest neighbors finds the k training tuples closest to it, and the unknown tuple is assigned the majority class among those k nearest neighbors.

Function prototype: cv2.KNearest.train(trainData, responses[, sampleIdx[, isRegression[, maxK[, updateBase]]]])

where trainData is the training data, responses holds the corresponding labels, isRegression indicates whether regression or classification is performed, and maxK is the maximum number of neighbors.
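
A minimal sketch with the same kind of toy data (again assuming the OpenCV 2.4 bindings); note that KNearest does almost no work at training time and defers the cost to find_nearest:

import numpy as np
import cv2

samples = np.float32(np.vstack([np.random.randn(20, 2),
                                np.random.randn(20, 2) + 5]))
responses = np.float32([0] * 20 + [1] * 20)

model = cv2.KNearest()
model.train(samples, responses)  # lazy learner: essentially just stores the tuples

# find_nearest returns the voted label plus the k neighbors' labels and distances
retval, results, neigh_resp, dists = model.find_nearest(np.float32([[0.3, -0.2]]), k=5)
print results.ravel()  # expect [ 0.]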

3. Random Trees (RTrees): each node of an individual decision tree chooses its split from a randomly selected subset of the attributes; each tree depends on an independently sampled random vector drawn from the same distribution as for all other trees in the forest. For classification, each tree votes and the class with the most votes is returned.

Function prototype: cv2.RTrees.train(trainData, tflag, responses[, varIdx[, sampleIdx[, varType[, missingDataMask[, params]]]]])

where trainData is the training data, tflag indicates whether the feature vectors are stored as rows or columns, and responses holds the corresponding labels.
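
A minimal sketch (OpenCV 2.4 bindings assumed); the varType array declares every feature numerical and the response categorical, mirroring what the full program below does:

import numpy as np
import cv2

samples = np.float32(np.vstack([np.random.randn(20, 2),
                                np.random.randn(20, 2) + 5]))
responses = np.float32([0] * 20 + [1] * 20)

model = cv2.RTrees()
# two numerical features, one categorical response
var_types = np.array([cv2.CV_VAR_NUMERICAL] * 2 + [cv2.CV_VAR_CATEGORICAL], np.uint8)
model.train(samples, cv2.CV_ROW_SAMPLE, responses,
            varType=var_types, params=dict(max_depth=10))
print model.predict(np.float32([5.1, 4.9]))  # expect 1.0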

4. Boosting (Boost): a weight is assigned to each training tuple. K classifiers are learned iteratively; after classifier Mi is learned, the weights are updated so that the next classifier, Mi+1, pays more attention to the tuples Mi misclassified. AdaBoost is a popular boosting algorithm. Given a data set D of d class-labeled tuples, each training tuple initially receives an equal weight of 1/d. Generating the k base classifiers for the combined classifier takes k rounds. In round i, tuples are sampled from D to form a training set Di of size d. Sampling with replacement is used, so the same tuple may be selected more than once; each tuple's chance of being selected is determined by its weight. Classifier Mi is derived from Di, and Di is then used as a test set to compute Mi's error. If a tuple is misclassified, its weight is increased; if it is classified correctly, its weight is decreased. The higher a tuple's weight, the more likely it is to be misclassified again, and these weights are used to draw the training sample for the next round's classifier.

Function prototype: cv2.Boost.train(trainData, tflag, responses[, varIdx[, sampleIdx[, varType[, missingDataMask[, params[, update]]]]]])
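
CvBoost is inherently a two-class classifier, which is why the full program below unrolls the 26-class letter data; on a genuinely binary problem it can be used directly, as in this minimal sketch (OpenCV 2.4 bindings assumed; max_depth=1 makes the weak learners decision stumps):

import numpy as np
import cv2

samples = np.float32(np.vstack([np.random.randn(20, 2),
                                np.random.randn(20, 2) + 5]))
responses = np.float32([0] * 20 + [1] * 20)

model = cv2.Boost()
var_types = np.array([cv2.CV_VAR_NUMERICAL] * 2 + [cv2.CV_VAR_CATEGORICAL], np.uint8)
model.train(samples, cv2.CV_ROW_SAMPLE, responses,
            varType=var_types, params=dict(max_depth=1))
print model.predict(np.float32([0.1, 0.0]))  # expect 0.0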

5. Multi-Layer Perceptron (MLP): the multilayer perceptron overcomes the single-layer neural network's inability to solve nonlinear classification problems. The popular way to train a multilayer perceptron is backpropagation, through which the network maps multiple inputs to outputs that yield the classification result.

Function prototype: cv2.ANN_MLP.train(inputs, outputs, sampleWeights[, sampleIdx[, params[, flags]]])
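
A minimal sketch (OpenCV 2.4 bindings assumed): the targets are one-hot rows, one column per class, and prediction takes the argmax of the network's outputs, just as the full program below does for the 26 letter classes:

import numpy as np
import cv2

samples = np.float32(np.vstack([np.random.randn(20, 2),
                                np.random.randn(20, 2) + 5]))
# one-hot targets: column 0 for class 0, column 1 for class 1
targets = np.float32(np.vstack([np.tile([1, 0], (20, 1)),
                                np.tile([0, 1], (20, 1))]))

model = cv2.ANN_MLP()
model.create(np.int32([2, 10, 2]))  # 2 inputs, one hidden layer of 10 units, 2 outputs
params = dict(term_crit=(cv2.TERM_CRITERIA_COUNT, 300, 0.01),
              train_method=cv2.ANN_MLP_TRAIN_PARAMS_BACKPROP,
              bp_dw_scale=0.001, bp_moment_scale=0.0)
model.train(samples, targets, None, params=params)  # None: no per-sample weights

ret, resp = model.predict(np.float32([[5.0, 5.2]]))
print resp.argmax(-1)  # expect [1]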

Program and notes:

# coding: utf-8
import numpy as np
import cv2


def load_base(fn):
    # load the letter feature data; the converter maps the letters A-Z to numeric class labels 0-25
    a = np.loadtxt(fn, np.float32, delimiter=',',
                   converters={0: lambda ch: ord(ch) - ord('A')})
    samples, responses = a[:, 1:], a[:, 0]  # features go to samples, class labels to responses
    return samples, responses


class LetterStatModel(object):
    class_n = 26
    train_ratio = 0.5

    def load(self, fn):
        self.model.load(fn)

    def save(self, fn):
        self.model.save(fn)

    def unroll_samples(self, samples):
        sample_n, var_n = samples.shape  # number of samples and feature dimension
        new_samples = np.zeros((sample_n * self.class_n, var_n + 1), np.float32)
        new_samples[:, :-1] = np.repeat(samples, self.class_n, axis=0)
        new_samples[:, -1] = np.tile(np.arange(self.class_n), sample_n)
        return new_samples

    def unroll_responses(self, responses):
        sample_n = len(responses)
        new_responses = np.zeros(sample_n * self.class_n, np.int32)
        resp_idx = np.int32(responses + np.arange(sample_n) * self.class_n)
        new_responses[resp_idx] = 1
        return new_responses


class RTrees(LetterStatModel):
    def __init__(self):
        self.model = cv2.RTrees()

    def train(self, samples, responses):
        sample_n, var_n = samples.shape
        var_types = np.array([cv2.CV_VAR_NUMERICAL] * var_n + [cv2.CV_VAR_CATEGORICAL], np.uint8)
        # CvRTParams(10,10,0,false,15,0,true,4,100,0.01f,CV_TERMCRIT_ITER)
        params = dict(max_depth=10)
        self.model.train(samples, cv2.CV_ROW_SAMPLE, responses, varType=var_types, params=params)

    def predict(self, samples):
        return np.float32([self.model.predict(s) for s in samples])


class KNearest(LetterStatModel):
    def __init__(self):
        self.model = cv2.KNearest()

    def train(self, samples, responses):
        self.model.train(samples, responses)

    def predict(self, samples):
        retval, results, neigh_resp, dists = self.model.find_nearest(samples, k=10)
        return results.ravel()


class Boost(LetterStatModel):
    def __init__(self):
        self.model = cv2.Boost()

    def train(self, samples, responses):
        sample_n, var_n = samples.shape
        # CvBoost is a two-class classifier, so the multi-class problem is unrolled
        new_samples = self.unroll_samples(samples)
        new_responses = self.unroll_responses(responses)
        var_types = np.array([cv2.CV_VAR_NUMERICAL] * var_n + [cv2.CV_VAR_CATEGORICAL, cv2.CV_VAR_CATEGORICAL], np.uint8)
        # CvBoostParams(CvBoost::REAL, 100, 0.95, 5, false, 0)
        params = dict(max_depth=5)  # , use_surrogates=False)
        self.model.train(new_samples, cv2.CV_ROW_SAMPLE, new_responses, varType=var_types, params=params)

    def predict(self, samples):
        new_samples = self.unroll_samples(samples)
        pred = np.array([self.model.predict(s, returnSum=True) for s in new_samples])
        pred = pred.reshape(-1, self.class_n).argmax(1)
        return pred


class SVM(LetterStatModel):
    train_ratio = 0.1

    def __init__(self):
        self.model = cv2.SVM()

    def train(self, samples, responses):
        params = dict(kernel_type=cv2.SVM_LINEAR, svm_type=cv2.SVM_C_SVC, C=1)
        self.model.train(samples, responses, params=params)

    def predict(self, samples):
        return np.float32([self.model.predict(s) for s in samples])


class MLP(LetterStatModel):
    def __init__(self):
        self.model = cv2.ANN_MLP()

    def train(self, samples, responses):
        sample_n, var_n = samples.shape
        new_responses = self.unroll_responses(responses).reshape(-1, self.class_n)
        layer_sizes = np.int32([var_n, 100, self.class_n])
        self.model.create(layer_sizes)

        # CvANN_MLP_TrainParams::BACKPROP, 0.001
        params = dict(term_crit=(cv2.TERM_CRITERIA_COUNT, 300, 0.01),
                      train_method=cv2.ANN_MLP_TRAIN_PARAMS_BACKPROP,
                      bp_dw_scale=0.001,
                      bp_moment_scale=0.0)
        self.model.train(samples, np.float32(new_responses), None, params=params)

    def predict(self, samples):
        ret, resp = self.model.predict(samples)
        return resp.argmax(-1)


if __name__ == '__main__':
    import getopt
    import sys

    models = [RTrees, KNearest, Boost, SVM, MLP]  # NBayes
    models = dict([(cls.__name__.lower(), cls) for cls in models])  # map lower-cased class names to the classes

    print 'USAGE: letter_recog.py [--model <model>] [--data <data fn>] [--load <model fn>] [--save <model fn>]'
    print 'Models: ', ', '.join(models)
    print

    args, dummy = getopt.getopt(sys.argv[1:], '', ['model=', 'data=', 'load=', 'save='])
    args = dict(args)
    args.setdefault('--model', 'boost')
    args.setdefault('--data', './letter-recognition.data')

    print 'loading data %s' % args['--data']
    samples, responses = load_base(args['--data'])
    Model = models[args['--model']]
    model = Model()

    train_n = int(len(samples) * model.train_ratio)  # number of training samples
    if '--load' in args:
        fn = args['--load']
        print 'loading model from %s' % fn
        model.load(fn)
    else:
        print 'training %s ...' % Model.__name__
        model.train(samples[:train_n], responses[:train_n])

    print 'testing...'
    # the first train_n samples give the training accuracy, the rest the test accuracy
    train_rate = np.mean(model.predict(samples[:train_n]) == responses[:train_n])
    test_rate = np.mean(model.predict(samples[train_n:]) == responses[train_n:])

    print 'train rate: %f  test rate: %f' % (train_rate * 100, test_rate * 100)

    if '--save' in args:
        fn = args['--save']
        print 'saving model to %s ...' % fn
        model.save(fn)
    cv2.destroyAllWindows()
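
For reference, the script can be run from the command line as below (the model file name is a placeholder; letter-recognition.data is the UCI letter-recognition data set that ships with the OpenCV samples):

python letter_recog.py --model svm --data ./letter-recognition.data --save svm_letters.xml
python letter_recog.py --model svm --data ./letter-recognition.data --load svm_letters.xml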
