NN: Advanced optimization of the neural network algorithm to further improve handwritten digit recognition accuracy - jason NIU


In the previous article we compared three algorithms for handwritten digit recognition. SVM and the neural network algorithm both performed very well, with accuracy above 90%. This article further optimizes the neural network algorithm to raise that accuracy; testing shows the accuracy improves considerably.

Change one:

First, in the weight-initialization step we adopt a better random initialization: the weights still follow a normal distribution with mean zero, only the standard deviation changes.

Before the change, the weights were initialized like this:

def large_weight_initializer(self):
    # Biases and weights both drawn from a standard normal (mean 0, std 1).
    self.biases = [np.random.randn(y, 1) for y in self.sizes[1:]]
    self.weights = [np.random.randn(y, x)
                    for x, y in zip(self.sizes[:-1], self.sizes[1:])]

After the change, the weights are initialized like this:

def default_weight_initializer(self):
    # Biases: unchanged, standard normal.
    self.biases = [np.random.randn(y, 1) for y in self.sizes[1:]]
    # Weights: mean 0, but standard deviation 1/sqrt(x) for a layer
    # receiving x inputs, so the weighted input stays of order 1.
    self.weights = [np.random.randn(y, x)/np.sqrt(x)
                    for x, y in zip(self.sizes[:-1], self.sizes[1:])]
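To see why dividing by np.sqrt(x) helps, consider the weighted input z = w·a into a single neuron. Below is a minimal NumPy sketch, assuming an illustrative layer of 784 inputs (the MNIST image size) with all activations set to 1:

import numpy as np

np.random.seed(0)
n_in = 784                        # illustrative layer width (MNIST input size)
a = np.ones(n_in)                 # pretend every input activation is 1

w_old = np.random.randn(10000, n_in)                  # old init: std 1
w_new = np.random.randn(10000, n_in) / np.sqrt(n_in)  # new init: std 1/sqrt(n_in)

print(np.std(w_old.dot(a)))   # ~28 (= sqrt(784)): z is huge, the sigmoid saturates
print(np.std(w_new.dot(a)))   # ~1: z stays where the sigmoid still has gradient

With the old initializer the weighted input z has standard deviation about 28, so the sigmoid output sits near 0 or 1 where its gradient vanishes and learning is slow; with the new initializer z stays of order 1 and the neurons start out sensitive to their inputs.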

Change two:

To reduce overfitting and weaken the effect of local noise, the cost function is changed from the original quadratic cost to the cross-entropy cost:

class CrossEntropyCost(object):

    @staticmethod
    def fn(a, y):
        # np.nan_to_num guards against log(0) producing nan when a hits 0 or 1.
        return np.sum(np.nan_to_num(-y*np.log(a) - (1-y)*np.log(1-a)))

    @staticmethod
    def delta(z, a, y):
        # The output-layer error carries no sigmoid-prime factor, so learning
        # does not slow down when the output neuron saturates.
        return (a-y)
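For contrast, here is a sketch of the quadratic cost being replaced, written in the same style (the sigmoid_prime helper is assumed to be defined elsewhere in the module):

class QuadraticCost(object):

    @staticmethod
    def fn(a, y):
        # 0.5 * ||a - y||^2 for a single training example.
        return 0.5*np.linalg.norm(a-y)**2

    @staticmethod
    def delta(z, a, y):
        # The extra sigmoid_prime(z) factor goes to 0 when the output
        # neuron saturates, which is exactly what slows learning down.
        return (a-y) * sigmoid_prime(z)

Because the cross-entropy delta is simply (a-y), the size of the weight updates tracks the size of the error itself, so badly wrong, saturated outputs still learn quickly.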

Change three:

Change the S-shaped (sigmoid) function in the output layer to the softmax function:

class SoftmaxLayer(object):

    def __init__(self, n_in, n_out, p_dropout=0.0):
        self.n_in = n_in
        self.n_out = n_out
        self.p_dropout = p_dropout
        # Weights and biases start at zero; softmax layers do not need
        # the random initialization used for sigmoid layers.
        self.w = theano.shared(
            np.zeros((n_in, n_out), dtype=theano.config.floatX),
            name='w', borrow=True)
        self.b = theano.shared(
            np.zeros((n_out,), dtype=theano.config.floatX),
            name='b', borrow=True)
        self.params = [self.w, self.b]

    def set_inpt(self, inpt, inpt_dropout, mini_batch_size):
        self.inpt = inpt.reshape((mini_batch_size, self.n_in))
        # Scale by (1 - p_dropout) at inference to match dropout training.
        self.output = softmax((1-self.p_dropout)*T.dot(self.inpt, self.w) + self.b)
        self.y_out = T.argmax(self.output, axis=1)
        self.inpt_dropout = dropout_layer(
            inpt_dropout.reshape((mini_batch_size, self.n_in)), self.p_dropout)
        self.output_dropout = softmax(T.dot(self.inpt_dropout, self.w) + self.b)

    def cost(self, net):
        "Return the log-likelihood cost."
        return -T.mean(T.log(self.output_dropout)[T.arange(net.y.shape[0]), net.y])

    def accuracy(self, y):
        "Return the accuracy for the mini-batch."
        return T.mean(T.eq(y, self.y_out))
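To make the softmax output concrete, here is a minimal plain-NumPy sketch (independent of the Theano code above):

import numpy as np

def softmax(z):
    # Subtract the max before exponentiating for numerical stability;
    # this does not change the result.
    e = np.exp(z - np.max(z))
    return e / np.sum(e)

z = np.array([2.0, 1.0, 0.1])
p = softmax(z)
print(p)         # approx. [0.659 0.242 0.099]
print(p.sum())   # 1.0

Unlike a layer of sigmoid neurons, the softmax activations always sum to 1, so the output layer can be read directly as a probability distribution over the ten digits, which pairs naturally with the log-likelihood cost in SoftmaxLayer.cost above.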

