TensorFlow in Practice -- CNN (LeNet-5) -- MNIST Digit Recognition


Original article: http://blog.csdn.net/u011239443/article/details/72861591

We are going to implement a non-standard LeNet-5 model:
Train: https://github.com/xiaoyesoso/tensorflowinaction/blob/master/inactionb1/chapter6/mnist_train_6_4_1.py
Inference: https://github.com/xiaoyesoso/tensorflowinaction/blob/master/inactionb1/chapter6/mnist_inference_6_4_1.py

Train

The training part differs little from "TensorFlow in Practice -- DNN -- MNIST Digit Recognition". First, the base learning rate should be reduced:

    learning_rate_base = 0.01
Second, x is a four-dimensional tensor:

    x = tf.placeholder(tf.float32,
                       [batch_size,
                        mnist_inference_6_4_1.image_size,
                        mnist_inference_6_4_1.image_size,
                        mnist_inference_6_4_1.num_channels],
                       name='x-input')

mnist_inference_6_4_1.num_channels is the depth (number of channels) of the image. xs must also be reshaped into a four-dimensional tensor:

            xs, ys = mnist.train.next_batch(batch_size)
            reshaped_xs = np.reshape(xs, (batch_size,
                                          mnist_inference_6_4_1.image_size,
                                          mnist_inference_6_4_1.image_size,
                                          mnist_inference_6_4_1.num_channels))
            _, loss_value, step = sess.run([train_op, loss, global_step],
                                           feed_dict={x: reshaped_xs, y_: ys})
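As a side note, here is a minimal standalone sketch of what this reshape does. MNIST images are 28x28 with a single channel, so next_batch returns flat vectors of length 784; the zero-filled array below is only a stand-in for a real batch, and the batch size of 100 is just for illustration:

    import numpy as np

    batch_size = 100                                     # illustrative batch size
    xs = np.zeros((batch_size, 784), dtype=np.float32)   # stand-in for mnist.train.next_batch(batch_size)[0]
    # Turn the flat 784-element vectors into [batch, height, width, channels]
    # so they match the 4-D placeholder defined above.
    reshaped_xs = np.reshape(xs, (batch_size, 28, 28, 1))
    print(reshaped_xs.shape)                             # (100, 28, 28, 1)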
Inference

Layer1
    with tf.variable_scope('layer1-conv1'):
        conv1_weights = tf.get_variable(
            "weight", [conv1_size, conv1_size, num_channels, conv1_deep],
            initializer=tf.truncated_normal_initializer(stddev=0.1))
        conv1_biases = tf.get_variable(
            "bias", [conv1_deep],
            initializer=tf.constant_initializer(0.0))

        conv1 = tf.nn.conv2d(input_tensor, conv1_weights, strides=[1, 1, 1, 1], padding='SAME')
        relu1 = tf.nn.relu(tf.nn.bias_add(conv1, conv1_biases))
First, let's look at the strides parameter:

strides: A list of ints.
    1-D of length 4. The stride of the sliding window for each dimension
    of input. Must be in the same order as the dimension specified with format.

strides gives the step size of the sliding window in each dimension, in the same order as the dimensions of input_tensor, i.e. [batch_size, mnist_inference_6_4_1.image_size, mnist_inference_6_4_1.image_size, mnist_inference_6_4_1.num_channels]. Since the filter must never skip examples in the batch or skip channels, the first and last values of the array have to be 1. padding='SAME' means the input is zero-padded so that the spatial size of the output stays the same as the input. Note that tf.nn.bias_add(conv1, conv1_biases) is used rather than adding conv1 and conv1_biases directly, because the same bias has to be added to the node at every position of the feature map.
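To make the effect of padding concrete, here is a minimal sketch, assuming the 5x5 filter size used for conv1. The helper conv_output_size is hypothetical, but the two formulas follow TensorFlow's documented behaviour for 'SAME' and 'VALID' padding:

    import math

    def conv_output_size(in_size, filter_size, stride, padding):
        # Spatial output size of a convolution along one dimension.
        if padding == 'SAME':
            # Zero-padding: the size only shrinks by the stride.
            return math.ceil(in_size / stride)
        # 'VALID': no padding, the filter must fit entirely inside the input.
        return math.ceil((in_size - filter_size + 1) / stride)

    # A 28x28 MNIST image, 5x5 filter, stride 1 (as in layer1-conv1):
    print(conv_output_size(28, 5, 1, 'SAME'))    # 28 -- the image size is unchanged
    print(conv_output_size(28, 5, 1, 'VALID'))   # 24 -- without padding it would shrink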

Layer2

    with tf.name_scope('layer2-pool1'):
        pool1 = tf.nn.max_pool(relu1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

max_pool implements a pooling layer that takes the maximum value within each window.
Let's take a look at the parameter ksize:

ksize: A list of ints that has length >= 4. The size of the window for
    each dimension of the input tensor.

This is the size of the pooling window in each dimension of the input. The first and last values of the array must be 1, because the pooling window operates within a single example and a single depth channel; it cannot span different examples in the batch or different channels.
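To see what max pooling actually does, here is a small NumPy sketch of a 2x2 window with stride 2, the configuration used in layer2-pool1 (the 4x4 feature map and its values are made up purely for illustration):

    import numpy as np

    # A made-up 4x4 feature map for one example at one depth channel.
    feature_map = np.array([[1, 3, 2, 4],
                            [5, 6, 7, 8],
                            [3, 2, 1, 0],
                            [1, 2, 3, 4]])

    # 2x2 max pooling with stride 2: keep the maximum of every 2x2 block.
    pooled = feature_map.reshape(2, 2, 2, 2).max(axis=(1, 3))
    print(pooled)
    # [[6 8]
    #  [3 4]]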

Layer5

Layer3 and Layer4 are similar to the layers above, so we skip them and go straight to Layer5:

    pool_shape = pool2.get_shape().as_list()
    nodes = pool_shape[1] * pool_shape[2] * pool_shape[3]
    reshaped = tf.reshape(pool2, [pool_shape[0], nodes])

    with tf.variable_scope('layer5-fc1'):
        fc1_weights = tf.get_variable(
            "weight", [nodes, fc_size],
            initializer=tf.truncated_normal_initializer(stddev=0.1))
        if regularizer != None:
            tf.add_to_collection('losses', regularizer(fc1_weights))

        fc1_biases = tf.get_variable(
            "bias", [fc_size],
            initializer=tf.constant_initializer(0.1))
        fc1 = tf.nn.relu(tf.matmul(reshaped, fc1_weights) + fc1_biases)
        if train:
            fc1 = tf.nn.dropout(fc1, 0.5)
get_shape().as_list() returns the shape of pool2 as a Python list.
pool_shape[1] * pool_shape[2] * pool_shape[3] = height x width x depth, which is the number of nodes you get when the cuboid output of the pooling layer is stretched out into a single vector. pool_shape[0] is the batch_size. Dropout randomly sets a certain proportion of the outputs to 0 during training in order to avoid overfitting.
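As a worked example of this flattening step: with a 28x28 input and two 2x2 pooling layers with stride 2, the spatial size shrinks to 7x7; assuming a depth of 64 for the second convolutional layer (an assumption here -- the actual value is whatever conv2_deep is set to in the inference code), the numbers come out as follows:

    # Hypothetical shape of pool2 for a batch of 100: [batch, height, width, depth].
    pool_shape = [100, 7, 7, 64]        # 64 is an assumed value for conv2_deep
    nodes = pool_shape[1] * pool_shape[2] * pool_shape[3]
    print(nodes)                        # 3136 -- each example is flattened into a 3136-long vector
    # tf.reshape(pool2, [pool_shape[0], nodes]) then yields a [100, 3136] matrix.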

What remains is an ordinary fully connected network; Layer6 is analogous to Layer5.
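For completeness, here is a sketch of what Layer6 might look like, by direct analogy with Layer5. It continues the inference code above (fc1, fc_size and regularizer come from there), and the names fc2_weights, fc2_biases and num_labels (10 classes for MNIST) are assumptions rather than quotes from the linked file:

    with tf.variable_scope('layer6-fc2'):
        # Map the fc_size hidden units of fc1 to the num_labels output classes.
        fc2_weights = tf.get_variable(
            "weight", [fc_size, num_labels],
            initializer=tf.truncated_normal_initializer(stddev=0.1))
        if regularizer != None:
            tf.add_to_collection('losses', regularizer(fc2_weights))
        fc2_biases = tf.get_variable(
            "bias", [num_labels],
            initializer=tf.constant_initializer(0.1))
        # No ReLU here: the raw logits go straight into the softmax cross-entropy loss.
        logit = tf.matmul(fc1, fc2_weights) + fc2_biases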

Results:

After 1 training step(s), loss is 6.06818
After 101 training step(s), loss is 2.24668
After 201 training step(s), loss is 1.65929
After 301 training step(s), loss is 1.30799
After 401 training step(s), loss is 1.3613
After 501 training step(s), loss is 0.960646
After 601 training step(s), loss is 0.954722
After 701 training step(s), loss is 0.883449
After 801 training step(s), loss is 0.870421
After 901 training step(s), loss is 0.905906
After 1001 training step(s), loss is 0.932337
