TensorFlow Implementation of a Convolutional Neural Network (Simple)


The code (with detailed source comments) and the dataset can be downloaded from GitHub:
https://github.com/crazyyanchao/TensorFlow-HelloWorld

# -*- coding: utf-8 -*-
"""Convolutional neural network test on the MNIST data set."""

######## Load the MNIST data ########
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf

mnist = input_data.read_data_sets('MNIST_data/', one_hot=True)

# Create a default InteractiveSession
sess = tf.InteractiveSession()
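Before going further, it can help to confirm what read_data_sets actually returns. The quick check below is not part of the original script; the shapes shown assume the standard MNIST splits (55,000 train / 5,000 validation / 10,000 test):

# Optional sanity check (not in the original script)
print(mnist.train.images.shape)   # (55000, 784): 55,000 flattened 28x28 images
print(mnist.train.labels.shape)   # (55000, 10): one-hot labels
print(mnist.test.images.shape)    # (10000, 784)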
######## The network has many weights and biases to create, so first define initialization functions for reuse ########
# Add some random noise to the weights to break complete symmetry
# (truncated normal distribution with standard deviation 0.1)
def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

# Because we use ReLU, also add a small positive value (0.1) to the biases
# to avoid dead neurons
def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

######## The convolution and pooling layers are also reused, so define creation functions ########
# tf.nn.conv2d is TensorFlow's 2-D convolution function
def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

# 2x2 max pooling
def max_pool_2x2(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
                          strides=[1, 2, 2, 1], padding='SAME')

######## Define placeholders before designing the network ########
# x holds the features and y_ the true labels. The image data is converted
# from 1-D back to 2-D with the tensor reshaping function tf.reshape.
x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])
x_image = tf.reshape(x, [-1, 28, 28, 1])

######## Design the convolutional neural network ########
# First convolutional layer:
# kernel size 5x5, 1 color channel, 32 different kernels
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
# Convolve x_image with the weight tensor using conv2d, add the bias,
# then apply the ReLU activation function
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
# Pool the output of the convolution
h_pool1 = max_pool_2x2(h_conv1)

# Second convolutional layer (same structure as the first,
# but with 64 kernels, so this layer extracts 64 features)
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)

# Fully connected layer with 1024 hidden nodes, using the ReLU activation
W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)

# To reduce overfitting, add a dropout layer before the output layer
keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
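The 7 * 7 * 64 in the fully connected layer's weight shape comes from the two SAME-padded 2x2 pooling steps: 28x28 shrinks to 14x14 and then to 7x7, with 64 channels after the second convolution. An optional check (not in the original script) confirms it:

# Optional check of the tensor shape feeding the fully connected layer
print(h_pool2.get_shape())  # (?, 7, 7, 64) -> 7 * 7 * 64 = 3136 inputs per example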
# Output layer. Add a softmax layer, just like in softmax regression,
# to get the probability output.
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
y_conv = tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)

######## Training setup ########
# Define the loss function as cross entropy; the optimizer is Adam,
# with a relatively small learning rate of 1e-4
cross_entropy = tf.reduce_mean(
    -tf.reduce_sum(y_ * tf.log(y_conv), reduction_indices=[1]))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)

# Define the accuracy-evaluation operation
correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

######## Training ########
# Initialize all parameters
tf.global_variables_initializer().run()
# Train with a dropout keep_prob of 0.5, mini-batches of 50, and 1000
# iterations, so about 50,000 training samples in total.
# Every 100 steps, evaluate the accuracy with keep_prob set to 1.0
# to monitor the model's performance in real time.
for i in range(1000):
    batch = mnist.train.next_batch(50)
    if i % 100 == 0:
        train_accuracy = accuracy.eval(
            feed_dict={x: batch[0], y_: batch[1], keep_prob: 1.0})
        print("-->step %d, training accuracy %.4f" % (i, train_accuracy))
    train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})

# After training completes, run a full evaluation on the test set
# to get the overall classification accuracy
print("Convolutional neural network accuracy on the MNIST test set: %g" %
      accuracy.eval(feed_dict={x: mnist.test.images,
                               y_: mnist.test.labels, keep_prob: 1.0}))
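One caveat: taking tf.log of an explicit softmax output can produce NaN losses once a predicted probability underflows to zero. A more numerically stable sketch, substituting TensorFlow's fused tf.nn.softmax_cross_entropy_with_logits for the manual softmax-plus-log (a common variant, not the author's original code):

# Sketch of a numerically stable alternative: keep the raw logits and let
# TensorFlow combine the softmax and the cross entropy internally.
logits = tf.matmul(h_fc1_drop, W_fc2) + b_fc2   # raw, pre-softmax scores
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits))
y_conv = tf.nn.softmax(logits)                  # probabilities, for evaluation only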
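For readers on TensorFlow 2, where InteractiveSession and placeholders no longer exist, the same architecture can be sketched with tf.keras. This is an assumed-equivalent rewrite, not part of the original tutorial; note that Keras' Dropout layer takes a drop rate (0.5 here) rather than a keep probability:

# Minimal tf.keras sketch of the same architecture (TensorFlow 2 assumed)
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Reshape((28, 28, 1), input_shape=(784,)),
    tf.keras.layers.Conv2D(32, 5, padding='same', activation='relu'),
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Conv2D(64, 5, padding='same', activation='relu'),
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1024, activation='relu'),
    tf.keras.layers.Dropout(0.5),                # rate = 1 - keep_prob
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss='categorical_crossentropy', metrics=['accuracy'])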
