Learning Notes TF057: TensorFlow MNIST, Convolutional Neural Network, Recurrent Neural Network, Unsupervised Learning

Source: Internet
Author: User


MNIST convolutional neural network: https://github.com/nlintz/TensorFlow-Tutorials/blob/master/05_convolutional_net.py
TensorFlow builds a CNN model and trains it on the MNIST dataset.

Build a model.

Define the input data and preprocess it. Read the MNIST data to obtain the training-set image and label matrices and the test-set image and label matrices: trX, trY, teX, and teY. Reshape trX and teX to [-1, 28, 28, 1]: -1 means the number of input images is not fixed, 28x28 is the image height and width, and 1 is the number of channels. MNIST images are black and white, so there is 1 channel; RGB color images have 3 channels.
Initialize the weights and define the network structure. The convolutional neural network has three convolutional layers, three pooling layers, one fully connected layer, and one output layer.
Define the dropout placeholders (p_keep_conv, p_keep_hidden) for the ratio of neurons to retain. Build the network model and obtain the predicted values.
Define the loss function: tf.nn.softmax_cross_entropy_with_logits compares the predicted values with the actual values, and the result is averaged.
Define the training operation (train_op): the RMSProp optimizer tf.train.RMSPropOptimizer with learning rate 0.001 and decay 0.9 minimizes the loss.
Define the prediction operation (predict_op).
Start the graph in a session, train, and evaluate.

#!/usr/bin/env python
import tensorflow as tf
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data

batch_size = 128  # training batch size
test_size = 256   # evaluation batch size

# Define the weight initialization function
def init_weights(shape):
    return tf.Variable(tf.random_normal(shape, stddev=0.01))

# Define the neural network model function
# Inputs: X input data; w, w2, w3, w4, w_o weights of each layer;
# p_keep_conv, p_keep_hidden dropout keep probabilities
def model(X, w, w2, w3, w4, w_o, p_keep_conv, p_keep_hidden):
    # First convolutional and pooling layer, dropout on part of the neurons
    l1a = tf.nn.relu(tf.nn.conv2d(X, w,                     # l1a shape=(?, 28, 28, 32)
                                  strides=[1, 1, 1, 1], padding='SAME'))
    l1 = tf.nn.max_pool(l1a, ksize=[1, 2, 2, 1],            # l1 shape=(?, 14, 14, 32)
                        strides=[1, 2, 2, 1], padding='SAME')
    l1 = tf.nn.dropout(l1, p_keep_conv)
    # Second convolutional and pooling layer, dropout on part of the neurons
    l2a = tf.nn.relu(tf.nn.conv2d(l1, w2,                   # l2a shape=(?, 14, 14, 64)
                                  strides=[1, 1, 1, 1], padding='SAME'))
    l2 = tf.nn.max_pool(l2a, ksize=[1, 2, 2, 1],            # l2 shape=(?, 7, 7, 64)
                        strides=[1, 2, 2, 1], padding='SAME')
    l2 = tf.nn.dropout(l2, p_keep_conv)
    # Third convolutional and pooling layer, dropout on part of the neurons
    l3a = tf.nn.relu(tf.nn.conv2d(l2, w3,                   # l3a shape=(?, 7, 7, 128)
                                  strides=[1, 1, 1, 1], padding='SAME'))
    l3 = tf.nn.max_pool(l3a, ksize=[1, 2, 2, 1],            # l3 shape=(?, 4, 4, 128)
                        strides=[1, 2, 2, 1], padding='SAME')
    l3 = tf.reshape(l3, [-1, w4.get_shape().as_list()[0]])  # reshape to (?, 2048)
    l3 = tf.nn.dropout(l3, p_keep_conv)
    # Fully connected layer, dropout on part of the neurons
    l4 = tf.nn.relu(tf.matmul(l3, w4))
    l4 = tf.nn.dropout(l4, p_keep_hidden)
    # Output layer
    pyx = tf.matmul(l4, w_o)
    return pyx  # return the predicted values

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
trX, trY, teX, teY = mnist.train.images, mnist.train.labels, mnist.test.images, mnist.test.labels
# Data preprocessing
trX = trX.reshape(-1, 28, 28, 1)  # 28x28x1 input img
teX = teX.reshape(-1, 28, 28, 1)  # 28x28x1 input img
X = tf.placeholder("float", [None, 28, 28, 1])
Y = tf.placeholder("float", [None, 10])
# Convolution kernel size 3x3
# Patch size 3x3, input depth 1, output depth 32
w = init_weights([3, 3, 1, 32])    # 3x3x1 conv, 32 outputs
# Patch size 3x3, input depth 32, output depth 64
w2 = init_weights([3, 3, 32, 64])  # 3x3x32 conv, 64 outputs
# Patch size 3x3, input depth 64, output depth 128
w3 = init_weights([3, 3, 64, 128]) # 3x3x64 conv, 128 outputs
# Fully connected layer: the 128x4x4 output of the previous layer is flattened
# to one dimension; the output dimension is 625
w4 = init_weights([128 * 4 * 4, 625])  # FC 128*4*4 inputs, 625 outputs
# Output layer: input dimension 625, output dimension 10 (the 10 labels)
w_o = init_weights([625, 10])          # FC 625 inputs, 10 outputs (labels)
# Define the dropout placeholders
p_keep_conv = tf.placeholder("float")
p_keep_hidden = tf.placeholder("float")
py_x = model(X, w, w2, w3, w4, w_o, p_keep_conv, p_keep_hidden)  # get the predicted values
# Define the loss function
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=py_x, labels=Y))
# Define the training operation
train_op = tf.train.RMSPropOptimizer(0.001, 0.9).minimize(cost)
# Define the prediction operation
predict_op = tf.argmax(py_x, 1)
# Launch the graph in a session
with tf.Session() as sess:
    # You need to initialize all variables
    tf.global_variables_initializer().run()
    for i in range(100):
        # Train the model
        training_batch = zip(range(0, len(trX), batch_size),
                             range(batch_size, len(trX) + 1, batch_size))
        for start, end in training_batch:
            sess.run(train_op, feed_dict={X: trX[start:end], Y: trY[start:end],
                                          p_keep_conv: 0.8, p_keep_hidden: 0.5})
        # Evaluate the model
        test_indices = np.arange(len(teX))  # get a test batch
        np.random.shuffle(test_indices)
        test_indices = test_indices[0:test_size]
        print(i, np.mean(np.argmax(teY[test_indices], axis=1) ==
                         sess.run(predict_op, feed_dict={X: teX[test_indices],
                                                         p_keep_conv: 1.0,
                                                         p_keep_hidden: 1.0})))

MNIST recurrent neural network: https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/recurrent_network.py

RNNs have been applied successfully in natural language processing, including machine translation, speech recognition, image captioning (generating descriptions from image features), language modeling, and text generation (a generative model that predicts the probability of the next word). See Alex Graves, Supervised Sequence Labelling with Recurrent Neural Networks, http://www.cs.toronto.edu/~graves/preprint.pdf.

Build a model. Set the training hyperparameters: the learning rate, the number of training steps, and the batch size of each step.
The RNN classifies an image by treating each of its rows as one element of a pixel sequence (see the shape sketch after this list of steps). An MNIST image is 28x28: 28 rows of 28 pixels each, so the input length per step is 28 and the number of time steps is 28.
Define the input data and weights.
Define the RNN model.
Define the loss function and optimizer.
Define how the model's predictions and accuracy are computed.
Start the graph in a session, start training, and print the loss and accuracy every display_step (200) steps.
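
The following shape sketch is not part of the original tutorial; it only illustrates, with stand-in NumPy data, how a batch of flat 784-pixel MNIST images becomes a sequence of 28 row vectors per image, which is what tf.unstack(x, timesteps, 1) produces in the code below (variable names mirror that code):

# Shape sketch (illustration only): an MNIST image as a sequence of its 28 rows.
import numpy as np
batch_size, timesteps, num_input = 4, 28, 28      # names mirror the code below
flat_images = np.random.rand(batch_size, 784)     # stand-in for mnist.train.next_batch()
# Reshape flat pixels to (batch, timesteps, num_input): one image row per time step.
seq = flat_images.reshape(batch_size, timesteps, num_input)
# tf.unstack(x, timesteps, 1) yields the equivalent of this list of `timesteps`
# arrays, each of shape (batch_size, num_input).
steps = [seq[:, t, :] for t in range(timesteps)]
print(len(steps), steps[0].shape)                 # 28 (4, 28)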

from __future__ import print_function
import tensorflow as tf
from tensorflow.contrib import rnn
# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
# Training parameters
# Set the training hyperparameters
learning_rate = 0.001
training_steps = 10000
batch_size = 128
display_step = 200
# Network parameters
num_input = 28    # MNIST data input (img shape: 28*28), input layer size
timesteps = 28    # number of time steps
num_hidden = 128  # number of hidden-layer neurons
num_classes = 10  # MNIST total classes (0-9 digits), output size
# tf Graph input
# Input data placeholders
X = tf.placeholder("float", [None, timesteps, num_input])
Y = tf.placeholder("float", [None, num_classes])
# Define weights and biases
weights = {
    'out': tf.Variable(tf.random_normal([num_hidden, num_classes]))
}
biases = {
    'out': tf.Variable(tf.random_normal([num_classes]))
}
# Define the RNN model
def RNN(x, weights, biases):
    # Unstack to get a list of 'timesteps' tensors of shape (batch_size, num_input)
    # Convert input x of shape (batch, 28 steps, 28 inputs) into a list of 28 tensors
    x = tf.unstack(x, timesteps, 1)
    # Define an LSTM cell with TensorFlow
    # Basic LSTM recurrent network unit: BasicLSTMCell
    lstm_cell = rnn.BasicLSTMCell(num_hidden, forget_bias=1.0)
    # Get the LSTM cell output
    outputs, states = rnn.static_rnn(lstm_cell, x, dtype=tf.float32)
    # Linear activation, using the last output of the RNN inner loop
    return tf.matmul(outputs[-1], weights['out']) + biases['out']

logits = RNN(X, weights, biases)
prediction = tf.nn.softmax(logits)
# Define loss and optimizer
# Define the loss function
loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    logits=logits, labels=Y))
# Define the optimizer
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss_op)
# Evaluate the model (with test logits, dropout disabled)
correct_pred = tf.equal(tf.argmax(prediction, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()
# Start training
with tf.Session() as sess:
    # Run the initializer
    sess.run(init)
    for step in range(1, training_steps + 1):
        batch_x, batch_y = mnist.train.next_batch(batch_size)
        # Reshape data to get 28 sequences of 28 elements
        batch_x = batch_x.reshape((batch_size, timesteps, num_input))
        # Run optimization op (backprop)
        sess.run(train_op, feed_dict={X: batch_x, Y: batch_y})
        if step % display_step == 0 or step == 1:
            # Calculate batch loss and accuracy
            loss, acc = sess.run([loss_op, accuracy], feed_dict={X: batch_x,
                                                                 Y: batch_y})
            print("Step " + str(step) + ", Minibatch Loss= " +
                  "{:.4f}".format(loss) + ", Training Accuracy= " +
                  "{:.3f}".format(acc))
    print("Optimization Finished!")
    # Calculate accuracy for 128 MNIST test images
    test_len = 128
    test_data = mnist.test.images[:test_len].reshape((-1, timesteps, num_input))
    test_label = mnist.test.labels[:test_len]
    print("Testing Accuracy:",
          sess.run(accuracy, feed_dict={X: test_data, Y: test_label}))

MNIST unsupervised learning: the autoencoder.

The autoencoder network. See UFLDL: http://ufldl.stanford.edu/wiki/index.php/Autoencoders_and_Sparsity.
In supervised learning the data is labeled; an autoencoder needs no labels.
An autoencoder network compresses the input samples into the hidden layer, then decompresses them and reconstructs the samples at the output; the number of neurons in the output layer equals the number in the input layer. Compression works because input data (images, text, sound) contains varying degrees of redundancy; the autoencoder learns to strip this redundancy and feed the essential features into the hidden layer, finding the principal components that can represent the source data. If the activation functions are linear rather than nonlinear functions such as sigmoid, the model amounts to PCA.
Principal component analysis (PCA) is a technique for analyzing and simplifying datasets. It reduces a dataset's dimensionality while preserving the directions that contribute most to its variance: lower-order principal components are kept and higher-order ones are discarded. It is the most common linear dimensionality-reduction method.
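
As a side illustration (not from the source text), the PCA projection described above can be computed directly with NumPy's SVD; the function name pca_project and the sizes used here are assumptions chosen to match MNIST's 784-dimensional inputs:

# PCA sketch (illustration only): project data onto the top-k principal components.
import numpy as np

def pca_project(data, k):
    centered = data - data.mean(axis=0)                  # center each feature
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:k]                                  # top-k right singular vectors
    return centered.dot(components.T)                    # shape (num_samples, k)

x = np.random.rand(1000, 784)                            # stand-in for MNIST vectors
x_low = pca_project(x, 64)                               # reduce 784 dimensions to 64
print(x_low.shape)                                       # (1000, 64)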
The compression process limits the number of hidden neurons so that the network learns meaningful features. Ideally, neurons are inhibited most of the time: a neuron whose output is close to 1 is active, and one whose output is close to 0 is inhibited. Keeping most neurons in the inhibited state is the sparsity constraint.
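
The autoencoder code later in these notes does not implement this sparsity constraint. The following is only a hedged sketch of one common form, a KL-divergence penalty on each unit's mean activation, written in the same TensorFlow 1.x style; the names sparsity_penalty, rho, and beta are illustrative assumptions, not part of the original example:

# Sketch of a sparsity penalty (assumption; not used in the autoencoder code below).
import tensorflow as tf

def sparsity_penalty(hidden_activations, rho=0.05, beta=3.0):
    # hidden_activations: sigmoid outputs of a hidden layer, shape (batch, units)
    rho_hat = tf.reduce_mean(hidden_activations, axis=0)      # mean activation per unit
    kl = rho * tf.log(rho / rho_hat) + \
         (1 - rho) * tf.log((1 - rho) / (1 - rho_hat))        # KL(rho || rho_hat) per unit
    return beta * tf.reduce_sum(kl)

# Usage idea, added to the reconstruction loss defined later:
# loss = tf.reduce_mean(tf.pow(y_true - y_pred, 2)) + sparsity_penalty(encoder_op)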
With multiple hidden layers and image input, the first layer learns to recognize edges, the second layer learns to combine edges into contours and corners, and higher layers learn to combine these into still more meaningful features.

TensorFlow autoencoder implementation: https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/autoencoder.py

Build a model. Set the hyperparameters: the learning rate, the number of training steps, the batch size, and how often to display training results.
Define the input data. For unsupervised learning only the image data is needed; the data is not labeled.
Initialize the weights and define the network structure. There are two hidden layers, with 256 neurons in the first and 128 in the second, covering both the compression and the decompression process.
Construct the loss function and optimizer. The loss is the least-squares (mean squared) error between the original input and the reconstructed output; the optimizer is RMSPropOptimizer.
Train and evaluate the model. Apply the trained autoencoder to the test set and compare the original test images with the reconstructions produced by the network.

from __future__ import division, print_function, absolute_import
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
# Training parameters
# Set the training hyperparameters
learning_rate = 0.01   # learning rate
num_steps = 30000      # number of training steps
batch_size = 256       # amount of training data per step
display_step = 1000    # how often to display training results
examples_to_show = 10  # number of test images used to check the autoencoder
# Network parameters
# Number of neurons (features) in the first hidden layer
num_hidden_1 = 256  # 1st layer num features
# Number of neurons (features) in the second hidden layer
num_hidden_2 = 128  # 2nd layer num features (the latent dim)
# Number of input features: 28x28 = 784
num_input = 784     # MNIST data input (img shape: 28*28)
# tf Graph input (only pictures)
# Define the input data; only images are required, no labels
X = tf.placeholder("float", [None, num_input])
# Initialize the weights and biases of each layer
weights = {
    'encoder_h1': tf.Variable(tf.random_normal([num_input, num_hidden_1])),
    'encoder_h2': tf.Variable(tf.random_normal([num_hidden_1, num_hidden_2])),
    'decoder_h1': tf.Variable(tf.random_normal([num_hidden_2, num_hidden_1])),
    'decoder_h2': tf.Variable(tf.random_normal([num_hidden_1, num_input])),
}
biases = {
    'encoder_b1': tf.Variable(tf.random_normal([num_hidden_1])),
    'encoder_b2': tf.Variable(tf.random_normal([num_hidden_2])),
    'decoder_b1': tf.Variable(tf.random_normal([num_hidden_1])),
    'decoder_b2': tf.Variable(tf.random_normal([num_input])),
}
# Building the encoder
# Define the compression function
def encoder(x):
    # Encoder hidden layer with sigmoid activation #1
    layer_1 = tf.nn.sigmoid(tf.add(tf.matmul(x, weights['encoder_h1']),
                                   biases['encoder_b1']))
    # Encoder hidden layer with sigmoid activation #2
    layer_2 = tf.nn.sigmoid(tf.add(tf.matmul(layer_1, weights['encoder_h2']),
                                   biases['encoder_b2']))
    return layer_2
# Building the decoder
# Define the decompression function
def decoder(x):
    # Decoder hidden layer with sigmoid activation #1
    layer_1 = tf.nn.sigmoid(tf.add(tf.matmul(x, weights['decoder_h1']),
                                   biases['decoder_b1']))
    # Decoder hidden layer with sigmoid activation #2
    layer_2 = tf.nn.sigmoid(tf.add(tf.matmul(layer_1, weights['decoder_h2']),
                                   biases['decoder_b2']))
    return layer_2
# Construct the model
encoder_op = encoder(X)
decoder_op = decoder(encoder_op)
# Prediction
# The predicted values
y_pred = decoder_op
# Targets (labels) are the input data
# The actual values, i.e. the inputs
y_true = X
# Define loss and optimizer; minimize the squared error
loss = tf.reduce_mean(tf.pow(y_true - y_pred, 2))
optimizer = tf.train.RMSPropOptimizer(learning_rate).minimize(loss)
# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()
# Start training
# Start a new TF session
with tf.Session() as sess:
    # Run the initializer
    sess.run(init)
    # Training
    for i in range(1, num_steps + 1):
        # Prepare data
        # Get the next batch of MNIST data (only images are needed, not labels)
        batch_x, _ = mnist.train.next_batch(batch_size)
        # Run optimization op (backprop) and cost op (to get loss value)
        _, l = sess.run([optimizer, loss], feed_dict={X: batch_x})
        # Display logs per step
        if i % display_step == 0 or i == 1:
            print('Step %i: Minibatch Loss: %f' % (i, l))
    # Testing
    # Encode and decode images from the test set and visualize their reconstruction
    n = 4
    canvas_orig = np.empty((28 * n, 28 * n))
    canvas_recon = np.empty((28 * n, 28 * n))
    for i in range(n):
        # MNIST test set
        batch_x, _ = mnist.test.next_batch(n)
        # Encode and decode the digit images
        g = sess.run(decoder_op, feed_dict={X: batch_x})
        # Display original images
        for j in range(n):
            # Draw the original digits
            canvas_orig[i * 28:(i + 1) * 28, j * 28:(j + 1) * 28] = \
                batch_x[j].reshape([28, 28])
        # Display reconstructed images
        for j in range(n):
            # Draw the reconstructed digits
            canvas_recon[i * 28:(i + 1) * 28, j * 28:(j + 1) * 28] = \
                g[j].reshape([28, 28])
    print("Original Images")
    plt.figure(figsize=(n, n))
    plt.imshow(canvas_orig, origin="upper", cmap="gray")
    plt.show()
    print("Reconstructed Images")
    plt.figure(figsize=(n, n))
    plt.imshow(canvas_recon, origin="upper", cmap="gray")
    plt.show()

References:
Analysis and Practice of TensorFlow Technology

Recommendations for machine learning job opportunities in Shanghai are welcome; contact me at qingxingfengzi.
