Learning notes TF056: TensorFlow MNIST, dataset, classification, visualization


MNIST (Mixed National Institute of Standards and Technology, http://yann.lecun.com/exdb/mnist/) is an entry-level computer vision dataset of handwritten digits collected from American high school students and Census Bureau employees. The training set has 60,000 images and the test set has 10,000 images. The digits are pre-processed: size-normalized, centered, and stored as fixed 28x28 images. The dataset is small, training is fast, and convergence is good.

The MNIST dataset is a subset of the NIST dataset and consists of 4 files: train-images-idx3-ubyte.gz (training set images), train-labels-idx1-ubyte.gz (training set labels, 28,881 bytes), t10k-images-idx3-ubyte.gz (test set images, 1,648,877 bytes), and t10k-labels-idx1-ubyte.gz (test set labels). In the test set, the first 5,000 samples are taken from the original NIST training set and the last 5,000 from the original NIST test set.

Training set label file train-labels-idx1-ubyte format: offset, type, value, description. The fields are a magic number (MSB first), the number of items, and the labels.
MSB (most significant bit) is the highest-weighted bit of a binary number and sits leftmost; "MSB first" means the most significant byte is stored first (big-endian). A magic number is a constant written at the start of a file, like the constant in the header of the ELF format (Executable and Linkable Format); a reader checks it against the expected value to confirm the file type and detect corruption.

Training set image file train-images-idx3-ubyte format: magic number, number of images, number of rows, number of columns, pixels.
Pixel values range from 0 to 255: 0 indicates the background color (white) and 255 indicates the foreground color (black).

Test set label file t10k-labels-idx1-ubyte format: magic number (MSB first), number of items, labels.

Test set image file t10k-images-idx3-ubyte format: magic number, number of images, number of rows, number of columns, pixels.
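To make the idx header layout above concrete, here is a minimal sketch (not part of the TensorFlow tutorial code) that reads the headers with Python's struct module. The gzip paths are assumptions (the default download location used later in these notes), and the magic-number values are from the format description on the MNIST page.

import gzip
import struct

# Read the label file header: magic number (MSB first), number of items, then labels.
with gzip.open('/tmp/tensorflow/mnist/input_data/train-labels-idx1-ubyte.gz', 'rb') as f:
    magic, num_items = struct.unpack('>II', f.read(8))  # '>' = MSB first (big-endian)
    labels = f.read(num_items)                          # one unsigned byte per label
    print(magic, num_items, labels[0])                  # magic should be 2049

# Read the image file header: magic number, number of images, rows, columns, then pixels.
with gzip.open('/tmp/tensorflow/mnist/input_data/train-images-idx3-ubyte.gz', 'rb') as f:
    magic, num_images, rows, cols = struct.unpack('>IIII', f.read(16))
    first_image = f.read(rows * cols)                   # pixel values 0-255
    print(magic, num_images, rows, cols)                # magic should be 2051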

tensorflow-1.1.0/tensorflow/examples/tutorials/mnist contains mnist_softmax.py (softmax regression training), fully_connected_feed.py (training by feeding data), mnist_with_summaries.py (visualization of the training process with TensorBoard), and mnist_softmax_xla.py (the same model on the XLA framework).

MNIST classification problem.

Softmax regression handles classification with two or more categories; it generalizes the logistic regression model, which is widely used for classification. The code is in tensorflow-1.1.0/tensorflow/examples/tutorials/mnist/mnist_softmax.py.
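As a reminder of what the softmax output layer computes, here is a minimal NumPy sketch (not part of the tutorial code); the example scores are arbitrary.

import numpy as np

def softmax(logits):
    # Subtract the max for numerical stability; the result is mathematically unchanged.
    exps = np.exp(logits - np.max(logits))
    return exps / np.sum(exps)

scores = np.array([2.0, 1.0, 0.1])
print(softmax(scores))  # roughly [0.659 0.242 0.099], sums to 1.0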

Load data. The input_data.py file is imported, and read_data_sets from tensorflow.contrib.learn loads the data. FLAGS.data_dir is the MNIST data path and can be customized. one_hot labels: an array of length n in which exactly one element is 1.0 and all the others are 0.0. The output layer applies softmax to produce a probability distribution, so the input labels must also be in probability-distribution form for the cross entropy to be computed.
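For illustration, a minimal sketch of what a one_hot label looks like (the digit 3 is just an example value; plain NumPy, not tutorial code):

import numpy as np

label = 3
one_hot = np.zeros(10, dtype=np.float32)
one_hot[label] = 1.0
print(one_hot)  # [0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]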

Build a regression model. Take the ground-truth input, compute the fitted prediction with the softmax function, and define the loss function and optimizer. Gradient descent minimizes the cross entropy with a learning rate of 0.5, via tf.train.GradientDescentOptimizer.

Train the model. Initialize the variables and launch the model in a session. The model is trained in a loop of 1000 steps; each step randomly grabs 100 data points and feeds them into the placeholders. This is stochastic training, i.e. stochastic gradient descent (SGD): each step performs a gradient descent update on a small random portion of the training data. BGD (batch gradient descent) computes over all training data at every step. SGD still learns the overall characteristics of the dataset while speeding up the training process.

Evaluate the model. tf.argmax(y, 1) returns the label the model predicts for any input x, and tf.argmax(y_, 1) is the correct label. tf.equal checks whether the predicted value matches the true value; the boolean predictions are cast to floating point and averaged to obtain the accuracy.
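To see these evaluation ops in isolation from the full script below, here is a minimal sketch with two hand-made predictions (the values are illustrative, using the same TensorFlow 1.x ops as the tutorial):

import tensorflow as tf

# Two example predictions (softmax outputs) and their one-hot true labels.
y = tf.constant([[0.1, 0.8, 0.1],
                 [0.6, 0.3, 0.1]])
y_ = tf.constant([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])

correct = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))    # [True, False]
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))  # 0.5

with tf.Session() as sess:
    print(sess.run(accuracy))  # 0.5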

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import argparse
import sys

from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf

FLAGS = None


def main(_):
    # Import data: load the MNIST dataset with one-hot labels
    mnist = input_data.read_data_sets(FLAGS.data_dir, one_hot=True)

    # Create the model: define the regression model
    x = tf.placeholder(tf.float32, [None, 784])
    W = tf.Variable(tf.zeros([784, 10]))
    b = tf.Variable(tf.zeros([10]))
    y = tf.matmul(x, W) + b  # predicted value (logits)

    # Define loss and optimizer
    y_ = tf.placeholder(tf.float32, [None, 10])  # placeholder for the true labels
    # tf.nn.softmax_cross_entropy_with_logits computes the difference between the
    # predicted value y and the true value y_, then the mean is taken
    cross_entropy = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
    # SGD optimizer with learning rate 0.5
    train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

    # InteractiveSession() creates an interactive TensorFlow session and makes it the
    # default session, so ops can be run with tf.Tensor.eval / tf.Operation.run
    sess = tf.InteractiveSession()
    tf.global_variables_initializer().run()

    # Train the model: 1000 steps, each on a random batch of 100 examples
    for _ in range(1000):
        batch_xs, batch_ys = mnist.train.next_batch(100)
        sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

    # Test trained model: evaluate accuracy on the test set
    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    print(sess.run(accuracy, feed_dict={x: mnist.test.images,
                                        y_: mnist.test.labels}))


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--data_dir', type=str,
                        default='/tmp/tensorflow/mnist/input_data',
                        help='Directory for storing input data')
    FLAGS, unparsed = parser.parse_known_args()
    tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)

Visualize the training process. tensorflow-1.1.0/tensorflow/examples/tutorials/mnist/mnist_with_summaries.py.
TensorBoard visualizes the training process: the training script records structured summary data, a separate local server process listens on port 6006, the browser requests pages from that server, and TensorBoard analyzes the recorded data to draw statistical charts and display the computation graph.
Run the script: python mnist_with_summaries.py.
The training process data is stored in the /tmp/tensorflow/mnist directory by default, which can be changed with the --log_dir command-line flag. Running the tree command there shows input_data (the downloaded training data) and logs (the training result logs, whose train subdirectory holds the training-run logs). Run the tensorboard command and open the browser to view the training visualization; the --logdir flag gives the log file path: tensorboard --logdir=/tmp/tensorflow/mnist/logs/mnist_with_summaries. The same path is the one given to the FileWriter.

# sess.graph definition for visualization
file_writer = tf.summary.FileWriter('/tmp/tensorflow/mnist/logs/mnist_with_summaries', sess.graph)

Open the served address in a browser to enter the visualization interface.

Visualization.

variable_summaries attaches several summary descriptions to a tensor; the SCALARS panel displays the mean, standard deviation, maximum, and minimum recorded for each layer.
When building the network model, the weights and biases of every layer call variable_summaries, and each layer uses tf.summary.histogram to record the tensors before and after the activation function; these are shown in the HISTOGRAMS panel.
Accuracy and cross entropy are recorded as scalars and shown in the SCALARS panel.

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import argparse
import os
import sys

import tensorflow as tf

from tensorflow.examples.tutorials.mnist import input_data

FLAGS = None


def train():
    # Import data
    mnist = input_data.read_data_sets(FLAGS.data_dir,
                                      one_hot=True,
                                      fake_data=FLAGS.fake_data)

    sess = tf.InteractiveSession()

    # Create a multilayer model.
    # Input placeholders
    with tf.name_scope('input'):
        x = tf.placeholder(tf.float32, [None, 784], name='x-input')
        y_ = tf.placeholder(tf.float32, [None, 10], name='y-input')

    with tf.name_scope('input_reshape'):
        image_shaped_input = tf.reshape(x, [-1, 28, 28, 1])
        tf.summary.image('input', image_shaped_input, 10)

    # We can't initialize these variables to 0 - the network will get stuck.
    def weight_variable(shape):
        """Create a weight variable with appropriate initialization."""
        initial = tf.truncated_normal(shape, stddev=0.1)
        return tf.Variable(initial)

    def bias_variable(shape):
        """Create a bias variable with appropriate initialization."""
        initial = tf.constant(0.1, shape=shape)
        return tf.Variable(initial)

    def variable_summaries(var):
        """Attach a lot of summaries to a Tensor (for TensorBoard visualization)."""
        with tf.name_scope('summaries'):
            mean = tf.reduce_mean(var)
            tf.summary.scalar('mean', mean)  # mean
            with tf.name_scope('stddev'):
                stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean)))
            tf.summary.scalar('stddev', stddev)  # standard deviation
            tf.summary.scalar('max', tf.reduce_max(var))  # maximum value
            tf.summary.scalar('min', tf.reduce_min(var))  # minimum value
            tf.summary.histogram('histogram', var)

    def nn_layer(input_tensor, input_dim, output_dim, layer_name, act=tf.nn.relu):
        # Adding a name scope ensures logical grouping of the layers in the graph.
        with tf.name_scope(layer_name):
            # This Variable will hold the state of the weights for the layer
            with tf.name_scope('weights'):
                weights = weight_variable([input_dim, output_dim])
                variable_summaries(weights)
            with tf.name_scope('biases'):
                biases = bias_variable([output_dim])
                variable_summaries(biases)
            with tf.name_scope('Wx_plus_b'):
                preactivate = tf.matmul(input_tensor, weights) + biases
                tf.summary.histogram('pre_activations', preactivate)  # pre-activation histogram
            activations = act(preactivate, name='activation')
            tf.summary.histogram('activations', activations)  # post-activation histogram
            return activations

    hidden1 = nn_layer(x, 784, 500, 'layer1')

    with tf.name_scope('dropout'):
        keep_prob = tf.placeholder(tf.float32)
        tf.summary.scalar('dropout_keep_probability', keep_prob)
        dropped = tf.nn.dropout(hidden1, keep_prob)

    # Do not apply softmax activation yet, see below.
    y = nn_layer(dropped, 500, 10, 'layer2', act=tf.identity)

    with tf.name_scope('cross_entropy'):
        diff = tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y)
        with tf.name_scope('total'):
            cross_entropy = tf.reduce_mean(diff)
    tf.summary.scalar('cross_entropy', cross_entropy)  # cross entropy

    with tf.name_scope('train'):
        train_step = tf.train.AdamOptimizer(FLAGS.learning_rate).minimize(
            cross_entropy)

    with tf.name_scope('accuracy'):
        with tf.name_scope('correct_prediction'):
            correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
        with tf.name_scope('accuracy'):
            accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    tf.summary.scalar('accuracy', accuracy)  # accuracy

    # Merge all the summaries and write them out to
    # /tmp/tensorflow/mnist/logs/mnist_with_summaries (by default)
    merged = tf.summary.merge_all()
    train_writer = tf.summary.FileWriter(FLAGS.log_dir + '/train', sess.graph)
    test_writer = tf.summary.FileWriter(FLAGS.log_dir + '/test')
    tf.global_variables_initializer().run()

    def feed_dict(train):
        """Make a TensorFlow feed_dict: maps data onto Tensor placeholders."""
        if train or FLAGS.fake_data:
            xs, ys = mnist.train.next_batch(100, fake_data=FLAGS.fake_data)
            k = FLAGS.dropout
        else:
            xs, ys = mnist.test.images, mnist.test.labels
            k = 1.0
        return {x: xs, y_: ys, keep_prob: k}

    for i in range(FLAGS.max_steps):
        if i % 10 == 0:  # Record summaries and test-set accuracy
            summary, acc = sess.run([merged, accuracy], feed_dict=feed_dict(False))
            test_writer.add_summary(summary, i)
            print('Accuracy at step %s: %s' % (i, acc))
        else:  # Record train set summaries, and train
            if i % 100 == 99:  # Record execution stats
                run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
                run_metadata = tf.RunMetadata()
                summary, _ = sess.run([merged, train_step],
                                      feed_dict=feed_dict(True),
                                      options=run_options,
                                      run_metadata=run_metadata)
                train_writer.add_run_metadata(run_metadata, 'step%03d' % i)
                train_writer.add_summary(summary, i)
                print('Adding run metadata for', i)
            else:  # Record a summary
                summary, _ = sess.run([merged, train_step], feed_dict=feed_dict(True))
                train_writer.add_summary(summary, i)
    train_writer.close()
    test_writer.close()


def main(_):
    if tf.gfile.Exists(FLAGS.log_dir):
        tf.gfile.DeleteRecursively(FLAGS.log_dir)
    tf.gfile.MakeDirs(FLAGS.log_dir)
    train()


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--fake_data', nargs='?', const=True, type=bool,
                        default=False,
                        help='If true, uses fake data for unit testing.')
    parser.add_argument('--max_steps', type=int, default=1000,
                        help='Number of steps to run trainer.')
    parser.add_argument('--learning_rate', type=float, default=0.001,
                        help='Initial learning rate')
    parser.add_argument('--dropout', type=float, default=0.9,
                        help='Keep probability for training dropout.')
    parser.add_argument(
        '--data_dir',
        type=str,
        default=os.path.join(os.getenv('TEST_TMPDIR', '/tmp'),
                             'tensorflow/mnist/input_data'),
        help='Directory for storing input data')
    parser.add_argument(
        '--log_dir',
        type=str,
        default=os.path.join(os.getenv('TEST_TMPDIR', '/tmp'),
                             'tensorflow/mnist/logs/mnist_with_summaries'),
        help='Summaries log directory')
    FLAGS, unparsed = parser.parse_known_args()
    tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)

References:
Analysis and Practice of TensorFlow Technology

Recommendations for machine learning job opportunities in Shanghai are welcome; contact: qingxingfengzi.
