TensorFlow implements a denoising autoencoder (Additive Gaussian Noise Autoencoder)


For the principles of autoencoders, please refer to the blog post http://blog.csdn.net/xukaiwen_2016/article/details/70767518; readers already familiar with the principles can go straight to the code below.

First we import the relevant libraries: NumPy for numerical operations, the preprocessing module from scikit-learn for data preprocessing, and TensorFlow's MNIST tutorial module for the dataset.

import numpy as np
import sklearn.preprocessing as prep
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

We know that the most important task of an autoencoder is to learn, from limited data, the mapping matrix between the input layer and the hidden layer. This matrix must be initialized: for a deep learning model the initial weights should be neither too small nor too large, and are usually drawn from a uniform or Gaussian distribution of a suitable scale. The Xavier initialization method produces weights of exactly the right scale and is therefore commonly used.

def xavier_init(fan_in, fan_out, constant=1):
    low = -constant * np.sqrt(6.0 / (fan_in + fan_out))
    high = constant * np.sqrt(6.0 / (fan_in + fan_out))
    return tf.random_uniform((fan_in, fan_out),
                             minval=low, maxval=high, dtype=tf.float32)
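As a quick sanity check (this snippet is illustrative and not part of the original code), the values produced by xavier_init should fall inside the interval [-sqrt(6/(fan_in+fan_out)), +sqrt(6/(fan_in+fan_out))]; the sizes 784 and 200 below are chosen only for the example:

# Illustrative sanity check of xavier_init (assumes the imports above)
w = xavier_init(784, 200)                          # 784 inputs, 200 hidden units
bound = np.sqrt(6.0 / (784 + 200))
with tf.Session() as sess:
    vals = sess.run(w)
print(vals.shape)                                  # (784, 200)
print(vals.min() >= -bound, vals.max() <= bound)   # True True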
Next, we define an autoencoder class so that it can be conveniently reused; it consists of the __init__() constructor and several member functions.

The first is the __init__() constructor. Its parameters are: n_input (number of input units), n_hidden (number of hidden-layer units), transfer_function (hidden-layer activation function, softplus by default), optimizer (the optimizer, Adam by default), and scale (Gaussian noise coefficient, 0.1 by default). The _initialize_weights() function, defined later, creates the mapping matrices and bias vectors.

class AdditiveGaussianNoiseAutoencoder(object):
    def __init__(self, n_input, n_hidden, transfer_function=tf.nn.softplus,
                 optimizer=tf.train.AdamOptimizer(), scale=0.1):
        self.n_input = n_input
        self.n_hidden = n_hidden
        self.transfer = transfer_function
        self.scale = tf.placeholder(tf.float32)   # the noise scale is defined as a placeholder
        self.training_scale = scale
        network_weights = self._initialize_weights()
        self.weights = network_weights

        # Define the model: input layer, noisy hidden layer, output layer and the mappings between them
        self.x = tf.placeholder(tf.float32, [None, self.n_input])
        self.hidden = self.transfer(tf.add(tf.matmul(
            self.x + scale * tf.random_normal((n_input,)),
            self.weights['w1']), self.weights['b1']))
        self.reconstruction = tf.add(tf.matmul(self.hidden, self.weights['w2']),
                                     self.weights['b2'])

        # Define the loss function; squared error is used because the output activation is the identity
        self.cost = 0.5 * tf.reduce_sum(tf.pow(tf.subtract(self.reconstruction, self.x), 2.0))
        self.optimizer = optimizer.minimize(self.cost)   # the optimizer minimizes the loss

        init = tf.global_variables_initializer()
        self.sess = tf.Session()
        self.sess.run(init)

_initialize_weights() creates the mapping matrices and bias vectors, calling the xavier_init() function defined earlier.

    def _initialize_weights(self):
        # dictionary holding all weights
        all_weights = dict()
        # input layer to hidden layer matrix, initialized with xavier_init
        all_weights['w1'] = tf.Variable(xavier_init(self.n_input, self.n_hidden))
        # input layer to hidden layer bias vector
        all_weights['b1'] = tf.Variable(tf.zeros([self.n_hidden], dtype=tf.float32))
        # hidden layer to output layer matrix, which can be seen as the inverse mapping of w1
        all_weights['w2'] = tf.Variable(tf.zeros([self.n_hidden, self.n_input], dtype=tf.float32))
        # hidden layer to output layer bias vector
        all_weights['b2'] = tf.Variable(tf.zeros([self.n_input], dtype=tf.float32))
        return all_weights

partial_fit() trains the model on one batch: the Session runs two graph nodes, cost and optimizer, and feed_dict supplies the data, namely the input batch and the Gaussian noise coefficient.

    def partial_fit(self, X):
        cost, opt = self.sess.run((self.cost, self.optimizer),
                                  feed_dict={self.x: X, self.scale: self.training_scale})
        return cost
calc_total_cost() only evaluates cost, without executing the optimization step; it is used to measure the loss, for example on the test set.

    def calc_total_cost(self, X):
        return self.sess.run(self.cost,
                             feed_dict={self.x: X, self.scale: self.training_scale})
transform() returns the hidden-layer representation for a given input.

    def transform(self, X):
        return self.sess.run(self.hidden,
                             feed_dict={self.x: X, self.scale: self.training_scale})

generate() reconstructs the output-layer data from a given hidden-layer representation.

    def generate(self, hidden=None):
        if hidden is None:
            # if no hidden representation is given, sample one from a standard normal distribution
            hidden = np.random.normal(size=(1, self.n_hidden))
        return self.sess.run(self.reconstruction, feed_dict={self.hidden: hidden})
reconstruct() obtains the output-layer data directly from the input-layer data, which is equivalent to transform() followed by generate().

    def reconstruct(self, X):
        return self.sess.run(self.reconstruction,
                             feed_dict={self.x: X, self.scale: self.training_scale})
The next two functions return the learned mapping matrix and bias vector of the hidden layer.

    def getWeights(self):
        return self.sess.run(self.weights['w1'])

    def getBiases(self):
        return self.sess.run(self.weights['b1'])
The above completes the definition of the autoencoder class.
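Before wiring the class to real data, a quick smoke test on random inputs can confirm that the graph builds and trains. The snippet below is only a sketch and not part of the original article; the shapes (4 samples of 784 pixels, 200 hidden units) are chosen arbitrarily.

# Minimal smoke test on random data (illustrative sketch, not from the original article)
dummy = np.random.normal(size=(4, 784)).astype('float32')   # four fake "images"
ae = AdditiveGaussianNoiseAutoencoder(n_input=784, n_hidden=200)
print(ae.partial_fit(dummy))        # one training step; prints the batch cost
print(ae.transform(dummy).shape)    # (4, 200): hidden representation
print(ae.reconstruct(dummy).shape)  # (4, 784): reconstruction of the noisy input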

Next, read the MNIST dataset.

mnist = input_data.read_data_sets('mnist_data', one_hot=True)
standard_scale() standardizes the training and test images: using StandardScaler from sklearn.preprocessing, each pixel is scaled to zero mean and unit variance. The scaler is fit on the training data only and then applied to both the training and the test set.

def standard_scale(X_train, X_test):
    preprocessor = prep.StandardScaler().fit(X_train)
    X_train = preprocessor.transform(X_train)
    X_test = preprocessor.transform(X_test)
    return X_train, X_test
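To see that the scaler behaves as described, we can check (purely as an illustration, not part of the original code) that the standardized training images have roughly zero mean and unit variance per pixel; pixels that are constant in the training set, such as the black border, simply stay at 0.

# Illustrative check of standard_scale (assumes mnist has been loaded as above)
X_tr, X_te = standard_scale(mnist.train.images, mnist.test.images)
print(np.abs(X_tr.mean(axis=0)).max())   # close to 0 for every pixel
print(X_tr.std(axis=0).max())            # close to 1 for non-constant pixels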
get_random_block_from_data() randomly samples a block of images: it draws a random start index between 0 and len(data) - batch_size and takes the following batch_size images, so we do not have to feed all of MNIST in order during each epoch.

def get_random_block_from_data(data, batch_size):
    start_index = np.random.randint(0, len(data) - batch_size)
    return data[start_index:(start_index + batch_size)]

Finally comes the training. The following code should look familiar and is straightforward, so it is pasted directly without further comment:

X_train, X_test = standard_scale(mnist.train.images, mnist.test.images)
n_samples = int(mnist.train.num_examples)
training_epochs = 20
batch_size = 128
display_step = 1

autoencoder = AdditiveGaussianNoiseAutoencoder(n_input=784,
                                               n_hidden=200,  # number of hidden units (200 is a commonly used value here)
                                               transfer_function=tf.nn.softplus,
                                               optimizer=tf.train.AdamOptimizer(learning_rate=0.001),
                                               scale=0.01)

for epoch in range(training_epochs):
    avg_cost = 0.
    total_batch = int(n_samples / batch_size)
    # loop over the training data in batches
    for i in range(total_batch):
        batch_xs = get_random_block_from_data(X_train, batch_size)
        # feed one batch of training data
        cost = autoencoder.partial_fit(batch_xs)
        # accumulate the average loss
        avg_cost += cost / n_samples * batch_size
    # show the loss every display_step epochs
    if epoch % display_step == 0:
        print("Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(avg_cost))

print("Total cost: " + str(autoencoder.calc_total_cost(X_test)))
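Once training finishes, the member functions defined earlier can be used to inspect the model. The snippet below is a possible follow-up rather than part of the original article; it assumes the variable names from the training code above (autoencoder, X_test).

# Possible follow-up after training (illustrative sketch)
sample = X_test[:10]
hidden_codes = autoencoder.transform(sample)       # shape (10, n_hidden): learned features
reconstructions = autoencoder.reconstruct(sample)  # shape (10, 784): denoised reconstructions
print("mean squared reconstruction error on the sample:",
      np.mean((reconstructions - sample) ** 2))
W1 = autoencoder.getWeights()                      # input-to-hidden mapping matrix
print(W1.shape)                                    # (784, n_hidden)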

