Convolutional Autoencoder


The convolutional autoencoder combines the unsupervised learning scheme of the traditional autoencoder with the convolution and pooling operations of convolutional neural networks to extract features; stacking such layers then yields a deep neural network.
For details, see:

Masci J, Meier U, Cireşan D, et al. Stacked convolutional auto-encoders for hierarchical feature extraction[C]//International Conference on Artificial Neural Networks. Springer Berlin Heidelberg, 2011: 52-59.

Unsupervised Learning

Unsupervised learning can learn the characteristics of samples without labels.

The main purpose of unsupervised learning is to extract generally useful features from unlabelled data, to detect and remove input redundancies, and to preserve only the essential aspects of the data in robust and discriminative representations.
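As a toy illustration of redundancy removal (not from the article; plain PCA via NumPy's SVD, with made-up data), consider a 3-dimensional signal whose third coordinate nearly duplicates the first — an unsupervised method can discover that only two directions carry information:

```python
import numpy as np

rng = np.random.default_rng(0)

# 200 samples of a 3-D signal whose third coordinate is (almost) a copy
# of the first: one dimension is redundant.
z = rng.normal(size=(200, 2))
X = np.column_stack([z[:, 0], z[:, 1], z[:, 0] + 0.01 * rng.normal(size=200)])

# Unsupervised: SVD of the centered data finds the directions that matter,
# with no labels involved.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Nearly all variance lives in the first two components; the third is redundancy.
explained = s**2 / (s**2).sum()
print(explained.round(3))
```

An autoencoder generalizes this idea: a nonlinear hidden layer plays the role of the low-dimensional representation.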

The convolutional autoencoder was created to perform unsupervised extraction of invariant features by means of the convolution and pooling operations of convolutional neural networks.

Convolutional Neural Networks and Traditional Autoencoders

A convolutional neural network is a neural network built from convolution and pooling layers. Convolution acts as a filter, while pooling extracts invariant features. The network structure is shown in the following figure:
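The invariance that pooling provides can be seen in a few lines of NumPy (a toy sketch, not from the article): a feature activation that shifts by one pixel still lands in the same 2x2 pooling cell, so the pooled output does not change.

```python
import numpy as np

def max_pool_2x2(x):
    """Non-overlapping 2x2 max pooling over a 2-D array."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# A single strong "feature" activation at position (0, 0) ...
a = np.zeros((4, 4)); a[0, 0] = 1.0
# ... and the same feature shifted by one pixel.
b = np.zeros((4, 4)); b[1, 1] = 1.0

# Both activations fall into the same pooled cell, so the pooled
# representations are identical: pooling is locally shift-invariant.
print(max_pool_2x2(a)[0, 0], max_pool_2x2(b)[0, 0])  # 1.0 1.0
```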

The autoencoder is a neural network consisting of an input layer, a hidden layer, and an output layer; its structure is shown in the following diagram:

By learning the mapping between the input layer and the output layer, the network reconstructs each sample, and the hidden-layer activations serve as the extracted features.

Convolutional Autoencoder

Suppose we have K convolution kernels, each consisting of weights w^k and a bias b^k, and let h^k denote the k-th feature map of the convolution layer. Then

h^k = \sigma(x * w^k + b^k)

The feature maps h^k can then be used to reconstruct the input:

y = \sigma\left(\sum_k h^k * \hat{w}^k + c\right)

where \hat{w}^k is the flipped version of w^k and c is a reconstruction bias. Comparing the input sample x with the reconstruction y by Euclidean distance gives the cost to minimize; optimizing it with backpropagation (BP) yields a complete convolutional autoencoder (CAE):

E = \frac{1}{2n}\sum_i (x_i - y_i)^2
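These formulas can be checked numerically with a toy forward pass (a sketch only: the kernel count, image size, and the use of SciPy's convolve2d are illustrative choices, not from the article, and the weights are random rather than trained):

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

x = rng.random((8, 8))                 # toy input image
K = 4                                  # number of kernels (assumed)
w = rng.normal(size=(K, 3, 3)) * 0.1   # kernels w^k
b = np.zeros(K)                        # biases b^k
c = 0.0                                # reconstruction bias

# h^k = sigma(x * w^k + b^k): one feature map per kernel
h = np.array([sigmoid(convolve2d(x, w[k], mode='same') + b[k]) for k in range(K)])

# y = sigma(sum_k h^k * w_hat^k + c), with w_hat^k the spatially flipped kernel
w_hat = w[:, ::-1, ::-1]
y = sigmoid(sum(convolve2d(h[k], w_hat[k], mode='same') for k in range(K)) + c)

# E = 1/(2n) * sum_i (x_i - y_i)^2, the reconstruction cost to minimize
E = ((x - y) ** 2).sum() / (2 * x.size)
print(h.shape, y.shape, E)
```

With random weights E is of course large; training by backpropagation would drive it down.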

Code

This is a Keras-based implementation found on GitHub. The input shape and some filter counts did not survive copying, so representative values are substituted below and noted in comments:

from keras.layers import Input, Convolution2D, MaxPooling2D, UpSampling2D
from keras.models import Model
from keras.callbacks import TensorBoard


def getModel():
    # The input shape and the first filter counts were unreadable in the source;
    # a 48x48 grayscale input and 16/32/32 filters are assumed here
    # (48 -> 24 -> 12 -> 6 matches the 6x6x32 bottleneck noted below).
    input_img = Input(shape=(48, 48, 1))
    x = Convolution2D(16, 3, 3, activation='relu', border_mode='same', dim_ordering='tf')(input_img)
    x = MaxPooling2D((2, 2), border_mode='same', dim_ordering='tf')(x)
    x = Convolution2D(32, 3, 3, activation='relu', border_mode='same', dim_ordering='tf')(x)
    x = MaxPooling2D((2, 2), border_mode='same', dim_ordering='tf')(x)
    x = Convolution2D(32, 3, 3, activation='relu', border_mode='same', dim_ordering='tf')(x)
    encoded = MaxPooling2D((2, 2), border_mode='same', dim_ordering='tf')(x)  # 6x6x32 -- bottleneck

    x = UpSampling2D((2, 2), dim_ordering='tf')(encoded)
    x = Convolution2D(32, 3, 3, activation='relu', border_mode='same', dim_ordering='tf')(x)
    x = UpSampling2D((2, 2), dim_ordering='tf')(x)
    x = Convolution2D(16, 3, 3, activation='relu', border_mode='same', dim_ordering='tf')(x)
    # A third upsampling is added so the decoder mirrors the three pooling steps.
    x = UpSampling2D((2, 2), dim_ordering='tf')(x)
    decoded = Convolution2D(3, 3, 3, activation='relu', border_mode='same', dim_ordering='tf')(x)

    # Create model
    autoencoder = Model(input_img, decoded)
    return autoencoder


# Trains the model for ten epochs
def trainModel():
    # Load dataset (getDataSet is defined elsewhere in the original project;
    # it returns grayscale inputs and their colour targets)
    print("Loading dataset...")
    x_train_gray, x_train, x_test_gray, x_test = getDataSet()

    # Create model description
    print("Creating model...")
    model = getModel()
    model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])

    # Train model
    print("Training model...")
    model.fit(x_train_gray, x_train,
              nb_epoch=10,
              batch_size=148,
              shuffle=True,
              validation_data=(x_test_gray, x_test),
              callbacks=[TensorBoard(log_dir='/tmp/tb', histogram_freq=0, write_graph=False)])

    # Evaluate model on the training data
    print("Evaluating model...")
    score = model.evaluate(x_train_gray, x_train, verbose=0)
    print("%s: %.2f%%" % (model.metrics_names[1], score[1] * 100))

    # Serialize model to JSON
    print("Saving model...")
    model_json = model.to_json()
    with open("model.json", "w") as json_file:
        json_file.write(model_json)

    # Serialize weights to HDF5
    print("Saving weights...")
    model.save_weights("model.h5")
