Deep Learning (10): Keras Learning Notes


Keras Learning Notes

Original address: http://blog.csdn.net/hjimce/article/details/49095199

Author: hjimce

Keras is an open-source deep learning library that has recently become popular. It is built on top of Theano, and its usage is very similar to Torch7's; Keras can be thought of as a Python version of Torch7, and it is very handy for quickly building a CNN model. It also includes some algorithms from recent literature, such as batch normalization. The documentation and tutorials are very complete: the official site explains everything directly through easy-to-understand examples, which I personally found very approachable. Keras also supports saving the trained parameters and then loading them again to continue training.

First, the installation tutorial.

Because Keras is a layer on top of Theano, we need to install Theano first and then Keras. For installing Theano you can refer to my other blog post; since I set up Theano in a Windows environment there, the Keras installation below is also for Windows.

In fact, on Ubuntu it does not matter if Theano is not yet installed; just use the command: pip install keras. If all goes well, the system will install everything Keras needs, including Theano.

Windows installation steps:

1. Install Theano by following my other blog post, and test that it works.

2. Using Anaconda, open the Anaconda command window and enter the command: pip install keras

It is possible that the h5py installation fails at this point:


If so, go to the website http://www.lfd.uci.edu/~gohlke/pythonlibs/ to download h5py.

Open the page above and find h5py:


Choose the version appropriate for your machine. My computer runs a 64-bit system with Python 2.7, so I chose h5py-2.5.0-cp27-none-win_amd64.whl. Put the downloaded h5py file into the directory where Anaconda is installed.
Finally, enter the command in the Anaconda command window: pip install h5py-2.5.0-cp27-none-win_amd64.whl. After h5py is installed, reinstall Keras with the command: pip install keras.
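Once h5py is installed, you can sanity-check it by writing and reading a small HDF5 file in the same data/label layout that the training code in this post expects. The file name and array shapes below are illustrative choices, not from the original post:

```python
import h5py
import numpy as np

# Create a toy training file in the layout used later in this post:
# 'data' holds images of shape (nsamples, nchannels, height, width) and
# 'label' holds the regression targets of shape (nsamples, output_dimension).
x = np.random.rand(8, 3, 60, 60).astype('float32')
y = np.random.rand(8, 38).astype('float32')
with h5py.File('train_demo.h5', 'w') as f:
    f.create_dataset('data', data=x)
    f.create_dataset('label', data=y)

# Read it back the same way the training function does.
with h5py.File('train_demo.h5', 'r') as f:
    x2 = f['data'][:]
    y2 = f['label'][:]
```

If the shapes and values round-trip correctly, h5py is working.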


If the installation above still fails, try upgrading pip first by entering: python -m pip install --upgrade pip.

Second, the Keras usage tutorial.

Below are two simple examples; for more, go to the documentation tutorials on the official site, which are very detailed, unlike Caffe's sparse documentation. As the examples show, building a CNN model with Keras is quite loose and flexible. Keras provides us with two kinds of network models.

1. The sequential model, the linear network structure most commonly used for CNNs, is invoked as follows:

```python
# coding=utf-8
import os
import h5py
import numpy as np
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers.convolutional import Convolution2D, MaxPooling2D, ZeroPadding2D
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.normalization import BatchNormalization
from keras.optimizers import SGD, Adadelta, Adagrad, RMSprop
from keras.layers.advanced_activations import PReLU
from keras.callbacks import ModelCheckpoint, Callback


class LossHistory(Callback):
    def on_train_begin(self, logs={}):
        self.losses = []

    def on_batch_end(self, batch, logs={}):
        self.losses.append(logs.get('loss'))


def net_mouth():
    keras_model = Sequential()  # single-branch linear network model
    # Convolution layer with 20 output feature maps, kernel size 5*5.
    # The network input is a 3-channel 60*60 picture.
    keras_model.add(Convolution2D(20, 5, 5, input_shape=(3, 60, 60)))
    # Activation layer
    keras_model.add(Activation('relu'))
    # Max-pooling layer
    keras_model.add(MaxPooling2D(pool_size=(2, 2)))
    # Convolution layer: 40 feature maps, kernel size 5*5
    keras_model.add(Convolution2D(40, 5, 5))
    keras_model.add(Activation('relu'))
    keras_model.add(MaxPooling2D(pool_size=(2, 2)))
    keras_model.add(Convolution2D(60, 3, 3))  # filter count lost in transcription; 60 assumed
    keras_model.add(Activation('relu'))
    keras_model.add(MaxPooling2D(pool_size=(2, 2)))
    keras_model.add(Convolution2D(80, 3, 3))  # filter count lost in transcription; 80 assumed
    keras_model.add(Activation('relu'))
    # Flatten for the fully connected layers
    keras_model.add(Flatten())
    # Fully connected layer; the number of neurons is 1000
    keras_model.add(Dense(1000))
    keras_model.add(Activation('relu'))
    keras_model.add(Dense(500))  # layer size lost in transcription; 500 assumed
    keras_model.add(Activation('relu'))
    keras_model.add(Dense(38))  # output size lost in transcription; 38 assumed
    keras_model.add(Activation('tanh'))
    # Optimize iteratively with the Adam algorithm; the loss function uses
    # the mean squared error formula
    keras_model.compile(loss='mean_squared_error', optimizer='adam')
    return keras_model


keras_model = net_mouth()
# Save the parameters with the smallest validation error: as soon as the
# validation error drops, the weights are saved immediately.
checkpointer = ModelCheckpoint(filepath='mouth.hdf5', verbose=1, save_best_only=True)
history = LossHistory()
# Training. For a CNN, the network input x has shape
# (nsamples, nchannels, height, width) and y has shape
# (nsamples, output_dimension); both are assumed to be loaded beforehand.
keras_model.fit(x, y, batch_size=128, nb_epoch=100, shuffle=True, verbose=2,
                show_accuracy=True, validation_split=0.1,
                callbacks=[checkpointer, history])
```
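With Keras's default 'valid' border mode, each 5*5 kernel shrinks a feature map by 4 pixels and each 2*2 max-pool halves it, so the spatial sizes flowing through the network above can be checked with a few lines of plain Python arithmetic (no Keras needed):

```python
def conv_out(size, kernel):
    # 'valid' convolution: output = input - kernel + 1
    return size - kernel + 1

def pool_out(size, pool=2):
    # non-overlapping max pooling halves the size (integer division)
    return size // pool

s = 60               # input pictures are 60*60
s = conv_out(s, 5)   # conv 5*5 -> 56
s = pool_out(s)      # pool 2*2 -> 28
s = conv_out(s, 5)   # conv 5*5 -> 24
s = pool_out(s)      # pool 2*2 -> 12
s = conv_out(s, 3)   # conv 3*3 -> 10
s = pool_out(s)      # pool 2*2 -> 5
s = conv_out(s, 3)   # conv 3*3 -> 3
print(s)             # final feature maps are 3*3
```

So Flatten() produces nfilters * 3 * 3 values feeding the 1000-neuron fully connected layer.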
2. The graph model, in which a network structure is wired together by naming nodes and their connections:

```python
# coding=utf-8
import os
import h5py
import numpy as np
import matplotlib.pyplot as plt
from keras.models import Graph
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.layers.core import Dense, Activation, Flatten, Dropout
import keras.optimizers
from keras.callbacks import ModelCheckpoint, Callback

np.random.seed(0)  # seed value lost in transcription; 0 is a placeholder


class LossHistory(Callback):
    def on_train_begin(self, logs={}):
        self.losses = []

    def on_batch_end(self, batch, logs={}):
        self.losses.append(logs.get('loss'))


# Graph model. The example below is a CNN with locally unshared weights,
# used to predict the landmark positions of the five facial features.
def second_net_graph():
    keras_model = Graph()
    # Network input: a 3-channel 60*60 picture. In a graph model every node
    # has a name; we name the first, input, node 'input'.
    keras_model.add_input(name='input', input_shape=(3, 60, 60))
    # First convolution node, named 'conv1'; it is connected to the input
    # node, which serves as its input.
    keras_model.add_node(layer=Convolution2D(20, 5, 5),  # filter count lost in transcription; 20 assumed
                         name='conv1', input='input')
    # Activation node, named 'relu1', connected to the 'conv1' node
    keras_model.add_node(layer=Activation('relu'), name='relu1', input='conv1')
    keras_model.add_node(layer=MaxPooling2D(pool_size=(2, 2)), name='pool1', input='relu1')
    keras_model.add_node(layer=Convolution2D(40, 5, 5),  # filter count lost in transcription; 40 assumed
                         name='conv2', input='pool1')
    keras_model.add_node(layer=Activation('relu'), name='relu2', input='conv2')
    keras_model.add_node(layer=MaxPooling2D(pool_size=(2, 2)), name='pool2', input='relu2')
    keras_model.add_node(layer=Convolution2D(60, 3, 3),  # filter count lost in transcription; 60 assumed
                         name='conv3', input='pool2')
    keras_model.add_node(layer=Activation('relu'), name='relu3', input='conv3')
    keras_model.add_node(layer=MaxPooling2D(pool_size=(2, 2)), name='pool3', input='relu3')
    # Up to here the weights are shared along a single path. Now we split the
    # network into several branches, each predicting the landmark positions
    # of one facial feature.
    # Branch 1: node named 'brow_input', connected to 'pool3', its input node
    keras_model.add_node(layer=Convolution2D(80, 3, 3, activation='relu'),  # filter count lost; 80 assumed
                         name='brow_input', input='pool3')
    # Branch 2: node named 'eye_input', whose input node is 'pool3'
    keras_model.add_node(layer=Convolution2D(80, 3, 3, activation='relu'),
                         name='eye_input', input='pool3')
    # Branch 3
    keras_model.add_node(layer=Convolution2D(80, 3, 3, activation='relu'),
                         name='nose_input', input='pool3')
    # Branch 4
    keras_model.add_node(layer=Convolution2D(80, 3, 3, activation='relu'),
                         name='mouth_input', input='pool3')
    # Flatten each branch
    keras_model.add_node(layer=Flatten(), name='brow_flatten', input='brow_input')
    keras_model.add_node(layer=Flatten(), name='eye_flatten', input='eye_input')
    keras_model.add_node(layer=Flatten(), name='nose_flatten', input='nose_input')
    keras_model.add_node(layer=Flatten(), name='mouth_flatten', input='mouth_input')
    # A fully connected layer for each branch
    keras_model.add_node(layer=Dense(500, activation='relu'), name='brow_dense1', input='brow_flatten')
    keras_model.add_node(layer=Dense(500, activation='relu'), name='eye_dense1', input='eye_flatten')
    keras_model.add_node(layer=Dense(500, activation='relu'), name='nose_dense1', input='nose_flatten')
    keras_model.add_node(layer=Dense(500, activation='relu'), name='mouth_dense1', input='mouth_flatten')
    # The output of each branch
    keras_model.add_node(layer=Dense(20, activation='tanh'), name='brow_dense2', input='brow_dense1')
    keras_model.add_node(layer=Dense(24, activation='tanh'), name='eye_dense2', input='eye_dense1')
    keras_model.add_node(layer=Dense(18, activation='tanh'), name='nose_dense2', input='nose_dense1')
    keras_model.add_node(layer=Dense(38, activation='tanh'), name='mouth_dense2', input='mouth_dense1')
    # Collect the vectors produced by each branch, referenced by node name,
    # into the output node
    keras_model.add_output(name='output',
                           inputs=['brow_dense2', 'nose_dense2', 'eye_dense2', 'mouth_dense2'])
    # The optimizer is Adam; the loss function is the mean squared error
    keras_model.compile(optimizer='adam', loss={'output': 'mse'})
    return keras_model


# Read the data and train
def train(data_file='../../h5py/train.h5'):
    print 'read data'
    f = h5py.File(data_file, 'r')
    x = f['data'][:]
    x = np.asarray(x, dtype='float32')
    y = f['label'][:]
    y = np.asarray(y, dtype='float32')
    print x.shape
    print y.shape
    print 'train'
    keras_model = second_net_graph()
    checkpointer = ModelCheckpoint(filepath='graph51point.hdf5', verbose=1, save_best_only=True)
    history = LossHistory()
    history = keras_model.fit({'input': x, 'output': y}, shuffle=True, verbose=2,
                              validation_split=0.1,
                              callbacks=[checkpointer, history])


train()
```
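Because every output branch ends in a tanh activation, the landmark coordinates stored in 'label' need to be scaled into tanh's range before training. The original post does not show its normalization; a common scheme (an assumption here, not the author's stated method) maps pixel coordinates in [0, 60] linearly to [-1, 1]:

```python
import numpy as np

IMG_SIZE = 60  # the pictures are 60*60, as in the examples above

def normalize_landmarks(coords, img_size=IMG_SIZE):
    # Map pixel coordinates in [0, img_size] linearly into [-1, 1],
    # matching the range of the tanh output units.
    return coords / (img_size / 2.0) - 1.0

def denormalize_landmarks(y, img_size=IMG_SIZE):
    # Invert the mapping to recover pixel coordinates from predictions.
    return (y + 1.0) * (img_size / 2.0)

coords = np.array([0.0, 30.0, 45.0, 60.0])
y = normalize_landmarks(coords)       # -> [-1.0, 0.0, 0.5, 1.0]
back = denormalize_landmarks(y)       # recovers the original coordinates
```

The same inverse mapping turns the network's tanh outputs back into pixel positions at prediction time.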

Personal experience: Keras is very convenient to use, and at the same time its source code is very easy to read. When we need to modify an algorithm, we can read the underlying source directly, which is nowhere near as troublesome as digging into Caffe's internals. Personally I feel Caffe's only advantage is the large number of openly available models and source code. Keras is different: because it is pure Python, deep learning becomes much easier, although some of the papers at CVPR release source code written in Caffe. I hope Keras will come to support importing Caffe models, as Torch does.

Author: hjimce  Time: 2015.10.1  Contact QQ: 1393852684. Original article; when reprinting, please keep the original address, author, and other information.
