First, a bit of venting: this is all based on Theano, and Keras was such a pain to install under Windows that I gave up and set up a dual-boot system instead. Only now do I realize how powerful Linux is; no wonder the big companies all develop on it.
Now a quick introduction to the framework: everyone knows deep neural networks are everywhere these days. In Python, people first wrote neural networks with the Theano framework, but it later turned out that building networks with Keras is much easier than with raw Theano, which makes it well suited for beginners.
The corresponding English documentation is here: http://keras.io/#installation; if your English is good, you can read it directly.

I. Installation
There are two ways:
1. On Ubuntu, install directly with sudo pip install keras
2. Or first install the following dependencies:
numpy, scipy
pyyaml
Theano
HDF5 and h5py
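For example, the dependencies can be pulled in roughly like this (a sketch; the apt package name for the HDF5 headers is an assumption that may vary by Ubuntu release):

sudo pip install numpy scipy pyyaml theano
sudo apt-get install libhdf5-dev    # HDF5 headers (assumed package name)
sudo pip install h5py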
Once you have the dependencies installed, clone the repo:
git clone https://github.com/fchollet/keras.git

II. Module Introduction
1. Optimizers:
This is where you choose the optimization method; sgd, adagrad, adadelta, rmsprop, and adam are all available.
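As a minimal sketch (jumping ahead to the compile() call described in section 3, and assuming a model built as in that section already exists), an optimizer can be passed either by name or as a configured object:

from keras.optimizers import SGD

# pass the optimizer by name, with its default settings
model.compile(loss='categorical_crossentropy', optimizer='adam', class_mode='categorical')

# or configure it explicitly as an object
sgd = SGD(lr=0.01, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd, class_mode='categorical')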
2. Objectives: this defines what form of error to minimize. The options are:
mean_squared_error / mse: mean squared error
mean_absolute_error / mae: mean absolute error
mean_absolute_percentage_error / mape: mean absolute percentage error
mean_squared_logarithmic_error / msle: mean squared logarithmic error
squared_hinge
hinge
binary_crossentropy: also known as logloss
categorical_crossentropy: to use this objective, the labels must be given as binary (one-hot) arrays
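For example, integer class labels can be converted into the binary-array form that categorical_crossentropy expects; a sketch using keras.utils.np_utils, assuming three classes:

import numpy as np
from keras.utils import np_utils

y = np.array([0, 2, 1, 2])         # integer class labels
Y = np_utils.to_categorical(y, 3)  # binary (one-hot) arrays
# Y is now [[1, 0, 0], [0, 0, 1], [0, 1, 0], [0, 0, 1]]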
3. Model
model = keras.models.Sequential(): initializes a neural network.
model.add(layer): adds one layer to the network.
compile(optimizer, loss, class_mode="categorical"):
Parameters:
optimizer: str (name of an optimization function) or an optimizer object. See Optimizers above.
loss: str (name of an objective function) or an objective function. See Objectives above.
class_mode: one of "categorical", "binary". Needed to compute classification accuracy or to invoke the predict_classes method.
theano_mode: a theano.compile.mode.Mode instance (see the Theano documentation).
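Putting this together, a minimal build-and-compile sketch (the layer sizes are arbitrary; Dense(input_dim, output_dim) is the two-argument form used by the old Keras shown in this post):

from keras.models import Sequential
from keras.layers.core import Dense, Activation

model = Sequential()
model.add(Dense(4, 16, init='uniform'))   # 4 inputs -> 16 hidden units
model.add(Activation('relu'))
model.add(Dense(16, 3, init='uniform'))   # 16 hidden units -> 3 classes
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', class_mode='categorical')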
fit(X, y, batch_size=128, nb_epoch=100, verbose=1, validation_split=0., validation_data=None, shuffle=True, show_accuracy=False): trains the model for a fixed number of epochs.
Return value: a record of the training loss values, plus the validation loss values or accuracy where applicable.
Parameters:
X: data.
y: labels.
batch_size: int. Number of samples per gradient update.
nb_epoch: int. Number of training epochs.
verbose: 0 for no logging, 1 for progress-bar logging, 2 for one log line per epoch.
validation_split: float (0 < x < 1). Fraction of the data to hold out as a validation set.
validation_data: tuple (X, y) to use as the validation set. Overrides validation_split.
shuffle: boolean. Whether to shuffle the samples at each epoch.
show_accuracy: boolean. Whether to display the classification accuracy at each epoch.
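A sketch of a typical call (assuming X_train and y_train are numpy arrays prepared as above):

model.fit(X_train, y_train,
          batch_size=20, nb_epoch=80,
          validation_split=0.1,    # hold out 10% of the data for validation
          shuffle=True, show_accuracy=True, verbose=1)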
evaluate(X, y, batch_size=128, show_accuracy=False, verbose=1): measures the model's performance on held-out data.
Return: the loss value on the data (plus the accuracy if show_accuracy=True).
Arguments: as in the fit function above; verbose here acts as a binary flag (progress bar or nothing).
predict(X, batch_size=128, verbose=1):
Return: an array of predictions for the test data.
Arguments: as in fit.
predict_classes(X, batch_size=128, verbose=1): returns class predictions for the test data.
Return: an array of predicted labels for the test data.
Arguments: as in fit.
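For example (a sketch, assuming X_test and y_test exist):

score = model.evaluate(X_test, y_test, show_accuracy=True)  # loss, plus accuracy
probs = model.predict(X_test)             # raw network outputs per sample
classes = model.predict_classes(X_test)   # predicted classes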
train(X, y, accuracy=False): performs a single gradient update on one batch of data.
Return: the loss on the batch, or the tuple (loss_on_batch, accuracy_on_batch) if accuracy=True.
test(X, y, accuracy=False): computes the model's performance on one batch of data.
Return: the loss on the batch, or the tuple (loss_on_batch, accuracy_on_batch) if accuracy=True.
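These two make it possible to write a manual training loop; a minimal sketch (the epoch count and batch slicing are assumptions for illustration):

nb_epoch, batch_size = 10, 20
for epoch in range(nb_epoch):
    for i in range(0, len(X_train), batch_size):
        # one gradient update per slice of the training data
        loss = model.train(X_train[i:i + batch_size], y_train[i:i + batch_size])
    print 'epoch %d, last batch loss: %f' % (epoch, loss)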
save_weights(fname): saves the weights of all layers to an HDF5 file.
load_weights(fname): loads the model weights stored by save_weights. It can only load them into a model with the same architecture.
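For example (a sketch; the filename is arbitrary):

model.save_weights('iris_weights.hdf5')  # write all layer weights to an HDF5 file
model.load_weights('iris_weights.hdf5')  # restore them into an identically structured model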
Finally, here is a small program I wrote:

# coding: utf-8
'''
Created on 2015-9-12
@author: zzq2015
'''
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
import scipy.io as sio
import numpy as np

model = Sequential()
model.add(Dense(4, 200, init='uniform'))    # 4 input features
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(200, 200, init='uniform'))  # hidden layers (sizes assumed; garbled in original)
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(200, 200, init='uniform'))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(200, 200, init='uniform'))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(200, 3, init='uniform'))    # 3 output classes
model.add(Activation('softmax'))

model.compile(loss='binary_crossentropy', optimizer='adam', class_mode='binary')

matfn = u'/media/zzq2015/learning/python/da/kerastrain.mat'
data = sio.loadmat(matfn)
data = np.array(data.get('iris_train'))

trainDa = data[:80, :4]  # first 80 rows: training features
trainBl = data[:80, 4:]  # training labels
testDa = data[80:, :4]   # remaining rows: test features
testBl = data[80:, 4:]   # test labels

model.fit(trainDa, trainBl, nb_epoch=80, batch_size=20)
print model.evaluate(testDa, testBl, show_accuracy=True)
print model.predict_classes(testDa)
print 'real label:\n'
print testBl
The output is as follows:
Epoch
20/80 [======>.......................] - ETA: 0s - loss: 0.1042
40/80 [==============>...............] - ETA: 0s - loss: 0.0857
60/80 [=====================>........] - ETA: 0s - loss: 0.0826
80/80 [==============================] - 0s - loss: 0.1216
10/10 [==============================] - 0s
[0.15986641560148043, 1.0]
10/10 [==============================] - 0s
[[0 0 1]
 [0 0 1]
 [0 0 1]
 [0 0 1]
 [0 0 1]
 [0 0 1]
 [0 1 0]
 [1 0 0]
 [1 0 0]
 [1 0 0]]
real label:
[[0. 0. 1.]
 [0. 0. 1.]
 [0. 0. 1.]
 [0. 0. 1.]
 [0. 0. 1.]
 [0. 0. 1.]
 [0. 0. 1.]
 [0. 0. 1.]
 [0. 0. 1.]
 [0. 0. 1.]]
In the evaluate output, 0.1598... is the loss value and 1.0 is the classification accuracy.