Operating environment: Windows 7 (2016-07-24)
There are many Keras MNIST examples online, but most cannot be used directly because of version differences. Special thanks to the blogger "three-headed SCP", whose tutorial is very thorough. One caution: if you had Python installed before Anaconda, uninstall the old Python first, or you will run into problems when installing MinGW. The installation itself is covered in many places online, so I will not go into detail here; the rough sequence is: Anaconda → MinGW → Theano (mind the environment variables, i.e. the system variables) → Keras.
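After installing Theano on Windows, it is common to pin it to the CPU and 32-bit floats via a config file. A minimal sketch (assuming the usual Windows location, a file named `.theanorc.txt` in your user home directory, e.g. `C:\Users\<you>\`):

```ini
[global]
device = cpu
floatX = float32
```

With no GPU available (as in this post), `device = cpu` avoids CUDA-related startup errors, and `floatX = float32` matches the `astype("float32")` casts used in the program below.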
OK, let's get to it. Below is a working program (personally tested), with the data attached. The data was downloaded from the internet beforehand, so the script does not have to download it at run time, which is an easy source of errors. Tested on Win7 64-bit, without a GPU. The main point is that the newer API syntax has changed; the updated lines were marked in red in the original post.
```python
from __future__ import absolute_import
from __future__ import print_function
import numpy as np
np.random.seed(1337)  # for reproducibility
import cPickle as pickle

from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD, Adam, RMSprop
from keras.utils import np_utils

'''
    Train a simple deep NN on the MNIST dataset.
    Gets to 98.30% test accuracy (there is *a lot* of margin for parameter tuning).
    2 seconds per epoch on a GRID K520 GPU.
'''

batch_size = 128
nb_classes = 10
nb_epoch = 10


def read_data(data_file):
    # mnist.pkl.gz is a gzipped pickle of (train, val, test),
    # each a (images, labels) pair
    import gzip
    f = gzip.open(data_file, "rb")
    train, val, test = pickle.load(f)
    f.close()
    train_x = train[0]
    train_y = train[1]
    test_x = test[0]
    test_y = test[1]
    return train_x, train_y, test_x, test_y


# the data, shuffled and split between train and test sets
# (X_train, y_train), (X_test, y_test) = mnist.load_data()
train_x, train_y, test_x, test_y = read_data(r"C:\Users\PC\.spyder2\mnist.pkl.gz")
X_train = train_x
X_test = test_x
X_train = X_train.astype("float32")
X_test = X_test.astype("float32")
X_train /= 255
X_test /= 255
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
Y_train = np_utils.to_categorical(train_y, nb_classes)
Y_test = np_utils.to_categorical(test_y, nb_classes)

model = Sequential()
model.add(Dense(input_dim=784, output_dim=128))
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Dense(output_dim=128))
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Dense(output_dim=10))
model.add(Activation('softmax'))

rms = RMSprop()
model.compile(loss='categorical_crossentropy', optimizer=rms, metrics=['accuracy'])
model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch)
score = model.evaluate(X_test, Y_test, batch_size=batch_size)

print('Test score:', score[0])
print('Test accuracy:', score[1])
```
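The only non-obvious preprocessing step above is `np_utils.to_categorical`, which one-hot encodes the integer labels so they match the 10-way softmax output. A minimal NumPy equivalent (the function name `to_categorical_np` is my own, for illustration):

```python
import numpy as np

def to_categorical_np(y, nb_classes):
    # One-hot encode integer labels: row i has a 1.0 in column y[i]
    y = np.asarray(y, dtype=int)
    out = np.zeros((len(y), nb_classes), dtype="float32")
    out[np.arange(len(y)), y] = 1.0
    return out

# e.g. labels 0, 2, 1 with 3 classes
print(to_categorical_np([0, 2, 1], 3))
```

This is why the loss is `categorical_crossentropy` rather than a loss over raw integer labels: each target row is a probability distribution over the 10 digit classes.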
Here is the data: http://www.cnblogs.com/xueliangliu/archive/2013/04/03/2997437.html ... Since I cannot upload the roughly 15 MB file here, see that expert's post, which has the data and also covers installation, the underlying principles, and so on.
Anaconda + Theano + Keras handwritten character recognition (new)