In the previous blog post, we used an RBM-based deep autoencoder to compress the MNIST dataset, with fairly good results. Here we replace that network with a conventional fully-connected feedforward neural network and compress the MNIST dataset again, to see how the two approaches compare. The whole pipeline is still implemented with Deeplearning4j, combined with the Spark platform to make future extensions easier. Below, the structure of the model, the training process, and the final compression results are described in detail.
First, we create a new Maven project and add the Deeplearning4j dependencies (this has been covered many times in earlier posts, so it is not repeated here). Next, we create a Spark job that reads the MNIST dataset already stored on HDFS (as mentioned in the previous post, the MNIST data was saved to HDFS in advance as a JavaRDD&lt;DataSet&gt;; see that post for details) and builds the training JavaRDD&lt;DataSet&gt; from it. The code is as follows:
SparkConf conf = new SparkConf()
        .set("spark.kryo.registrator", "org.nd4j.Nd4jRegistrator")
        .setAppName("MLP Autoencoder Mnist(Java)");
JavaSparkContext jsc = new JavaSparkContext(conf);
//
final String inputPath = args[0];
final String savePath = args[1];
double lr = Double.parseDouble(args[2]);
final int batchSize = Integer.parseInt(args[3]);
final int numEpoch = Integer.parseInt(args[4]);
//
JavaRDD<DataSet> javaRDDMnist = jsc.objectFile(inputPath);   // read mnist data from hdfs
// for an autoencoder the target equals the input, so the feature matrix is used as both input and label
JavaRDD<DataSet> javaRDDTrain = javaRDDMnist.map(new Function<DataSet, DataSet>() {
    @Override
    public DataSet call(DataSet next) throws Exception {
        return new DataSet(next.getFeatureMatrix(), next.getFeatureMatrix());
    }
});
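For completeness, the following is a minimal sketch of how the MNIST data referenced above could have been written to HDFS as a JavaRDD&lt;DataSet&gt; in the first place. The actual steps are in the previous post; the batch size, seed, and HDFS path used here are only placeholders, and the snippet reuses the jsc context created above.

import java.util.ArrayList;
import java.util.List;
import org.apache.spark.api.java.JavaRDD;
import org.deeplearning4j.datasets.iterator.impl.MnistDataSetIterator;
import org.nd4j.linalg.dataset.DataSet;

// collect the MNIST training set into a list of mini-batch DataSet objects
MnistDataSetIterator iter = new MnistDataSetIterator(32, true, 12345);   // batch size and seed are placeholders
List<DataSet> mnistList = new ArrayList<>();
while (iter.hasNext()) {
    mnistList.add(iter.next());
}
// parallelize the list and persist it on HDFS so later jobs can read it back with objectFile()
JavaRDD<DataSet> mnistRdd = jsc.parallelize(mnistList);
mnistRdd.saveAsObjectFile("hdfs:///path/to/mnist");   // placeholder path

The objectFile() call in the main job then deserializes these mini-batches back into the javaRDDMnist shown above.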
With the training dataset in place, we can define the network structure and set the corresponding hyper-parameters:
MultiLayerConfiguration netconf = new NeuralNetConfiguration.Builder()
        .seed(123)
        .iterations(1)
        .learningRate(lr)
        .learningRateScoreBasedDecayRate(0.5)
        .optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT)
        .updater(Updater.ADAM).adamMeanDecay(0.9).adamVarDecay(0.999)
        .list()
        // hidden-layer sizes below are placeholders (the original values did not survive);
        // a typical deep-autoencoder encoder for 784-dimensional MNIST is 1000-500-250-30
        .layer(0, new DenseLayer.Builder().nIn(784).nOut(1000).activation("relu").build())
        .layer(1, new DenseLayer.Builder().nIn(1000).nOut(500).activation("relu").build())
        .layer(2, new DenseLayer.Builder().nIn(500).nOut(250).activation("relu").build())
        .layer(3, new DenseLayer.Builder().nIn(250).nOut(30).activation("relu").build())
        .layer(4, new DenseLayer.Builder().nIn(30).nOut(250).activation("relu").build())