In the previous section, "Machine Learning: From Logistic Regression to Neural Networks", we introduced the origin and construction of the neural network algorithm from first principles, and programmed a simple neural network to classify and test both linear and nonlinear data. Looking back at that section, you may have noticed that our hand-written implementation classified the nonlinear data rather poorly, and that there were many places it could be optimized. MATLAB, as scientific computing software, integrates many well-optimized algorithms, and its Neural Network Toolbox is one of the best of these toolboxes. In this section we will redo the classification experiments from the previous section using the functions in that toolbox.
First, let's get to know the toolbox. Recall the simple neural network we described earlier:

This is the network we built ourselves: two layers with 3 nodes per layer. It has been proved, however, that a network with only one hidden layer can fit any finite input-output mapping. Accordingly, the networks in the MATLAB toolbox have exactly one hidden layer, and the number of nodes in that layer is the part we can change and design as needed. A simple network as it appears in the toolbox can be represented as follows:
The input is mapped to the hidden layer and then to the final output. The whole network contains the weight matrices W and V and the bias terms b. Designing the network mainly means choosing the number of hidden-layer nodes (MATLAB defaults to 10). The mapping (transfer) functions also deserve attention: in the previous section we used the sigmoid function throughout by default, but MATLAB's default is a sigmoid for the hidden-layer output and a linear mapping for the final output (shown in red in the output section of the figure). Besides these two, there are other transfer functions, such as the hyperbolic tangent; what they have in common is that they are all differentiable (a requirement for the gradient derivation). So why is MATLAB's default output mapping linear rather than sigmoid? The sigmoid function maps data into (0, 1), but the target values (class labels) of the data in real applications are not necessarily in that range, so with a sigmoid output we would first have to rescale the raw targets into (0, 1). A linear output mapping has no such problem, because its output has no upper or lower bounds.
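To make the difference concrete, here is a small NumPy sketch (Python rather than MATLAB, purely for illustration; the function and variable names are mine, not the toolbox's) of a one-hidden-layer forward pass with sigmoid hidden units and a switchable linear or sigmoid output:

```python
import numpy as np

def sigmoid(z):
    # Maps any real number into the open interval (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W, b1, V, b2, out="linear"):
    # One hidden layer: sigmoid hidden units, then a linear or sigmoid output
    h = sigmoid(W @ x + b1)   # hidden activations, each in (0, 1)
    z = V @ h + b2            # raw output of the output layer
    return sigmoid(z) if out == "sigmoid" else z

rng = np.random.default_rng(0)
x = rng.normal(size=3)                                   # one 3-feature sample
W, b1 = rng.normal(size=(10, 3)), rng.normal(size=10)    # 10 hidden nodes (MATLAB's default)
V, b2 = 5.0 * rng.normal(size=(1, 10)), rng.normal(size=1)

y_lin = forward(x, W, b1, V, b2, out="linear")   # unbounded: can match targets of any scale
y_sig = forward(x, W, b1, V, b2, out="sigmoid")  # always inside (0, 1)
print(float(y_lin), float(y_sig))
```

With the sigmoid output, targets outside (0, 1) could never be matched exactly, which is why the linear output is the safer default.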
With that background, here are a few important functions:
- Network creation function: feedforwardnet(hiddenSizes, trainFcn) (in MATLAB R2012 and later; in older versions this function is newff). It creates a feedforward network like the one above and takes two parameters. The first, hiddenSizes, is the hidden-layer size (i.e. the number of hidden nodes; default 10). The second, trainFcn, is the training method; there are about ten of these, including gradient descent, gradient descent with momentum, and variable-learning-rate gradient descent, each with its own strengths and weaknesses. Five representative ones are 'traingdx', 'trainrp', 'trainscg', 'trainoss', and 'trainlm', the last of which is the default. In practice you rarely need to know the details; the default method works well for most data, so this parameter can usually be left alone. Two links with more detail on this function:
MATLAB neural network functions (feedforwardnet, fitnet, patternnet)
Some notes on the MATLAB neural network command feedforwardnet
- Network training function: train. Once the network is created, the next step is to train its parameters, namely the weight matrices W and V and the bias vector b. The train function accepts many parameters, but most of them have defaults that already give good results (for example, the number of iterations and the minimum allowable training error), so only a few need to be supplied. The important ones are:
1. The network net created above;
2. The training data;
3. The target values corresponding to the training data (class labels, output values, etc.); these targets may be one-dimensional or multidimensional.

This function is covered in detail in: BP Neural Network and MATLAB Implementation
The train function returns the trained network net. In MATLAB, net is a structure containing all the information about the network (the training method, the errors, and of course the familiar weight matrices W and biases b). Do we then need to extract W and b ourselves? No: in practice we can feed the data we want to test directly into net. If the trained network is net and we have a test sample, its predicted output is simply net(sample).
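The train-then-predict workflow can be mimicked outside MATLAB too. The sketch below (Python/NumPy for illustration; it uses plain batch gradient descent rather than MATLAB's default 'trainlm' Levenberg-Marquardt method, and all names are my own) trains a one-hidden-layer network on a toy two-class problem and then predicts the way net(sample) does:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_net(X, t, hidden=10, lr=0.5, epochs=2000, seed=0):
    """Fit a sigmoid-hidden / linear-output network by batch gradient descent on MSE."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W  = 0.1 * rng.normal(size=(hidden, d)); b1 = np.zeros(hidden)
    V  = 0.1 * rng.normal(size=hidden);      b2 = 0.0
    for _ in range(epochs):
        H = sigmoid(X @ W.T + b1)          # (n, hidden) hidden activations
        y = H @ V + b2                     # (n,) linear outputs
        g = (y - t) / n                    # d(MSE/2)/dy for each sample
        dV, db2 = H.T @ g, g.sum()
        dZ = np.outer(g, V) * H * (1 - H)  # back-propagate through the sigmoid
        W  -= lr * (dZ.T @ X); b1 -= lr * dZ.sum(axis=0)
        V  -= lr * dV;         b2 -= lr * db2
    # Return a closure so prediction reads like MATLAB's net(sample)
    return lambda Xs: sigmoid(Xs @ W.T + b1) @ V + b2

# Toy data: two well-separated Gaussian clusters labelled 0 and 1
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=-2.0, size=(100, 2)),
               rng.normal(loc=+2.0, size=(100, 2))])
t = np.concatenate([np.zeros(100), np.ones(100)])

net = train_net(X, t)
predict = np.round(np.abs(net(X)))        # threshold at 0.5, as in the MATLAB script
accuracy = np.mean(predict == t)
print("training accuracy:", accuracy)
```

The returned closure plays the role of the trained net structure: calling it on a sample is the equivalent of net(sample).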
Understanding this much is enough to start experimenting; many finer details can also be adjusted, but most of them are rarely needed.
As in the previous section, we use two sets of artificial samples, a linear set and a nonlinear set, plotted as follows:

The code is as follows:
%% MATLAB Neural Network Toolbox classification design
% Linear and nonlinear classification
clc
clear
close all

%% Load data
% Data preprocessing - two-class case
data = load('data_test1.mat');
data = data.data';
% Shift the labels to 0/1
data(:,3) = data(:,3) - 1;
% Number of training samples (value assumed; garbled in the original)
num_train = 200;
% Construct a random selection sequence
choose = randperm(length(data));
train_data = data(choose(1:num_train),:);
gscatter(train_data(:,1), train_data(:,2), train_data(:,3));
label_train = train_data(:,end);
test_data = data(choose(num_train+1:end),:);
label_test = test_data(:,end);

%% Construct and train the neural network
% Create the network (10 hidden-layer nodes)
net = feedforwardnet(10);
% net.layers{2}.transferFcn = 'tansig';  % output mapping; default is purelin (linear)
% Train the network
net = train(net, train_data(:,1:end-1)', label_train');
% Display the constructed network
view(net);
% Use the network to predict the classes of the test set
y_test = net(test_data(:,1:end-1)');
% Round the output: values greater than 0.5 become class '1', the rest class '0'
predict = round(abs(y_test));

%% Display the results
figure;
index1 = find(predict==0);
data1 = (test_data(index1,:))';
plot(data1(1,:), data1(2,:), 'or'); hold on
index2 = find(predict==1);
data2 = (test_data(index2,:))';
plot(data2(1,:), data2(2,:), '*'); hold on
indexw = find(predict' ~= label_test);
dataw = (test_data(indexw,:))';
plot(dataw(1,:), dataw(2,:), '+g', 'LineWidth', 3);
accuracy = length(find(predict'==label_test))/length(test_data);
title(['predict the training data and the accuracy is: ', num2str(accuracy)]);
First, load the linear data set:
The green points are the misclassified ones. Here the program uses 10 hidden-layer nodes and the default output mapping (linear). The sigmoid mapping is also present in the program, in the commented-out line.
Next, a test on the nonlinear data:
Again, the green points are the misclassified ones. As you can see, the accuracy has finally risen above 80% (something virtually impossible with the previous section's implementation). Let's also switch the output mapping to the sigmoid function: since our labels have been converted to 0 and 1, the targets already lie between 0 and 1, so the sigmoid can be used directly. Uncommenting the corresponding line gives the following result:
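A note on the thresholding line predict = round(abs(y_test)): with a linear (or tansig) output mapping, the network can emit values below 0 or above 1, and round(abs(·)) is what turns them into 0/1 labels. A small Python check of that logic (illustrative only, with made-up output values):

```python
import numpy as np

# Raw network outputs can fall outside [0, 1] when the output mapping is linear
y = np.array([-0.12, 0.31, 0.68, 1.20])

# round(abs(y)) mirrors the MATLAB line predict = round(abs(y_test))
labels = np.round(np.abs(y))
print(labels)   # [0. 0. 1. 1.]

# Caveat: an output beyond 1.5 would round to 2, so clipping is a safe extra guard
safe = np.clip(np.round(np.abs(y)), 0, 1)
```

With a sigmoid output the outputs are already inside (0, 1), so plain rounding is enough and the abs() becomes redundant.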
You can see that with the sigmoid output function the result looks even better. Now look at the network structure that is drawn: comparing this structure and these results with the ones above, what do you notice? The mapping function of the output layer has changed.
That covers the basic training and prediction workflow for neural networks in MATLAB, though the Neural Network Toolbox offers far more than this. Everything above used function commands, but MATLAB also integrates a GUI for neural networks, so you can work directly in a graphical interface. Typing nnstart in the Command Window brings up the following GUI:
As the GUI shows, MATLAB's neural network support targets four main uses: fitting data, pattern recognition and classification, clustering data, and time-series modeling. Our problem here belongs to the pattern recognition and classification part. Space is limited, so interested readers can explore the other uses in detail on their own; they are exceptionally powerful.
To sum up: compared with our own implementation from the previous section, MATLAB's Neural Network Toolbox achieves about the same accuracy on linear data, but for dividing nonlinear data the toolbox functions are far better optimized. They are also simple to use and fast to run, making this an excellent classification method.
Copyright notice: this is an original article by the author; please do not reproduce it without the author's permission.
Machine Learning in Practice: the MATLAB Neural Network Toolbox