1. Specific application example. According to Table 2, predict the high jump score of athlete 15.
Table 2 Quality indices of domestic male high jumpers

serial number | high jump (m) | 30m marching run (s) | standing triple jump (m) | run-up touch high (m) | approach 4-6-step high jump (m) | weight-bearing squat barbell (kg) | barbell squat coefficient | 100m (s) | snatch (kg)
1 | 2.24 | 3.2 | 9.6 | 3.45 | 2.15 | 140 | 2.8 | 11.0 | 50
2 | 2.33 | 3.2 | 10.3 | 3.75 | 2.2 | 120 | 3.4 | 10.9 | 70
3 | 2.24 | 3.0 | 9.0 | 3.5 | 2.2 | 140 | 3.5 | 11.4 | 50
4 | 2.32 | 3.2 | 10.3 | 3.65 | 2.2 | 150 | 2.8 | 10.8 | 80
5 | 2.2 | 3.2 | 10.1 | 3.5 | 2 | 80 | 1.5 | 11.3 | 50
6 | 2.27 | 3.4 | 10.0 | 3.4 | 2.15 | 130 | 3.2 | 11.5 | 60
7 | 2.2 | 3.2 | 9.6 | 3.55 | 2.14 | 130 | 3.5 | 11.8 | 65
8 | 2.26 | 3.0 | 9.0 | 3.5 | 2.1 | 100 | 1.8 | 11.3 | 40
9 | 2.2 | 3.2 | 9.6 | 3.55 | 2.1 | 130 | 3.5 | 11.8 | 65
10 | 2.24 | 3.2 | 9.2 | 3.5 | 2.1 | 140 | 2.5 | 11.0 | 50
11 | 2.24 | 3.2 | 9.5 | 3.4 | 2.15 | 115 | 2.8 | 11.9 | 50
12 | 2.2 | 3.9 | 9.0 | 3.1 | 2 | 80 | 2.2 | 13.0 | 50
13 | 2.2 | 3.1 | 9.5 | 3.6 | 2.1 | 90 | 2.7 | 11.1 | 70
14 | 2.35 | 3.2 | 9.7 | 3.45 | 2.15 | 130 | 4.6 | 10.85 | 70
15 | (to be predicted) | 3.0 | 9.3 | 3.3 | 2.05 | 100 | 2.8 | 11.2 | 50
4.4 High jump performance forecast for athlete 15
4.4.1 Data Collation
1) We take the quality indicators of the first 14 domestic male high jumpers as the input, that is (30m marching run, standing triple jump, run-up touch high, approach 4-6-step high jump, weight-bearing squat barbell, barbell squat coefficient, 100m, snatch), and the corresponding high jump scores as the output. The data are normalized with the premnmx() function in MATLAB.
Data set (note: each column is one group of the input training set; the number of rows equals the number of input-layer neurons, and the number of columns equals the number of training groups):
p=[3.2 3.2 3 3.2 3.2 3.4 3.2 3 3.2 3.2 3.2 3.9 3.1 3.2;
9.6 10.3 9 10.3 10.1 10 9.6 9 9.6 9.2 9.5 9 9.5 9.7;
3.45 3.75 3.5 3.65 3.5 3.4 3.55 3.5 3.55 3.5 3.4 3.1 3.6 3.45;
2.15 2.2 2.2 2.2 2 2.15 2.14 2.1 2.1 2.1 2.15 2 2.1 2.15;
140 120 140 150 80 130 130 100 130 140 115 80 90 130;
2.8 3.4 3.5 2.8 1.5 3.2 3.5 1.8 3.5 2.5 2.8 2.2 2.7 4.6;
11 10.9 11.4 10.8 11.3 11.5 11.8 11.3 11.8 11 11.9 13 11.1 10.85;
50 70 50 80 50 60 65 40 65 50 50 50 70 70];
t=[2.24 2.33 2.24 2.32 2.2 2.27 2.2 2.26 2.2 2.24 2.24 2.2 2.2 2.35];
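For reference, premnmx maps each row of the data linearly onto [-1, 1]; the following is a minimal sketch of the equivalent computation (the name pn is illustrative and corresponds to the normalized p1 returned by premnmx in the code below):

minp = min(p,[],2); maxp = max(p,[],2);       % row-wise minima and maxima
pn = 2*(p - repmat(minp,1,size(p,2))) ./ ...
     repmat(maxp - minp,1,size(p,2)) - 1;     % each row now spans [-1, 1]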
4.4.2 Model Building
4.4.2.1 BP network model
A BP network (back-propagation network), also known as a back-propagation neural network, is trained on sample data by repeatedly revising the network weights and thresholds so that the error function decreases along the negative gradient direction and the output approaches the desired output. It is a widely used neural network model, applied to function approximation, pattern recognition and classification, data compression, time-series prediction, and so on.
A BP network consists of an input layer, one or more hidden layers, and an output layer. Figure 2 shows the m×k×n three-layer BP network model. The network uses S-type transfer functions and repeatedly adjusts the network weights and thresholds to minimize the error function E = (1/2) Σᵢ (tᵢ − oᵢ)², where tᵢ is the desired output and oᵢ is the network's computed output.
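To make the update rule concrete, here is a minimal sketch of a single gradient-descent step for one tansig hidden layer and a linear output; all variable names (x, t, W1, b1, W2, b2, lr) and initial values are illustrative, not part of the original code:

% One BP gradient step for an n-k-1 network, minimizing E = 0.5*(t - o)^2
n = 8; k = 6;                           % illustrative layer sizes
x = rand(n,1); t = 2.2;                 % one training sample (made-up values)
W1 = rand(k,n)-0.5; b1 = rand(k,1)-0.5; % hidden-layer weights and thresholds
W2 = rand(1,k)-0.5; b2 = rand(1,1)-0.5; % output-layer weights and threshold
lr = 0.01;                              % learning rate
h  = tansig(W1*x + b1);                 % forward pass: hidden-layer output
o  = W2*h + b2;                         % forward pass: network output
e  = t - o;                             % output error
dW2 = -e*h';  db2 = -e;                 % gradients of E w.r.t. output layer
dz  = (W2'*(-e)).*(1 - h.^2);           % back-propagate through tansig
dW1 = dz*x';  db1 = dz;                 % gradients w.r.t. hidden layer
W1 = W1 - lr*dW1;  b1 = b1 - lr*db1;    % descend along the negative gradient
W2 = W2 - lr*dW2;  b2 = b2 - lr*db2;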
The BP network has strong nonlinearity and good generalization ability, but it also has drawbacks: slow convergence, many iterations, a tendency to fall into local minima, and weak global search ability. A genetic algorithm can be used to optimize the BP network: the genetic algorithm first narrows the solution space down to a better search region, and the BP network then searches that smaller region for the optimal solution.
4.4.2.2 Model Solving
4.4.2.2.1 Network structure design
1) Design of the input and output layers
The model takes each group of quality indicators as input and the high jump score as output, so the number of input-layer nodes is 8 and the number of output-layer nodes is 1.
2) Hidden layer design
Research shows that a neural network with a single hidden layer can approximate any nonlinear function with arbitrary precision, provided the hidden layer has enough nodes. This paper therefore uses a three-layer, multi-input single-output BP network with one hidden layer to build the prediction model. In the network design process, determining the number of hidden-layer neurons is very important: too many neurons increase the computational load of the network and tend to cause overfitting, while too few degrade the network's performance and fail to achieve the desired effect. The appropriate number of hidden neurons is directly related to the complexity of the actual problem, the numbers of neurons in the input and output layers, and the setting of the desired error. At present there is no exact formula for determining the number of hidden-layer neurons, only empirical formulae; the final choice must be made from experience and repeated experiments. This paper refers to the following empirical formula when selecting the number of hidden-layer neurons:
l = √(n + m) + a, where l is the number of hidden-layer neurons, n is the number of neurons in the input layer, m is the number of neurons in the output layer, and a is a constant between [1, 10].
According to this formula, the number of hidden neurons lies between 4 and 13; in this experiment the number of hidden-layer neurons is set to 6.
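As a quick check of this arithmetic in MATLAB (with n and m as above, and a ranging over 1 to 10):

n = 8; m = 1;              % input- and output-layer sizes
l = sqrt(n + m) + (1:10)   % candidate hidden-layer sizes: 4 5 6 ... 13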
The resulting network structure is therefore 8-6-1.
4.4.2.2.2 Selection of the excitation function
A BP neural network usually uses sigmoid functions and linear functions as the excitation functions of the network. This paper chooses the S-type tangent function tansig as the excitation function of the hidden-layer neurons. Since the output of the network is normalized to the range [-1, 1], the prediction model chooses the linear function purelin as the excitation function of the output-layer neuron; the S-type logarithmic function logsig is unsuitable here because its range is only (0, 1).
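For reference, the output ranges of these excitation functions can be checked directly with the toolbox functions:

z  = -3:0.5:3;
y1 = tansig(z);    % 2./(1+exp(-2*z)) - 1, range (-1, 1): fits data normalized to [-1, 1]
y2 = logsig(z);    % 1./(1+exp(-z)),       range (0, 1): cannot reach negative targets
y3 = purelin(z);   % y = z,                unbounded linear output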
4.4.2.2.3 Model implementation
This prediction uses the Neural Network Toolbox in MATLAB to train the network; the concrete implementation steps of the prediction model are as follows:
The normalized training sample data are fed into the network. The excitation functions of the network's hidden layer and output layer are tansig and purelin respectively, the network training function is trainlm, the network performance function is mse, and the number of hidden-layer neurons is set to 6. Next, the network parameters are set: the number of training epochs is 5,000 and the goal error is 0.0000001. After setting the parameters, network training begins.
The network reaches the goal error after 24 iterations and completes learning. See the appendix for the detailed code.
After training is complete, the predicted score is obtained simply by feeding athlete 15's quality indicators into the network.
The prediction result is: 2.20
MATLAB code:
P=[3.2 3.2 3 3.2 3.2 3.4 3.2 3 3.2 3.2 3.2 3.9 3.1 3.2;
9.6 10.3 9 10.3 10.1 10 9.6 9 9.6 9.2 9.5 9 9.5 9.7;
3.45 3.75 3.5 3.65 3.5 3.4 3.55 3.5 3.55 3.5 3.4 3.1 3.6 3.45;
2.15 2.2 2.2 2.2 2 2.15 2.14 2.1 2.1 2.1 2.15 2 2.1 2.15;
140 120 140 150 80 130 130 100 130 140 115 80 90 130;
2.8 3.4 3.5 2.8 1.5 3.2 3.5 1.8 3.5 2.5 2.8 2.2 2.7 4.6;
11 10.9 11.4 10.8 11.3 11.5 11.8 11.3 11.8 11 11.9 13 11.1 10.85;
50 70 50 80 50 60 65 40 65 50 50 50 70 70];
T=[2.24 2.33 2.24 2.32 2.2 2.27 2.2 2.26 2.2 2.24 2.24 2.2 2.2 2.35];
% Normalize the training data to [-1, 1]
[p1,minp,maxp,t1,mint,maxt]=premnmx(P,T);
% Create the network
net=newff(minmax(P),[8,6,1],{'tansig','tansig','purelin'},'trainlm');
% Set the maximum number of training epochs
net.trainParam.epochs=5000;
% Set the convergence (goal) error
net.trainParam.goal=0.0000001;
% Train the network
[net,tr]=train(net,p1,t1);
TRAINLM, Epoch 0/5000, MSE 0.533351/1e-007, Gradient 18.9079/1e-010
TRAINLM, Epoch 24/5000, MSE 8.81926e-008/1e-007, Gradient 0.0022922/1e-010
TRAINLM, Performance goal met.
% Input data: athlete 15's quality indicators
a=[3.0;9.3;3.3;2.05;100;2.8;11.2;50];
% Normalize the input with the training set's minima and maxima
% (tramnmx reuses minp/maxp; calling premnmx(a) here would rescale a
% against itself and give wrong values)
a=tramnmx(a,minp,maxp);
% Feed the input into the network
b=sim(net,a);
% De-normalize the network output to obtain the predicted score
c=postmnmx(b,mint,maxt)

c =

    2.2003
Reposted from https://www.cnblogs.com/sallybin/p/3169572.html
Detailed BP neural network prediction algorithm and implementation process example