Learning efficiency and accuracy of different learning, training, and performance functions in the MATLAB BP network toolbox


Demo adapted from Neural Network Theory and MATLAB 7 Implementation

First, we introduce several types of functions commonly used by BP networks in the MATLAB toolbox.

Feed-forward network creation functions:

newcf creates a cascade-forward network

newff creates a feed-forward BP network

newfftd creates a feed-forward network with input delays

Transfer functions:

logsig log-sigmoid transfer function

dlogsig derivative of logsig

tansig hyperbolic tangent sigmoid transfer function

dtansig derivative of tansig

purelin linear transfer function

dpurelin derivative of purelin
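
As a quick illustration, the three transfer functions can be evaluated and plotted side by side; a minimal sketch, assuming the classic Neural Network Toolbox function names listed above:

n = -5:0.1:5;        % range of net inputs
a1 = logsig(n);      % log-sigmoid: output in (0, 1)
a2 = tansig(n);      % tan-sigmoid: output in (-1, 1)
a3 = purelin(n);     % linear: output equals the input
plot(n, a1, n, a2, n, a3); legend('logsig', 'tansig', 'purelin');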

Learning functions:

learngd gradient-descent learning function

learngdm gradient-descent learning function with momentum
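
In the older toolbox the learning function is assigned per weight and bias; a sketch of that wiring, with property names that should be treated as assumptions about your toolbox version:

net = newff([0 10], [5 1], {'tansig', 'purelin'});
net.inputWeights{1,1}.learnFcn = 'learngdm';   % learning function for the input weights
net.biases{1}.learnFcn = 'learngdm';           % and for the first layer's biases
net.inputWeights{1,1}.learnParam.mc = 0.9;     % momentum constant used by learngdm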

Training functions:

trainbr Bayesian regularization BP training function

trainc cyclical-order incremental training function

traincgb Powell-Beale conjugate-gradient BP training function

traincgf Fletcher-Reeves conjugate-gradient BP training function

traincgp Polak-Ribiere conjugate-gradient BP training function

traingd gradient-descent BP training function

traingda gradient-descent BP training function with adaptive learning rate

traingdx gradient-descent BP training function with momentum and adaptive learning rate

trainlm Levenberg-Marquardt BP training function

trainoss one-step secant BP training function

trainr random-order incremental training function

trainrp resilient backpropagation (Rprop) training function

trains sequential-order incremental training function

trainscg scaled conjugate-gradient BP training function
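
To compare training functions on one task, it is enough to swap net.trainFcn (or pass the name directly to newff, as in the example script later in this article). A minimal sketch using this article's data, assuming the older toolbox's newff signature:

P = [0 1 2 3 4 5 6 7 8 9 10];
T = [0 1 2 3 4 3 2 1 2 3 4];
net = newff([0 10], [5 1], {'tansig', 'purelin'});
net.trainFcn = 'trainlm';        % swap in Levenberg-Marquardt
net.trainParam.epochs = 200;     % same epoch budget for each candidate
net = train(net, P, T);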

Performance functions:

mse mean squared error performance function

msereg mean squared error with regularization performance function
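
msereg blends the mean squared error with a mean-squared-weight penalty, trading accuracy on the training data against smaller, smoother weights. Selecting it looks roughly like this; the ratio parameter name follows the older toolbox and is an assumption:

net = newff([0 10], [5 1], {'tansig', 'purelin'});
net.performFcn = 'msereg';        % regularized performance function
net.performParam.ratio = 0.5;     % weight of the error term vs. the weight penalty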

Display functions:

plotperf plot network performance

plotes plot the error surface of a single-input neuron

plotep plot the position of a weight and bias on the error surface

errsurf compute the error surface of a single-input neuron
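
errsurf and plotes pair naturally: the first computes the error of a single-input neuron over a grid of weight and bias values, and the second draws the resulting surface. A small sketch with made-up data:

P = [-6 -4 0 2 5];                    % hypothetical inputs for one neuron
T = [0 0 1 1 1];                      % hypothetical targets
wv = -1:0.1:1;                        % candidate weight values
bv = -2.5:0.25:2.5;                   % candidate bias values
ES = errsurf(P, T, wv, bv, 'logsig'); % error at every (weight, bias) pair
plotes(wv, bv, ES);                   % surface and contour plot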


% An important feature of the BP network is its nonlinear mapping ability,
% which makes it well suited to function approximation, i.e. finding the
% relationship between two sets of data. This example compares the learning
% rate and accuracy of different learning, training, and performance
% functions in a BP network.
clc; close all; clear;
P = [0 1 2 3 4 5 6 7 8 9 10];
T = [0 1 2 3 4 3 2 1 2 3 4];
net = newff([0 10], [5 1], {'tansig', 'purelin'});
% net = newff([0 10], [5 1], {'tansig', 'purelin'}, 'traingd', 'learngd', 'msereg');
% net = newff([0 10], [5 1], {'tansig', 'purelin'}, 'traingdx', 'learngd', 'msereg');
net.trainParam.epochs = 200;
net = train(net, P, T);
figure;
Y = sim(net, P);
plot(P, T, '+', P, Y, 'o')

1. Create a BP network whose learning, training, and performance functions all take their default values (learngdm, trainlm, and mse respectively). Approximation results:

Mean squared error curve obtained after training (training function: trainlm):

Network output after training:

It can be seen that after 200 epochs the network performance has not reached 0, but the output mean squared error is already very small (mse = 6.72804e-06). The results show that the fit to the nonlinear mapping between P and T is very accurate.
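
For reference, the final error can be recomputed from the trained network; a minimal check, assuming the script above has just been run and the older toolbox's mse(E) call form:

Y = sim(net, P);
final_mse = mse(T - Y)    % the text reports about 6.7e-06 for trainlm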

2. Create a BP network with learning function learngd, training function traingd, and performance function msereg to perform the same fitting task.

Network output (training function: traingd):

It can be seen that after 200 epochs the output error of the network is still large and the error converges very slowly. This is because traingd is a plain gradient-descent training function: it trains slowly and falls easily into local minima. The results show that the network's accuracy is indeed poor.

3. Change the training function to traingdx, which is also based on gradient descent but varies the learning rate during training.

Network training error curve:

Network output (training function: traingdx):

After 200 epochs, the network performance evaluated by the msereg function is 1.04725, which is fairly small. The results show that the fit to the nonlinear relationship between P and T is good, and the network performs well.
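
Case 3 corresponds to uncommenting the third newff line in the script above; a sketch of running it on its own, reusing P and T from that script and reading the final performance from the training record:

net = newff([0 10], [5 1], {'tansig', 'purelin'}, 'traingdx', 'learngd', 'msereg');
net.trainParam.epochs = 200;
[net, tr] = train(net, P, T);   % tr is the training record
tr.perf(end)                    % final msereg value (the text reports 1.04725)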
