A linear neural network based on the perceptron model

Source: Internet
Author: User

Abstract: With the development of computational intelligence, artificial neural networks have advanced considerably. The field now holds that it may not be appropriate to classify neural networks (NN) strictly under artificial intelligence (AI), and that placing them under computational intelligence (CI) better reflects the nature of the problems they address. Some topics in evolutionary computation, artificial life, and fuzzy logic systems are also classified as computational intelligence. Although the boundary between computational intelligence and artificial intelligence is not sharp, discussing their differences and relationship is worthwhile. Logical thinking refers to reasoning according to logical rules: information is first abstracted into concepts and represented by symbols, and reasoning then proceeds serially through symbolic operations. This process can be written as a sequence of instructions for a computer to execute. Intuitive thinking, in contrast, integrates information stored in a distributed way, so that an idea or a solution to a problem emerges suddenly. This paper presents the general ideas behind neural network processing, and also serves as the fourth algorithm experiment of the junior-year "Artificial Intelligence" course in the Computer Science and Technology program.


Keywords: artificial intelligence, neural network, Perceptron model


1. Development of neural networks

In the 1950s, Rosenblatt and others put forward the perceptron (neural network) model and founded the connectionist school.

1.1 Core of the connectionist viewpoint
The essence of intelligence is the connection mechanism, and the neural network is a highly complex and large-scale nonlinear adaptive system composed of a large number of simple processing units.

1.2 Four levels at which the intelligent behavior of the human brain is simulated
• Physical Structure
• Computational Simulations
• Storage and operation
• Training

1.3 Artificial neural networks
A neural network is a parallel, distributed information-processing network. It is a directed graph whose nodes are processing units connected to one another by weighted arcs. Each processing unit is a simulation of a physiological neuron, while each arc is a simulation of an axon-synapse-dendrite pair; the weight on an arc indicates the strength of the interaction between the two processing units it connects. A network is usually made up of a large number of neurons (a minimal sketch of a single such unit follows the list below).
• Each neuron has only one output and can be connected to many other neurons
• The input of each neuron has multiple connection channels, and each channel corresponds to a connection weight coefficient
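
To make the processing-unit description concrete, here is a minimal sketch of a single unit that computes a weighted sum of its input channels plus a threshold and passes it through a response function. It is an illustration only: the identifiers (neuron_output and so on) and the example weight values are assumptions made here, not taken from the original text.

#include <iostream>
#include <vector>
#include <numeric>   // std::inner_product

// One processing unit: weighted sum of its input channels plus a threshold,
// passed through a response function (a hard limit is used here as an example).
float neuron_output(const std::vector<float>& x,   // inputs arriving on the connection channels
                    const std::vector<float>& w,   // one weight coefficient per channel
                    float threshold)
{
    float net = std::inner_product(x.begin(), x.end(), w.begin(), 0.0f) + threshold;
    return net >= 0.0f ? 1.0f : 0.0f;
}

int main()
{
    std::vector<float> x = {1.0f, -1.0f, 2.0f};   // example input vector (first sample of section 4.1)
    std::vector<float> w = {0.3f, -0.2f, 0.5f};   // example weights (arbitrary values for illustration)
    std::cout << neuron_output(x, w, 0.1f) << std::endl;   // the single output of the unit
    return 0;
}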

2. Establishment of the model

2.1 Artificial neural network model



2.2 Basic functions of the response (activation) function

• Controls the activation from inputs to output
• Performs the functional conversion between inputs and outputs
• Maps an input that may range over an unbounded domain into an output confined to a specified, bounded range (see the sketch after this list)
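
As an illustration of the last point, the sketch below evaluates a few commonly used response functions and shows how each maps an unbounded net input. The particular selection of functions is an assumption made for illustration; the original figures for this section are not reproduced here.

#include <cmath>
#include <iostream>

// Hard limit: maps any real net input to {0, 1}.
float hardlim(float n)  { return n >= 0.0f ? 1.0f : 0.0f; }

// Symmetric hard limit: maps any real net input to {-1, 1}.
float hardlims(float n) { return n >= 0.0f ? 1.0f : -1.0f; }

// Log-sigmoid: maps (-inf, +inf) smoothly into the open interval (0, 1).
float logsig(float n)   { return 1.0f / (1.0f + std::exp(-n)); }

// Pure linear: used by the adaptive linear (Adaline) network; output is unbounded.
float purelin(float n)  { return n; }

int main()
{
    float samples[] = { -5.0f, 0.0f, 5.0f };
    for (float n : samples)
    {
        std::cout << "n = " << n
                  << "  hardlim = "  << hardlim(n)
                  << "  hardlims = " << hardlims(n)
                  << "  logsig = "   << logsig(n)
                  << "  purelin = "  << purelin(n) << std::endl;
    }
    return 0;
}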





2.3 Perceptron Model

The perceptron was proposed by the American computer scientist F. Rosenblatt in 1957. Single-layer perceptron neuron model diagram:



2.4 Mathematical Models
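
The equations for this model are given as figures in the original; as a standard formulation consistent with the notation used in sections 3.2 and 5 (supplied here for completeness, not reproduced from those figures), the single-layer model can be written as

$$a = f\bigl(W p + b\bigr), \qquad a_j = f\Bigl(\sum_{i=1}^{n} w_{ji}\, p_i + b_j\Bigr), \qquad e = t - a,$$

where $p$ is the input vector, $W$ the weight matrix, $b$ the threshold (bias) vector, $a$ the actual output, $t$ the target vector, and $f$ the response function: the hard-limit function for the perceptron, and the identity (pure linear) function for the adaptive linear network used later in this paper.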







3. Perceptron algorithm

3.1 Training Steps

1) For the problem to be solved, determine the input vector X and the target vector T, and from them the dimensions and network structure parameters n and m;

2) Initialize the parameters;

3) Set the maximum number of training cycles;

4) Calculate the network output;

5) Check whether the output vector Y is the same as the target vector T; if it is, or the maximum number of cycles has been reached, end training; otherwise go to step 6;

6) Apply the learning rule and return to step 4 (a minimal sketch of this loop is given after the list).
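
The steps above can be assembled into the following minimal perceptron training loop. It is a sketch only: the single-output logical-AND problem, the iteration limit, and all identifiers are assumptions made here for illustration and are not taken from the original experiment.

#include <iostream>

int main()
{
    const int N = 2, SAMPLES = 4, MAX_CYCLES = 100;

    // Step 1: input vectors X and target vector T (logical AND as a toy problem).
    float X[SAMPLES][N] = { {0, 0}, {0, 1}, {1, 0}, {1, 1} };
    float T[SAMPLES]    = { 0, 0, 0, 1 };

    float w[N] = { 0.0f, 0.0f };   // Step 2: initialize the weights
    float b = 0.0f;                //         and the threshold (bias)

    for (int cycle = 0; cycle < MAX_CYCLES; ++cycle)   // Step 3: limit on the number of cycles
    {
        int errors = 0;
        for (int s = 0; s < SAMPLES; ++s)
        {
            // Step 4: compute the network output with the hard-limit response function.
            float net = b;
            for (int i = 0; i < N; ++i) net += w[i] * X[s][i];
            float y = net >= 0.0f ? 1.0f : 0.0f;

            // Step 6: perceptron learning rule, applied whenever the output is wrong.
            float e = T[s] - y;
            if (e != 0.0f)
            {
                ++errors;
                for (int i = 0; i < N; ++i) w[i] += e * X[s][i];
                b += e;
            }
        }
        if (errors == 0) break;   // Step 5: stop once every sample is classified correctly.
    }

    std::cout << "w = (" << w[0] << ", " << w[1] << "), b = " << b << std::endl;
    return 0;
}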


3.2 Network Training

The network training process for adaptive linear components can be summarized in the following three steps:

1) Expression: calculate the model's output vector, a = W*p + b, and the error with respect to the desired output, e = t - a;

2) Check: compare the sum of squared network output errors with the expected error; if it is smaller than the expected error, or training has reached the maximum number of iterations set beforehand, stop training;

3) Learn: use the Widrow-Hoff (W-H) learning rule (written out below) to calculate new weights and biases, and return to step 1).
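
For reference, the Widrow-Hoff (least-mean-square) rule used in step 3 can be written out explicitly; this standard form is supplied here because the text does not spell it out, and it matches the weight and threshold updates in the program of section 5, with $\eta$ denoting the learning rate:

$$W(k+1) = W(k) + \eta\,(t - a)\,p^{\mathsf T}, \qquad b(k+1) = b(k) + \eta\,(t - a).$$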


4. Problem introduction

4.1 Problem Description
Now consider the design problem of a larger, multi-neuron network for pattern association. The input vectors and target vectors are:

p =
{
    {  1,   -1,    2   },
    {  1.5,  2,    1   },
    {  1.2,  3,   -1.6 },
    { -0.3, -0.5,  0.9 }
};

t =
{
    {  0.5,  1.1,  3,   -1   },
    {  3,   -1.2,  0.2,  0.1 },
    { -2.2,  1.7, -1.8, -1.0 },
    {  1.4, -0.4, -0.4,  0.6 }
};

4.2 Ideas for solving problems

From the given input and target output vectors: the input dimension is r = 3, the output dimension is s = 4, and the number of samples is q = 4.

This problem could also be solved as a system of linear equations, that is, by writing out, for each output node, the equations relating its inputs to its output.

In practice, however, solving these 16 equations takes time and is not particularly easy (the system for one output node is written out below).
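
To see where the 16 equations come from, the system for a single output node $j$ can be written out (this derivation is added here for clarity; it is not part of the original text). With a pure linear response, each of the $q = 4$ samples contributes one equation in the $r + 1 = 4$ unknowns $w_{j1}, w_{j2}, w_{j3}, b_j$:

$$w_{j1} p_{k1} + w_{j2} p_{k2} + w_{j3} p_{k3} + b_j = t_{kj}, \qquad k = 1, \dots, 4,$$

and repeating this for the $s = 4$ output nodes gives $4 \times 4 = 16$ equations in 16 unknowns.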

For many practical problems it is not necessary to obtain a perfect zero-error solution; a certain amount of error is allowed.

In such cases, the adaptive linear network shows its advantage: it can quickly be trained to obtain network weights that meet the given error requirement.

5. Programming


#include <iostream>
#include <cstdlib>   // rand, srand, system
#include <ctime>     // time
#include <cmath>     // pow
#include <cfloat>    // FLT_MAX
using namespace std;

const int   Max_learn_length   = 2000;   // maximum number of learning iterations (value garbled in the source; 2000 assumed here)
const float Study_rate         = 0.2f;   // learning rate
const float Anticipation_error = 0.01f;  // expected error
const int   input  = 3;                  // three inputs
const int   output = 4;                  // four outputs
const int   sample = 4;                  // 4 sets of samples

float P[sample][input] =     // input vectors: 4 groups of 3 items
{
    {  1,   -1,    2   },
    {  1.5,  2,    1   },
    {  1.2,  3,   -1.6 },
    { -0.3, -0.5,  0.9 }
};
float T[sample][output] =    // expected output vectors: 4 groups of 4 items
{
    {  0.5,  1.1,  3,   -1   },
    {  3,   -1.2,  0.2,  0.1 },
    { -2.2,  1.7, -1.8, -1.0 },
    {  1.4, -0.4, -0.4,  0.6 }
};

int main(int argc, char **argv)
{
    float precision;          // error-precision variable
    float W[input][output];   // network weights: 3 inputs x 4 outputs
    float B[output];          // 4 thresholds
    float A[output];          // actual output values for one sample
    int ii, ij, ik, ic;

    srand(time(0));           // seed the random number generator
    for (ii = 0; ii < output; ii++)
    {
        B[ii] = 2 * (float)rand() / RAND_MAX - 1;   // random threshold in (-1, 1)
        for (ij = 0; ij < input; ij++)              // random network weights in (-1, 1)
        {
            W[ij][ii] = 2 * (float)rand() / RAND_MAX - 1;
        }
    }

    precision = FLT_MAX;      // initialize the precision value
    for (ic = 0; ic < Max_learn_length; ic++)       // learning loop, up to the maximum number of iterations
    {
        if (precision < Anticipation_error)         // stop once the accumulated error is small enough
        {
            break;
        }
        precision = 0;
        for (ii = 0; ii < sample; ii++)             // accumulate the error over the 4 samples
        {
            for (ij = 0; ij < output; ij++)         // compute the 4 actual outputs for this sample
            {
                A[ij] = 0.0f;
                for (ik = 0; ik < input; ik++)
                {
                    A[ij] += P[ii][ik] * W[ik][ij];
                }
                A[ij] += B[ij];
            }
            for (ij = 0; ij < output; ij++)         // adjust weights and thresholds by the learning rate (W-H rule)
            {
                for (ik = 0; ik < input; ik++)
                {
                    W[ik][ij] += Study_rate * (T[ii][ij] - A[ij]) * P[ii][ik];
                }
                B[ij] += Study_rate * (T[ii][ij] - A[ij]);
            }
            for (ij = 0; ij < output; ij++)         // accumulate the squared error
            {
                precision += pow(T[ii][ij] - A[ij], 2);
            }
        }
    }

    cout << "Maximum number of learning iterations: " << Max_learn_length << endl;
    cout << "Iterations used to reach the goal: " << ic << endl;
    cout << endl << "Expected error: " << Anticipation_error << endl;
    cout << "Error precision reached after learning: " << precision << endl << endl;

    cout << "Network weights after learning:" << endl;
    for (ii = 0; ii < input; ii++)        // one row per input, one column per output
    {
        for (ij = 0; ij < output; ij++)
        {
            cout << W[ii][ij] << "   ";
        }
        cout << endl;
    }

    cout << endl << "Thresholds after learning:" << endl;
    for (ii = 0; ii < output; ii++)       // one threshold per output
    {
        cout << B[ii] << "   ";
    }
    cout << endl << endl;

    system("pause");                      // Windows console convenience; can be removed on other platforms
    return 0;
}
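
The program can be built with any standard C++ compiler; for example (the file name and the exact command are assumptions, not taken from the original):

g++ -o adaline main.cpp
./adaline

Because the weights and thresholds are initialized randomly, the printed values differ from run to run.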






