Write a simple perceptron in Python

Source: Internet
Author: User

https://blog.dbrgn.ch/2013/3/26/perceptrons-in-python/

I am currently attending a neural networks and machine learning course at HSR, in which one of the simplest neural network models covered is the perceptron.

Background information

The simplest neural network model is the perceptron classifier, which takes a number of inputs (x1, x2, ..., xn), each with a corresponding weight (w1, w2, ..., wn), and is calculated as follows:


Each input value is multiplied by its corresponding weight, the products are summed, and the sum is passed through a step function f:
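The formula images are missing here; as a sketch, assuming the usual perceptron definition described above, the output y for inputs x1 ... xn with weights w1 ... wn can be written as

y = f\left(\sum_{i=1}^{n} w_i x_i\right), \qquad
f(t) =
\begin{cases}
0 & \text{if } t < 0 \\
1 & \text{otherwise}
\end{cases}

which is exactly the unit step function implemented in the code below.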


To understand the whole process, here is a streamlined version of the flowchart:


Writing it in Python

Here's how to use Python and the NumPy library to write the simplest perceptron, with two input values, and then use it to learn the Boolean OR operation. The first step is to import the libraries that will be used:

from random import choice
from numpy import array, dot, random

Then write the step function and define it as unit_step:

unit_step = lambda x: 0 if x < 0 else 1

Reference: http://reference.wolfram.com/language/ref/UnitStep.html
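As a quick check (not from the original post), the lambda maps negative sums to 0 and everything else to 1:

unit_step(-0.5)   # -> 0
unit_step(0.0)    # -> 1 (0 is not < 0)
unit_step(0.3)    # -> 1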

Then write the data that maps inputs to outputs, using NumPy arrays. The first element of each tuple is a three-element array: the first two values are the two inputs, and the third is the bias value (used for the threshold calculation), which is always 1. The second element of each tuple is the expected output. This data is defined as follows:

training_data = [
    (array([0, 0, 1]), 0),
    (array([0, 1, 1]), 1),
    (array([1, 0, 1]), 1),
    (array([1, 1, 1]), 1),
]
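To make the role of the bias element concrete, here is a small worked example; the weight values are made up purely for illustration and are not part of the original post:

from numpy import array, dot

w_example = array([0.4, 0.6, -0.3])   # hypothetical weights: two input weights plus the bias weight
x_example = array([1, 0, 1])          # input (1, 0) with the constant bias input 1
dot(w_example, x_example)             # 0.4*1 + 0.6*0 + (-0.3)*1 = 0.1, and unit_step(0.1) = 1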

From the training data you can see the Boolean OR relationship:

0 OR 0 = 0
0 OR 1 = 1
1 OR 0 = 1
1 OR 1 = 1
Next, use the random function to generate three weight values between 0 and 1 as initial values:

w = random.rand(3)
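Here random.rand is numpy.random.rand, so w is an array of three values drawn uniformly from [0, 1); the exact numbers differ on every run. For example (illustrative output only):

print(w)   # e.g. [0.37 0.95 0.73] -- your values will differ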

Now some variables can be declared. The list variable errors stores the error values, which will also be used for plotting later; if you do not want to plot, you can simply leave it in. The variable eta controls the learning rate, and the variable n defines how many training iterations to run:

errors = []
eta = 0.2
n = 100

In order to find an appropriate weight vector w, the error needs to be reduced to 0. In this example 100 iterations are enough; if the input were a very noisy data set, the number of iterations would need to be increased to a larger value.

To train the perceptron, a training example is chosen at random from the data set. The dot product of the input vector and the weight vector is computed and passed through the step function, and the result is compared with the expected value. If the expected value is larger, the weights need to be increased; if it is smaller, the weights need to be decreased. This correction is computed on the last line, where the error is multiplied by the learning rate (eta) and the input vector (x), and the result is added to the weight vector. In this way the next output moves a little closer to the desired value.

for i in range(n):
    x, expected = choice(training_data)
    result = dot(w, x)
    error = expected - unit_step(result)
    errors.append(error)
    w += eta * error * x

That is all the code needed. After training, the perceptron can be checked to see what it has learned about the OR operation:

for x, _ in training_data:
    result = dot(x, w)
    print("{}: {} -> {}".format(x[:2], result, unit_step(result)))

[0 0]: -0.0714566687173 -> 0
[0 1]: 0.829739696273 -> 1
[1 0]: 0.345454042997 -> 1
[1 1]: 1.24665040799 -> 1

If you are interested in the error values, you can use a plotting library to display them:

from pylab import plot, ylim

ylim([-1, 1])
plot(errors)
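If you prefer not to use the pylab interface, a sketch of the equivalent calls with matplotlib.pyplot (assuming matplotlib is installed and errors is the list filled during training) is:

import matplotlib.pyplot as plt

plt.ylim([-1, 1])   # the error can only be -1, 0 or 1
plt.plot(errors)
plt.show()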


You can see from the plot that after a certain number of iterations the error settles at 0 and stays there. If you feel the error has not converged yet, you can increase the number of training iterations:


Alternatively, you can change the training data so the perceptron learns other Boolean operations such as AND, NOR or NOT. One thing to be aware of, however, is that it cannot model the XOR operation, because XOR is not linearly separable; to simulate XOR you have to use a multi-layer perceptron (basically a small neural network).
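For example, to learn Boolean AND instead of OR, only the expected outputs in the training data need to change; a sketch (not from the original post):

# training data for AND: the output is 1 only when both inputs are 1
training_data = [
    (array([0, 0, 1]), 0),
    (array([0, 1, 1]), 0),
    (array([1, 0, 1]), 0),
    (array([1, 1, 1]), 1),
]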

Summary

The full code is as follows:

from random import choice
from numpy import array, dot, random

unit_step = lambda x: 0 if x < 0 else 1

training_data = [
    (array([0, 0, 1]), 0),
    (array([0, 1, 1]), 1),
    (array([1, 0, 1]), 1),
    (array([1, 1, 1]), 1),
]

w = random.rand(3)
errors = []
eta = 0.2
n = 100

for i in range(n):
    x, expected = choice(training_data)
    result = dot(w, x)
    error = expected - unit_step(result)
    errors.append(error)
    w += eta * error * x

for x, _ in training_data:
    result = dot(x, w)
    print("{}: {} -> {}".format(x[:2], result, unit_step(result)))
