Python implements the least mean squares algorithm (LMS)


The main difference between the LMS algorithm and the Rosenblatt perceptron is the way the weights are corrected: LMS uses a batch correction algorithm, while the Rosenblatt perceptron uses a single-sample correction algorithm. Both are single-layer perceptrons and can only be applied to linearly separable problems.
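To make that contrast concrete, here is a minimal illustrative sketch (not the post's code; the data and names simply mirror the definitions used later) of the two correction styles on the same hard-limiter neuron:

import numpy as np

def sgn(v):                      # hard limiter, as defined in the full code below
    return 1 if v > 0 else 0

X = np.array([[1, 1], [1, 0], [0, 1], [0, 0]])   # inputs
D = np.array([1, 1, 1, 0])                       # desired outputs
a = 0.1                                          # learning rate

# Single-sample correction: the weights change immediately after each sample.
W = np.zeros(2)
for x, d in zip(X, D):
    e = d - sgn(np.dot(W, x))
    W = W + a * e * x            # applied right away

# Batch correction: corrections are accumulated over a full pass through the
# samples, then applied once.
W = np.zeros(2)
delta = np.zeros(2)
for x, d in zip(X, D):
    e = d - sgn(np.dot(W, x))    # W stays frozen during the pass
    delta = delta + a * e * x
W = W + delta                    # applied once per pass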

Detailed code and instructions are as follows:


"    algorithm: Minimum mean square algorithm (LMS)     mean square error: The expected value of the difference squared between the sample's predicted output and the actual output value, recorded as mes    set: O bserved  is the sample truth, predicted is the sample prediction value, then the calculation formula:    (converted to easy to write, non-mathematical standard notation, because the mathematical symbol is not written here)     mes=[(Observed[0]-pridicted[0]) * (observed[0]-pridicted[0]) +....           (Observed[n]-pridicted[n]) * (Observed[n]-pridicted[n])]/n ""     variable Conventions: uppercase denotes a matrix or an array, lowercase denotes a number    x: Represents an array or matrix    x: Represents a value of the corresponding array or matrix "'"       About learning efficiency (also called stride Length: Controls the regulation of the weight vector in the nth iteration). (the following parameter a):      learning efficiency is too high: the convergence rate increases, the stability is reduced, that is, the result is fast, but the result is poor accuracy       Learning efficiency is too small: stability is improved, convergence rate is reduced, that is, the result is slow, high accuracy, cost of resources       for learning efficiency, there is a special algorithm, here do not do research. Just in most cases the choice: compromise value "' import numpy as npa=0.1  # #学习率  0<a<1x=np.array ([[[] [1,0],[0,1],[0,0]])  # #输入矩阵D =np.array ([1,1,1,0])   # #期望输出结果矩阵W =np.array ([0,0])    # #权重向量expect_e =0.005 # #期望误差maxtrycount =20 # #最大尝试次数 # #硬限幅函数 (that is, the standard, this is relatively simple: input v is greater than 0, return 1. Less than or equal to 0 return-1) "     The last Weight is W ([0.1,0.1]), then:0.1x+0.1y=0 ==>y=-x     i.e.: Y=-x "def sgn (v):     if v>0:        return 1     else:        return 0 # #跟上篇感知器单样本训练的-1 than adjusted to 0, in order to test the need. 
-1 Training does not result     # #读取实际输出     '      here are two vectors multiplied, corresponding to the mathematical formula:     a (m,n) *b (p,q) =m*p+n*q     in the following function when the loop is xn=1 (at this point w= ([0.1,0.1])):     np.dot (w.t,x) = (() * (0.1,0.1) =1*0.1+1*0.1=0.2>0 ==>sgn  return 1 "def get_v (W,x): &NBSP;&NBSP;&NBSP;&NBSP;RETURN&NBSP;SGN (Np.dot (w.t,x)) # #dot表示两个矩阵相乘 # #读取误差值def  get_e (w,x,d):     return d-get_v (w,x) # #权重计算函数 (Batch fix) "   corresponding mathematical formula:  w (n+1) =w (n) +a*x (n) *e   Corresponds to the following variable interpretation:    W (n+1)  <= neww  return value   w (n)    <=oldw (old weight vector)   a       <= a (learning rate, Range: 0<a<1)   x (n)    <= x ( Input value)   e      <=  error value or error signal ' DEF&NBSP;NEWW ' (oldw,d,x,a):     e=get_e (oldw,x,d)     return  (oldw+a*x*e,e) # #修正权值 '       the principle of this cycle:     weight correction principle (batch correction) ==> Neural network reads one sample at a time and corrects,          end of expected error value or maximum number of attempts, end of correction process     ' cnt=0while true:     err=0    i=0    for xn in x:     &NBSP;&NBSP;&NBSP;&NBSP;&NBSP;&NBSP;&NBSP;&NBSP;&NBSP;&NBSP;&NBSP;&NBSP;W,E=NEWW (W,D[i],xn,a)          i+=1        err+=pow (e,2)    # #lms算法的核心步骤, namely: MES&NBSP;&NBSp;  err/=float (i)     cnt+=1    print (U " %d  Weight after adjustment: "%cnt)     print (W)     print (u" error:%f "%err)      if err<expect_e or cnt>=maxtrycount:         breakprint ("Last Weight:", w.t) # #输出结果print ("Start verifying results ...") for xn in x:    print ("d%s and w%s =>%d"% (Xn,w.t,get_v (W,XN))) # #测试准确性: "    by the above description: the classification line equation is y=-x, from the axis can be seen:    (2,3) belongs to the + 1 classification, ( -2,-1) belongs to 0 category" "Print ( "Start test ...") Test=np.array ([2,3]) print ("d%s and w%s =>%d"% (Test,w.t,get_v (w,test))) test= Np.array ([ -2,-1]) print ("d%s and w%s =>%d"% (Test,w.t,get_v (w,test)))
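For comparison: the code above drives its update with the hard-limited output sgn(v), whereas the textbook Widrow-Hoff form of LMS uses the linear output v = W·x directly in the error term. Here is a minimal self-contained sketch of that form on the same data (not part of the original post):

import numpy as np

X = np.array([[1, 1], [1, 0], [0, 1], [0, 0]])
D = np.array([1, 1, 1, 0])
a = 0.1
W = np.zeros(2)

for epoch in range(20):
    sq_errs = []
    for x, d in zip(X, D):
        e = d - np.dot(W, x)     # linear error: no hard limiter in the update
        W = W + a * e * x        # W(n+1) = W(n) + a*e(n)*x(n)
        sq_errs.append(e ** 2)

# The mse settles near its least-squares floor instead of reaching zero, so
# we stop after a fixed number of epochs; classification through the hard
# limiter is still correct for all four samples.
print("W =", W, " final mse = %.4f" % np.mean(sq_errs))
for x in X:
    print(x, "=>", 1 if np.dot(W, x) > 0 else 0)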


Output Result:


Weight after adjustment 1:
[0.1 0.1]
Error: 0.250000
Weight after adjustment 2:
[0.1 0.1]
Error: 0.000000
Last weight: [0.1 0.1]
Start verifying results...
d[1 1] and w[0.1 0.1] => 1
d[1 0] and w[0.1 0.1] => 1
d[0 1] and w[0.1 0.1] => 1
d[0 0] and w[0.1 0.1] => 0
Start test...
d[2 3] and w[0.1 0.1] => 1
d[-2 -1] and w[0.1 0.1] => 0


The output shows that the optimal weights are obtained after only 2 training passes.
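To see where the two error values come from, trace the loop by hand. On the first pass, the first sample x = [1, 1] with W = [0, 0] gives v = sgn(0) = 0, so e = 1 - 0 = 1 and W becomes [0, 0] + 0.1·[1, 1]·1 = [0.1, 0.1]; the remaining three samples are then already classified correctly (e = 0), so the error is (1² + 0 + 0 + 0)/4 = 0.25. On the second pass all four errors are 0, the error 0.000000 falls below expect_e = 0.005, and the loop stops.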

Supplementary note: after several adjustments to the samples or the initial weights, 20 cycles sometimes produce a result and sometimes fail to find the optimal solution. So when an experiment does not reach the expected result, besides an insufficient number of cycles, the most likely cause is the choice of samples or initial weights.
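The code comment above mentions that returning -1 from sgn (as in the earlier perceptron post) makes training fail here, and that is a concrete case of this sensitivity to the problem setup: with outputs in {+1, -1} but a desired output of 0 for the sample [0, 0], the error for that sample is always 0 - (-1) = 1, yet the update a·x·e is zero because x is the zero vector, so the mse can never fall below 1/4. A minimal sketch (not the post's code) demonstrating this:

import numpy as np

def sgn_pm(v):                   # hard limiter returning -1 instead of 0
    return 1 if v > 0 else -1

X = np.array([[1, 1], [1, 0], [0, 1], [0, 0]])
D = np.array([1, 1, 1, 0])
a, W = 0.1, np.zeros(2)

for epoch in range(20):
    sq_errs = []
    for x, d in zip(X, D):
        e = d - sgn_pm(np.dot(W, x))
        W = W + a * e * x        # for x = [0, 0] this never changes W
        sq_errs.append(e ** 2)

print("mse after 20 epochs: %.4f" % np.mean(sq_errs))   # stays at 0.25, never < 0.005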

