1. Gradient Descent
1.1 Batch Gradient Descent
EG1: Determining the value of the parameter t in h(x) = x^2 - t*x - t with the gradient descent method
Note that the choice of the iterative factor is important: if the program's result diverges, check whether the iterative factor was picked badly. The -0.01 used here took some trial and error to find.
def hypo(t, x):                      # hypothesis h(x) = x^2 - t*x - t; precise answer: t = 2
    return x * x - t * x - t

def cost(t):                         # batch term: sum over all samples of (y_i - h_t(x_i)) * x_i
    tmp = 0
    for i in range(0, num):
        tmp += (yy[i] - hypo(t, xx[i])) * xx[i]
    return tmp

xx = [-2, -1, 0, 1, 2, 3, 4]         # xx[] and yy[] are the samples
yy = [6, 1, -2, -3, -2, 1, 6]
num = 7
eps = 0.00000000001                  # accuracy
aa = -0.01                           # iterative factor
tx = 9999
ty = 0
while abs(tx - ty) >= eps:
    tx = ty
    ty = tx + aa * cost(tx)
    print(ty)

print(tx, ty)
Iteration results:
0.84
1.3272
1.609776
1.77367008
1.8687286464
1.923862614912
1.95584031664896
1.9743873836563968
1.9851446825207102
1.991383915862012
1.9950026711999669
1.9971015492959807
1.9983188985916687
1.9990249611831679
1.9994344774862374
1.9996719969420178
1.9998097582263703
1.9998896597712947
1.999936002667351
1.9999628815470636
1.9999784712972968
1.999987513352432
1.9999927577444105
1.999995799491758
1.9999975637052196
1.9999985869490273
1.9999991804304358
1.9999995246496527
1.9999997242967986
1.9999998400921433
1.9999999072534431
1.999999946206997
1.9999999688000583
1.9999999819040337
1.9999999895043397
1.999999993912517
1.9999999964692599
1.9999999979521708
1.999999998812259
1.9999999993111102
1.9999999996004438
1.9999999997682574
1.9999999998655893
1.9999999999220417
1.9999999999547842
1.9999999999737748
1.9999999999847893
1.9999999999911777
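As a quick check of why -0.01 converges here: for these particular samples, cost(t) works out to 42*(t - 2), so each step gives t_new - 2 = (1 + 42*aa)*(t - 2). With aa = -0.01 the distance to the answer shrinks by a factor of 0.58 per iteration, which matches the geometric approach to 2 seen above; any aa outside roughly (-0.048, 0) makes |1 + 42*aa| >= 1, and the iteration diverges, which is exactly the sensitivity noted earlier.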
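The same sensitivity can be checked numerically. Below is a minimal sketch (not part of the original code; the trial values for aa are just illustrative) that reruns the same update with a few different iterative factors:

xx = [-2, -1, 0, 1, 2, 3, 4]
yy = [6, 1, -2, -3, -2, 1, 6]

def cost(t):
    # same batch term as above: sum of (y_i - h_t(x_i)) * x_i
    return sum((yy[i] - (xx[i] * xx[i] - t * xx[i] - t)) * xx[i] for i in range(len(xx)))

for aa in (-0.01, -0.03, -0.05, 0.01):   # trial iterative factors (illustrative)
    t = 0.0
    for step in range(200):
        t = t + aa * cost(t)
    print(aa, t)                         # -0.01 and -0.03 settle near 2; -0.05 and 0.01 blow up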
1.2 Stochastic Gradient Descent
EG2:
[Exercise] linear regression, gradient descent algorithm
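For the exercise above, here is a minimal sketch of stochastic gradient descent for simple linear regression, updating after each randomly chosen sample instead of the whole batch. The data, parameter names (w, b), learning rate, and step count are illustrative, not the exercise's reference solution:

import random

xs = [0, 1, 2, 3, 4, 5]
ys = [1, 3, 5, 7, 9, 11]               # generated from y = 2*x + 1

w, b = 0.0, 0.0
lr = 0.01                              # learning rate (iterative factor)

for step in range(2000):
    i = random.randrange(len(xs))      # pick one random sample per update
    pred = w * xs[i] + b
    err = pred - ys[i]
    w -= lr * err * xs[i]              # gradient of (err^2)/2 w.r.t. w
    b -= lr * err                      # gradient of (err^2)/2 w.r.t. b

print(w, b)                            # should end up close to 2 and 1

Because each update looks at only one sample, the parameters jitter from step to step, but on average they move toward the same minimum that batch gradient descent would reach, and each step is much cheaper on large data sets.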