In two-dimensional coordinates, two points determine a straight line. Given n data points (x1, y1), (x2, y2), ..., (xn, yn), we can use the method of least squares to fit a single optimal line through them, that is, to find the best values of the line's parameters a and b.
The linear equation in one variable has the form:

y = a + bx
Suppose we have a set of equally weighted measurements (xi, yi), and assume the error in the independent variable xi is negligible. For each xi, the measured value is yi, while the value predicted by the line is a + bxi. The deviation between the two is

di = yi - (a + bxi)

If every measured yi happened to fall exactly on the line, then d1 = d2 = ... = dn = 0, and the corresponding a and b would undoubtedly be the best. Because measurement error always exists, this cannot happen, so instead we try to make all the deviations small overall. We cannot simply minimize the sum d1 + d2 + ... + dn, because the di have mixed signs and would cancel each other out; and minimizing the sum of absolute values leads to equations that are hard to solve. We therefore use the sum of squares, d1^2 + d2^2 + ... + dn^2: the values of a and b that minimize this sum give the best fit.
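As a concrete illustration of the idea above, here is a minimal sketch in plain Python. Minimizing the sum of squared deviations has a well-known closed-form solution: b = Σ(xi − x̄)(yi − ȳ) / Σ(xi − x̄)², and a = ȳ − b·x̄. The function name `fit_line` is a hypothetical helper chosen for this example, not something from the original text.

```python
def fit_line(xs, ys):
    """Least-squares fit of y = a + b*x.

    Minimizes the sum of squared deviations d_i = y_i - (a + b*x_i)
    using the standard closed-form solution (fit_line is an
    illustrative name, not an API from the article)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: b = sum((x - x_bar)*(y - y_bar)) / sum((x - x_bar)^2)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    b = sxy / sxx
    # Intercept: a = y_bar - b * x_bar
    a = mean_y - b * mean_x
    return a, b

# Points lying exactly on y = 1 + 2x: all deviations d_i are zero,
# so the fit recovers a = 1.0, b = 2.0 exactly.
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```

With noisy measurements the deviations can no longer all be zero, but the same two formulas still return the a and b that make d1^2 + d2^2 + ... + dn^2 as small as possible.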
Least squares fitting for linear models (RPM)