Linear programming
Linear programming is the problem of making a linear function reach its extremum under a set of constraints; that is, a program in which both the objective function and the constraints are linear is called a linear program.
Common Forms
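A common general form is, for example (written here as a maximization with $\le$ constraints; minimization and mixed $\le$ / $\ge$ / $=$ constraints are handled analogously):

$$\max\ c_1 x_1 + c_2 x_2 + \dots + c_n x_n$$

$$\text{s.t.}\quad a_{i1} x_1 + a_{i2} x_2 + \dots + a_{in} x_n \le b_i,\quad i = 1,\dots,m,\qquad x_j \ge 0,\quad j = 1,\dots,n.$$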
Linear programming is a convex optimization problem
Convex optimization:
Minimizing a convex function over a convex set is called convex programming.
It can be proved that the set defined by linear constraints is a convex set, and that a linear function is a convex function, though not a strictly convex one; both facts follow from the conditions below.
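Concretely, for any two points $x_1, x_2$ in the constraint set $S$ and any $\lambda \in [0,1]$, the convex-combination point stays in the set:

$$\lambda x_1 + (1-\lambda)x_2 \in S,$$

and a linear function $f(x) = c^T x$ satisfies the convexity inequality with equality,

$$f\big(\lambda x_1 + (1-\lambda)x_2\big) = \lambda f(x_1) + (1-\lambda) f(x_2),$$

which is why it is convex but not strictly convex.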
Unified form
To allow a unified solution procedure, every linear program is rewritten in a unified (standard) form.
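In summation notation the unified form is usually taken to be:

$$\min \sum_{j=1}^{n} c_j x_j \qquad \text{s.t.}\quad \sum_{j=1}^{n} a_{ij} x_j = b_i,\ i = 1,\dots,m, \qquad x_j \ge 0,\ j = 1,\dots,n.$$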
Inequalities are turned into equations by introducing slack variables.
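For example, an inequality constraint

$$a_{i1}x_1 + \dots + a_{in}x_n \le b_i$$

becomes an equation after adding a non-negative slack variable $s_i$:

$$a_{i1}x_1 + \dots + a_{in}x_n + s_i = b_i, \qquad s_i \ge 0.$$

(A $\ge$ constraint is handled the same way by subtracting a non-negative surplus variable.)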
It can then be written in matrix form.
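With $c \in \mathbb{R}^n$, $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$, the standard form reads:

$$\min_{x}\ c^{T}x \qquad \text{s.t.}\quad Ax = b,\quad x \ge 0.$$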
$A$ is called the constraint matrix.
Feasible solution
The constraints define a feasible region, and any solution lying in the feasible region is called a feasible solution.
Finding feasible solutions therefore amounts to solving the system of linear equations $Ax = b$ (together with $x \ge 0$).
Generally, $A$ has rank $m$ with $m < n$ (usually $m \ll n$), so the linear system $Ax = b$ has infinitely many solutions. Choose $m$ linearly independent columns of $A$ as basis vectors, set the remaining (non-basic) variables to 0, and solve for the basic variables; the resulting solution of $Ax = b$ is called a basic solution. A basic solution that also satisfies $x \ge 0$ is called a basic feasible solution.
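As a small numerical illustration (the matrix and the chosen columns here are arbitrary examples, not taken from the text), a basic solution can be computed with NumPy by picking $m$ linearly independent columns and fixing the other variables to zero:

```python
import numpy as np

# A small system with m = 2 equations and n = 4 variables.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])

basis = [0, 1]                     # choose 2 linearly independent columns
B = A[:, basis]                    # basis matrix (must be invertible)

x = np.zeros(A.shape[1])           # non-basic variables are fixed to 0
x[basis] = np.linalg.solve(B, b)   # solve B @ x_B = b for the basic part

print(x)            # a basic solution of A @ x = b, here [2, 2, 0, 0]
print(A @ x - b)    # residual is (numerically) zero
```

Since this particular basic solution is also non-negative, it is a basic feasible solution.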
Theorem: if a linear program has a feasible solution, then it has a basic feasible solution; and if it has an optimal solution, then some basic feasible solution is optimal. (Proof omitted.)
That is, if the linear program has an optimal solution, it can be found by searching only among the basic feasible solutions.
Canonical form with respect to the basic variables
Without loss of generality, suppose the first $m$ columns of $A$ correspond to the basic variables. The matrix form above can then be written as a block matrix.
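Writing $A = [B, N]$, where $B$ holds the basic columns and $N$ the non-basic ones, and splitting $x$ the same way into $x_B$ and $x_N$:

$$Ax = Bx_B + Nx_N = b \quad\Longrightarrow\quad x_B = B^{-1}b - B^{-1}Nx_N .$$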
Continuing the transformation, the objective function can be rewritten in terms of the non-basic variables alone.
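Substituting $x_B = B^{-1}b - B^{-1}Nx_N$ into $z = c^T x = c_B^T x_B + c_N^T x_N$ gives:

$$z = c_B^{T}B^{-1}b + \left(c_N^{T} - c_B^{T}B^{-1}N\right)x_N ,$$

where the coefficients $\sigma_N^{T} = c_N^{T} - c_B^{T}B^{-1}N$ of the non-basic variables are the reduced costs (the "discriminant numbers") used in the optimality test.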
Theorem: let $x$ be the basic feasible solution corresponding to the basis $B$. If all reduced costs are non-negative, then $x$ is an optimal solution.
Proof: look at the objective function in the form above. It splits into two parts: the first part, $c_B^{T}B^{-1}b$, depends only on the basis and is a fixed number; the second part is a sum over the non-basic variables. Assume all reduced costs are non-negative. Since the variables of any feasible solution are also non-negative, the second part is non-negative, so the objective value of every feasible solution is at least $c_B^{T}B^{-1}b$. For the basic feasible solution $x$ the non-basic variables are all 0, so $x$ attains exactly this lower bound and is therefore optimal.
We can then iterate over different bases; when all the reduced costs become non-negative, an optimal solution has been reached. The corresponding method is the simplex method.
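As a rough sketch of this iteration (assuming an initial basic feasible basis is already known, and ignoring degeneracy and cycling; the `simplex` function and the example data below are illustrative, not a library routine), the method can be written with NumPy as follows:

```python
import numpy as np

def simplex(c, A, b, basis, tol=1e-9, max_iter=100):
    """Minimize c @ x subject to A @ x = b, x >= 0,
    starting from a known basic feasible basis (list of m column indices)."""
    m, n = A.shape
    basis = list(basis)
    for _ in range(max_iter):
        B = A[:, basis]
        x_B = np.linalg.solve(B, b)            # values of the basic variables
        y = np.linalg.solve(B.T, c[basis])     # simplex multipliers
        reduced = c - A.T @ y                  # reduced costs (zero on the basis)
        if np.all(reduced >= -tol):            # optimality test
            x = np.zeros(n)
            x[basis] = x_B
            return x, float(c @ x)
        j = int(np.argmin(reduced))            # entering variable (most negative cost)
        d = np.linalg.solve(B, A[:, j])        # how x_B changes as x_j increases
        if np.all(d <= tol):
            raise ValueError("the problem is unbounded")
        ratios = np.full(m, np.inf)            # ratio test picks the leaving variable
        ratios[d > tol] = x_B[d > tol] / d[d > tol]
        basis[int(np.argmin(ratios))] = j
    raise RuntimeError("iteration limit reached")

# Example: maximize 3*x1 + 5*x2 subject to x1 <= 4, 2*x2 <= 12, 3*x1 + 2*x2 <= 18.
# In standard form (minimization, slack variables x3, x4, x5):
c = np.array([-3.0, -5.0, 0.0, 0.0, 0.0])
A = np.array([[1.0, 0.0, 1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0, 1.0, 0.0],
              [3.0, 2.0, 0.0, 0.0, 1.0]])
b = np.array([4.0, 12.0, 18.0])

x, value = simplex(c, A, b, basis=[2, 3, 4])   # the slacks form the initial basis
print(x[:2], -value)                           # optimal point (2, 6), maximum 36
```

Here the entering variable is chosen by the most negative reduced cost (Dantzig's rule); other pivoting rules are possible and practical solvers add anti-cycling safeguards.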