Linear Programming
A linear program seeks the extreme (maximum or minimum) value of a linear objective function subject to a set of linear equality or inequality constraints. That is, an optimization problem whose objective function and constraints are all linear is called a linear program.
Common forms
Linear Programming is convex optimization
Convex Optimization:
Optimization of a convex function over a convex set is called convex programming.
It can be proved that the set of points satisfying linear constraints is a convex set, and that a linear function is a convex function, though not a strictly convex one.
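A minimal check, using the standard definition of convexity (this derivation is a reconstruction, not from the original text): for f(x) = c^T x and any λ ∈ [0, 1],

```latex
f\bigl(\lambda x + (1-\lambda)y\bigr)
  = c^{\mathsf T}\bigl(\lambda x + (1-\lambda)y\bigr)
  = \lambda\, c^{\mathsf T}x + (1-\lambda)\, c^{\mathsf T}y
  = \lambda f(x) + (1-\lambda) f(y).
```

The convexity inequality f(λx + (1−λ)y) ≤ λf(x) + (1−λ)f(y) always holds with equality, which is exactly why a linear function is convex but never strictly convex.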
Unified form
To facilitate a unified solution procedure, every linear program is rewritten in a standard form:
Inequality constraints are converted into equations by introducing slack variables.
The problem can then be written in matrix form:
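A sketch of the standard form being referred to (reconstructed from the usual conventions, with c the cost vector, A the constraint matrix, and b the right-hand side):

```latex
\min_{x}\; c^{\mathsf T} x
\quad \text{s.t.} \quad
A x = b, \qquad x \ge 0 .
```

Each inequality a_i^T x ≤ b_i is turned into the equation a_i^T x + s_i = b_i by adding a slack variable s_i ≥ 0, which is how the inequality constraints disappear from the standard form.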
A is called the constraint matrix.
Feasible Solution
The constraints encoded in the constraint matrix A determine a feasible region; any solution lying in the feasible region is called a feasible solution.
Finding points of the feasible region then amounts to solving a system of linear equations.
Generally, the rank m of A satisfies m < n, so the linear system Ax = b has infinitely many solutions. Take m linearly independent columns of A as the basic vectors and set the variables of the remaining non-basic vectors to 0; the resulting solution of the constraint system is called a basic solution. A basic solution that also satisfies x ≥ 0 is called a basic feasible solution.
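In symbols (a standard reconstruction): collect the m chosen linearly independent columns into a basis matrix B. With the non-basic variables x_N fixed at 0, the basic solution is obtained by solving a square system:

```latex
B x_B = b
\qquad\Longrightarrow\qquad
x_B = B^{-1} b, \qquad x_N = 0 .
```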
Theorem: if a linear program has an optimal solution, then some basic feasible solution is optimal. (Proof omitted.)
In other words, if a linear program has an optimal solution, it suffices to search for it among the basic feasible solutions.
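Since the basic feasible solutions form a finite set, the theorem can be illustrated by brute force. A minimal sketch (the LP data below are an illustrative example, not from the original text): we take min −x1 − 3x2 subject to x1 + x2 ≤ 4, x1 + 2x2 ≤ 6, x ≥ 0, already put in standard form with slack variables x3, x4, enumerate every basic solution, keep the feasible ones, and pick the best.

```python
# Enumerate basic solutions of A x = b (A is m x n with rank m) and, among
# the basic feasible ones (x >= 0), pick the one minimizing c^T x.
from itertools import combinations
import numpy as np

A = np.array([[1.0, 1.0, 1.0, 0.0],   # x1 +   x2 + x3      = 4
              [1.0, 2.0, 0.0, 1.0]])  # x1 + 2*x2      + x4 = 6
b = np.array([4.0, 6.0])
c = np.array([-1.0, -3.0, 0.0, 0.0])  # slack variables cost nothing

def basic_feasible_solutions(A, b):
    """Yield (basis, x) for every basic feasible solution of A x = b, x >= 0."""
    m, n = A.shape
    for basis in combinations(range(n), m):
        B = A[:, basis]
        if abs(np.linalg.det(B)) < 1e-12:       # columns not independent
            continue
        x = np.zeros(n)
        x[list(basis)] = np.linalg.solve(B, b)  # non-basic entries stay 0
        if np.all(x >= -1e-9):                  # feasibility check
            yield basis, x

basis, x = min(basic_feasible_solutions(A, b), key=lambda bx: c @ bx[1])
print(basis, x, c @ x)
```

Of the C(4, 2) = 6 column pairs, four give basic feasible solutions; the minimum of c^T x over those four is the LP optimum, consistent with the theorem.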
Basic variables
We assume that the first m column vectors of A form the basis (the basic variables) and write the matrix form above as a block matrix:
Transforming this block form further yields the following:
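A reconstruction of the derivation (standard notation: B the basis matrix, N the non-basic columns, c_B and c_N the matching pieces of the cost vector): from [B N][x_B; x_N] = b,

```latex
x_B = B^{-1} b - B^{-1} N x_N,
\qquad
c^{\mathsf T} x
  = c_B^{\mathsf T} x_B + c_N^{\mathsf T} x_N
  = c_B^{\mathsf T} B^{-1} b
  + \bigl(c_N^{\mathsf T} - c_B^{\mathsf T} B^{-1} N\bigr)\, x_N .
```

The coefficients c_N^T − c_B^T B^{-1} N of the non-basic variables are the discriminant numbers (reduced costs) that the theorem below refers to.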
Theorem: let x be the basic feasible solution corresponding to the basis B. If all the discriminant numbers (reduced costs) are non-negative, then x is an optimal solution.
Proof: consider the objective function.
The first part, which involves only the basis B, is a fixed number; the second part is a sum over the non-basic variables. If every coefficient in the second part is non-negative, then, since each non-basic variable must satisfy x_j ≥ 0, the objective can only stay the same or increase when a non-basic variable is raised above 0. Setting all non-basic variables to 0 therefore gives the smallest possible value, so the basic feasible solution x is optimal.
We can then iterate through different bases; once all the discriminant numbers are non-negative, the current basic feasible solution is optimal. The simplex method is the classic algorithm that performs exactly this iteration.
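In practice one rarely hand-rolls the iteration; SciPy's linprog solves the same illustrative LP directly (a sketch, using example data rather than anything from the original text; note that SciPy's default "highs" method is a modern LP solver, with "highs-ds" selecting a dual-simplex variant).

```python
# Solve  min -x1 - 3*x2  s.t.  x1 + x2 <= 4,  x1 + 2*x2 <= 6,  x >= 0
# with scipy.optimize.linprog.
from scipy.optimize import linprog

c = [-1.0, -3.0]
A_ub = [[1.0, 1.0],
        [1.0, 2.0]]
b_ub = [4.0, 6.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)  # the optimum lands on a vertex of the feasible region
```

The reported optimum coincides with the best basic feasible solution found by the brute-force enumeration above, as the theorem predicts.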