From: http://blog.csdn.net/celerychen2009/article/details/9058315
For anyone working in image processing, it used to be essentially impossible to understand the classic papers in the field without a grounding in variational methods. Of course, that was the situation ten or so years ago.
Today, if you do not know the Bregman iteration algorithm and l1-norm reconstruction, you cannot follow the cutting-edge image processing papers published in recent years. Chinese-language references cite the Bregman iteration algorithm everywhere, but hardly any of them explain its principles in detail. This article briefly describes the principles of the popular Bregman iteration algorithm.
1. Introduction
In recent years, with the introduction of compressed sensing, l1-regularized optimization has attracted widespread attention. Compressed sensing makes it possible to reconstruct an image signal from a small amount of data. l1 regularization is a classic topic in convex optimization and is difficult to solve with traditional methods. We start from the classic image restoration problem:
In image restoration, a common model can be described as follows:
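The formula image from the original post is missing here; the standard degradation model consistent with the description below (a reconstruction, not the original image) is

f = A u + \epsilon, \qquad \epsilon \sim N(0, \sigma^2)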
Our goal is to recover the unknown true image u from the observed image f. Here u is an element of an n-dimensional vector space and f is an element of an m-dimensional vector space; in compressed sensing terminology, f is called the measurement signal. The Gaussian white noise has variance \sigma^2. A is a linear operator: for example, a convolution operator in deconvolution problems, or a subsampled measurement operator in compressed sensing.
In the above equation only f is known; everything else is unknown. Moreover, the problem is usually ill-posed. By introducing a regularization term we can make it well-posed. The regularization approach introduces a prior assumption on the unknown u, such as sparsity or smoothness. The Tikhonov method is a common regularization method; it solves the following optimization problem:
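The original formula image is missing; a standard form of the Tikhonov problem consistent with the description below (the exact weighting is an assumption) is

\min_u \; \|\nabla u\|_2^2 + \frac{\mu}{2}\|A u - f\|_2^2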
Here \mu is a preset scalar constant greater than zero that balances the data-fidelity term against the regularization term; the double-bar symbol denotes the l2 norm.
Before introducing the Bregman iteration algorithm, two important concepts must be described.
2. Bregman distance
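The definition image from the original post is missing here; the standard definition the next paragraph refers to is: p is a subgradient of the functional J at the point u, written p \in \partial J(u), if

J(v) \ge J(u) + \langle p, v - u \rangle \quad \text{for all } v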
Note that this definition involves the subgradient of the functional J at the point u; the point p lies in the dual space. The term has been variously rendered as subgradient, sub-gradient, or weak gradient. The rightmost term of the inequality is an inner product; if J is a simple function of one variable, it is just the product of two real numbers. What is the advantage of the subgradient? An ordinary derivative may fail to exist, for example y = |x| is not differentiable at 0, yet the subgradient exists there.
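The Bregman distance image is likewise missing; the standard definition discussed in the next paragraph is: for p \in \partial J(v),

D_J^p(u, v) = J(u) - J(v) - \langle p, u - v \rangle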
The preceding definition is the Bregman distance. For a convex functional J, the Bregman distance between two points u and v equals the difference of the function values J(u) - J(v) minus the inner product of the subgradient p at v with the difference u - v. Note that this distance is not symmetric, which sets it apart from the usual notion of distance in functional analysis.
3. Bregman Iteration Algorithm
The Bregman iteration algorithm can efficiently solve the following functional minimization problem:
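The image for formula 3.1 is missing; its standard form (a reconstruction based on the description below) is

\min_{u \in X} \; J(u) + H(u) \tag{3.1}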
The first term J in the above formula is a functional from X to R, whose domain X is a closed convex set. The second term H is a non-negative differentiable functional from X to R. f is a known quantity, usually the observed image data, so f is a matrix or vector. The functional takes different concrete forms for different problems. For example, for the image restoration problem of the introduction, J(u) is the smoothness prior constraint, i.e. the regularization term, while H is the data-fidelity term.
The Bregman iteration algorithm first initializes the relevant variables to zero and then iterates on u; the first term on the left is the Bregman distance of the functional J. Next look at the iteration formula for the point p: the rightmost term of that formula is the gradient of the functional H.
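The iteration images are missing; the standard two-step scheme they describe (formula 3.2) is

u^{k+1} = \arg\min_{u \in X} \; D_J^{p^k}(u, u^k) + H(u)
p^{k+1} = p^k - \nabla H(u^{k+1})

with u^0 = 0 and p^0 = 0.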
Each iteration produces the output of formula 3.2, and after enough iterations it converges to the true optimal solution. For a detailed proof of this convergence, see the reference listed at the end of this article.
Functional 3.1 takes different concrete forms for specific problems. For example, for the basis pursuit algorithm used in compressed sensing, J is the l1 norm; for the image denoising problem, it may be the l1 norm of the gradient of u, and A becomes the identity operator.
4. Linearized Bregman Iteration Algorithm
Each iteration of the Bregman iteration algorithm must solve the minimization of functional 4.1, and the computational cost of this step is very high. The idea of linearized Bregman iteration is to linearize the second term of functional 4.1: according to the Taylor formula for matrix functions, the second term of 4.1 can be expanded into the form of 4.2.
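The images for 4.1 and 4.2 are missing; in the standard presentation (a reconstruction, with the step size \delta an assumed symbol) they read

u^{k+1} = \arg\min_u \; D_J^{p^k}(u, u^k) + H(u) \tag{4.1}
H(u) \approx H(u^k) + \langle \nabla H(u^k), u - u^k \rangle + \frac{1}{2\delta}\|u - u^k\|_2^2 \tag{4.2}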
Note that formula 4.2 replaces the exact second-order term of the Taylor formula with a simple proximity term. Substituting this expansion into the first step of the Bregman iteration algorithm gives formula 4.3. Expanding 4.3 and collecting like terms, we easily obtain formula 4.4.
Consider the basis pursuit problem, where H equals \|A u - f\|_2^2 / 2. Substituting the derivative of H into formula 4.4 gives formula 4.5; formula 4.6 is the second step of the basic Bregman iteration algorithm. Note that the superscript of u in formula 4.6 is wrong and should be k + 1. With that correction, formulas 4.7 and 4.8 follow, and 4.9, 4.10, and 4.11 are then obvious.
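For reference, with H(u) = \frac{1}{2}\|A u - f\|_2^2 the gradient used in these substitutions is \nabla H(u) = A^T(A u - f).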
Next we substitute 4.11 and the Bregman distance defined earlier into 4.5, as shown below:
In the above derivation, u^k is a constant and C is a constant depending on u^k. The formula is then minimized over u. Because of the absolute value, we must treat the cases separately, which yields the piecewise expression above. Simplifying further:
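The piecewise image is missing; componentwise, the scalar problem behind it is \min_u \; \mu|u| + \frac{1}{2\delta}(u - a)^2 for a constant a, whose case-by-case solution (a standard result, with the symbols a, \mu, \delta assumed) is

u^* = a - \mu\delta \quad (a > \mu\delta)
u^* = 0 \quad (|a| \le \mu\delta)
u^* = a + \mu\delta \quad (a < -\mu\delta)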
Here we define the shrink operation, which is very important: it appears throughout the Bregman family of algorithms. Based on this operation, we derive the following expression and can finally summarize the linearized Bregman iteration algorithm as follows:
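The missing images here are, in their standard form, the shrink (soft-thresholding) operator

shrink(a, \gamma) = sign(a) \cdot \max(|a| - \gamma, 0)

and the linearized Bregman iteration for \min_u \; \mu\|u\|_1 subject to A u = f:

v^{k+1} = v^k + A^T(f - A u^k)
u^{k+1} = \delta \cdot shrink(v^{k+1}, \mu)

with u^0 = v^0 = 0. To make this concrete, here is a minimal numpy sketch of that scheme; the names A, f, mu, delta, n_iter are illustrative assumptions, not part of the original article:

import numpy as np

def shrink(x, gamma):
    # soft-thresholding: sign(x) * max(|x| - gamma, 0), applied componentwise
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def linearized_bregman(A, f, mu, delta, n_iter=500):
    # sketch of linearized Bregman iteration for min mu*||u||_1 s.t. A u = f
    u = np.zeros(A.shape[1])
    v = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v = v + A.T @ (f - A @ u)      # accumulate the gradient (Bregman) step
        u = delta * shrink(v, mu)      # componentwise shrink
    return u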
5. Split Bregman Algorithm
The split Bregman algorithm is another efficient algorithm. We already know that the Bregman iteration algorithm is used to solve the following convex optimization problem:
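The image for this problem is missing; in the standard split Bregman setting it reads

\min_u \; \|\Phi(u)\|_1 + H(u) \tag{5.1}

where \Phi denotes a (typically linear) operator such as the gradient; this notation is a reconstruction from the standard literature.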
We can convert the above expression to the following equivalent form:
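A standard reconstruction of the missing formula 5.2: introduce an auxiliary variable d = \Phi(u), giving

\min_{u,d} \; \|d\|_1 + H(u) \quad \text{subject to} \quad d = \Phi(u) \tag{5.2}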
This step may look redundant, but it is precisely this splitting that, after some derivation, yields an efficient iterative algorithm: the split Bregman iteration.
Formula 5.2 above is an equality-constrained optimization problem. It is converted into the following unconstrained optimization problem:
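A standard reconstruction of the missing formula 5.3, with penalty parameter \lambda:

\min_{u,d} \; \|d\|_1 + H(u) + \frac{\lambda}{2}\|d - \Phi(u)\|_2^2 \tag{5.3}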
In the above formula there is now an additional optimization variable d. Make the following change of variables:
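The substitution image is missing; the standard choice is to collect the non-quadratic terms into a single functional,

J(u, d) = \|d\|_1 + H(u)

so that the problem takes the form \min_{u,d} \; J(u, d) + \frac{\lambda}{2}\|d - \Phi(u)\|_2^2, to which the Bregman iteration of Section 3 applies.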
If we apply the Bregman iteration algorithm mentioned above to 5.5, it is easy to write the following iteration sequence:
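The iteration images (5.6 through 5.9) are missing; in the standard derivation they take the form

(u^{k+1}, d^{k+1}) = \arg\min_{u,d} \; D_J^{p^k}\big((u,d),(u^k,d^k)\big) + \frac{\lambda}{2}\|d - \Phi(u)\|_2^2
p_u^{k+1} = p_u^k + \lambda (\nabla \Phi)^T \big(d^{k+1} - \Phi(u^{k+1})\big)
p_d^{k+1} = p_d^k - \lambda \big(d^{k+1} - \Phi(u^{k+1})\big)

(a reconstruction following the standard split Bregman derivation; the numbering in the lost images may differ).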
Formula 5.9 comes from applying the Bregman distance to 5.5 and 5.6; formulas 5.7 and 5.8 come from the partial derivatives of 5.5 with respect to u and d, respectively. If we expand the 5.7 iteration, we get:
Likewise, expanding 5.8 gives:
Note that formulas 5.11 and 5.12 contain a common summation term, which we redefine as follows:
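The definition image is missing; the standard auxiliary variable it introduces is

b^k = \sum_{j=1}^{k} \big(\Phi(u^j) - d^j\big), \qquad \text{i.e.} \qquad b^k = b^{k-1} + \Phi(u^k) - d^k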
Substituting 5.14 and 5.15 into 5.9 gives:
When simplifying 5.16, note that u and d are the optimization variables; everything else is treated as constant.
At this point we can state the general optimization steps of the split Bregman iteration algorithm:
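The algorithm image is missing; the standard statement of the split Bregman steps is

u^{k+1} = \arg\min_u \; H(u) + \frac{\lambda}{2}\|d^k - \Phi(u) - b^k\|_2^2
d^{k+1} = \arg\min_d \; \|d\|_1 + \frac{\lambda}{2}\|d - \Phi(u^{k+1}) - b^k\|_2^2 = shrink\big(\Phi(u^{k+1}) + b^k, 1/\lambda\big)
b^{k+1} = b^k + \Phi(u^{k+1}) - d^{k+1}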
For the u iteration we regard u as the variable and all other quantities as constants; for the d iteration, d is the variable and the others are constants. This is the general iterative optimization procedure; the concrete update expressions depend on the specific problem. For example, the isotropic TV denoising model and the anisotropic TV denoising model lead to different concrete update formulas.
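To make the procedure concrete, here is a schematic numpy sketch of the generic split Bregman loop; the u-subproblem solver solve_u and the operator Phi are problem-specific placeholders, and all names are illustrative assumptions rather than part of the original article:

import numpy as np

def shrink(x, gamma):
    # soft-thresholding: sign(x) * max(|x| - gamma, 0), applied componentwise
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def split_bregman(solve_u, Phi, lam, d_shape, n_iter=100):
    # solve_u(d, b) must return argmin_u H(u) + (lam/2)*||d - Phi(u) - b||^2,
    # which is problem-specific (e.g. a linear solve when H is quadratic).
    d = np.zeros(d_shape)
    b = np.zeros(d_shape)
    u = None
    for _ in range(n_iter):
        u = solve_u(d, b)                    # u-subproblem
        d = shrink(Phi(u) + b, 1.0 / lam)    # d-subproblem: closed-form shrink
        b = b + Phi(u) - d                   # Bregman (auxiliary) variable update
    return u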
Finally, the references in this article are listed as follows:
http://download.csdn.net/detail/celerychen2009/5552551