"Single image Haze removal Using Dark Channel Prior" The principle, realization, effect and other of image de-fog algorithm.


2013-09-04 17:05:05, original iteye blog post: http://hw19886200.iteye.com/blog/1936487

In the field of image dehazing, few people do not know the paper "Single Image Haze Removal Using Dark Channel Prior", which won the CVPR 2009 Best Paper award. Its author, Dr. Kaiming He, graduated from Tsinghua University in 2007 and received his Ph.D. from the Chinese University of Hong Kong in 2011. He is a researcher of real substance; one can only lament the level of some so-called Ph.D.s in China by comparison. A doctor like him can truly be called a Doctor.

For Dr. He's information and papers, you can visit: http://research.microsoft.com/en-us/um/people/kahe/

I first came across the paper in 2011. To tell the truth, at the time I only browsed it casually; the soft matting step inside looked rather complex and executed very slowly, so I did not take much interest. Recently I happened to pick it up again and studied it carefully. I find the reasoning steps of the paper particularly clear and the explanations well placed. At about the same time I read another of his papers, Guided Image Filtering, which mentions using guided filtering in place of soft matting and is very fast, so my interest in the dehazing algorithm grew greatly.

This article is mainly a translation, reorganization, and partial interpretation of "Single Image Haze Removal Using Dark Channel Prior". If your English is good, I suggest reading the original paper, which may be more enjoyable.

I. A brief description of the paper's idea

First, let's look at what the dark channel prior is:

In the vast majority of local regions that do not cover sky, some pixels always have at least one color channel with a very low value. In other words, the minimum intensity value in such a region is a very small number.

We give the dark channel a mathematical definition. For an arbitrary input image J, its dark channel can be expressed as:

J^dark(x) = min_{y∈Ω(x)} ( min_{c∈{r,g,b}} J^c(y) )        (5)

In the formula, J^c denotes each channel of the color image, and Ω(x) denotes a window centered at pixel x.

The meaning of formula (5) is also very simple to express in code: first take the minimum of the R, G, and B components of each pixel and store it in a grayscale image of the same size as the original; then run a minimum filter over that grayscale image. The filter radius is determined by the window size; in general, WindowSize = 2 * Radius + 1.

The dark channel prior theory states that:

J^dark → 0

In real life, three main factors make the dark channel values low: a) shadows, such as the shadows of cars, buildings, and the insides of windows in cityscapes, or the shadows of leaves, trees, and rocks in natural landscapes; b) brightly colored objects or surfaces, which have low values in at least one of the three RGB channels (e.g. green grass/trees/plants, red or yellow flowers/leaves, or blue water); c) dark objects or surfaces, such as dark tree trunks and stones. In short, shadows or color are everywhere in natural outdoor scenes, and the dark channels of images of such scenes are always dark.

Setting aside the examples listed in the paper, I found a few haze-free images on the Internet and computed their dark channels; the results are as follows:

Some haze-free images and their dark channels

And the dark channels of some hazy images:

Some hazy images and their dark channels

The window size used for these dark channel images is 15×15, i.e. a minimum filter radius of 7 pixels.

From the images above, the universality of the dark channel prior is obvious. In the paper, the author reports statistics over more than 5,000 images that are basically consistent with this prior, so we may regard it as a theorem.

With this prior, we need only some mathematical derivation to finally solve the problem.

First of all, in computer vision and computer graphics, the following equation describing the formation of a hazy image is widely used:

I(x) = J(x) t(x) + A (1 - t(x))        (1)

where I(x) is the image we already have (the hazy image to be dehazed), J(x) is the haze-free image we want to recover, A is the global atmospheric light, and t(x) is the transmission. The known quantity is I(x); the target is J(x). Obviously, this is an equation with infinitely many solutions, hence the need for a prior.

Slightly rearranging formula (1) gives the following formula:

I^c(x) / A^c = t(x) J^c(x) / A^c + 1 - t(x)        (7)

As above, the superscript c denotes one of the three channels r/g/b.

First assume that within each window the transmission t(x) is constant, and denote it t̃(x); assume also that the value of A is given. Then apply the minimum operation twice (over the window and over the channels) to both sides of formula (7), obtaining:

min_{y∈Ω(x)} ( min_c I^c(y) / A^c ) = t̃(x) · min_{y∈Ω(x)} ( min_c J^c(y) / A^c ) + 1 - t̃(x)        (8)

In the above, J is the haze-free image we are solving for. According to the dark channel prior:

J^dark(x) = min_{y∈Ω(x)} ( min_c J^c(y) ) → 0        (9)

Therefore, it can be deduced that:

min_{y∈Ω(x)} ( min_c J^c(y) / A^c ) → 0        (10)

Substituting formula (10) into formula (8) gives:

t̃(x) = 1 - min_{y∈Ω(x)} ( min_c I^c(y) / A^c )        (11)

This is the preliminary estimate of the transmission.

In real life, there are some particles in the air even on clear days, so a slight haze is still visible when looking at distant objects. Moreover, the presence of haze gives humans a sense of depth, so it is desirable to retain a certain amount of haze when dehazing. This is done by introducing a factor ω in [0,1] into formula (11), which modifies it to:

t̃(x) = 1 - ω · min_{y∈Ω(x)} ( min_c I^c(y) / A^c )        (12)

All test results in this article use ω = 0.95.

All the derivations above assume that the global atmospheric light A is known. In practice, we can obtain it from the hazy image with the help of the dark channel image, in the following steps:

1) From the dark channel image, take the brightest 0.1% of pixels.

2) Among those positions, find the point with the highest intensity in the original hazy image I, and take its value as A.
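To make these two steps concrete, below is a minimal C sketch; the function and buffer names (EstimateAtmosphericLight, Src, DarkChannel) are my own illustrations, not from the paper. It histograms the 8-bit dark channel to find the brightness cut-off for the top 0.1% of pixels, then, among those positions, picks the brightest pixel of the original interleaved RGB image as A:

    // Illustrative sketch: estimate the global atmospheric light A.
    // Src is the hazy image (8-bit interleaved RGB, Width*Height*3 bytes);
    // DarkChannel is its min-filtered dark channel (Width*Height bytes).
    void EstimateAtmosphericLight(const unsigned char *Src, const unsigned char *DarkChannel,
                                  int Width, int Height, unsigned char A[3])
    {
        int Histogram[256] = { 0 };
        int X, Total = Width * Height, Wanted = Total / 1000;    // top 0.1% of pixels
        if (Wanted < 1) Wanted = 1;
        for (X = 0; X < Total; X++) Histogram[DarkChannel[X]]++;
        int Threshold = 255, Sum = 0;                            // walk the histogram downwards
        while (Threshold > 0 && Sum < Wanted) Sum += Histogram[Threshold--];
        Threshold++;                                             // lowest dark-channel value still included
        int Best = -1;
        for (X = 0; X < Total; X++)
        {
            if (DarkChannel[X] < Threshold) continue;            // not among the brightest 0.1%
            const unsigned char *P = Src + X * 3;
            int Lum = P[0] + P[1] + P[2];                        // simple intensity measure
            if (Lum > Best) { Best = Lum; A[0] = P[0]; A[1] = P[1]; A[2] = P[2]; }
        }
    }

The averaging variant I describe in the coding section below would instead accumulate the channel sums of the qualifying pixels and divide by their count, rather than keeping the single brightest one.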

At this point, we can already recover the haze-free image. From formula (1): J = (I - A) / t + A.

Since I, A, and t have all been obtained, J can be computed directly.

When the value of the transmission map t is very small, J becomes excessively large, and the recovered image is pushed toward white overall. Therefore a lower threshold t0 is usually set: when t is less than t0, let t = t0. All results in this article use t0 = 0.1.

Therefore, the final recovery formula is:

J(x) = ( I(x) - A ) / max( t(x), t0 ) + A        (22)
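A minimal C sketch of this recovery step, assuming A was estimated as above and a float transmission map T in [0,1] has already been computed (names again illustrative, not from the paper):

    // Illustrative sketch: J = (I - A) / max(t, t0) + A, channel by channel.
    void Recover(const unsigned char *Src, unsigned char *Dest, const float *T,
                 int Width, int Height, const unsigned char A[3], float T0)
    {
        int X, C, Total = Width * Height;
        for (X = 0; X < Total; X++)
        {
            float t = T[X] > T0 ? T[X] : T0;                 // clamp the transmission from below
            for (C = 0; C < 3; C++)
            {
                float V = (Src[X * 3 + C] - A[C]) / t + A[C];
                if (V < 0) V = 0; else if (V > 255) V = 255; // keep the result inside 8 bits
                Dest[X * 3 + C] = (unsigned char)(V + 0.5f);
            }
        }
    }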

Recovering directly with the theory above already removes haze quite visibly, as in the following examples:

Hazy image and dehazed image

Notice the obviously unnatural region around the text in the first image, and that the band across the top of the second image seems not to have been dehazed at all; this is because our transmission map is too coarse.

In order to obtain a finer transmission map, Dr. He proposed the soft matting method in the paper, which yields very fine results. But its Achilles' heel is that it is very slow, unsuitable for practical use. In 2011, Dr. He proposed in another paper that guided filtering be used to obtain a comparable transmission map. The main workload of that method is in simple box blurs, and box blur has several fast algorithms whose cost is independent of the radius, so the algorithm is highly practical. The guided filter can be studied on Dr. He's website; besides dehazing, it has applications in other areas, which this article does not go into.

The dehazing result when guided filtering is used:

(a) original image (b) dehazed result

(c) dark channel (d) guidance image (grayscale of the original)

(e) rough transmission map (f) transmission map refined by guided filtering

II. The influence of each parameter on the dehazing result

First: the window size. This is a parameter crucial to the result. The larger the window, the greater the probability that it contains a dark pixel, and the darker the dark channel. Leaving theory aside, in terms of practical effect it seems that the larger the window, the less pronounced the dehazing, as shown:

(a) original image (b) window size = 11

(c) window size = 21 (d) window size = 101

My suggestion is a window size between 11 and 51, i.e. a radius between 5 and 25.

Second: the ω in formula (12). Its meaning is clear: the smaller the value, the weaker the dehazing effect. Examples:

(a) original image (b) ω = 0.5

(c) ω = 0.8 (d) ω = 1

III. Coding steps

If you carefully follow the train of thought of the original paper, together with some suitable references, coding it is not very difficult.

1) Compute the dark channel from the original image. Reference code is as follows:

    // Per-pixel minimum over the three color components, then a spatial minimum filter.
    for (Y = 0, DarkPt = DarkChannel; Y < Height; Y++)
    {
        ImgPt = Scan0 + Y * Stride;                      // start of row Y in the interleaved source
        for (X = 0; X < Width; X++)
        {
            Min = *ImgPt;
            if (Min > *(ImgPt + 1)) Min = *(ImgPt + 1);
            if (Min > *(ImgPt + 2)) Min = *(ImgPt + 2);
            *DarkPt = Min;                               // store the channel minimum
            ImgPt += 3;
            DarkPt++;
        }
    }
    MinFilter(DarkChannel, Width, Height, Radius);       // windowed minimum filter

Note that a fast implementation of the MinFilter algorithm exists; for interested friends, there is a paper to study: Streaming Maximum-Minimum Filter Using No More Than Three Comparisons per Element. That algorithm's time complexity is O(1) per pixel, independent of the radius.
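That streaming algorithm takes some care to implement. As a simpler reference point: a minimum filter over a square window is separable, so the MinFilter called above can also be written as a horizontal pass followed by a vertical pass. The sketch below does exactly that; it costs O(r) per pixel rather than O(1), and it is my own stand-in, not the algorithm from the cited paper:

    #include <stdlib.h>

    // One-dimensional minimum filter with truncated windows at the borders.
    static void MinFilter1D(const unsigned char *In, unsigned char *Out,
                            int Length, int Stride, int Radius)
    {
        for (int I = 0; I < Length; I++)
        {
            int Lo = I - Radius < 0 ? 0 : I - Radius;
            int Hi = I + Radius >= Length ? Length - 1 : I + Radius;
            unsigned char Min = 255;
            for (int J = Lo; J <= Hi; J++)
                if (In[J * Stride] < Min) Min = In[J * Stride];
            Out[I * Stride] = Min;
        }
    }

    // Square-window minimum filter in place: horizontal pass, then vertical pass.
    void MinFilter(unsigned char *Data, int Width, int Height, int Radius)
    {
        unsigned char *Temp = (unsigned char *)malloc(Width * Height);
        for (int Y = 0; Y < Height; Y++)
            MinFilter1D(Data + Y * Width, Temp + Y * Width, Width, 1, Radius);
        for (int X = 0; X < Width; X++)
            MinFilter1D(Temp + X, Data + X, Height, Width, Radius);
        free(Temp);
    }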

2) Obtain the value of the global atmospheric light A automatically, by the method described earlier in this article.

The point to note here is that, in the original paper, A is ultimately the value of a single pixel of the original image, whereas I actually take the average of all the points meeting the criterion as the value of A. I do this because, if a single point is taken, the A value of each channel is very likely to be close to 255, which distorts the colors of the processed image and produces large numbers of color blotches. Also, the original author says the algorithm needs no special handling for sky regions, but I found in practice that the results on images containing sky are generally not good, with an obvious transition band in the sky. As a remedy I added a parameter, the maximum global atmospheric light value; when the computed value exceeds this parameter, the parameter's value is used instead.

A value not limited; maximum A value limited to 220

3) Compute the estimated transmission map by formula (12).

In formula (12), the data of each channel must be divided by the corresponding component of A, i.e. normalized. There is still a problem here: the selection process for A does not guarantee that every pixel component divided by A is less than 1, so the resulting t may be less than 0, which is not allowed, and the original author does not explain how this is handled. I found in actual code that doing it literally does not give very good results, so my final approach was to leave the division by A out of the computation of formula (12). (See the postscript revision at the end of this article, where I revisit this decision.)
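Under that simplification (no division by A), step 3 reduces to a single pass over the dark channel image. A minimal sketch, with illustrative names:

    // Illustrative sketch: rough transmission per formula (12), with the
    // division by A omitted as described above. DarkChannel is the 8-bit,
    // min-filtered dark channel image; T receives values in [1 - Omega, 1].
    void EstimateTransmission(const unsigned char *DarkChannel, float *T,
                              int Width, int Height, float Omega)
    {
        int X, Total = Width * Height;
        for (X = 0; X < Total; X++)
            T[X] = 1.0f - Omega * DarkChannel[X] / 255.0f;   // t(x) = 1 - w * dark(x)
    }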

4) Compute the guidance image for the guided filter.

Here the original image can be used directly as the guidance image, or its grayscale version can be used instead; with an RGB guidance image, however, the next step of the computation takes considerably more time.

5) Obtain the fine transmission map, coding it according to formulas (5), (6), and (8) of the Guided Image Filtering paper.

MATLAB code for this algorithm can be downloaded from the web; part of the code follows:

function q = guidedfilter(I, p, r, eps)
%   GUIDEDFILTER   O(1) time implementation of guided filter.
%
%   - guidance image: I (should be a gray-scale/single channel image)
%   - filtering input image: p (should be a gray-scale/single channel image)
%   - local window radius: r
%   - regularization parameter: eps

[hei, wid] = size(I);
N = boxfilter(ones(hei, wid), r); % the size of each local patch; N=(2r+1)^2 except for boundary pixels.
% imwrite(uint8(N), 'N.jpg');
% figure, imshow(N, []), title('N');

mean_I = boxfilter(I, r) ./ N;
mean_p = boxfilter(p, r) ./ N;
mean_Ip = boxfilter(I.*p, r) ./ N;
cov_Ip = mean_Ip - mean_I .* mean_p; % this is the covariance of (I, p) in each local patch.

mean_II = boxfilter(I.*I, r) ./ N;
var_I = mean_II - mean_I .* mean_I;

a = cov_Ip ./ (var_I + eps); % Eqn. (5) in the paper;
b = mean_p - a .* mean_I; % Eqn. (6) in the paper;

mean_a = boxfilter(a, r) ./ N;
mean_b = boxfilter(b, r) ./ N;

q = mean_a .* I + mean_b; % Eqn. (8) in the paper;
end

As the code above shows, the main workload lies in the mean (box) blurs, and mean blur has very fast algorithms. For optimizing mean blur, see my earlier article on a lazy algorithm for high-speed blurring of color images.
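For readers implementing boxfilter outside MATLAB, the standard trick is a cumulative sum along each axis, which makes the cost per pixel independent of the radius. Below is my own C sketch of this idea; like the boxfilter.m used above, it returns windowed sums over truncated boundary windows (divide by N afterwards to get means):

    #include <stdlib.h>

    // Illustrative sketch: O(1)-per-pixel box filter (windowed sums) via
    // cumulative sums, first along columns, then along rows.
    void BoxFilter(const float *In, float *Out, int Width, int Height, int Radius)
    {
        float *Cum = (float *)malloc(sizeof(float) * Width * Height);
        int X, Y;
        // Vertical pass: cumulative sum over Y, then differences of rows Radius apart.
        for (X = 0; X < Width; X++)
        {
            float S = 0;
            for (Y = 0; Y < Height; Y++) { S += In[Y * Width + X]; Cum[Y * Width + X] = S; }
            for (Y = 0; Y < Height; Y++)
            {
                int Lo = Y - Radius - 1, Hi = Y + Radius;
                if (Hi > Height - 1) Hi = Height - 1;
                Out[Y * Width + X] = Cum[Hi * Width + X] - (Lo >= 0 ? Cum[Lo * Width + X] : 0);
            }
        }
        // Horizontal pass: same scheme applied to the vertical sums.
        for (Y = 0; Y < Height; Y++)
        {
            float S = 0;
            for (X = 0; X < Width; X++) { S += Out[Y * Width + X]; Cum[Y * Width + X] = S; }
            for (X = 0; X < Width; X++)
            {
                int Lo = X - Radius - 1, Hi = X + Radius;
                if (Hi > Width - 1) Hi = Width - 1;
                Out[Y * Width + X] = Cum[Y * Width + Hi] - (Lo >= 0 ? Cum[Y * Width + Lo] : 0);
            }
        }
        free(Cum);
    }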

Another point: the above computation must be carried out in [0,1]; that is, both the guidance image and the rough transmission map must first be mapped from [0,255] to [0,1] before the calculation.

Regarding the radius r in guidedfilter: since the dark channel image is blocky after minimum filtering, to obtain a finer transmission map it is recommended that r be no less than 4 times the radius of the minimum filter, as shown below:

(a) r = 2 times the minimum filter radius (b) r = 8 times the minimum filter radius

As can be seen, when r is relatively small, the transmission map carries little detail, so edges in the recovered image are not well preserved.

The value of the parameter eps also matters: it mainly guards against division-by-zero errors and keeps some intermediate results from growing too large. A value of 0.001 or smaller is generally recommended.

If a color RGB image is used as the guidance image, the computation time increases considerably. The transmission map it produces preserves more detail near image edges, and the result is slightly better than with a grayscale guidance image, as follows:

(a) original image (b) rough transmission map

(c) transmission map using the grayscale image as guidance (d) transmission map using the RGB image as guidance

(e) dehazed result with the grayscale guidance image (f) dehazed result with the RGB guidance image

When the guidance image is an RGB image, the computation involves inverting a 3×3 matrix for each local window. If you are writing in a language other than MATLAB, one option is to derive the closed-form result with MATLAB's symbolic computation (the simplify command) and then implement the resulting expressions in the other high-level language.
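If you prefer to avoid the symbolic route altogether, note that the matrix to invert here is only 3×3 (Σ + εU, which the ε term keeps invertible), so its inverse can be written in closed form via the adjugate. A self-contained C sketch, my own illustration rather than code from the paper:

    // Illustrative sketch: closed-form inverse of a 3x3 matrix,
    // Inv = adj(M) / det(M). Returns 0 if the matrix is singular.
    int Invert3x3(const double M[3][3], double Inv[3][3])
    {
        double A =  M[1][1] * M[2][2] - M[1][2] * M[2][1];   // cofactors of the first row
        double B = -(M[1][0] * M[2][2] - M[1][2] * M[2][0]);
        double C =  M[1][0] * M[2][1] - M[1][1] * M[2][0];
        double Det = M[0][0] * A + M[0][1] * B + M[0][2] * C;
        if (Det == 0) return 0;
        double InvDet = 1.0 / Det;
        Inv[0][0] = A * InvDet;
        Inv[0][1] = -(M[0][1] * M[2][2] - M[0][2] * M[2][1]) * InvDet;
        Inv[0][2] =  (M[0][1] * M[1][2] - M[0][2] * M[1][1]) * InvDet;
        Inv[1][0] = B * InvDet;
        Inv[1][1] =  (M[0][0] * M[2][2] - M[0][2] * M[2][0]) * InvDet;
        Inv[1][2] = -(M[0][0] * M[1][2] - M[0][2] * M[1][0]) * InvDet;
        Inv[2][0] = C * InvDet;
        Inv[2][1] = -(M[0][0] * M[2][1] - M[0][1] * M[2][0]) * InvDet;
        Inv[2][2] =  (M[0][0] * M[1][1] - M[0][1] * M[1][0]) * InvDet;
        return 1;
    }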

6) Restore the haze-free image according to formula (22).

IV. Some other dehazing results

Original image, transmission map, dehazed result

For the last image, I ran the dehazing twice in a row.

The original paper contains this passage:

Since the scene radiance is usually not as bright as the atmospheric light, the image after haze removal looks dim. So we increase the exposure of J(x) for display.

This means that the directly dehazed image is darker than the original, so some exposure enhancement is needed after processing; the author does not explain how he enhances it, so it is normal that the figures here differ from those in his paper. In general, following the dehazing with an automatic color levels or similar enhancement algorithm gives a more satisfying result, for example:

Original image; after dehazing plus automatic color levels

There are many other dehazing methods, but much of what I have come across is based on this one, so getting this one right first lays a solid foundation for studying other dehazing algorithms.

There is some good dark-channel-prior dehazing MATLAB code on the web; for example, MATLAB resources basically corresponding to the paper: http://files.cnblogs.com/Imageshop/cvpr09defog%28matlab%29.rar

Postscript: I have skimmed a few more dehazing papers since. Basically they all revolve around how to obtain the transmission map; for example, some articles combine joint bilateral filtering to obtain a fine transmission map. From my own shallow understanding, I think dehazing has basically not jumped outside the dark channel prior framework.

I also experimented with that joint bilateral filtering algorithm and found the result acceptable, but the speed is a lot slower; the so-called fast algorithms for joint bilateral filtering are not actually fast, so its practicality is not high. I chose some images for comparison:

(a) original image (b) dehazed using joint bilateral filtering

(c) transmission map from guided filtering (d) transmission map from joint bilateral filtering (sigmaD = sigmaR = 100)

Obviously, the transmission map from joint bilateral filtering is not as fine as that from guided filtering, but it is much better than the original rough transmission map, and its transitions are smooth, so it can also produce a good visual dehazing result.

The joint bilateral filter here was written with reference to the relevant functions in OpenCV.

As usual, a program is provided for everyone to test the effect: an image dehazing demonstration program based on the dark channel prior.

I used VB6 and C # to do a program, two programs have been optimized in their own language mode, the algorithm part of the code is the same, C # running speed is about 1.8 times times the VB6.

In terms of processing speed, it is much faster than MATLAB: on an i3 laptop, dehazing a 1024×768 image takes about 150 ms (with the grayscale image as the guidance image).

Postscript revision: while continuing to follow this algorithm, I found that I had made a wrong judgment earlier, concerning the division by A in formula (11). I said above that this division would cause some problems and therefore removed the step. But later practice proved that keeping this step lets low-contrast images yield dehazed results with good, high contrast.

As mentioned earlier, the division by A may cause the value of t to be less than 0; in that case, simply clamping t to 0 resolves it.

Another thing: strictly speaking, formula (11) requires normalizing each channel of the original image by the corresponding component of A first, then taking the per-pixel minimum over the r/g/b channels to form an intermediate image, then applying minimum filtering of the specified radius to that intermediate image, and finally obtaining the rough transmission map through (11). That requires a fair amount of computation. In fact, I found that simply dividing the dark channel image computed earlier by A gives results that do not differ noticeably, so this convenient shortcut can be used.

Above is a classic test image; although the result is fairly good, the road region seems not as good as in some other people's published results.

This is a fairly common test image. The result shown was obtained purely by the dehazing, without any post-processing; compared with the result in a CSDN example library on image dehazing algorithms, the overall contrast and color coordination of the image are a notch better.

Again, much better than the corresponding result in that CSDN example library.

And one more:

Summary: I am quite satisfied with the results of this dehazing algorithm, and both the effect and the speed are fairly suitable.

"Single image Haze removal Using Dark Channel Prior" The principle, realization, effect and other of image de-fog algorithm.
