Implementation and Optimization of a Dark Channel Prior Defogging Algorithm (II): OpenCV Implementation on PC


In the previous article, I studied the defogging method described in the paper. In this article, I follow up on that idea and show how to implement it with OpenCV 2.4.10. The result is really good, but the running time is too long. The effect is shown below; the blue circle marks the point taken as the atmospheric light value.



I just realized that in the previous article I forgot to explain how the atmospheric light value A is obtained. The paper does it like this:

1. First take the brightest 0.1% of pixels in the dark channel image.

2. Among the locations of those pixels, search the original image for the brightest point; the intensity of that point is the A we are looking for.

The author believes the advantage of this is that it avoids taking a bright object in the original image as A. Take the white car in the photo, for example: if you searched the original image directly for the brightest point, you might not find the true atmospheric light value, but rather light reflected from a white object. (I know I'm being a bit wordy.)


Now on to this article. First, some points I discovered during the implementation:

1. Interpretation of the atmospheric light value A. The original paper says that the intensity of the brightest point is the A we are looking for. The question then is: what is the intensity of the brightest point? Is the atmospheric light A a single value or an RGB triple?

To be clear, A is an RGB triple: each of the three RGB channels has its own A, which can be confirmed from the formula in the paper.

So when searching, how do you define the intensity of a point? Here I use the average of the R, G, and B values.
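For reference, the haze imaging model from the paper, written per channel, shows why A is a triple (this is just the standard formulation restated here):

\[ I^c(x) = J^c(x)\,t(x) + A^c\bigl(1 - t(x)\bigr), \qquad c \in \{R, G, B\} \]

where I is the observed hazy image, J is the scene radiance to recover, t is the transmission, and A^c is the atmospheric light of channel c.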

2. The transmission t(x) is not a single value; it is a matrix of the same size as the original image, and different positions have different transmission values t.

In the concrete computation I made a small simplification: I feed the dark channel image in directly, which saves one minimum-filter pass. It has little effect on the result and gives a small speedup. (The formula for this step is sketched below.)
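For reference, the paper's transmission estimate and the simplified version implemented in the code further down (averaging A over the three channels is my reading of that code, stated here as an assumption):

\[ t(x) = 1 - \omega \min_{y \in \Omega(x)} \Bigl( \min_{c} \frac{I^c(y)}{A^c} \Bigr) \;\approx\; 1 - \omega\,\frac{\mathrm{dark}(x)}{A_{avg}}, \qquad \omega = 0.95,\; A_{avg} = \tfrac{1}{3}(A_R + A_G + A_B) \]

where dark(x) is the already min-filtered dark channel of the input image.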

3. Refining the transmission map

Here I use someone else's implementation of the guided filter, which is an edge-preserving algorithm. When using it, note that the guide image must be normalized to 0.0-1.0, because t lies between 0 and 1.

Reference: http://blog.csdn.net/pi9nc/article/details/26592377 (guided filter algorithm)

4. When recovering J at the end, remember to clamp the result to the range 0-255, because the formula can produce values far above 255 and can even produce negative values!!! (It took me a long while of tweaking to discover that J can be negative!) The recovery formula with the clamp is sketched below.
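The recovery step, as I understand it from the paper and from the recover() code further down:

\[ J^c(x) = \frac{I^c(x) - A^c}{\max\bigl(t(x),\, t_0\bigr)} + A^c, \qquad t_0 = 0.1 \]

followed by clamping J^c(x) to [0, 255] before writing it into an 8-bit image.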

5. When traversing an image in OpenCV, use pointers! In my test, accessing elements with at<> versus taking them through pointers differed by 0.5 s. A small comparison is sketched below.
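A minimal illustration of the two traversal styles (a hypothetical example of mine, not code from the post): summing the blue channel of an 8-bit BGR image.

#include <opencv2/opencv.hpp>
using namespace cv;

long sumBlue_at(const Mat& img)               // element access with at<>, slower
{
    long s = 0;
    for (int i = 0; i < img.rows; i++)
        for (int j = 0; j < img.cols; j++)
            s += img.at<Vec3b>(i, j)[0];
    return s;
}

long sumBlue_ptr(const Mat& img)              // row-pointer access, faster
{
    long s = 0;
    for (int i = 0; i < img.rows; i++)
    {
        const uchar* p = img.ptr<uchar>(i);   // pointer to the start of row i
        for (int j = 0; j < img.cols; j++, p += 3)
            s += p[0];
    }
    return s;
}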

6. There is an OpenCV C++ implementation circulating online which, when computing the minimum-value filter of the dark channel, divides the original image into a grid of windows and assigns the same value to every pixel in each window. This clearly does not match the formula in the paper, so please don't be misled!! Perhaps that blogger was trying to reduce the time complexity, but it is too crude and goes against the idea of filtering.

One: Computing the dark channel

My idea is to first take the minimum over the three channels at each pixel, and then apply a single minimum-value filter at the end. minFilter is the minimum-value filter function; you can implement it yourself (one possible sketch is given after the code).

Mat produceDarkImg(Mat& I, int windowsize)
{
    int min = 255;
    Mat dark_img(I.rows, I.cols, CV_8UC1);
    int radius = (windowsize - 1) / 2;
    int nr = I.rows;                      // number of rows
    int nl = I.cols;
    int b, g, r;
    if (I.isContinuous())
    {
        nl = nr * nl;
        nr = 1;
    }
    for (int i = 0; i < nr; i++)
    {
        const uchar* inData = I.ptr<uchar>(i);
        uchar* outData = dark_img.ptr<uchar>(i);
        for (int j = 0; j < nl; j++)
        {
            // take the minimum of the B, G, R values at this pixel
            b = *inData++;
            g = *inData++;
            r = *inData++;
            min = min > b ? b : min;
            min = min > g ? g : min;
            min = min > r ? r : min;
            *outData++ = min;
            min = 255;
        }
    }
    dark_img = minFilter(dark_img, windowsize);   // one minimum filter at the end
    return dark_img;
}
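The post leaves minFilter to the reader. One possible implementation (my sketch, not the author's code): grayscale erosion with a square structuring element is exactly a sliding-window minimum filter.

Mat minFilter(const Mat& src, int windowsize)
{
    Mat dst;
    Mat kernel = getStructuringElement(MORPH_RECT, Size(windowsize, windowsize));
    erode(src, dst, kernel);   // local minimum over each windowsize x windowsize patch
    return dst;
}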


Two: Computing the atmospheric light value A (airlight)

Input: dark channel image, original image, window size (must be odd)

Output: atmospheric light value A, returned as a pointer to a three-element one-dimensional array

Pixel is a structure I defined; its definition is given below the code.

Note that when returning an array from a C++ function, be sure to allocate it with new; do not return a local array declared like int a[3] = {0,0,0};!! Otherwise the returned array may already have been freed. (Don't ask me how I know this; these are all pits I have climbed out of.) A small illustration follows.
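A tiny illustration of the pitfall (hypothetical code, not from the post):

int* bad()
{
    int a[3] = {0, 0, 0};
    return a;               // WRONG: a is on the stack and is gone after the function returns
}

int* good()
{
    int* a = new int[3]();  // heap allocation survives the return
    return a;               // the caller is responsible for delete[]
}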

int* getAtmospheric_light(Mat& darkImg, Mat& srcImg, int windowsize)
{
    int radius = (windowsize - 1) / 2;
    int nr = darkImg.rows, nl = darkImg.cols;
    int darkSize = nr * nl;
    int topSize = darkSize / 1000;        // the brightest 0.1% of the dark channel
    int* A = new int[3];
    Pixel *topPixels, *allPixels;
    topPixels = new Pixel[topSize];
    allPixels = new Pixel[darkSize];

    for (int i = 0; i < nr; i++)
    {
        const uchar* outData = darkImg.ptr<uchar>(i);
        for (int j = 0; j < nl; j++)
        {
            allPixels[i * nl + j].value = *outData++;
            allPixels[i * nl + j].x = i;
            allPixels[i * nl + j].y = j;
        }
    }
    // std::qsort(allPixels, darkSize, sizeof(Pixel), qcmp);
    std::sort(allPixels, allPixels + darkSize, cmp);
    memcpy(topPixels, allPixels, topSize * sizeof(Pixel));   // keep the brightest 0.1% in darkImg

    int val0, val1, val2, avg, max = 0, maxi, maxj, x, y;
    for (int i = 0; i < topSize; i++)
    {
        x = allPixels[i].x;
        y = allPixels[i].y;
        const uchar* outData = srcImg.ptr<uchar>(x);
        outData += 3 * y;
        val0 = *outData++;
        val1 = *outData++;
        val2 = *outData++;
        avg = (val0 + val1 + val2) / 3;   // intensity = average of the three channels
        if (max < avg) { max = avg; maxi = x; maxj = y; }
    }
    for (int i = 0; i < 3; i++)
    {
        A[i] = srcImg.at<Vec3b>(maxi, maxj)[i];
        A[i] = A[i] > 220 ? 220 : A[i];   // cap A to avoid an overly bright estimate
    }
    delete[] allPixels;
    delete[] topPixels;
    return A;
}
Structure:
typedef struct Pixel
{
    int x;
    int y;
    int value;
} Pixel;
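The std::sort call above uses a comparator cmp that is not shown in the post. A plausible definition (my assumption): sort pixels by dark-channel value in descending order, so the brightest 0.1% end up at the front of the array.

bool cmp(const Pixel& a, const Pixel& b)
{
    return a.value > b.value;   // brighter dark-channel pixels first
}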

Three: Computing the transmission map and refining it

Input: original image, dark channel image, atmospheric light value A, window size

Output: transmission map t

The dark channel image is needed as a parameter because of the simplification mentioned above: reusing the dark channel image speeds up the calculation.

At the end, the guided filter uses the grayscale version of the original image as the guide; for the concrete implementation, refer to the link above.

Mat getTransmission_dark(Mat& srcImg, Mat& darkImg, int* array, int windowsize)
{
    float avg_A = (array[0] + array[1] + array[2]) / 3.0;
    float w = 0.95;
    int radius = (windowsize - 1) / 2;
    int nr = srcImg.rows, nl = srcImg.cols;

    // coarse transmission estimate from the (already filtered) dark channel
    Mat transmission(nr, nl, CV_32FC1);
    for (int k = 0; k < nr; k++)
    {
        const uchar* inData = darkImg.ptr<uchar>(k);
        for (int l = 0; l < nl; l++)
            transmission.at<float>(k, l) = 1 - w * (*inData++ / avg_A);
    }

    // refine with a guided filter, guided by the grayscale image normalized to [0,1]
    Mat trans(nr, nl, CV_32FC1);
    Mat grayMat(nr, nl, CV_8UC1);
    Mat grayMat_32F(nr, nl, CV_32FC1);
    cvtColor(srcImg, grayMat, CV_BGR2GRAY);
    for (int i = 0; i < nr; i++)
    {
        const uchar* inData = grayMat.ptr<uchar>(i);
        for (int j = 0; j < nl; j++)
            grayMat_32F.at<float>(i, j) = *inData++ / 255.0;
    }
    guidedFilter(transmission, grayMat_32F, trans, 6 * windowsize, 0.001);
    // bilateralFilter(transmission, trans, 10, 30, 100);
    // GaussianBlur(transmission, trans, Size(11, 11), 0, 0);
    return trans;
}
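The guidedFilter call above comes from the implementation linked earlier, which is not reproduced in the post. For readers who want something self-contained, here is a minimal single-channel sketch of the standard guided filter, written to match the call signature used above (p is the image to filter, I the guide, both CV_32FC1 in [0,1]); treating the radius argument directly as the box-filter window size is my simplification.

void guidedFilter(const Mat& p, const Mat& I, Mat& q, int r, float eps)
{
    Size win(r, r);
    Mat meanI, meanP, meanIp, meanII;
    boxFilter(I, meanI, CV_32F, win);
    boxFilter(p, meanP, CV_32F, win);
    boxFilter(I.mul(p), meanIp, CV_32F, win);
    boxFilter(I.mul(I), meanII, CV_32F, win);

    Mat varI = meanII - meanI.mul(meanI);      // variance of the guide in each window
    Mat covIp = meanIp - meanI.mul(meanP);     // covariance of guide and input

    Mat denom = varI + eps;
    Mat a = covIp / denom;                     // coefficients of the local linear model
    Mat b = meanP - a.mul(meanI);

    Mat meanA, meanB;
    boxFilter(a, meanA, CV_32F, win);
    boxFilter(b, meanB, CV_32F, win);

    q = meanA.mul(I) + meanB;                  // q = mean_a .* I + mean_b
}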

Four: Computing J(x)

Input: original image, transmission map, atmospheric light, window size

Output: defogged image

Mat recover(Mat& srcImg, Mat& t, int* array, int windowsize)
{
    int radius = (windowsize - 1) / 2;
    int nr = srcImg.rows, nl = srcImg.cols;
    float tnow = t.at<float>(radius, radius);
    float t0 = 0.1;
    Mat finalImg = Mat::zeros(nr, nl, CV_8UC3);
    int val = 0;

    for (int i = 0; i < 3; i++)                       // process each channel separately
    {
        for (int k = radius; k < nr - radius; k++)
        {
            const float* inData = t.ptr<float>(k);
            inData += radius;
            const uchar* srcData = srcImg.ptr<uchar>(k);
            srcData += radius * 3 + i;
            uchar* outData = finalImg.ptr<uchar>(k);
            outData += radius * 3 + i;
            for (int l = radius; l < nl - radius; l++)
            {
                tnow = *inData++;
                tnow = tnow > t0 ? tnow : t0;         // lower-bound the transmission at t0
                val = (int)((*srcData - array[i]) / tnow + array[i]);
                srcData += 3;
                val = val < 0 ? 0 : val;              // clamp J to [0, 255]
                *outData = val > 255 ? 255 : val;
                outData += 3;
            }
        }
    }
    return finalImg;
}
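A hypothetical driver showing how the four functions fit together (the file names and the window size of 15 are my assumptions, not values from the post):

#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat src = imread("hazy.jpg");            // 8-bit BGR input
    int windowsize = 15;                     // must be odd

    Mat dark = produceDarkImg(src, windowsize);
    int* A = getAtmospheric_light(dark, src, windowsize);
    Mat t = getTransmission_dark(src, dark, A, windowsize);
    Mat dehazed = recover(src, t, A, windowsize);

    imwrite("dehazed.jpg", dehazed);
    delete[] A;                              // A was allocated with new[]
    return 0;
}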


Example results:




The transmission map of the Tiananmen image is shown below. You can see that the effect is really good and very fine; this is the benefit of using the grayscale version of the original image as the guide.



Next, I will introduce some of my optimizations, mainly of the running time and of the transmission map.


