In the previous article on visual understanding of the light field, we discussed the basic concept of the light field, its dimensionality, acquisition methods in practical applications, and how to obtain arbitrary rays by interpolation. Building on that, this article explains the principle and basic steps of using the light field to achieve "shoot first, focus later".
Focusing and the optical path
First of all, what is focusing? Let's briefly review some middle-school physics.
Look first at the figure on the left: the object-side focal plane is the plane at the top, and the rays emitted from each point on that plane converge onto the image plane on the other side of the lens; a typical ray path is drawn as the four bold colored lines. Now suppose we want to move the object-side focal plane to a position between the original focal plane and the lens. As the figure on the right shows, the rays themselves are unchanged, but the rays that converge on the image plane are no longer the same ones: of the bold rays, all but the red one are different from before. With this basic optics of focusing in hand, let's return to the light field. From the basic principles introduced in the previous article, a natural idea follows: if we simply superimpose the pixels corresponding to the rays emitted from a given plane, don't we obtain an image focused on that plane? This is in fact the simplest light-field refocusing algorithm, known as shift-and-add [1].
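For reference, the behavior in the two figures is captured by the thin-lens equation from the same middle-school physics (a standard result stated here for context; it does not appear in the original figures): an object at distance \(u\) from a lens of focal length \(f\) forms a sharp image at distance \(v\), where \( \frac{1}{u}+\frac{1}{v}=\frac{1}{f} \). Moving the object-side focal plane closer to the lens means a smaller \(u\), hence a larger \(v\): rays from the new plane converge behind the fixed image plane, which is why most of the bold rays in the right figure no longer focus on it.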
The algorithm behind "shoot first, focus later"
Let us again use the figure from the previous article to explain the shift-and-add algorithm:
As shown in the figure on the left, at the original acquisition positions the blue square maps to different pixel positions in the two captured images. To focus on the blue square, we first remove this relative displacement between the images (the "shift" step), and then take the average of the corresponding pixels as the pixel values of the new image (the "add" step); the result is an image focused on the blue square. Similarly, for the farther green triangle, a larger shift is needed to cancel the relative displacement of its corresponding pixels, and the superposition then yields a new image focused on the green triangle. Note that, as the small figure above shows, after shifting and superimposing there are always some pixels near the edges that do not coincide, so the borders of the refocused image will more or less contain artifacts.
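The shift and add steps above can be sketched in a few lines of numpy. This is a minimal illustration, not Lytro's implementation: views are grayscale sub-aperture images, `offsets` are their sampling positions relative to the center view, and the hypothetical parameter `alpha` scales how far each view is shifted, which selects the focal plane.

```python
import numpy as np

def shift_and_add(views, offsets, alpha):
    """Refocus a set of sub-aperture views by shift-and-add.

    views   : list of HxW arrays (grayscale sub-aperture images)
    offsets : list of (dy, dx) sampling positions relative to the center view
    alpha   : refocusing parameter; scales the per-view shift and thereby
              selects which depth plane ends up in focus
    """
    acc = np.zeros_like(views[0], dtype=float)
    for img, (dy, dx) in zip(views, offsets):
        # Integer-pixel shift. np.roll wraps around at the borders, which is
        # where the edge artifacts mentioned above come from in this sketch.
        shifted = np.roll(img, (int(round(alpha * dy)), int(round(alpha * dx))),
                          axis=(0, 1))
        acc += shifted
    return acc / len(views)
```

Sweeping `alpha` over a range of values produces the stack of images focused at different depths.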
Applied to the phone-photo example from the previous article: shift each photo according to its sampling position relative to a chosen center position, then superimpose, and you obtain images focused on different planes. For example, take the center of the 9 sampling points as the reference position and assign the other 8 sampling points different shift amounts to obtain images focused at the corresponding depths:
Image corresponding to the green circle position:
Image corresponding to the blue circle position:
It's that simple. So, is the algorithm inside Lytro shift-and-add? The answer is no: Lytro performs this spatial-domain shift-and-superposition in the frequency domain instead, based on what is known as the projection-slice (central-slice) theorem. Just two quick sentences on it here: the theorem is usually stated in two dimensions, but its basic principle extends to any number of dimensions, and Lytro uses the 4D form. Roughly speaking, after taking the Fourier transform of the 4D light field, refocused images at different depths correspond to inverse 2D Fourier transforms of interpolated 2D slices taken through the center of the 4D Fourier space at different angles. In essence this approach is no different from shift-and-add; the same linear operation has simply been moved into the frequency domain. With shift-and-add, generating each new refocused image requires all of the captured light-field data, with complexity \(O\left( n^{4} \right)\). Generating a new refocused image from the transformed 4D data instead takes two steps: 1) interpolating the 2D slice in Fourier space, with complexity \(O\left( n^{2} \right)\); 2) an inverse 2D Fourier transform, with complexity \(O\left( n^{2}\log n \right)\). Of course, there is also a one-time initialization step to compute the 4D Fourier transform, with complexity \(O\left( n^{4}\log n \right)\). Therefore, in scenarios where new refocused images must be generated repeatedly from the same captured 4D data, refocusing in the frequency domain is the more economical choice. For more details on the frequency-domain refocusing algorithm, interested readers can refer to [1].
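The 2D form of the projection-slice theorem that this all rests on is easy to verify numerically: the 1D Fourier transform of a projection of an image (summing out one axis) equals the corresponding central slice of its 2D Fourier transform. The snippet below checks this on random data; it is the 2D analogue, not the 4D machinery Lytro uses.

```python
import numpy as np

# Projection-slice theorem in 2D: the 1D FT of the projection of f(x, y)
# onto the x-axis equals the ky = 0 slice of the 2D FT of f.
rng = np.random.default_rng(0)
f = rng.random((16, 16))

projection = f.sum(axis=0)       # integrate out y -> 1D signal in x
lhs = np.fft.fft(projection)     # Fourier transform of the projection
rhs = np.fft.fft2(f)[0, :]       # central slice (ky = 0) of the 2D FT

assert np.allclose(lhs, rhs)
```

In the 4D case, slices at different angles through the origin correspond to different refocusing depths, which is exactly what makes one 4D FFT amortizable over many refocused images.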
It is also worth mentioning that images refocused within this shift-and-add framework differ from images actually formed by a camera. The reason lies in the first section on focusing and the optical path: in the optical path through a convex lens, out-of-focus rays are not parallel to one another, whereas the shift-and-add algorithm treats all rays as moving "in parallel". As a result, the out-of-focus (bokeh) regions of a refocused photo are not identical to those of a real photo, although to the human eye the difference is actually not that large.
Eliminating ghosting by interpolation
Some readers may have noticed a problem by this point: although refocusing works, the quality of the refocused image is not good, as in the earlier example focused on the Dell logo:
The flower region shows noticeable ghosting, clearly different from the bokeh of a real camera lens. Given the explanation in the previous section, the reason is also obvious: with only 9 sampling points, the corresponding pixels of different images are displaced by more than one pixel during the shift-and-superposition, so the superimposed image exhibits these ghosting-like artifacts. The fix is in fact very simple. Recall that the previous article described how to interpolate images at virtual sampling positions. Naturally, then, we can use interpolation to densify the sampling points until the displacement of corresponding pixels between each sampling point and its neighbors is at most about one pixel; the visible ghosting is then eliminated. The result is as follows:
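Densifying the views is equivalent to allowing sub-pixel shifts during the superposition. One hypothetical way to sketch this is a fractional shift built from bilinear blending of the four nearest integer shifts; `frac_shift` below could replace the integer `np.roll` in a shift-and-add loop. As before, this is an illustrative sketch, not the article's actual implementation.

```python
import numpy as np

def frac_shift(img, dy, dx):
    """Shift an image by a fractional (dy, dx) via bilinear blending of the
    four nearest integer shifts. np.roll wraps at the borders, so edges are
    only approximate, as with the integer version."""
    iy, ix = int(np.floor(dy)), int(np.floor(dx))
    fy, fx = dy - iy, dx - ix
    a = np.roll(img, (iy,     ix),     axis=(0, 1))
    b = np.roll(img, (iy,     ix + 1), axis=(0, 1))
    c = np.roll(img, (iy + 1, ix),     axis=(0, 1))
    d = np.roll(img, (iy + 1, ix + 1), axis=(0, 1))
    return ((1 - fy) * (1 - fx) * a + (1 - fy) * fx * b
            + fy * (1 - fx) * c + fy * fx * d)
```

With sub-pixel shifts, neighboring views land within a pixel of each other, and the superposition blurs smoothly instead of producing discrete ghost copies.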
Finally, an animation of continuously varying focus:
Simulating the aperture
Many photographers like to shoot "bokeh" photos with a traditional camera, controlling the degree of blur by adjusting the aperture. This too can be simulated in light-field refocusing, and the reason is simple: just adjust the range of sampling points that are used. Again using the example from the previous article, first use all of the sampling points (including the interpolated ones):
The resulting image:
If instead only a small subset of the sampling points is used, which is equivalent to stopping down the aperture:
we get an image with a lower degree of blur:
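In code, the synthetic aperture amounts to a filter on which views enter the shift-and-add sum. The sketch below (an assumed helper, following the same conventions as the earlier sketch: `offsets` are sampling positions relative to the center, `alpha` selects the focal plane) keeps only views within a given radius of the center before superimposing.

```python
import numpy as np

def refocus_with_aperture(views, offsets, alpha, radius):
    """Shift-and-add restricted to views whose sampling position lies within
    `radius` of the center, simulating a smaller (stopped-down) aperture."""
    selected = [(img, off) for img, off in zip(views, offsets)
                if np.hypot(*off) <= radius]
    acc = np.zeros_like(selected[0][0], dtype=float)
    for img, (dy, dx) in selected:
        acc += np.roll(img, (int(round(alpha * dy)), int(round(alpha * dx))),
                       axis=(0, 1))
    return acc / len(selected)
```

A large radius averages many parallax-shifted views, so out-of-focus regions blur strongly; a small radius approaches a single pinhole view with everything nearly sharp, mirroring the two example images above.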
[1] R. Ng, "Digital Light Field Photography," PhD thesis, Stanford University, Stanford, CA, 2006.