Using the open-source software Hugin for focus stacking (depth of field synthesis)


This article is mainly based on the following post: http://macrocam.blogspot.jp/2013/09/using-hugin-for-focus-stacking.html

I have made a small number of additions and deletions based on my own understanding. First of all, thanks to the original author.

Hugin is a well-known panorama stitching program. Besides panorama stitching, however, Hugin also provides a set of command-line tools that can be used to manipulate and combine multiple images, enabling advanced techniques such as high dynamic range (HDR) imaging and depth of field synthesis (focus stacking). This article introduces how to use Hugin for focus stacking.

Let me first explain what depth of field synthesis is. Objects in a scene are only sharp within a certain range of distances; the size of this sharp range is described by the depth of field, and the larger the depth of field, the more objects at different distances can be sharp at the same time. Depth of field is a property of the lens itself and is directly related to the aperture: simply put, the larger the aperture, the shallower the depth of field. We can stop the aperture down to increase the depth of field, but a very small aperture lets very little light into the lens, so the exposure time grows, and the aperture cannot be made arbitrarily small anyway.

Focus stacking was developed to obtain images with a large apparent depth of field. The idea is simple: shoot a series of images focused at different distances, then combine them into one picture that is sharp both near and far. The technique is particularly useful for microscope imaging, because the depth of field becomes very shallow as the magnification increases. With a commonly used 50x objective the depth of field is only 1-2 µm, so as soon as the sample has even slight height variations it cannot be captured entirely in focus.
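As a rough sanity check of that figure (this estimate is not part of the original article, and the numerical aperture and wavelength below are assumed values), the diffraction-limited depth of field of a microscope objective can be approximated as DOF ≈ λ·n / NA². With green light λ = 0.55 µm, air (n = 1) and a 50x objective with NA = 0.55 this gives:

    DOF ≈ λ·n / NA² ≈ 0.55 µm × 1 / 0.55² ≈ 1.8 µm

which is indeed on the order of 1-2 µm.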

In English this technique is called focus stacking; it is also known as macro stacking, focal plane merging, z-stacking, and focus blending.

The following two command-line tools are used for focus stacking:

* align_image_stack.exe — aligns multiple images. If the images are already aligned this step can be omitted, for example when the photos were taken on a tripod, or when the sample did not move between shots of a microscope image.
* enfuse.exe — blends multiple images together. This tool has a very large number of command-line options covering many image fusion methods, including the one required for focus stacking. Enfuse is actually a separate open-source program that is bundled with Hugin.
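Before going through the options in detail, here is the complete two-step recipe at a glance; it uses the same file names as the walkthrough below, and the individual options are explained in the following sections:

    rem Step 1: align the source photos; writes align_0000.tif, align_0001.tif
    align_image_stack.exe -a align_ -m -v t1.jpg t2.jpg
    rem Step 2: fuse the aligned images into a single all-in-focus picture
    enfuse.exe -o result.jpeg --hard-mask --contrast-weight=1.00 --exposure-weight=0.00 --saturation-weight=0.00 align_0000.tif align_0001.tif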

"Align_image_stack.exe"-A "align_"-m-v t1.jpg t2.jpg
The "-a" argument tells Align_image_stack that we want to specify a prefix for the output image. Pictures that are not specified will overwrite the original picture. "-V" is the output of more information, which is of some use for us to understand what's going on in the align process, but if you don't care about that, you can do without this option. "-M" scales the image in addition to the first image, and considers translation and rotation only if you do not add this option.

In addition, some other options can be useful (a short example follows this list):
* "-x": consider translation in the x direction between the images when aligning. If we know the images differ only by such a translation, adding this option greatly speeds up the processing.
* "-y": consider translation in the y direction between the images when aligning.
* "-i": consider translation in both the x and y directions between the images when aligning.
* "-d": also consider radial distortion of the images when aligning; the most common types of radial distortion are barrel distortion and pincushion distortion.
* "-e": use this option when the pictures were taken with a fisheye lens.

Now let's look at t1.jpg and t2.jpg.


t1.jpg is focused on the speaker in the back; the two bottles in front are blurry.


t2.jpg is focused on the bottle in the front. There is no scaling or panning between the two pictures, so the aligned output looks no different from the originals.

Next, the aligned images (align_0000.tif and align_0001.tif, written by the previous step) are fused with enfuse.

"Enfuse.exe"-o "result.jpeg"--compression=100--contrast-weight=1.00--exposure-weight=0.00--saturation-weight= 0.00--contrast-window-size=5--hard-mask--gray-projector=luminance align_0000.tif align_0001.tif

Most of the command-line arguments here were already covered in my previous blog post, so I only explain the ones that are new. "--contrast-window-size=5" sets the size of the window used to measure local contrast; it must be an odd number, usually 5 or 7. "--hard-mask" is normally used only for focus stacking: it means each output pixel is taken from the single best input image rather than being averaged, in contrast to "--soft-mask". "--gray-projector" controls how the brightness of each pixel is computed; by default it is the average of the three RGB values, while "luminance" uses the CIE conversion formula: Y = 0.3R + 0.59G + 0.11B
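For comparison, leaving out "--hard-mask" (i.e. keeping enfuse's default soft-mask behaviour mentioned above) blends each pixel as a weighted average of the inputs instead of picking the sharpest one. A sketch of such a call, with an output name chosen just for illustration, might be:

    enfuse.exe -o result_soft.jpeg --compression=100 --contrast-weight=1.00 --exposure-weight=0.00 --saturation-weight=0.00 --contrast-window-size=5 --gray-projector=luminance align_0000.tif align_0001.tif

For focus stacking the hard mask usually gives the sharper result, which is why it is used here.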

Let's look at the results after fusion.


In the fused picture we can see that both the speaker in the back and the bottle in the front are sharp. The bottle in the middle is still blurry, which means our set of source pictures was too small; the result would be better if more pictures, each sharp at a different distance, were combined.

This concludes the basic method of focus stacking with Enfuse. The software is very powerful and has many more options than can be covered here; interested readers should consult the Enfuse documentation.

Some tips for focus stacking:
* Exposure: reduce the exposure a little; this makes it easier to obtain a satisfactory composite.
* Light source: use diffuse illumination whenever possible.
* Lens: choose a lens with little distortion, so the software has an easier time aligning the pictures.
* Be patient: only two photos are used in the example above, but that is just for illustration; in practice dozens of images are usually needed to synthesize a satisfactory focus-stacked picture. A sketch of such a larger run is shown below.
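As a minimal sketch of a larger run, assuming a Linux/macOS shell that expands the frame_*.jpg wildcard and frame file names of my own choosing (the Windows .exe calls above work the same way if the files are listed explicitly):

    # Align every frame of the stack; writes align_0000.tif, align_0001.tif, ...
    align_image_stack -a align_ -m -v frame_*.jpg
    # Fuse all aligned frames into one all-in-focus image
    enfuse -o stacked.tif --hard-mask --contrast-weight=1.00 --exposure-weight=0.00 --saturation-weight=0.00 --contrast-window-size=5 --gray-projector=luminance align_*.tif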
