Before we discuss the future of panoramic video, let's start by figuring out how panoramic video is actually implemented.

Perhaps it is only in the last two years, with the upsurge of the VR boom, that the word "panorama" has been brought back to the table again and again, labeled with all kinds of names such as "virtual reality", "3D reality", "360 degrees" and "720 degrees", and has become, for many people, the main byword for how virtual reality content is presented.

Admittedly, the shortage of VR content is drawing attention from a growing number of developers and business groups, and panoramic photos and videos, along with VR films still in their infancy, will undoubtedly be a good way to deliver content: they give viewers a fully immersive experience without requiring much interaction or the learning costs that come with it, while offering the polished results of offline rendering and photography.

So what exactly is a panorama, how is it implemented, and how can people construct panoramic content? This article tries to answer these questions from several key perspectives, in the hope of helping the wave after wave of creators entering this field.

1. Projection methods

Panoramic photography is not really a new concept; in fact, it can be traced back to the 12th-century scroll The Night Revels of Han Xizai:

Of course it is not really an immersive experience. Even if we roll this long scroll into a cylinder and stand at its center to view it, we will still feel that something is missing: an obvious seam, and gaps above and below the painted band.

The reason is simple: the Song Dynasty painters never intended the scroll to be an immersive experience. Of course, that is beside the point; the real reason is that the field of view covered by the picture does not span the full physical space, that is, 360 degrees horizontally (longitude) and 180 degrees vertically (latitude). At this point you have surely thought of this picture:

A world map like this one may have hung on your wall for years, and perhaps you have not glanced at it since you left for college, but it satisfies all the requirements of a panoramic picture: load it into any of various VR glasses and you can view it as if you were standing inside the world itself.

This is what we call projection: a mathematical process that unfolds a real-world scene into a 2D image in such a way that it can be restored inside VR glasses for immersive viewing.

That seemingly unremarkable world map uses a common projection method called the equirectangular projection. Its characteristic is that image proportions are well preserved along the horizontal direction, while along the vertical direction, especially near the poles, the picture is stretched without bound.

In images projected this way the stretching is quite obvious: look at the texture at the top of the dome, and notice that the closer to the top of the frame, the more severe the distortion. Fortunately, the whole point of VR headsets and their applications is to restore these visibly distorted images into full-view content, giving the user an immersive sense of being surrounded.
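As a minimal sketch of the idea (the function name and unit-vector convention here are our own assumptions, not from any particular library), the equirectangular mapping takes a view direction and returns normalized image coordinates:

```python
import math

def equirect_uv(x, y, z):
    """Map a unit view direction to (u, v) in [0, 1] on an
    equirectangular image: longitude becomes u, latitude becomes v."""
    lon = math.atan2(x, z)                    # -pi..pi: 360 degrees horizontally
    lat = math.asin(max(-1.0, min(1.0, y)))   # -pi/2..pi/2: 180 degrees vertically
    u = lon / (2 * math.pi) + 0.5
    v = 0.5 - lat / math.pi
    return u, v
```

Note that every latitude row gets the full image width, so the single point at each pole is smeared across an entire row of pixels, which is exactly the polar stretching described above.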

However, there are far more projection methods for panoramic images than this one. For example, the recently released Ricoh Theta S and Insta360 panoramic cameras use another, simpler and more direct projection strategy:

Each of their two fisheye lenses outputs a picture covering a 180-degree field of view both horizontally and vertically; "buckling" the two outputs together yields a full-view immersive bounding sphere.

Of course, this projection, called the fisheye projection, produces a 2D picture whose distortion is actually even more severe than that of the equirectangular projection. When the image is re-projected for display in VR glasses, it is limited by the sampling frequency of the image (in plain terms, by the pixel size), so undoing the distortion incurs a certain loss of image quality; this, too, can degrade the quality of the panoramic content.
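A common idealized fisheye model is the equidistant one, in which the distance from the image center is proportional to the angle off the optical axis. This sketch is our own simplification (real lenses deviate from it) and maps a view direction into a single fisheye image:

```python
import math

def fisheye_uv(x, y, z, fov_deg=180.0):
    """Equidistant fisheye model: radial distance from the image centre
    is proportional to the angle between the ray and the +z optical axis."""
    theta = math.acos(max(-1.0, min(1.0, z)))   # angle off the optical axis
    r = theta / math.radians(fov_deg / 2)       # 0 at the centre, 1 at the rim
    phi = math.atan2(y, x)                      # direction around the axis
    return 0.5 + 0.5 * r * math.cos(phi), 0.5 + 0.5 * r * math.sin(phi)
```

Rays near the rim (theta close to 90 degrees) are squeezed into ever thinner rings of pixels, which is where the heavy distortion, and the quality loss on re-projection, comes from.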

Thus, as the carrier of panoramic content, the projected image (or video) should not only contain everything in the shot, but also avoid excessive distortion, so as to prevent quality loss when it is re-projected into VR glasses.

So, besides the two projection methods above, are there more options to choose from? The answer is: of course, and plenty!

For example, the Mercator projection: its stretching along the vertical axis is smaller than that of the equirectangular projection, and its proportions are closer to the actual scene, but it can only express about 140 degrees of content in the vertical direction;

Another example is the equisolid projection, also known as the "little planet" or "720-degree" panorama. It can even capture a full 360 degrees of vertical view, provided the user does not mind the quality loss its severe distortion may cause:
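For reference, the Mercator vertical coordinate mentioned above can be written down in a few lines; the formula is the standard one, though the helper name is ours. Its divergence toward the poles is why the practical vertical coverage stops well short of 180 degrees:

```python
import math

def mercator_y(lat_deg):
    """Mercator vertical coordinate for a latitude in degrees. The value
    grows without bound as the latitude approaches +/-90 degrees, so a
    finite image can only cover a limited vertical angle."""
    phi = math.radians(lat_deg)
    return math.log(math.tan(math.pi / 4 + phi / 2))
```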

So, is there a projection method that covers at least 360 degrees horizontally and 180 degrees vertically without any distortion of the picture?

The answer: as a single-image projection with no distortion at all, no such thing exists. However, if the projection's output need not be a single image, there are some options:

If you happen to be a graphics or virtual reality software developer, this picture should look very familiar: it is a cubemap (cube image).

It is equivalent to a cubic box composed of six images. If the observer stands at the center of the cube, then each image corresponds to one face, which in physical space covers a 90-degree field of view both horizontally and vertically. Surrounded by these six images, the observer's final field of view reaches 360 degrees horizontally and 360 degrees vertically, and the pictures are entirely free of distortion.

As follows:

This is an ideal projection method, and if you happen to know how to use offline rendering software or plug-ins to produce and output panoramic content, it is surely the most appropriate choice. However, it is almost impossible to use cubemaps in actual filming, for the simple reason that our existing shooting equipment can hardly achieve it.
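The cubemap lookup itself is simple enough to sketch: given a view direction, pick the face with the largest axis component and project onto it. The face names and (u, v) orientation conventions below are arbitrary choices of ours (real APIs such as OpenGL fix their own):

```python
def cubemap_face(x, y, z):
    """Pick which cube face a view direction hits, plus (u, v) in [0, 1]
    on that face. Each face covers a 90 x 90 degree field of view."""
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        face, u, v, m = ('+x', -z, -y, ax) if x > 0 else ('-x', z, -y, ax)
    elif ay >= ax and ay >= az:
        face, u, v, m = ('+y', x, z, ay) if y > 0 else ('-y', x, -z, ay)
    else:
        face, u, v, m = ('+z', x, -y, az) if z > 0 else ('-z', -x, -y, az)
    return face, 0.5 * (u / m + 1), 0.5 * (v / m + 1)
```

Because each face is an ordinary rectilinear (pinhole) projection, straight lines stay straight and there is no fisheye-style warping anywhere, which is the "zero distortion" property described above.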

2. Stitching and fusion

Suppose we had six cameras whose FoV angles were strictly limited to exactly 90 degrees both horizontally and vertically, and a meticulously built rig that mounted the six cameras firmly and stably, ensuring their optical centers strictly coincided and each pointed in its assigned direction; then the output images might match the cubemap standard exactly and could be used directly.

However, whether it is the sensor area and focal length of the camera lens (and thus the computed FoV), or the design and fabrication of the rig's steel structure, none of these can be guaranteed to hit the required parameters exactly. A few millimeters of optical or mechanical error may seem harmless, but for a cubemap that demands perfection, it inevitably leaves one or more obvious cracks in the final immersive scene. Worse still, there are vibrations as the rig moves and focus drift as the lenses age, any of which is enough to bring the ideal physical model we just built to naught.

The gap between the ideal and reality is that large. Fortunately, we have a solution: if enough redundancy is left where the pictures meet, and the overlapping regions between adjacent cameras are correctly identified and handled, then six imperfect pictures can still be composed into panoramic content. This is the other magic weapon of panorama creation: image stitching and edge fusion.

Pictured here is a 360Heros series panoramic camera rig.

It uses six GoPro action cameras and a mount to assist with shooting. The six cameras face different orientations; with the 4x3 wide viewing angle set, each has a horizontal FoV of about 122 degrees and a vertical FoV of about 94 degrees.

In the panorama stitching and output software, read the input streams or video files of the six cameras, and set their actual orientations on the rig (or read the pose information recorded by the cameras themselves). This gives us enough video content to cover the full viewing range.

As described earlier, precise alignment is impossible, so we need to leave the necessary redundancy in each camera's viewing angle; the resulting video pictures will then overlap each other to some extent, and a directly output panorama may show obvious superimposed regions or misaligned edges. Several common panoramic video tools, such as VideoStitch and Kolor, offer some degree of automatic edge fusion, but in many cases we still have to cut and adjust these edge regions manually (for example, using PTGui to modify the seams of each picture), choosing edge regions of higher quality or smaller distortion and making sure the pictures are strictly aligned.
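The simplest form of edge fusion is a linear cross-fade across the overlap band, rather than a hard cut at the seam. This toy sketch blends two scanlines that share a known number of overlapping pixels (real tools also correct geometry and exposure first; the helper name is ours):

```python
def blend_rows(row_a, row_b, overlap):
    """Stitch two scanlines that share `overlap` pixels, cross-fading
    linearly across the shared band instead of hard-cutting at a seam."""
    out = list(row_a[:-overlap])                 # pixels only camera A sees
    for i in range(overlap):                     # shared band: fade A -> B
        t = (i + 0.5) / overlap
        out.append((1 - t) * row_a[len(row_a) - overlap + i] + t * row_b[i])
    out.extend(row_b[overlap:])                  # pixels only camera B sees
    return out
```

A gradual fade hides small exposure differences between the two cameras, whereas a hard seam makes even a perfectly aligned brightness step visible.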

This kind of work is time-consuming, and it rests on an important premise: the input pictures must be able to cover the full 360-degree view, with redundancy to spare.

As we calculated earlier, with the six-camera assembly the FoV of each camera must be no less than 90 degrees. For the GoPro Hero3 series, this means the 4x3 wide-view mode must be used; with a 16x9 aspect-ratio setting, the vertical FoV would probably fall short of the required value, producing an "unstitchable" gap. Of course, we can avoid this by adjusting the orientation of each camera on the rig, or by increasing the number of cameras, but from any point of view, wide-field cameras with an aspect ratio close to 1x1 are the ideal choice.
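To see why the aspect-ratio setting matters, here is a rough estimate of the vertical FoV implied by a horizontal FoV and an aspect ratio, assuming an ideal rectilinear lens; action-camera fisheye lenses deviate from this model, so treat the numbers as indicative only:

```python
import math

def vertical_fov(h_fov_deg, width, height):
    """Vertical field of view implied by a horizontal FoV and an aspect
    ratio, assuming an ideal rectilinear lens (a rough estimate only;
    real fisheye-style action-camera lenses do not follow this model)."""
    half_h = math.radians(h_fov_deg) / 2
    return math.degrees(2 * math.atan(math.tan(half_h) * height / width))
```

With the same horizontal FoV, switching from 4x3 to 16x9 visibly shrinks the vertical coverage, which is exactly the "unstitchable gap" risk described above.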

If you only want to output a panoramic still, the steps above are usually more than enough, and you need not think about anything further. However, a still image is hardly something to marvel at inside a VR headset; a dynamic scene where flames surround you, or ghosts and monsters roam, is far more exciting. If you are thinking about how to make a VR movie like this, then one question has to be raised:

Synchronization. Simply put: how do you guarantee that all the cameras in your hands start at exactly the same time, and keep their frame rates consistent throughout the recording?

This may not seem like a problem, but if two cameras' start times are inconsistent, it directly affects their alignment and stitching results; if the scene contains many moving elements, or the camera position changes during the shoot, the results may not align at all. Consequently, starting and recording in unison becomes especially important for panoramic shoots, which require many cameras to participate simultaneously.

To solve this problem fundamentally at the hardware level, you can use genlock ("generator locking") technology, in which an external device distributes a timecode to control the synchronized operation of every camera (a typical example is the RED One professional cinema camera). Of course, not all cameras have a dedicated genlock interface; in that case, you can also consider some traditional, or seemingly slightly makeshift, synchronization methods, such as: a good, loud roar ...

At the start of the shoot, have an actor roar loudly or clap hard. Then, during stitching, find the time node corresponding to the roar in each video and use it as the synchronization start position before stitching the panorama. Although this approach is hardly precise, it costs nothing extra, and it guarantees a basic synchronized starting position; the fine adjustment and stitching that follow are thereby simplified to a considerable extent.
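The roar trick can even be automated crudely: locate the same loud event as the peak sample in each camera's audio track and align on it. A minimal sketch, assuming the tracks are plain lists of samples at the same rate (the helper name and conventions are ours):

```python
def sync_offset(track_a, track_b):
    """Estimate the sample offset between two recordings by locating the
    same loud event (a clap or a roar) as the peak sample in each track.
    A positive result means track_b starts that many samples earlier."""
    peak_a = max(range(len(track_a)), key=lambda i: abs(track_a[i]))
    peak_b = max(range(len(track_b)), key=lambda i: abs(track_b[i]))
    return peak_b - peak_a
```

Production tools typically cross-correlate the whole waveform rather than trusting a single peak, but the idea of aligning on a shared audio event is the same.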

A similar approach is to cover all the cameras with a black cloth and whip it off as shooting starts, and so on. In short, when the hardware is not fully up to the task, it is a case of "the Eight Immortals crossing the sea, each showing their own powers".

3. Stereoscopic and pseudo-stereoscopic

The careful reader may have noticed that everything discussed so far about the shooting process overlooks one point: regardless of the projection method, what is generated is only monoscopic 360-degree panoramic content. Watching it on a PC or a web page is of course no problem, but feed such content to a VR headset and the result is probably not correct. To give the picture depth as presented to the human eye, the content we provide must be displayed with the left-eye and right-eye pictures separated horizontally:

This looks like just two copies of the original panorama, but observe carefully near the edges of the frame and you will find that the content of the left and right pictures is slightly offset. Because the two human eyes view from slightly different angles, each eye sees a slightly different image, and the brain's processing of that difference produces the sense of depth. The closer the scenery is to the eyes, the more obvious the disparity; distant scenery, by contrast, produces little stereoscopic sensation.
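The depth dependence of disparity is easy to quantify. Assuming the typical ~64 mm interpupillary distance (our assumed default, not a value from the text), the angular disparity of a point straight ahead is:

```python
import math

def disparity_deg(depth_m, ipd_m=0.064):
    """Angular disparity between the two eyes for a point straight ahead
    at depth_m meters; a larger angle means a stronger depth cue."""
    return math.degrees(2 * math.atan((ipd_m / 2) / depth_m))
```

At 1 m the disparity is a few degrees, while at 10 m it is roughly a tenth of that, which is why distant scenery carries almost no stereoscopic sensation.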

And every existing type of VR glasses must, through its structural design, ensure that the wearer's left and right eyes can each see only half of the actual screen, that is, only the separated left-eye or right-eye picture content, thereby simulating the real working mechanism of human vision.

In this case, the panoramic shooting rig also needs corresponding changes, such as turning the original 6 cameras into 12, so that each direction has a left-eye and a right-eye camera responsible for shooting; the rig is then built in a way quite different from the original design (pictured is the Heros3 Pro12, using 12 GoPro action cameras).

For the stitching and fusion software there is nothing special to do: simply read six video streams twice over, and after processing, output two different panoramic videos corresponding to the left-eye and right-eye picture content. You can then merge them into a single frame using a post-production tool or application.

Of course, there are plenty of different approaches, for example Panono, which shook Kickstarter back in 2011 but, even now that VR panorama applications are booming, has still not shipped on schedule. Its design principle is to distribute 36 cameras evenly over a sphere, shoot, then stitch to obtain panoramic images for the left and right eyes.

This design, though it may look showy, actually serves the same aim: the pictures taken by 36 cameras facing different directions, overlaid together, are enough to cover a 360-degree horizontal by 360-degree vertical range, and to cover it twice over! Combined with its precise structural design and known mounting poses, the panoramic image can be accurately stitched on board, outputting video streams or files directly to the standard of separate left- and right-eye images, and the actual output resolution it can achieve is considerable.

Similarly, there are the Bublcam (four oversized wide-angle lenses across the body), Nokia's OZO (eight full-sized wide-angle lenses), and Jaunt's products still in development. All of them can directly output panoramic content in stereoscopic form.

Of course, in the worst case we have one more option: faking a stereoscopic pattern ...

Copy the original panorama into two copies, offset one to the left and the other to the right, and apply a slight perspective transform (to simulate the deflection of the line of sight). The "stereoscopic" picture composed this way also has a certain depth-deceiving effect in most cases, but for nearby scenery, or scenes where the left and right eyes should see different things (for example, simulating a face pressed against a door with one eye blocked by the latch), there will be obvious flaws. Of course, for enthusiasts still at the novice stage of VR panoramic content, this may not be a serious problem.
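The cheapest version of this trick skips even the perspective transform and simply rotates the panorama a few pixels in opposite directions for the two eyes. A sketch on a single scanline (the helper and its shift convention are ours):

```python
def fake_stereo(panorama_row, shift):
    """Build a pseudo-stereo pair from one mono panorama row by rotating
    it `shift` pixels in opposite directions for the two eyes. Every
    object receives the same disparity regardless of its distance, which
    is why nearby scenery looks wrong with this trick."""
    left = panorama_row[shift:] + panorama_row[:shift]
    right = panorama_row[-shift:] + panorama_row[:-shift]
    return left, right
```

Real disparity should shrink with distance, as quantified earlier; a constant shift flattens the whole scene onto one apparent depth, which is exactly the flaw the text describes.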

Original article: http://www.leiphone.com/news/201512/bMLT4bE88swBjG19.html?foxhandler=RssReadRenderProcessHandler
