Using NV 3D Vision in Your Own Program


http://www.cnblogs.com/gongminmin/archive/2010/11/21/1883392.html

Years ago Nvidia released its 3D Vision technology, which offers stereoscopic rendering for a wide range of applications. With Avatar (2009) setting off a global 3D frenzy, wouldn't you like to add stereoscopic rendering to your own programs?

The principle of 3D Vision

According to http://developer.nvidia.com/object/3d_stereo_dev.html, 3D Vision works as follows:

Inside the driver, every 3D scene is rendered twice: once for the left eye and once for the right eye. The driver automatically modifies typical 3D game vertex shaders on the fly, so that the correct image pair is produced at run time.

Note what a few key words in that description reveal. First, every draw call is turned into two draw calls by the driver. Second, the stereo process is automatic and cannot be controlled freely. Third, it only handles typical vertex shaders, not arbitrary ones; a skybox vertex shader, for example, is usually "atypical". Nvidia's idea is to encapsulate everything, exposing only a few parameters that developers and users can adjust. All a program can do is hand everything to the driver and pray that the final result is correct, which is an extremely passive position.

In fact, stereoscopic rendering does not have to be this clumsy. For example, a graphics engine can actively generate two images from left- and right-eye cameras using the correct vertex shaders, and then hand them to the graphics API. This guarantees that the whole pipeline, including frustum culling, is stereo-aware, so all objects are rendered 100% correctly. With the 3D Vision approach, by contrast, not only must the vertex shader be "typical", but objects culled by the scene manager cannot be handled properly (for example, an object visible to the left eye but not to the right). Another example is Crytek's method in CryEngine 3, where a single rendered image is warped to produce the left- and right-eye results; the scene does not have to be rendered twice, so stereoscopic rendering is achieved with little performance loss. None of these methods can use automatic 3D Vision; all of them must work by generating two images.
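To make the "engine actively generates two eye images" idea concrete, here is a minimal CPU-only sketch, assuming a simple parallel-axis camera offset: the left and right eye positions are derived by shifting the center camera along its right vector by half the eye separation. (The helper names are hypothetical, not from the original article; a real engine would also use an off-axis projection for convergence.)

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

// Hypothetical helper: offset the center camera along its right vector by
// +/- separation/2 to get the per-eye camera position.
Vec3 EyePosition(Vec3 const& center, Vec3 const& right,
                 float separation, bool left_eye)
{
    float const offset = (left_eye ? -0.5f : +0.5f) * separation;
    return Vec3{ center.x + right.x * offset,
                 center.y + right.y * offset,
                 center.z + right.z * offset };
}
```

Each eye then renders the scene with its own view matrix, so frustum culling and every shader run with the correct per-eye camera.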

Since we have given up on automatic 3D Vision, let's explore how to submit the two images to the driver manually and control how it produces the stereoscopic display.

Attempt 1: NVAPI

3D Vision itself provides no way to submit two images, so let's look at NVAPI. NVAPI is an SDK provided by Nvidia that gives direct access to GPU and driver capabilities. I had been hoping that NVAPI would surely provide a way to control stereoscopic rendering manually, but it turns out the public version of NVAPI offers at most the ability to turn stereoscopic rendering on and off; it cannot do what we want. According to http://en.wikipedia.org/wiki/Nvidia_3D_Vision, the proprietary version of NVAPI (the version obtained by signing an NDA) does contain explicit control features. But this proprietary version is hard to obtain: fine for big companies, but small shops and amateur developers have little chance (applications often go unanswered). For open-source development it is a nightmare, since NDA-covered code cannot ship with open-source software. So this road is a dead end. Is there no way out?

Attempt 2: OpenGL quad buffer

OpenGL itself provides the four buffers GL_BACK_LEFT, GL_BACK_RIGHT, GL_FRONT_LEFT, and GL_FRONT_RIGHT, with stereo support built in. But the drawbacks of this approach are also obvious:

    1. Only OpenGL is supported. Mainstream D3D games cannot use the quad buffer.
    2. On Windows, ordinary consumer OpenGL drivers do not support the quad buffer; only Quadro cards do.

So this road fails too. Is there really no way out?

Attempt 3: 3D Video

Alongside 3D Vision, Nvidia also released a piece of software called 3D Video Player, which can play stereoscopic movies, and provided some sample downloads. These videos are in plain video formats; played in an ordinary player, they simply show the left- and right-eye images arranged side by side.

But played in 3D Video Player, they display a stereoscopic effect. During video playback the driver has no 3D information and no vertex shaders, so it certainly cannot use the "automatic" method described above to produce the stereo effect. Although 3D Video Player very likely uses the proprietary NVAPI, I still hoped it was done in some general way. An NV talk at GDC 2009 mentions the 3D video display method, and I have tested it successfully! It is a back door NV left for displaying stereo data.

3D Video Details

According to the material found earlier, 3D video is handled as follows:

    1. The left- and right-eye images are copied into one large texture of width W * 2 and height H + 1 (W and H are the width and height of the original images), left eye on the left, right eye on the right.
    2. A special flag is written into the last row of the large texture (this is the key).
    3. StretchRect is used to copy the large texture into the back buffer.
    4. When the back buffer is presented, it is displayed in stereo.
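The layout in steps 1 and 2 can be sketched with a small CPU-only helper (not part of the original article's code; the struct and function names are hypothetical): it computes the size of the side-by-side surface and the byte offset of the header row from the per-eye image size and the surface's row pitch in bytes.

```cpp
#include <cstddef>
#include <cstdint>

struct StereoSurfaceLayout
{
    uint32_t width;          // W * 2: left eye | right eye
    uint32_t height;         // H + 1: one extra row for the signature header
    size_t   header_offset;  // byte offset of the header row
};

// Hypothetical helper: derive the stereo surface layout from the per-eye
// image size (w x h) and the surface's row pitch in bytes.
StereoSurfaceLayout MakeLayout(uint32_t w, uint32_t h, size_t pitch_bytes)
{
    StereoSurfaceLayout layout;
    layout.width = w * 2;
    layout.height = h + 1;
    layout.header_offset = pitch_bytes * h; // the header row is row h
    return layout;
}
```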

So the whole mystery lies in that "special flag". It is what lets the driver recognize the texture as a stereoscopic image and apply special processing during StretchRect and Present. The flag is defined like this:

// Stereo blit defines
#define NVSTEREO_IMAGE_SIGNATURE 0x4433564E // NV3D

typedef struct _NV_STEREO_IMAGE_HEADER
{
    unsigned int dwSignature;
    unsigned int dwWidth;
    unsigned int dwHeight;
    unsigned int dwBPP;
    unsigned int dwFlags;
} NVSTEREOIMAGEHEADER, *LPNVSTEREOIMAGEHEADER;

// ORed flags in the dwFlags field of the _NV_STEREO_IMAGE_HEADER structure above
#define SIH_SWAP_EYES    0x00000001
#define SIH_SCALE_TO_FIT 0x00000002
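As a sanity check, the signature constant 0x4433564E is simply the ASCII string "NV3D" read as a little-endian 32-bit integer, which is presumably what the driver scans for in the last row of the surface. A minimal CPU-only sketch (the helper name is mine, not from the article):

```cpp
#include <cstdint>
#include <cstring>

#define NVSTEREO_IMAGE_SIGNATURE 0x4433564E

// Returns true when the signature bytes spell "NV3D" in memory
// (true on a little-endian machine such as x86).
bool SignatureSpellsNV3D()
{
    uint32_t const sig = NVSTEREO_IMAGE_SIGNATURE;
    char bytes[5] = {};
    std::memcpy(bytes, &sig, 4); // little-endian: 'N', 'V', '3', 'D'
    return std::strcmp(bytes, "NV3D") == 0;
}
```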

How to fill it in:

D3DLOCKED_RECT lr;
pSurf->LockRect(&lr, NULL, 0);

// Fill in the last row
LPNVSTEREOIMAGEHEADER pSIH = reinterpret_cast<LPNVSTEREOIMAGEHEADER>(
    static_cast<unsigned char*>(lr.pBits) + (lr.Pitch * gImageHeight));

pSIH->dwSignature = NVSTEREO_IMAGE_SIGNATURE;
pSIH->dwBPP = 32;
pSIH->dwFlags = SIH_SWAP_EYES;
pSIH->dwWidth = gImageWidth * 2;
pSIH->dwHeight = gImageHeight;

pSurf->UnlockRect();

This flag can be filled in once when the texture is created; after that, only the left- and right-eye images need to be copied in each frame.
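The same fill logic can be exercised without D3D at all. The sketch below simulates LockRect/UnlockRect with a CPU-side byte buffer laid out with a row pitch, and writes the header at the start of the final row exactly as the D3D9 code above does. (The function is a hypothetical illustration, not the article's code.)

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

#define NVSTEREO_IMAGE_SIGNATURE 0x4433564E
#define SIH_SWAP_EYES 0x00000001

struct NVSTEREOIMAGEHEADER
{
    uint32_t dwSignature;
    uint32_t dwWidth;
    uint32_t dwHeight;
    uint32_t dwBPP;
    uint32_t dwFlags;
};

// Simulated fill: the surface is (w * 2) x (h + 1) pixels of 32bpp with the
// given row pitch in bytes; the header is written at the start of row h.
void WriteStereoHeader(std::vector<uint8_t>& surface,
                       uint32_t w, uint32_t h, size_t pitch)
{
    NVSTEREOIMAGEHEADER sih = {};
    sih.dwSignature = NVSTEREO_IMAGE_SIGNATURE;
    sih.dwWidth  = w * 2;
    sih.dwHeight = h;
    sih.dwBPP    = 32;
    sih.dwFlags  = 0; // or SIH_SWAP_EYES if the eyes are packed right/left
    std::memcpy(surface.data() + pitch * h, &sih, sizeof(sih));
}
```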

Implementation in Direct3D 10/11

The example above uses D3D 9. What happens in D3D 10/11? D3D 10/11 have no StretchRect; instead they offer CopyResource and CopySubresourceRegion, neither of which can scale. Let's try it anyway:

D3D11_BOX box;
box.left = 0;
box.right = w;
box.top = 0;
box.bottom = h;
box.front = 0;
box.back = 1;
d3d_imm_ctx->CopySubresourceRegion(back_buffer, 0, 0, 0, 0, surf, 0, &box);

It works! The NV driver still special-cases CopySubresourceRegion based on that flag.

At this point, under D3D9, D3D10, D3D11, and OpenGL alike, we can use two images to control 3D Vision's stereoscopic display, and your program can use the same method to join the 3D ranks quickly. The stereo mode in KlayGE 3.11 also uses the method described in this article.

Unresolved issues

While rendering the left- and right-eye images separately, 3D Vision should be turned off; otherwise each eye image is itself rendered in stereo, and performance drops sharply. NvAPI_Stereo_Deactivate really can turn 3D Vision off through NVAPI, but then the final 3D Vision StretchRect fails. For some reason NvAPI_Stereo_Activate appears in the header and library files but not in the documentation, and calling it seems to re-enable 3D Vision only on the next frame, so the result is still incorrect. A temporary workaround is to shrink the eye images slightly, e.g. reduce the height from H to H - 2; when the driver detects rendering smaller than the front buffer, it temporarily turns 3D Vision off.

For more on game engines, graphics programming, and industry news, see http://www.klayge.org
