Some VR rendering optimization methods


VR rendering has to produce two different images, one for each eye. Right now most engines simply brute-force render the scene twice, so it is easy to see why reaching 75 FPS (or 90 FPS) is difficult.

Take the Oculus DK2 as an example: 1920x1080@75Hz, and with supersampling (UE4 defaults to 135%) the rendered resolution is higher still.

For the consumer Oculus Rift and the HTC Vive, the resolution and refresh rate rise to 2160x1200@90Hz, and the recommended GPU is a GTX 980.

Taking 135% supersampling as the baseline, the color buffer alone amounts to 2160 x 1200 x 1.35 x 90 x 8 bytes ≈ 2.34 GB of data per second.

And that is not counting the N render targets used in post processing, or the G-buffer and light buffer of deferred rendering.


Performance is always the most challenging part of VR rendering. In the spirit of saving wherever we can, I have summarized a few VR rendering optimizations:


Although VR rendering requires two images, many effects do not actually need to be drawn twice (a sketch of this split follows the list):

    • Shadow maps
    • Some reflections
    • Occlusion queries
    • Most post-processing
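
As an illustration of this once-per-frame vs. once-per-eye split, here is a minimal C++ sketch of a stereo frame loop. All the engine hooks (RenderShadowMaps, RenderScene, Eye, and so on) are hypothetical names used only for illustration, not the API of any particular engine:

    #include <initializer_list>

    // Hypothetical engine hooks; the names are illustrative only.
    struct Eye { /* per-eye view matrix, projection, render target, ... */ };
    void RenderShadowMaps();
    void UpdateOcclusionQueries();
    void RenderSharedReflections();
    void RenderScene(const Eye&);
    void RenderPostProcessing(const Eye&, const Eye&);

    void RenderStereoFrame(const Eye& left, const Eye& right)
    {
        // Once per frame: view-independent passes shared by both eyes.
        RenderShadowMaps();          // shadow maps depend on the light, not the camera
        UpdateOcclusionQueries();    // one set of queries can drive culling for both eyes
        RenderSharedReflections();   // e.g. cube-map reflections can be reused

        // Once per eye: view-dependent passes.
        for (const Eye* eye : { &left, &right })
            RenderScene(*eye);

        // Most post-processing (tone mapping, color grading, ...) can run once
        // over both eye images; only the final lens distortion stays per-eye.
        RenderPostProcessing(left, right);
    }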

For API-level optimization, there are a few ideas:

    • If you have multi-threaded rendering, there is usually a command buffer; record it once and submit it twice, each time with a different view.
    • Alternatively, submit each object twice in a row; compared with replaying the whole scene twice, this saves some state-switching overhead.
    • Use a geometry shader to duplicate the mesh into the left and right eyes, so draw calls do not double. Annoyingly, GS performance is poor.
    • Use instancing to draw both viewports with a single draw call; this is similar to the GS approach but roughly 3x faster than it (see the sketch after this list).
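
Here is a minimal OpenGL sketch of the instancing idea. It assumes a loaded GL 4.5 context (headers/loader omitted) and the GL_ARB_shader_viewport_layer_array extension so the vertex shader can write gl_ViewportIndex; the names uViewProj, uModel and DrawMeshStereo are made up for illustration:

    // Vertex shader: each instance is routed to the left or right viewport.
    static const char* kStereoVS = R"GLSL(
    #version 450
    #extension GL_ARB_shader_viewport_layer_array : require

    layout(location = 0) in vec3 aPosition;
    uniform mat4 uViewProj[2];   // [0] = left eye, [1] = right eye
    uniform mat4 uModel;

    void main()
    {
        int eye = gl_InstanceID & 1;   // even instances -> left eye, odd -> right
        gl_ViewportIndex = eye;        // viewport 0 / 1 = left / right half of the target
        // Per-instance data (if any) would be indexed with gl_InstanceID >> 1.
        gl_Position = uViewProj[eye] * uModel * vec4(aPosition, 1.0);
    }
    )GLSL";

    void DrawMeshStereo(GLuint vao, GLsizei indexCount, GLsizei instanceCount)
    {
        // Viewports 0 and 1 are assumed to have been set up earlier with
        // glViewportIndexedf(0, ...) and glViewportIndexedf(1, ...).
        glBindVertexArray(vao);
        // One draw call covers both eyes: double the instance count.
        glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT,
                                nullptr, instanceCount * 2);
    }
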
These ideas only reduce the cost of API calls, state switching, and vertex processing. How do we cut the cost of pixel processing, the biggest bottleneck?

Valve uses a stencil mesh (a hidden-area mask) to reject about 17% of the pixels.
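
In OpenGL terms, the trick looks roughly like the sketch below: before the scene is rendered, a mesh covering the pixels the lens can never show is written into the stencil buffer, and the stencil test then rejects those pixels. This is a generic sketch, not Valve's actual code; the mask mesh itself would come from the HMD SDK, and maskVao/maskVertexCount are placeholder names:

    // Assumes a GL context whose framebuffer has a stencil attachment.
    void MaskHiddenArea(GLuint maskVao, GLsizei maskVertexCount)
    {
        glEnable(GL_STENCIL_TEST);
        glClearStencil(0);
        glClear(GL_STENCIL_BUFFER_BIT);

        // Write 1 into the stencil buffer wherever the mask mesh covers,
        // without touching color or depth.
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
        glDepthMask(GL_FALSE);
        glStencilFunc(GL_ALWAYS, 1, 0xFF);
        glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);

        glBindVertexArray(maskVao);
        glDrawArrays(GL_TRIANGLES, 0, maskVertexCount);

        // Restore writes; from now on only pixels with stencil == 0 pass,
        // so the masked pixels are rejected before the fragment shader runs.
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glDepthMask(GL_TRUE);
        glStencilFunc(GL_EQUAL, 0, 0xFF);
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    }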


Nvidia's GameWorks also provides a method called multi-resolution shading. The idea is that pixels near the edge are partly discarded after lens distortion anyway, and the eye is much more sensitive to pixels near the center of the view, so the surrounding ring can be rendered at reduced resolution. This can save 25% to 50% of the pixels.
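
The sketch below illustrates the idea only (it is not NVIDIA's actual API): split the eye image into a 3x3 grid, keep the center cell at full resolution, and render the 8 outer cells smaller. centerFrac and outerScale are made-up example parameters:

    #include <array>

    struct Viewport { int x, y, w, h; };

    // width/height: full-resolution eye render target size.
    // centerFrac:   fraction of each axis covered by the full-res center (e.g. 0.6).
    // outerScale:   resolution scale of the outer cells (e.g. 0.5 = half resolution).
    std::array<Viewport, 9> MultiResViewports(int width, int height,
                                              float centerFrac, float outerScale)
    {
        const int sideW   = int(width  * (1.0f - centerFrac) * 0.5f * outerScale);
        const int centerW = int(width  * centerFrac);
        const int sideH   = int(height * (1.0f - centerFrac) * 0.5f * outerScale);
        const int centerH = int(height * centerFrac);

        const int colW[3] = { sideW, centerW, sideW };
        const int rowH[3] = { sideH, centerH, sideH };

        // Lay the nine (possibly shrunken) cells out left-to-right, bottom-to-top.
        std::array<Viewport, 9> vp{};
        int y = 0;
        for (int r = 0; r < 3; ++r) {
            int x = 0;
            for (int c = 0; c < 3; ++c) {
                vp[r * 3 + c] = { x, y, colW[c], rowH[r] };
                x += colW[c];
            }
            y += rowH[r];
        }
        return vp;
    }

With centerFrac = 0.6 and outerScale = 0.5, the rendered area becomes 0.36 (center) + 0.48 x 0.5 (edges) + 0.16 x 0.25 (corners) = 0.64 of the original, a saving of 36%, which sits inside the 25% to 50% range quoted above.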

On the hardware side, both Nvidia and AMD have introduced support for dual-GPU rendering, with each GPU rendering the image for one eye. Well, it must be a conspiracy; they must be secretly delighted: no more worrying about how to sell graphics cards.


Sony's PS VR achieves 120 FPS on the PS4's hardware. That sounds incredible, but it actually renders at 60 FPS and uses reprojection to interpolate the in-between frames, essentially the same principle as Killzone's temporal reprojection and Oculus's timewarp.
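
As a rough sketch of the rotational core of such reprojection (using the glm math library; the function name and conventions are mine, not Sony's or Oculus's):

    #include <glm/glm.hpp>
    #include <glm/gtc/quaternion.hpp>

    // renderOrientation:  head orientation the frame was rendered with.
    // displayOrientation: orientation sampled just before scan-out.
    // Returns a matrix that maps a ray in the new (display-time) camera space
    // back into the old (render-time) camera space, so a full-screen warp pass
    // can look it up in the already-rendered image. The exact order/inverse
    // depends on your view-matrix conventions.
    glm::mat4 TimewarpMatrix(const glm::quat& renderOrientation,
                             const glm::quat& displayOrientation)
    {
        glm::quat newToOld = glm::inverse(renderOrientation) * displayOrientation;
        return glm::mat4_cast(newToOld);
    }

A full-screen pass then rotates each pixel's view ray by this matrix and resamples the previous frame; real implementations typically add positional correction and motion-based extrapolation on top of this rotational warp.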


Resources:

Fast Stereo Rendering for VR (Google Slides)

"Advanced VR Rendering" by Alex Vlachos (Valve), Steam

GameWorks VR Presentation (NVIDIA Developer)

Asynchronous Timewarp Examined


Copyright notice: this is an original article by the author; please do not reproduce it without permission.

