Some VR latency optimization methods

Source: http://m.blog.csdn.net/article/details?id=50667507

The "delay" in VR, specifically "Motion-to-photon Latency", refers to the time taken from the beginning of the user movement to the display of the corresponding screen.

Several steps happen along the way (a rough latency-budget sketch follows the list):

    1. The sensor captures motion input data
    2. The sampled data is filtered and transmitted to the host over a cable
    3. The game engine updates its logic and renders the viewport based on the input data
    4. The frame is submitted to the driver, which sends it to the graphics card for rendering
    5. The rendered result is presented to the screen, and the pixels switch color
    6. The user sees the updated image
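
As a rough mental model, the total motion-to-photon latency is just the sum of the per-stage delays. Below is a minimal C++ sketch with made-up stage timings (the names and numbers are illustrative assumptions, not measurements):

    #include <cstdio>

    // Hypothetical per-stage delays in milliseconds; the real values depend
    // entirely on the hardware and engine in question.
    struct PipelineStages {
        double sensorSampling    = 1.0;  // 1. sensor captures motion data
        double filterAndTransfer = 1.0;  // 2. filtering + cable transfer
        double gameLogicAndCull  = 5.0;  // 3. engine updates logic / viewport
        double driverAndGpu      = 8.0;  // 4. driver submission + GPU rendering
        double pixelSwitching    = 3.0;  // 5. scan-out + pixel response
    };

    int main() {
        PipelineStages s;
        double total = s.sensorSampling + s.filterAndTransfer
                     + s.gameLogicAndCull + s.driverAndGpu + s.pixelSwitching;
        std::printf("motion-to-photon latency: %.1f ms (target: < 20 ms)\n", total);
    }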

Of course, there are many more details in practice. For example, the screen's pixels do not all switch at the same time: the top rows may update first, then the rows below, row by row. We will not get tangled up in those details here.

Each of these steps introduces some latency. The commonly accepted threshold is a total below 20ms, which can serve as a basic criterion for whether a VR headset is up to standard. Although 20ms is a very short time, it can still be achieved with effort. There are several avenues of attack:

Hardware-Level optimization
    • Increase the sensor sampling frequency to reduce the time spent waiting to synchronize the display refresh with the sensor samples
    • Improve sensor accuracy to reduce the latency added by filtering the sampled data for stability
    • Use wired transmission, a choice partly motivated by latency
    • Use an OLED screen instead of an LCD to reduce pixel color-switching time
    • Increase the screen refresh rate: mainstream screens run at 60Hz, i.e. 16.67ms per frame; raising it to 90Hz gives 11.11ms per frame (arithmetic sketched below)
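
The frame-period arithmetic in the last bullet is simply the reciprocal of the refresh rate; a quick sketch:

    #include <cstdio>
    #include <initializer_list>

    int main() {
        // Frame period in milliseconds for a few refresh rates.
        for (double hz : {60.0, 90.0, 120.0}) {
            std::printf("%.0f Hz -> %.2f ms per frame\n", hz, 1000.0 / hz);
        }
    }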

Most mobile VR products fail this latency requirement. The most obvious symptoms are a discontinuous image, jitter, and ghosting when you turn your head:

    • Few phones on the market use OLED screens; an iPhone dropped into a VR shell, for example, shows very noticeable latency
    • The accuracy and update rate of the phone gyroscope that head-rotation tracking depends on fall far short of the requirement
    • Phone screens currently refresh at 60Hz, which by itself puts a floor on the latency

Increasing the refresh rate

A refresh rate of 60Hz does not mean there is a 16.67ms delay per frame; it means the screen image is updated once every 16.67ms. The "vertical synchronization" (vsync) option in rendering settings derives from this. It imposes a hard deadline on when we must submit a rendered frame. For example:

For ease of calculation, first assume that the sensor, transmission, and pixel-switching delays are all zero.

    • Suppose we sample the sensor data at the beginning of each frame (right after the previous vertical sync) and submit the frame before the next vertical sync; the latency is then 16.67ms
    • If the current frame cannot be rendered within 16.67ms, say it takes 17ms, its submission slips to the next vsync. The screen keeps showing the previous image, and the latency becomes 16.67 × 2 = 33.33ms (as sketched below)
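
A minimal sketch of this vsync quantization, under the same zero-hardware-delay assumption: the latency seen on screen is the render time rounded up to a whole number of frame periods.

    #include <cmath>
    #include <cstdio>

    // Latency as seen on screen when a frame takes renderMs to produce:
    // the result can only appear at a vertical sync boundary.
    double displayedLatencyMs(double renderMs, double refreshHz) {
        const double frameMs = 1000.0 / refreshHz;
        return std::ceil(renderMs / frameMs) * frameMs;
    }

    int main() {
        std::printf("16 ms render @60Hz -> %.2f ms\n", displayedLatencyMs(16.0, 60.0));
        std::printf("17 ms render @60Hz -> %.2f ms\n", displayedLatencyMs(17.0, 60.0)); // misses vsync: 33.33 ms
    }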

This places very high demands on VR rendering:

    • The FPS must match the refresh rate: 90Hz means 90FPS. 80FPS is not good enough, because vertical synchronization will drag it down to 45FPS
    • The FPS must be stable; even an occasional dropped frame or two is very noticeable in VR, as an object's position can be off by dozens of pixels

Take the Oculus Rift (consumer version) as an example: 1080x1200 per eye (x2) at a 90Hz refresh rate, plus the roughly 1.4x upsampling needed for the distortion pass, so the actual render target is about 3024x1680@90Hz, a performance load roughly comparable to 4K@60Hz. So simply raising the refresh rate and resolution further is beyond current rendering capability. Still, since the performance requirement exists, hardware manufacturers have an incentive to push forward, which is a good thing for the whole industry ecosystem.
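
The comparison can be sanity-checked by counting pixels per second. The 1.4x per-axis upsampling factor below is an assumed figure for the distortion pass, not something stated in this article:

    #include <cstdio>

    int main() {
        // Oculus Rift CV1: 1080x1200 per eye, two eyes, 90Hz,
        // assumed 1.4x upsampling per axis for the distortion pass.
        const double rift = (1080 * 1.4) * (1200 * 1.4) * 2 * 90;  // pixels/second
        const double uhd  = 3840.0 * 2160 * 60;                    // 4K@60Hz
        std::printf("Rift @90Hz: %.0f Mpx/s\n", rift / 1e6);       // ~457 Mpx/s
        std::printf("4K   @60Hz: %.0f Mpx/s\n", uhd / 1e6);        // ~498 Mpx/s
    }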

Engine-Level optimization

Besides desperately optimizing per-frame rendering time, the engine can also apply some strategies. The key idea: can the point at which the sensor data is sampled be pushed as late as possible, so that it sits as close as possible to the vertical sync?

Here we still assume 60Hz, i.e. 16.67ms (roughly 17ms) per frame, and ignore hardware delays.

If the sensor data is sampled during the game logic update (1ms into the frame), the latency is about 16ms.

If the sensor data is re-sampled just before the render thread draws (5ms into the frame) and the view matrix is patched up (without affecting game logic), the latency drops to about 12ms.
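
A back-of-the-envelope sketch of how the in-frame sampling point determines latency, under the assumptions above (the 1ms and 5ms offsets are the ones from the text):

    #include <cstdio>

    int main() {
        const double frameMs = 16.67;
        // Latency = time from the sensor sample until the frame is scanned out,
        // i.e. the remainder of the frame after the sampling point.
        const double sampledAtLogicMs  = 1.0;  // sampled during game logic
        const double sampledAtRenderMs = 5.0;  // re-sampled before the render thread draws
        std::printf("sample at logic:  ~%.0f ms\n", frameMs - sampledAtLogicMs);   // ~16 ms
        std::printf("sample at render: ~%.0f ms\n", frameMs - sampledAtRenderMs);  // ~12 ms
    }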

Anyone who has done rendering optimization knows that after the D3D commands are submitted we still wait for the GPU to finish, and that wait is a fairly large share of the frame time. Is there a way to sample the sensor data once more after rendering completes, just before the frame goes to the screen? If so, the latency could be shortened to about 3ms!

This is the main idea behind timewarp. Let's see how it is achieved.

Timewarp

Anyone familiar with deferred rendering knows that the depth data in the Z-buffer can be used to reconstruct the world-space coordinates of every pixel on the screen.

This means we can transform all the pixels into world space and then recompute each pixel's screen coordinates from a new camera position, producing a new image:

You can see that pixels in the previously occluded areas are missing, because the camera position changed. What if the position stays fixed and only the orientation changes? Then the visibility of the pixels does not change at all:

Timewarp exploits exactly this property: keeping the position constant, it re-projects the already-rendered image according to the latest sensor orientation to produce a new frame, which is then submitted to the display. Because the angular change is tiny, the edges do not lose large areas of pixels.
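
Below is a minimal, illustrative sketch of the rotation-only re-projection at the heart of timewarp (hypothetical helper types, yaw-only for brevity; a real implementation does this per pixel on the GPU with a full orientation delta): each pixel's view ray is rotated by the difference between the render-time orientation and the latest sensor orientation, then projected again. Since the position never changes, visibility cannot change.

    #include <cmath>
    #include <cstdio>

    // Minimal 3-vector and rotation helpers (illustration only).
    struct Vec3 { double x, y, z; };

    // Rotation about the Y axis (yaw), enough to illustrate a head turn.
    Vec3 rotateYaw(const Vec3& v, double radians) {
        const double c = std::cos(radians), s = std::sin(radians);
        return { c * v.x + s * v.z, v.y, -s * v.x + c * v.z };
    }

    // Timewarp for one pixel: take its normalized device coordinates from the
    // already-rendered frame, reconstruct the view-space ray, rotate it by the
    // delta between render-time and latest head orientation, project again.
    void warpPixel(double ndcX, double ndcY, double tanHalfFov, double deltaYaw,
                   double& outX, double& outY) {
        Vec3 ray    = { ndcX * tanHalfFov, ndcY * tanHalfFov, -1.0 }; // view-space ray
        Vec3 warped = rotateYaw(ray, deltaYaw);
        outX = warped.x / (-warped.z * tanHalfFov);  // re-project to NDC
        outY = warped.y / (-warped.z * tanHalfFov);
    }

    int main() {
        double x, y;
        warpPixel(0.0, 0.0, std::tan(0.5), 0.02, x, y);  // 0.02 rad late head turn
        std::printf("center pixel moves to (%.3f, %.3f)\n", x, y);
    }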

Oculus's demo can even stop rendering new frames entirely and compute every displayed image purely by warping a single rendered frame:

That is, as long as the angle change is not too large (the demo deliberately uses a large deflection angle to make the effect visible), the technique can conjure the next frame "out of thin air". Sony's PSVR uses exactly this to reproject a 60FPS image stream into 120FPS.

Timewarp can only handle head rotation, not head translation, and once the vertical sync deadline is missed, it still has to wait for the next vsync to be displayed. Can we force a timewarp just before every vertical sync? For that, the driver has to open a back door ...

Driver-Level optimization

Suppose that when vertical sync arrives, the current frame has not finished rendering. To timewarp anyway, the driver must support a high-priority asynchronous call. This is the origin of asynchronous timewarp: the timewarp operation executes in parallel with scene rendering, and if no newly rendered frame is available, the previous frame is warped again.
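
Conceptually, the scheduling looks like the sketch below (a hypothetical thread structure for illustration; the real mechanism lives in the driver/compositor): a high-priority warp thread fires every vsync and always warps whichever frame the renderer finished most recently.

    #include <atomic>
    #include <chrono>
    #include <cstdio>
    #include <thread>

    std::atomic<int>  latestFrameId{0};  // last frame the renderer completed
    std::atomic<bool> running{true};

    void renderThread() {                // scene rendering, may miss vsync
        for (int frame = 1; running; ++frame) {
            std::this_thread::sleep_for(std::chrono::milliseconds(20)); // too slow for 60 Hz
            latestFrameId = frame;
        }
    }

    void warpThread() {                  // high priority: fires every vsync
        for (int vsync = 0; vsync < 10; ++vsync) {
            std::this_thread::sleep_for(std::chrono::milliseconds(16)); // ~60 Hz
            // Warp the newest completed frame with the latest head orientation;
            // if the renderer missed this vsync, the previous frame is re-warped.
            std::printf("vsync %d: timewarp of frame %d\n", vsync, latestFrameId.load());
        }
        running = false;
    }

    int main() {
        std::thread r(renderThread), w(warpThread);
        w.join();
        r.join();
    }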

This compensates, to some extent, for the latency caused by an insufficient FPS. GearVR applies this technology to guarantee the mobile VR experience.

Of course, the technique has some limitations on PC:

    • Requires a GPU with a Fermi, Kepler, Maxwell (or newer) core
    • The GPU schedules work at draw-call granularity, so a draw call that takes too long blocks the timewarp drawing operation from being inserted in time
    • Requires the latest Oculus and NVIDIA driver support

Asynchronous timewarp does not mean that a below-standard FPS will still run smoothly; it is only a remedial measure, so the rendering optimization still has to be done properly -_-

There are other optimizations at the driver level, such as forcing the render queue to be flushed:

If the driver buffers 3 frames, all of the latency optimization above is done in vain ...
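
On Windows this knob is actually exposed: DXGI lets an application cap how many frames the driver may queue ahead. A minimal sketch, assuming an existing ID3D11Device (error handling omitted):

    #include <d3d11.h>
    #include <dxgi.h>

    // Cap the driver's render-ahead queue at 1 frame so stale frames don't
    // sit in the queue adding latency (the default is typically 3).
    void capFrameLatency(ID3D11Device* device) {
        IDXGIDevice1* dxgiDevice = nullptr;
        if (SUCCEEDED(device->QueryInterface(__uuidof(IDXGIDevice1),
                                             reinterpret_cast<void**>(&dxgiDevice)))) {
            dxgiDevice->SetMaximumFrameLatency(1);
            dxgiDevice->Release();
        }
    }

Capping the queue at 1 trades some throughput for latency, which is the right trade-off for VR.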

In addition, the back buffer we are all familiar with (double-buffered rendering) actually adds a bit of latency too. It is better to skip that step and render straight to the front buffer, i.e. front buffer rendering, also called direct mode:

Resources

What is Motion-to-Photon Latency?
Optimizing VR Graphics with Late Latching
VR Direct: How NVIDIA Technology Is Improving the VR Experience
Virtual Reality with AMD LiquidVR™ Technology
Lessons from Integrating the Oculus Rift into Unreal Engine 4
Oculus Rift: How Does Time Warping Work?
Asynchronous Timewarp Examined
