This section contains guidance unique to developing VR applications for mobile devices.
Performance recommendations for early titles
Be conservative with performance. Even though two CPU threads are dedicated to the VR application, a lot happens on an Android system that we cannot control, and performance has more of a statistical character than we would like. Some background tasks even use the GPU occasionally. Pushing right up to the limit will invariably cause more dropped frames and make the experience less pleasant.
You are not going to be able to pull off graphical effects under these performance constraints that people have not already seen on other platforms years ago, so do not try to compete there. The magic of a VR experience comes from interesting things happening in well-composed scenes, and the graphics should mostly try not to call attention to themselves.
Even if you consistently hold 60 FPS, more aggressive drawing consumes more power, and a slight improvement in a title's visual quality is generally not worth a 20-minute reduction in battery life.
Keep rendering direct. Draw everything into a single view, in a single pass per mesh. Tricks such as resetting the depth buffer and layering multiple cameras are bad for VR, regardless of their performance problems. If the geometry does not work out correctly when everything is rendered into a single view, the result causes perception problems in VR, and you should fix the design.
Heavy blending cannot be afforded for performance reasons. If your title does use blended effects, make sure no effect covers the entire screen.
Do not use alpha-test / pixel-discard transparency; the aliasing will be ugly, and the performance has always been problematic. Alpha-to-coverage can help, but it is better to design a title that does not need a lot of cutout geometry.
Most VR scenes should be built with a 16-bit depth buffer and 2x MSAA. If your world is mostly pre-lit into compressed textures, there is little difference between 16-bit and 32-bit color buffers.
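As a rough illustration of why the 16-bit depth buffer is the economical choice, the sketch below is a back-of-the-envelope calculation (the 1024x1024 eye-buffer size and the assumption that every MSAA sample stores its own color and depth are simplifications for illustration, not SDK behavior):

```python
def eye_buffer_bytes(width, height, color_bytes, depth_bytes, msaa_samples):
    """Approximate GPU memory for one eye buffer, assuming each MSAA
    sample carries its own color and depth value."""
    samples = width * height * msaa_samples
    return samples * (color_bytes + depth_bytes)

# Hypothetical 1024x1024 eye buffer with 2x MSAA.
d16 = eye_buffer_bytes(1024, 1024, color_bytes=4, depth_bytes=2, msaa_samples=2)
d32 = eye_buffer_bytes(1024, 1024, color_bytes=4, depth_bytes=4, msaa_samples=2)

print(d16 // (1024 * 1024))  # 12 (MB with a 16-bit depth buffer)
print(d32 // (1024 * 1024))  # 16 (MB with a 32-bit depth buffer)
```

On a tiling mobile GPU the resolved savings differ in detail, but the direction is the same: the narrower depth format cuts bandwidth and memory with no visible cost in a mostly pre-lit scene.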
Favor modest "scenes" over "open worlds". There are both theoretical and practical reasons why you should, at least in the near term. First-generation titles should be about the low-hanging fruit, not the hard challenges.
The best-looking scenes will be uniquely textured models. You can load quite a lot of textures: 128 MB of textures is feasible. With global illumination baked into the textures, or with data actually sampled from the real world, you can make reasonably photo-realistic scenes that still run at 60 FPS in stereo. The contrast with lower-fidelity dynamic elements may be jarring, so there are important stylistic decisions to make.
Panoramic photos make excellent and efficient backdrops for a scene. If you are not too picky about global illumination, allowing them to be swapped out is often nice. Full panorama-based image lighting is not performance-practical for entire scenes, but it may be possible for props that cannot cover the screen.
Frame rate
Thanks to asynchronous TimeWarp, looking around in the Gear VR will always be smooth and judder-free at 60 FPS, no matter how fast or slow the application renders. This does not mean that performance is no longer a concern, but it improves the experience of apps that do not hold perfectly steady at 60 FPS.
If an application does not consistently run at 60 FPS, animating objects move choppily, rapid head turns pull some black in at the edges, player movement does not feel smooth, and gamepad turning looks especially bad. However, because asynchronous TimeWarp does not require the GPU pipeline to be drained, it is easier to hold 60 FPS with it than without it.
Anything drawn head-locked (stuck to the view) will look bad if the frame rate does not hold 60 FPS, because it updates at the eye-buffer frame rate instead of the display frame rate. Avoid head-locked elements. If something needs to stay in front of the player, such as a floating GUI panel, let it remain stationary most of the time and, when necessary, glide quickly back to center, instead of continuously tracking the head orientation.
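One way to implement such a loosely constrained panel is sketched below; the yaw-only model, the 30-degree comfort threshold, and the recentering speed are all invented for illustration:

```python
def update_panel_yaw(panel_yaw, head_yaw, dt,
                     threshold_deg=30.0, speed_deg_per_s=180.0):
    """Keep the panel stationary until the head yaw drifts more than
    `threshold_deg` away, then move it back toward the gaze direction
    at a fixed rate instead of rigidly head-locking it."""
    # Shortest signed angular difference in degrees, in [-180, 180).
    diff = (head_yaw - panel_yaw + 180.0) % 360.0 - 180.0
    if abs(diff) <= threshold_deg:
        return panel_yaw  # inside the comfort zone: stay put
    step = min(abs(diff) - threshold_deg, speed_deg_per_s * dt)
    return (panel_yaw + step * (1.0 if diff > 0 else -1.0)) % 360.0

# The panel stays world-fixed for small head movements...
print(update_panel_yaw(0.0, 20.0, dt=0.016))  # 0.0
# ...and starts chasing the head once the threshold is exceeded.
print(update_panel_yaw(0.0, 90.0, dt=0.016) > 0.0)  # True
```

Because the panel moves in world space rather than view space, TimeWarp keeps it perfectly smooth even when the app misses frames.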
Scene
Per-scene targets:
1. 50,000 to 100,000 triangles
2. 50,000 to 100,000 vertices
3. 50 to 100 draw calls
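A build pipeline can flag scenes that blow past these targets. The sketch below is a hypothetical validation helper using the upper ends of the budgets above:

```python
# Upper ends of the per-scene targets listed above.
SCENE_BUDGET = {"triangles": 100_000, "vertices": 100_000, "draw_calls": 100}

def over_budget(stats):
    """Return the names of any metrics that exceed the scene budget."""
    return [name for name, limit in SCENE_BUDGET.items()
            if stats.get(name, 0) > limit]

print(over_budget({"triangles": 80_000, "vertices": 90_000, "draw_calls": 70}))
# []
print(over_budget({"triangles": 150_000, "vertices": 90_000, "draw_calls": 120}))
# ['triangles', 'draw_calls']
```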
An application may be able to render more triangles by using very simple vertex and fragment programs, minimizing overdraw, and reducing draw calls to a dozen or so. However, lots of small details and silhouette edges can still cause noticeable aliasing despite MSAA.
It is good to be conservative! The quality of a virtual reality experience is not determined solely by the quality of the rendered image. Low latency and high frame rates are just as important to a high-quality, fully immersive experience, if not more so.
Keep an eye on the vertex count, because vertex processing is not free on a mobile GPU with a tiling architecture. The number of vertices in a scene is expected to be in the same ballpark as the number of triangles; in a typical scene, the vertex count should not exceed twice the triangle count. To reduce the number of unique vertices, remove vertex attributes that are not necessary for rendering.
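The ballpark rule above is easy to check mechanically; this is a hypothetical asset-pipeline helper, not part of any SDK:

```python
def vertex_count_ok(vertices, triangles):
    """Flag meshes whose unique-vertex count exceeds twice the
    triangle count, the rule of thumb for a typical scene."""
    return vertices <= 2 * triangles

print(vertex_count_ok(90_000, 80_000))   # True: within 2x of triangles
print(vertex_count_ok(250_000, 80_000))  # False: strip unused attributes
```

A mesh that fails this check often has per-face attributes (normals, UV seams) splitting shared vertices; removing attributes that rendering does not need merges them back together.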
Textures are ideally stored in ETC2 format at 4 bits per texel, which improves rendering performance and takes 8 times less storage space than 32-bit RGBA textures. Loading up to 512 MB of textures is possible, but the limited storage space available on mobile devices needs to be considered. For a uniquely textured environment in a mobile application, loading 128 MB of textures is reasonable.
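The 8x figure falls straight out of the bit counts: 4 bits per texel versus 32. A quick sketch of the arithmetic for a hypothetical 2048x2048 texture (ignoring mip chains and alignment):

```python
def texture_bytes(width, height, bits_per_texel):
    """Base-level storage for a texture, ignoring mip chains."""
    return width * height * bits_per_texel // 8

etc2 = texture_bytes(2048, 2048, 4)   # ETC2 at 4 bits per texel
rgba = texture_bytes(2048, 2048, 32)  # uncompressed 32-bit RGBA

print(etc2 // (1024 * 1024))  # 2  (MB)
print(rgba // (1024 * 1024))  # 16 (MB)
print(rgba // etc2)           # 8  (storage reduction factor)
```

At 2 MB each, a 128 MB budget holds roughly 64 such textures, which is why a uniquely textured environment is within reach.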
Baking specular and reflections directly into the textures works well for applications with limited mobility. The aliasing from dynamic specular shading on normal-mapped surfaces is often a net negative in VR, but simple, smooth shapes can still benefit from dynamic specular in some cases.
Dynamic shadows from dynamic lights are usually not a good idea. Many shadowing techniques require additional depth buffers, which are particularly expensive on mobile GPUs with a tiling architecture. Rendering a shadow buffer for a single parallel light in a scene is feasible, but baked lighting and shadowing usually produce better quality.
To be able to render more triangles, it is important to reduce overdraw as much as possible. In scenes with overdraw, the opaque geometry should be rendered front to back to significantly reduce the number of shading operations. A scene that is displayed only from a single viewpoint can be statically sorted to guarantee front-to-back rendering on a per-triangle basis. Scenes that can be viewed from multiple vantage points may need to be broken up into reasonably sized blocks of geometry that are sorted front to back dynamically at run time.
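At run time, the per-block sort reduces to ordering blocks by distance from the eye. A minimal sketch, where the block and eye representations are invented for illustration:

```python
def sort_front_to_back(blocks, eye):
    """Order opaque geometry blocks nearest-first so early depth
    rejection can discard occluded fragments before they are shaded."""
    def dist_sq(block):
        cx, cy, cz = block["center"]
        ex, ey, ez = eye
        # Squared distance is enough for ordering; skip the sqrt.
        return (cx - ex) ** 2 + (cy - ey) ** 2 + (cz - ez) ** 2
    return sorted(blocks, key=dist_sq)

blocks = [{"name": "far",  "center": (0, 0, 30)},
          {"name": "near", "center": (0, 0, 5)},
          {"name": "mid",  "center": (0, 0, 12)}]
order = [b["name"] for b in sort_front_to_back(blocks, eye=(0, 0, 0))]
print(order)  # ['near', 'mid', 'far']
```

Sorting whole blocks instead of triangles keeps the CPU cost per frame trivial while still giving the depth buffer most of the overdraw win.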
Resolution
Due to the distortion of the optics, the perceived size of a pixel varies across the screen. Conveniently, the highest resolution is at the center of the screen, where it does the most good, but even on a 2560x1440 screen, the pixels are still large compared to a conventional monitor or a mobile device at a typical viewing distance.
With the current screens and optics, a pixel at the center covers about 0.06 degrees of visual arc, so you would want a band roughly 6,000 pixels long to wrap 360 degrees around a static viewpoint. Away from the center, the gap between samples would be greater than one texel, so mipmaps should be created and used to avoid aliasing.
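The 6,000-pixel figure is just 360 degrees divided by the per-pixel arc; a sketch of that arithmetic, taking the 0.06-degree value above as given:

```python
DEG_PER_CENTER_PIXEL = 0.06  # visual arc covered by one central pixel

# Pixels needed for a band wrapping a full turn at center density.
print(round(360.0 / DEG_PER_CENTER_PIXEL))  # 6000

# Equivalently, display pixels per degree at the center of the optics.
display_px_per_deg = 1.0 / DEG_PER_CENTER_PIXEL
print(round(display_px_per_deg, 1))  # 16.7
```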
Ideally, rendering a 90-degree FOV would call for an eye buffer of at least 1500x1500 resolution, plus additional mipmap levels. The system can only manage that with trivially simple scenes at maximum clock rates, and thermal constraints mean those rates cannot be sustained.
Most game-style 3D VR content should target 1024x1024 eye buffers. At this resolution, the pixels are slightly stretched in the center and only slightly compressed at the edges, so mipmapping the eye buffer is unnecessary. If you have a lot of performance headroom, you can experiment with increasing the resolution a bit to improve the effective display resolution of fine detail, but it costs power and performance.
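The "slightly stretched in the center" claim can be seen in the numbers: a 1024-pixel buffer across a 90-degree FOV supplies somewhat fewer samples per degree than the display offers at its center. A sketch that simplifies the eye-buffer mapping as uniform across the FOV (it is not, which is why this is only an approximation):

```python
# Eye-buffer samples per degree, assuming a uniform 90-degree FOV mapping.
eye_buffer_px_per_deg = 1024 / 90.0
print(round(eye_buffer_px_per_deg, 1))  # 11.4

# Central display pixels per degree (one pixel per 0.06 degrees).
display_px_per_deg = 1.0 / 0.06
print(round(display_px_per_deg, 1))  # 16.7

# Fewer eye-buffer samples than display pixels at center -> slight stretch.
print(eye_buffer_px_per_deg < display_px_per_deg)  # True
```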
Dedicated "viewer" apps (e-book readers, picture viewers, remote monitors, and so on) that really want to focus on peak quality should consider using TimeWarp overlay planes to avoid the resolution compromise and the double resampling of distorting a separately rendered eye view. Using an sRGB framebuffer and sRGB source textures is important for avoiding "edge crawling" effects in high-contrast areas when sampling very close to optimal resolution.
Introducing Mobile VR Design