VR render path and optimizations

Three.js is currently the most popular choice when developing for WebVR. We at Mozilla use it as the base for our https://github.com/aframevr/aframe framework. But when it comes to performance in WebVR, three.js still has plenty of room for improvement.

I’ve been collecting a list of features that would be great to have implemented in three.js in order to deliver a more performant VR experience.
I wanted to have a place to discuss all of them as an overall vision of the engine modifications. Later on we could create an issue for each item and keep the discussion there.

I know three.js is not focused just on WebVR, so I understand some of the proposals won’t be part of the main render path (say, foveated rendering), but could be modules that the user enables, or that are enabled automatically when a WebVR project is detected. Still, most of the proposals will help the main render path even when not using WebVR.

  • Scene rendering should accept an array of cameras (#10927).

  • Common frustum for both eyes to be used by the ArrayCamera for faster frustum culling (based on the diagram by Cass Everitt). This is already being proposed as part of the WebVR API (being discussed on the WebVR spec: https://github.com/w3c/webvr/issues/203).
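To illustrate the idea, here is a minimal sketch of merging two per-eye frusta into one culling frustum. It assumes a symmetric head-centered setup described by half-angle tangents (as in the WebVR `VRFieldOfView`, but the function name and parameter shape here are illustrative, not a real API): take the widest tangent on each side and pull the shared apex back just far enough that the side planes contain both eye positions, which is the essence of Cass Everitt's construction.

```javascript
// Sketch: build a single culling frustum that encloses both eye frusta.
// tanOuterLeft/tanOuterRight are the widest horizontal half-angle
// tangents across the two eyes; ipd is the interpupillary distance.
function combinedFrustum(tanOuterLeft, tanOuterRight, tanUp, tanDown, ipd) {
  // Pull the shared apex back along -Z so that the left/right planes,
  // kept at the same slopes, still contain both eye positions.
  const apexOffsetZ = Math.max(
    (ipd / 2) / tanOuterLeft,
    (ipd / 2) / tanOuterRight
  );
  return {
    tanLeft: tanOuterLeft,
    tanRight: tanOuterRight,
    tanUp,
    tanDown,
    apexOffsetZ, // distance behind the head center for the merged apex
  };
}
```

Culling once against this superfrustum replaces two per-eye culling passes, at the cost of slightly conservative results near the edges.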

  • Reduce draw calls when rendering in stereo by using instancing to double the geometry with a single draw call. We’ll need dynamic clipping planes in the shader or, ideally and hopefully, an extension like the proposed https://www.khronos.org/registry/webgl/extensions/proposals/WEBGL_multiview available in WebGL.
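A hedged sketch of the bookkeeping involved: double the instance count and derive both the eye and the original instance from `gl_InstanceID`, so one instanced draw covers both views. The function and field names are illustrative; the shader side (view-matrix selection and the clip plane that keeps each eye in its half of the render target) is only hinted at in the comments.

```javascript
// Sketch: turn an N-instance draw into a 2N-instance stereo draw.
// The vertex shader would then do something like:
//   int eye  = gl_InstanceID & 1;        // 0 = left, 1 = right
//   int inst = gl_InstanceID >> 1;       // original instance index
// and select viewMatrix[eye], clipping each instance to its viewport
// half with a clip plane (or rely on WEBGL_multiview when available).
function stereoInstancing(originalInstanceCount) {
  return {
    instanceCount: originalInstanceCount * 2,
    // JS mirror of the shader-side decode, useful for debugging.
    decode: (instanceId) => ({
      eye: instanceId & 1,
      sourceInstance: instanceId >> 1,
    }),
  };
}
```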

  • Foveated rendering: The simplest approach is to render sharp where the user is looking and low-res/blurry on the periphery, but there are plenty of ways to improve on it, for example perceptually based (contrast-preserving) rendering as described in: https://research.nvidia.com/publication/perceptually-based-foveated-virtual-reality.
    There’s an ongoing discussion (https://github.com/w3c/webvr/issues/205) on how to implement some of the lens-matched/multi-res shading features directly in the browser within the WebVR API, which could help improve overall performance together with the changes in the engine.
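As a toy version of the "sharp in the fovea, cheap in the periphery" idea, here is a sketch of a falloff function mapping eccentricity (angular distance from the gaze direction) to a resolution scale. All the thresholds are illustrative assumptions, not values from any spec or paper:

```javascript
// Sketch: resolution scale as a function of angular distance from the
// gaze point. Full resolution inside the fovea, linear falloff to a
// minimum scale in the far periphery. Thresholds are made up for the
// example; a perceptual approach would shape this curve by contrast.
function foveatedScale(eccentricityDeg) {
  const foveaDeg = 5;      // full resolution inside ~5 degrees
  const peripheryDeg = 60; // clamp beyond ~60 degrees
  const minScale = 0.25;   // quarter resolution at the far periphery
  if (eccentricityDeg <= foveaDeg) return 1.0;
  if (eccentricityDeg >= peripheryDeg) return minScale;
  const t = (eccentricityDeg - foveaDeg) / (peripheryDeg - foveaDeg);
  return 1.0 + t * (minScale - 1.0);
}
```

In practice the scale would drive the shading rate or render-target resolution of concentric regions rather than being evaluated per pixel.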

  • Avoid deferred rendering and go for a fully optimized forward renderer, as MSAA >= 4x should be a must for VR. To deal with one of the biggest problems of forward vs. deferred, the number of lights, we could go for a clustered forward renderer as used in many AAA engines nowadays with amazing results.
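The core of clustered forward shading is the CPU-side binning step: assign each light to the cells of a view-space cluster grid so the fragment shader only iterates the lights touching its cluster. A minimal sketch under simplifying assumptions (axis-aligned grid, lights binned by their bounding boxes; real implementations subdivide depth logarithmically and pack the per-cluster lists into textures or UBOs):

```javascript
// Sketch: bin point lights into a coarse 3D cluster grid.
// grid: { x, y, z, min: [x,y,z], max: [x,y,z] } in view space.
// lights: [{ position: [x,y,z], radius }] — returns, per cluster,
// the indices of the lights overlapping it.
function assignLightsToClusters(lights, grid) {
  const dims = [grid.x, grid.y, grid.z];
  const clusters = Array.from({ length: grid.x * grid.y * grid.z }, () => []);
  const cell = [0, 1, 2].map(i => (grid.max[i] - grid.min[i]) / dims[i]);
  lights.forEach((light, li) => {
    // Range of cells overlapped by the light's bounding box.
    const lo = [], hi = [];
    for (let i = 0; i < 3; i++) {
      lo[i] = Math.max(0,
        Math.floor((light.position[i] - light.radius - grid.min[i]) / cell[i]));
      hi[i] = Math.min(dims[i] - 1,
        Math.floor((light.position[i] + light.radius - grid.min[i]) / cell[i]));
    }
    for (let z = lo[2]; z <= hi[2]; z++)
      for (let y = lo[1]; y <= hi[1]; y++)
        for (let x = lo[0]; x <= hi[0]; x++)
          clusters[(z * grid.y + y) * grid.x + x].push(li);
  });
  return clusters;
}
```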

General optimizations/ideas not related directly to the engine itself but that could help improve the overall quality and performance of VR experiences (under the premise that if WebVR is available, WebGL2 will be available too):

  • Moving asset processing out of the main thread (#11746). In WebVR, every time you don’t submit frames to the headset fast enough you’re kicked out to the Vive/Oculus lobby, so if you load assets at runtime while the experience is already presenting in VR, you’ll annoy the user by jumping in and out of the experience whenever frames are dropped.
    It’s also important to note that with link traversal this problem matters at loading time too: when you enter a new website from a previously presenting WebVR page, the browser will only wait a short period of time for you to send the first frame, and if you don’t do it fast enough the browser will stop presenting and return to 2D mode, requiring user interaction again.
    So a good approach could be to move asset parsing off the main thread with the help of web workers and service workers, and start presenting as soon as possible while the assets keep loading in the background.
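A sketch of the pattern: do the heavy parse in a worker and hand the result back as a transferable `ArrayBuffer`, so the main thread never blocks while presenting. The binary layout here (a bare float32 position stream) and all the names are illustrative assumptions, not a real format or loader API; the `postMessage` wiring is shown in comments because it only runs in a worker context.

```javascript
// ---- worker side (would live in its own script) ----
// The heavy work happens here, off the main thread.
function parsePositions(arrayBuffer) {
  const positions = new Float32Array(arrayBuffer);
  if (positions.length % 3 !== 0) throw new Error('truncated vertex stream');
  return { vertexCount: positions.length / 3, positions };
}
// self.onmessage = (e) => {
//   const geo = parsePositions(e.data);
//   // Transfer, don't copy, the parsed buffer back to the main thread.
//   self.postMessage(geo, [geo.positions.buffer]);
// };

// ---- main thread side ----
// const worker = new Worker('parser-worker.js');
// worker.onmessage = (e) => uploadToGPU(e.data); // cheap, stays in budget
// worker.postMessage(rawBuffer, [rawBuffer]);    // transfer ownership
```

Only the GPU upload remains on the main thread, and that can be spread across frames if needed.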

  • Use compressed textures: WebGL2 makes several compressed texture formats widely available; we should take advantage of them to reduce bandwidth and memory consumption.
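Since which format is available varies by GPU (desktop usually exposes S3TC, mobile ETC2 or ASTC), a loader typically probes the context in preference order and falls back to uncompressed data. A minimal sketch, where the extension strings are the real registry names but the priority order and fallback are just one reasonable choice:

```javascript
// Sketch: pick the first supported compressed-texture extension.
// `gl` is any WebGL/WebGL2 context (or anything with getExtension()).
const FORMATS = [
  { ext: 'WEBGL_compressed_texture_astc', name: 'astc' },
  { ext: 'WEBGL_compressed_texture_s3tc', name: 's3tc' },
  { ext: 'WEBGL_compressed_texture_etc',  name: 'etc2' },
];

function pickCompressedFormat(gl) {
  for (const f of FORMATS) {
    if (gl.getExtension(f.ext)) return f.name;
  }
  return 'rgba8'; // uncompressed fallback
}
```

The asset pipeline would then ship one texture set per supported format and download only the matching one.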

  • Optimize a math library or expensive functions with SIMD/WebAssembly: At first I thought about going SIMD with all the matrix functions using the new WebAssembly API, but it just doesn’t work the way we’d expect from the C/ASM days, where there was no context switching and converting your functions to MMX/SSE/… was almost a guaranteed performance boost.
    With WebAssembly, because of the context switching, type conversion and so on, we need to be sure that we’re jumping into an optimized WebAssembly function that does enough hard work at once to get a real advantage. So it’s better suited for things like a 4-step Catmull-Clark subdivision of a huge mesh than for optimizing a vector multiplication, even if it’s used hundreds of times per frame.
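To make the granularity point concrete: the API you would export from such a module should take a whole batch per call, so one boundary crossing amortizes over thousands of operations. A sketch in plain JS of the shape such a batched routine would have (`transformBatch` stands in for the hypothetical WASM export; flat typed arrays in, in-place result out):

```javascript
// Sketch: transform a whole batch of points with one call, instead of
// calling a tiny per-vector function thousands of times per frame.
// points: flat [x, y, z, x, y, z, ...]; mat4: column-major 16 floats.
function transformBatch(mat4, points) {
  for (let i = 0; i < points.length; i += 3) {
    const x = points[i], y = points[i + 1], z = points[i + 2];
    points[i]     = mat4[0] * x + mat4[4] * y + mat4[8]  * z + mat4[12];
    points[i + 1] = mat4[1] * x + mat4[5] * y + mat4[9]  * z + mat4[13];
    points[i + 2] = mat4[2] * x + mat4[6] * y + mat4[10] * z + mat4[14];
  }
  return points;
}
```

The flat-typed-array layout also matters for WASM: it can be backed directly by the module's linear memory, so no per-element conversion happens at the boundary.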

  • “Automatic” LOD: To improve framerate in big scenes, LOD is desirable, but generating LOD models from existing ones is a tedious task; it consumes more bandwidth and adds another step to the asset pipeline. So an “automatic” (configurable) LOD could help a lot here.
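One cheap way to generate LOD levels automatically is vertex clustering: snap vertices to a grid and merge the ones that land in the same cell, with the cell size driven by viewing distance. A minimal sketch (quality is well below a proper edge-collapse simplifier, but it needs no preprocessed assets):

```javascript
// Sketch: crude automatic LOD by vertex clustering. Snap each vertex
// to a grid of the given cell size and merge duplicates. Returns the
// deduplicated positions plus a remap table (old index -> new index)
// for rewriting the index buffer.
function clusterVertices(positions, cellSize) {
  const cellToNew = new Map();
  const out = [];
  const remap = [];
  for (let i = 0; i < positions.length; i += 3) {
    const key = [0, 1, 2]
      .map(a => Math.round(positions[i + a] / cellSize))
      .join(',');
    let idx = cellToNew.get(key);
    if (idx === undefined) {
      idx = out.length / 3;
      cellToNew.set(key, idx);
      out.push(positions[i], positions[i + 1], positions[i + 2]);
    }
    remap.push(idx);
  }
  return { positions: out, remap };
}
```

After remapping, degenerate triangles (those whose three indices are no longer distinct) get dropped, which is where the triangle count actually shrinks.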

  • Lightmap generator: Probably off-topic here, but it would be great to have a way to improve the overall aesthetics of the scene, as we can’t currently use things like SSAO, and this is a cheap and simple way to make up for it.

Author: Fantashit

3 thoughts on “VR render path and optimizations”

  1. @mrdoob Cool, great stuff ❤️ .

    I think we should gradually adjust the other VR examples and deprecate VRControls and VREffect.

  2. @mikearmstrong001 I agree with you that we must simplify and modularize the renderer itself. We could probably start following an approach similar to the one discussed in #11475 for materials, so we could end up with a dictionary of render steps that we could easily replace with our own code.

    @takahirox context switching between WASM and JS is still expensive, so I agree that we won’t get any benefit from just moving small functions to WASM; as you said, it must be something CPU-intensive to get real benefits. Maybe a file format parser is something that could benefit from both WASM and web workers.
    As far as I know, SIMD.js is going to be deprecated soon, and SIMD will instead become part of the WASM spec :/

    @mikaelgramont @takahirox as these two are quite extensive topics to discuss, what about moving the discussions to new issues and linking them here?

Comments are closed.