I have been following three.js for many years now, as well as using it in my own projects, and I have watched it grow into the remarkable library and community it is today. There are many impressive things in three.js, but as far as the state of the art for real-time rendering engines goes, three.js has remained relatively stagnant for about four years. I would therefore like to list some of the features I'd like to see in three.js in the future, and the reasons I believe they are necessary for the continued success of the library.
Deferred, tiled rendering
Forward rendering doesn't fit most advanced use cases well, and when it does, it often lacks performance. I want to see a deferred, tiled renderer as the main renderer in three.js, not something relegated to the /examples folder.
Any kind of post-processing today requires you to build part of the pipeline yourself: a depth buffer, normals, and colour. In most cases this duplicates work for the hardware, resulting in sub-optimal performance.
If you wish to have any kind of sub-system that relies on a G-buffer, such as a decal system that writes into the colour and normal buffers, you need exactly what three.js doesn't provide.
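To make the "tiled" part concrete, here is a minimal sketch of the CPU side of tiled light culling: the screen is divided into fixed-size tiles, and each light is assigned to the tiles its screen-space bounding circle overlaps, so a deferred shading pass only evaluates the lights relevant to each tile. This is not a three.js API; all names and the light format are illustrative.

```javascript
// Assign each light to the screen tiles its bounding circle overlaps.
// Lights are assumed to be { id, x, y, radius } in pixel coordinates.
const TILE_SIZE = 16;

function cullLightsToTiles(lights, width, height) {
  const tilesX = Math.ceil(width / TILE_SIZE);
  const tilesY = Math.ceil(height / TILE_SIZE);
  // one light list per tile
  const tiles = Array.from({ length: tilesX * tilesY }, () => []);
  for (const light of lights) {
    // tile range covered by the light's screen-space bounding box
    const minX = Math.max(0, Math.floor((light.x - light.radius) / TILE_SIZE));
    const maxX = Math.min(tilesX - 1, Math.floor((light.x + light.radius) / TILE_SIZE));
    const minY = Math.max(0, Math.floor((light.y - light.radius) / TILE_SIZE));
    const maxY = Math.min(tilesY - 1, Math.floor((light.y + light.radius) / TILE_SIZE));
    for (let ty = minY; ty <= maxY; ty++) {
      for (let tx = minX; tx <= maxX; tx++) {
        tiles[ty * tilesX + tx].push(light.id);
      }
    }
  }
  return { tiles, tilesX, tilesY };
}
```

In a real renderer the per-tile lists would be uploaded to the GPU, and shading cost per pixel then scales with the lights touching that tile rather than with every light in the scene.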
Better shadow mapping
More specifically, I want to see a better shadow-mapping implementation than PCF, such as variance shadow maps. Second, I wish to see cascaded shadow maps. They were once part of the core of three.js, but fell victim to bit rot and lack of support.
Currently, shadows require a fair amount of artistry just to look okay. A better shadow-map algorithm would make many of the existing problems with acne and filtering go away, and a cascaded shadow-map implementation would remove the manual work currently required to set up a shadow camera for each shadow-casting light in the scene.
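The core of the variance shadow map technique is small enough to show here. Instead of a single depth, the shadow map stores depth and depth squared; the receiver's visibility is then bounded with Chebyshev's inequality, which gives soft, filterable shadows without the per-sample comparisons PCF needs. The sketch below is plain JS standing in for shader code, with an illustrative minimum-variance clamp:

```javascript
// Variance shadow map visibility test (Chebyshev's inequality).
// momentX/momentY would come from a two-channel shadow map storing
// E[depth] and E[depth^2], filtered like any ordinary texture.
function vsmVisibility(momentX, momentY, receiverDepth, minVariance = 1e-4) {
  // fully lit when the receiver is in front of the mean occluder depth
  if (receiverDepth <= momentX) return 1.0;
  // variance of the occluder depth distribution, clamped to fight acne
  const variance = Math.max(momentY - momentX * momentX, minVariance);
  const d = receiverDepth - momentX;
  // Chebyshev upper bound on the probability the receiver is lit
  return variance / (variance + d * d);
}
```

Because the two moments can be filtered (mipmapped, blurred, multisampled) before the test, much of the artistry currently spent fighting acne and hard edges goes away.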
Spatial index
A spatial index is necessary when dealing with large scenes, and such scenes are very common in games, for example.
If you want to raycast into the scene, you are currently stuck with a linear search whose cost is dominated by the number of objects and polygons. A spatial index would enable many internal optimizations, such as faster occlusion culling and sorting.
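To illustrate what a spatial index buys a raycast, here is a minimal sketch of traversing a bounding volume hierarchy: whole subtrees are skipped when the ray misses their bounding box, so only a handful of candidate objects ever reach the expensive per-triangle test. The node layout is hypothetical, not three.js's.

```javascript
// Ray vs. axis-aligned bounding box, slab method.
// invDir is the component-wise reciprocal of the ray direction.
function rayHitsAABB(origin, invDir, min, max) {
  let tMin = -Infinity, tMax = Infinity;
  for (let axis = 0; axis < 3; axis++) {
    const t1 = (min[axis] - origin[axis]) * invDir[axis];
    const t2 = (max[axis] - origin[axis]) * invDir[axis];
    tMin = Math.max(tMin, Math.min(t1, t2));
    tMax = Math.min(tMax, Math.max(t1, t2));
  }
  return tMax >= Math.max(tMin, 0);
}

// node: { min, max, children: [...] } for interior nodes,
//       { min, max, objects: [...] } for leaves
function raycastBVH(node, origin, invDir, hits) {
  if (!rayHitsAABB(origin, invDir, node.min, node.max)) return hits; // prune
  if (node.objects) hits.push(...node.objects); // leaf: collect candidates
  else node.children.forEach(c => raycastBVH(c, origin, invDir, hits));
  return hits;
}
```

For a balanced tree this turns the linear search into roughly logarithmic traversal, and the same structure accelerates frustum culling and depth sorting.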
Occlusion culling
Good occlusion culling is required for good performance. The spatial index above would help here, and there are many further techniques that could be applied.
Optimizations to the animation engine
Currently, the animation engine chokes on some 500 bones being animated simultaneously, resulting in very high CPU usage.
It is not uncommon in modern games to see 3-5 characters on screen at once with 500+ bones each. With the current CPU demand, that fidelity is not achievable; instead, you have to compromise down to about 15 bones per character to achieve decent performance.
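One common way to bring that CPU cost down is to keep all bone matrices in one flat typed array and update them in a tight, allocation-free loop with bones sorted parent-before-child. The sketch below is an illustration of that layout, not three.js's skeleton code; matrices are row-major 4x4, 16 floats per bone.

```javascript
// Multiply two 4x4 matrices stored in flat arrays: out[o..] = a[ai..] * b[bi..].
function multiply4x4(out, o, a, ai, b, bi) {
  for (let row = 0; row < 4; row++) {
    for (let col = 0; col < 4; col++) {
      let sum = 0;
      for (let k = 0; k < 4; k++) {
        sum += a[ai + row * 4 + k] * b[bi + k * 4 + col];
      }
      out[o + row * 4 + col] = sum;
    }
  }
}

// parent[i] is the index of bone i's parent (-1 for roots); bones are
// sorted so a parent always precedes its children, letting one linear
// pass compute every world matrix with zero per-bone allocation.
function updateBones(parent, local, world) {
  for (let i = 0; i < parent.length; i++) {
    const p = parent[i];
    if (p < 0) {
      world.set(local.subarray(i * 16, i * 16 + 16), i * 16);
    } else {
      multiply4x4(world, i * 16, world, p * 16, local, i * 16);
    }
  }
}
```

A flat, cache-friendly layout like this (as opposed to walking an object graph per bone) is what makes hundreds of bones per character feasible on the CPU.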
Compressed textures as a first-class citizen, along with tools for on-line compression
Compressed textures offer a great amount of extra detail for very little space. For applications with large textures and/or a large number of textures, this is the line between an interactive frame rate and a slide-show. The point is even more relevant on lower-end GPUs, which tend to have less RAM: being able to draw 2048 compressed textures instead of 512 uncompressed ones, at potentially the same GPU RAM cost, is extremely important. Compressed textures also take less time to load and put less stress on the browser, since no decompression step is required (unlike PNG).
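The memory arithmetic is worth spelling out. DXT1/BC1, one of the common GPU block-compression formats, stores each 4x4 pixel block in 8 bytes (0.5 bytes per pixel) versus 4 bytes per pixel for uncompressed RGBA8, an 8:1 ratio before mipmaps:

```javascript
// Uncompressed RGBA8: 4 bytes per pixel.
function rgba8Bytes(width, height) {
  return width * height * 4;
}

// DXT1/BC1: 8 bytes per 4x4 block, i.e. 0.5 bytes per pixel.
function dxt1Bytes(width, height) {
  return Math.ceil(width / 4) * Math.ceil(height / 4) * 8;
}
```

For a 1024x1024 texture that is 4 MiB uncompressed versus 512 KiB compressed, and unlike PNG the compressed form is consumed by the GPU directly, with no decode step at load time.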
Competent Particle Engine (nice to have)
Particles are the magic stuff of real-time visualization, and I believe a comprehensive real-time visualization library needs a good particle engine. The simulation aspect can be kept rudimentary, with a way to plug in custom simulation logic, but the engine should handle things like sorting particles by view-space depth, and offer a solution for tracking the lifecycle of a particle, such as spawning and dying. Things like self-shadowing would be amazing to have as well.
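The depth-sorting part mentioned above can be sketched in a few lines: project each particle onto the camera's forward axis and sort back-to-front, so alpha-blended particles composite correctly. The particle and camera formats here are illustrative, not a three.js API.

```javascript
// Sort transparent particles back-to-front along the view direction.
// particles: [{ x, y, z, ... }], cameraForward: unit view direction.
function sortParticlesByDepth(particles, cameraPos, cameraForward) {
  for (const p of particles) {
    // signed distance of the particle along the view direction
    p.viewDepth =
      (p.x - cameraPos.x) * cameraForward.x +
      (p.y - cameraPos.y) * cameraForward.y +
      (p.z - cameraPos.z) * cameraForward.z;
  }
  // farthest first, so nearer particles blend over farther ones
  particles.sort((a, b) => b.viewDepth - a.viewDepth);
  return particles;
}
```

An engine doing this every frame would also want to avoid re-sorting when the camera barely moves, which is exactly the kind of bookkeeping a built-in particle engine could own.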