I agree that detecting which rendering bottleneck is the culprit would be difficult. I guess I was hoping you guys might have experimented with the idea.
Adapting scene rendering quality to performance conditions on the fly - yes, this is definitely worth considering and we will look into it.
Just like in the WebGL Aquarium example, I can imagine having textures dynamically loaded from low to higher quality as long as the fps is under 60.
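Roughly what I have in mind, as a sketch (the tier names and thresholds are placeholders, not any engine's actual API):

```javascript
// FPS-driven texture quality controller (illustrative sketch).
// Tier names ("dds_min50", "min50", "full") and the 58/45 fps
// thresholds are made up for the example.
function chooseTextureTier(currentTier, measuredFps, tiers) {
  const i = tiers.indexOf(currentTier);
  // Plenty of headroom: step up to the next higher-quality tier.
  if (measuredFps >= 58 && i < tiers.length - 1) return tiers[i + 1];
  // Struggling: fall back to a cheaper tier.
  if (measuredFps < 45 && i > 0) return tiers[i - 1];
  return currentTier;
}

const TIERS = ["dds_min50", "min50", "full"];
```

In the browser this would be driven from a requestAnimationFrame loop that averages frame times over a second or so, so a single slow frame doesn't trigger a texture reload.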
Texture resolution can kill performance when you hit the video memory limit (especially on mobile). For this reason we support loading both compressed textures (for now only the DDS/S3TC format is supported) and halved ("min50") textures. These textures should be converted offline with our resource converter.
The next step would be doing the same thing with the meshes. But with outrageous mesh file sizes this won't be as simple. Please let me know if my following thought process is correct:
Like in your cartoon_interior demo, I could load the scene with low-quality meshes and then raise the poly count manually by loading higher-poly meshes and swapping them in. The problem here is that there would be a delay while the higher-quality models load, especially on slow connections (not to mention the drop in performance). The other option would be to load meshes of every quality level up front, start with the LO_Q one while all the others stay hidden, and call them up by un-hiding them.
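The second option could look roughly like this (a sketch; `visible` stands in for whatever show/hide flag the engine actually exposes):

```javascript
// Rough sketch of option two: load every quality variant up front,
// keep all but one hidden, and switch by toggling visibility.
// `visible` is a stand-in for the engine's real show/hide mechanism.
class QualitySwitcher {
  constructor(variants) { // e.g. { lo_q: meshA, hi_q: meshB }
    this.variants = variants;
    // Start with everything hidden; the caller picks the first one.
    for (const mesh of Object.values(variants)) mesh.visible = false;
  }
  show(name) {
    // Un-hide the requested variant, hide all the others.
    for (const [key, mesh] of Object.entries(this.variants)) {
      mesh.visible = key === name;
    }
  }
}
```

The upside is an instant switch with no network delay; the downside is that all variants occupy memory from the start.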
Both options make sense. Of course, it is not convenient to program such behavior in an application (Three.js-style). For now we can propose using a LOD system, which is quite effective and is even supported in the Blender viewport. This way small distant objects are not rendered at all, while other distant objects are rendered as low-poly with simple textures and materials.
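A minimal sketch of such a selection rule (engine-agnostic; the distances and level names are made up):

```javascript
// Distance-based LOD selection. Each level lists the farthest
// distance at which it applies; returning null means the object is
// far enough to be skipped entirely.
function selectLod(distanceToCamera, levels) {
  for (const level of levels) {
    if (distanceToCamera <= level.maxDist) return level.mesh;
  }
  return null; // beyond the last level: don't render at all
}

const LEVELS = [
  { maxDist: 10, mesh: "hi_poly" },    // full detail up close
  { maxDist: 50, mesh: "lo_poly" },    // simplified geometry
  { maxDist: 120, mesh: "billboard" }, // flat impostor
];
```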
The most efficient way is how it's done in video games, i.e. with tessellation and displacement maps. Unless you have this kind of feature in the works, is there any other way you think this could be done?
Here are some three.js examples:
As a consequence of this approach (WebGL has no hardware tessellation stage, so displacement must be applied to an already-dense mesh), displacement techniques will require loading both high-poly meshes and displacement textures, which is even less effective than loading just high-poly meshes.
In Blend4Web, we mimic tessellation for water rendering - its waves are high-poly close to the camera and low-poly at a distance. The same could be applied to terrain rendering when the surface is displaced with heightmaps (however, we do not support such a technique yet).
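As a sketch, the idea boils down to choosing a patch's grid resolution from its distance to the camera (all constants here are arbitrary, not our actual water settings):

```javascript
// Distance-dependent detail for a displaced surface (water or
// heightmap terrain): pick a grid subdivision count per patch.
// The 64/4 bounds and the 10-unit reference distance are arbitrary.
function subdivisionsForDistance(distance, maxSubdiv = 64, minSubdiv = 4) {
  // Detail falls off roughly in proportion to distance.
  const raw = Math.round(maxSubdiv / Math.max(1, distance / 10));
  return Math.min(maxSubdiv, Math.max(minSubdiv, raw));
}
```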
Sorry for the long post; I hope I articulated myself adequately 😊
We love discussion and original thoughts, so I must thank you for your feedback and suggestions.