tldr: Cryptovoxels runs with about 20x less lag than it did yesterday.
Loading the voxel content used to be really slow:
- Decode voxel to ndarray on main thread
- Pass ndarray to worker twice
- Pass meshing data back to main thread (twice)
- Generate collision mesh on main thread
Now it’s crazy fast:
- Send compressed voxel data to worker
- Decode ndarray on worker and mesh
- Pass opaque, translucent and collision meshes in one message
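For the curious, the round trip can be sketched roughly like this. Everything here is illustrative: the worker file name, the message shape, and the decode/mesh helpers are placeholders rather than the actual Cryptovoxels code.

```typescript
// Raw geometry as sent back from the worker (hypothetical shape)
interface MeshData {
  positions: Float32Array;
  indices: Uint32Array;
}

// Main thread: hand the compressed voxel data straight to the worker.
const voxelWorker = new Worker('voxel-worker.js');

function loadParcel(id: number, compressedVoxels: ArrayBuffer): void {
  // Transfer the buffer instead of structured-cloning it
  voxelWorker.postMessage({ id, compressedVoxels }, [compressedVoxels]);
}

// One message back carries the opaque, translucent and collision geometry together
voxelWorker.onmessage = (e: MessageEvent) => {
  const { opaque, translucent, collision } = e.data as {
    id: number; opaque: MeshData; translucent: MeshData; collision: MeshData;
  };
  // ...build Babylon.js meshes and collision geometry from the raw arrays here...
};

// Worker side (voxel-worker.js), decoding and meshing off the main thread:
//   self.onmessage = (e) => {
//     const voxels = decodeToNdarray(e.data.compressedVoxels); // placeholder decoder
//     const opaque = meshOpaque(voxels);                       // placeholder meshers
//     const translucent = meshTranslucent(voxels);
//     const collision = meshCollision(voxels);
//     // Single postMessage, transferring the vertex buffers for a zero-copy hand-off
//     self.postMessage({ id: e.data.id, opaque, translucent, collision },
//       [opaque.positions.buffer, translucent.positions.buffer, collision.positions.buffer]);
//   };
```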
I also added a Pump: it takes jobs into one big queue and processes them when we have downtime between calls to scene.render(). Previously I had multiple pumps and they all conflicted, causing resource contention (and massive lag). Now there are only two pumps (one for voxel fields and one for everything else), and hopefully soon there will be only one pump.
I also now delete jobs once they've been processed. Keeping processed jobs around was causing massive memory usage, probably crashing iOS devices, and definitely causing issues in Safari.
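To picture the idea, here's a minimal sketch of a pump, with hypothetical names and an arbitrary per-frame time budget (not the actual Cryptovoxels implementation):

```typescript
// One queue of jobs, drained in the downtime between frames,
// with each job removed from the queue as soon as it has run.
type Job = () => void;

class Pump {
  private queue: Job[] = [];

  add(job: Job): void {
    this.queue.push(job);
  }

  // Call once per frame, after scene.render(), with a millisecond budget.
  drain(budgetMs: number): void {
    const start = performance.now();
    while (this.queue.length > 0 && performance.now() - start < budgetMs) {
      const job = this.queue.shift()!; // delete the job so it can't pile up in memory
      job();
    }
  }
}

// Usage with a Babylon.js render loop (the 4 ms budget is arbitrary):
// const pump = new Pump();
// engine.runRenderLoop(() => {
//   scene.render();
//   pump.drain(4); // spend at most ~4 ms per frame on queued work
// });
```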
Loading .vox models was also happening on the main thread and causing mad lag. Now vox models are loaded on a worker, all the geometry fiddling is done on the worker, and the vertex data is passed really efficiently back to the main thread.
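The "passed really efficiently" part just means transferring the typed arrays rather than copying them, then handing them to Babylon.js on the main thread. A rough sketch, assuming a hypothetical message shape (the VertexData calls are standard Babylon.js API):

```typescript
import { Mesh, Scene, VertexData } from '@babylonjs/core';

// Worker side (sketch): after parsing the .vox file and building the geometry,
// transfer the underlying buffers instead of structured-cloning them:
//   self.postMessage({ positions, indices, normals },
//     [positions.buffer, indices.buffer, normals.buffer]);

// Main thread: turn the raw arrays into a Babylon mesh with no re-processing.
function buildVoxMesh(
  scene: Scene,
  data: { positions: Float32Array; indices: Uint32Array; normals: Float32Array }
): Mesh {
  const mesh = new Mesh('voxModel', scene);
  const vertexData = new VertexData();
  vertexData.positions = data.positions;
  vertexData.indices = data.indices;
  vertexData.normals = data.normals;
  vertexData.applyToMesh(mesh);
  return mesh;
}
```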
As a result, Cryptovoxels runs much more smoothly now. Barely any frame drops, even in heavy scenes like Frankfurt. I've got more that I want to do (merge the pumps, transcode .vox files on the server), but this is a great step forward.
I’m calling it Version 2.0.0, and it’s available in world now. 🦑