Point cloud visualizations?

I would like to make efficient visualizations of big point clouds (around a million points), similar to what is done in DSO visualizations, but in a web browser (compatibility with wasm).

I did not find a library for point cloud rendering and am not very familiar with graphics in general. Do you think this is doable? How hard would it be? Where should I start looking?

This is completely doable. Since you are entering new territory (WASM + Rust + GFX rendering), I would look for a rendering library, or write your own using vector calculus that handles matrices of data (point clouds in this case). I've used Potree before (JS), so if you manage to write this in WASM + Rust, it should be faster than some of the other renderers out there. You can do it!


If you can find a no_std rendering library, it would be the most compatible with WASM.

Thanks for the supportive words :wink: I first tried kiss3d, which seemed aligned with my goal of a simple solution and has wasm support. With the help of @sebcrozet, who made an example persistent point cloud renderer, I could render my point cloud during the visual odometry computations. Below is a video showing the result on one sequence.

Unfortunately, performance dropped when compiled to wasm and I am down to roughly 1 fps on my laptop with Chromium at a million points. I'm not sure what the exact reason is. Maybe it is due to some data marshaling at the wasm boundary, maybe it is something else. I think I might have a look at how to render point clouds with gfx / rendy / web-gpu.


PS: let me know if you know how to do that or you are aware of good reads regarding this.

This performance difference is quite surprising. Using Kiss3d, I did some point cloud visualization for 3D streaming in the past, with millions of points (and a pretty complex vertex shader), and it ran at 60 fps (though that will depend on your graphics card, of course).

  • Did you compile in release mode when targeting WASM?
  • Does the framerate go back to 60 fps if you disable point rendering (by making camera_and_renderer_and_effect return None instead of the points renderer)?
  • Did you try another browser (Firefox or Google Chrome)?
  • Are you certain your browser is using your GPU for rendering? You can check this by going to https://webglreport.com/

Yes, all green on WebGL 1 and 2. I have Intel HD 5500 integrated graphics. I've run cargo web start --release on the persistent_point_cloud.rs example with the same setup as the wasm example. I've "manually" timed the loops up to a million points. In native, every 100_000-point chunk takes roughly 3 seconds. In wasm with Firefox and Chromium, the timings are roughly:

0 -> 100_000: 3s
100_000 -> 200_000: 4s
...
900_000 -> 1_000_000: 12s

And when 1_000_000 points is reached, I have around 10 fps in firefox and 2 fps in chromium when rotating.

Yes, here is the perf profile from Firefox. You see the fps start at 30, rapidly drop to 10, and go back to 60 fps when it reaches 1_000_000 points and starts returning None for the renderer in this function.

[screenshot: kiss3d-wasm-perf-firefox]

I just ran the three.js interleaved buffergeometry example, which displays a cube of 500_000 points. I'm getting 25 fps on this example. So I guess I'm doomed XD. What really surprised me is that there is such a big performance drop (one order of magnitude, since the native build runs smoothly at 60 fps up to 2 million points while the web build runs at 10 fps for 1 million points).

I have been exploring a few things, and the best compromise between performance and flexibility so far, in my opinion, has been wasm-bindgen on the Rust side and Three.js on the WebGL side.

Demo, and source for generation of a cube with random points.

Basically, I do the computations in Rust + wasm-bindgen, then share a pointer to an array buffer in the wasm memory with Three.js in order to build a BufferGeometry that will get drawn in one call. Since the point cloud is growing, I've found that limiting the GPU buffer updates with BufferAttribute.updateRange helps a bit.
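For reference, here is a minimal sketch of what the Rust side of this pattern can look like with wasm-bindgen (the PointCloud type and the method names are made up for illustration, not the actual code of the demo): the struct owns a flat f32 buffer and exposes its pointer and length, so JavaScript can build a Float32Array view over wasm memory and hand it to a Three.js BufferAttribute without copying.

    // Hypothetical Rust side of the wasm-bindgen + Three.js pattern described above.
    use wasm_bindgen::prelude::*;

    #[wasm_bindgen]
    pub struct PointCloud {
        // Flat [x0, y0, z0, x1, y1, z1, ...] buffer of point coordinates.
        positions: Vec<f32>,
    }

    #[wasm_bindgen]
    impl PointCloud {
        #[wasm_bindgen(constructor)]
        pub fn new() -> PointCloud {
            PointCloud { positions: Vec::new() }
        }

        /// Append one 3D point to the growing cloud.
        pub fn push(&mut self, x: f32, y: f32, z: f32) {
            self.positions.extend_from_slice(&[x, y, z]);
        }

        /// Pointer into wasm linear memory. On the JS side:
        /// new Float32Array(wasm.memory.buffer, cloud.positions_ptr(), cloud.positions_len())
        pub fn positions_ptr(&self) -> *const f32 {
            self.positions.as_ptr()
        }

        /// Number of f32 values currently stored (3 per point).
        pub fn positions_len(&self) -> usize {
            self.positions.len()
        }
    }

One caveat with a growing buffer: when the Vec reallocates (or the wasm memory grows), the old pointer and the JS view become stale, so the Float32Array view has to be rebuilt before the next upload.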

For an example of it working on real visual odometry tracking, you can try this demo (source). It requires loading a dataset in the tar format. I've been working with the ICL-NUIM dataset. A tar file ready for use with this demo is available here (700 MB, limited to 100 downloads). If you do run this example, beware that you need 1.6 GB of free RAM. Close the tab to recover memory.


I'm trying to achieve something similar, but rendering a point cloud representing the depth of a 3D scene (similar to "Camera to Geometry for Unity - Compute shader" on YouTube). I guess a point cloud following whatever meshes are in the scene might work as well? Is this possible with Kiss3d? I've tried reading into it (and contacted the author of that YouTube video), but it looks like it may involve shader programming, which is definitely beyond me at the moment. I've used Rust quite a bit, but am also pretty new to graphics.

I don't know how to render meshes, sorry. If you can generate a dense point cloud for each frame (roughly one point per pixel), there should be no problem achieving this, I think.

Actually, my points are also initially obtained from depth info in an image, so the same rendering code should work. If you want to do the same as the video you linked, you have to simulate a virtual camera moving around in your model and compute point reprojections.

    /// Collect 3d points of keyframe.
    pub fn points_3d(&self) -> Vec<Point3> {
        let intrinsics = &self.config.intrinsics;
        let extrinsics = &self.state.keyframe_pose;
        let camera = Camera::new(intrinsics.clone(), extrinsics.clone());
        let (coordinates, _z) = &self.state.keyframe_multires_data.usable_candidates_multires[0];
        coordinates
            .iter()
            .zip(_z.iter())
            .map(|(&(x, y), &_z)| camera.back_project(Point2::new(x as f32, y as f32), 1.0 / _z))
            .collect()
    }

In my case, as you can see in the example above, I am back-projecting to 3D world coordinates from a triplet (x, y, _z), where (x, y) are pixel coordinates in my image and _z is the inverse depth of the point. I'm using this camera module (https://github.com/mpizenberg/visual-odometry-rs/blob/wasm/src/core/camera.rs) from a project I'm working on, in case it's useful (the only dependency of this module is nalgebra).

In your case, you only need the extrinsic parameters (translation + rotation matrix) of your camera. Once you have that matrix, you just need to multiply its inverse by every 3D point in the field of view of your camera, like this (https://github.com/mpizenberg/visual-odometry-rs/blob/wasm/src/core/camera.rs#L70), to get its 3D coordinates relative to the camera.
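For illustration, here is a hedged sketch of that world-to-camera transform using nalgebra (the only dependency of the camera module linked above); the function name and the example values are made up and not taken from that module.

    // Transform a world-space point into camera coordinates by applying the
    // inverse of the camera extrinsics (rotation + translation).
    use nalgebra::{Isometry3, Point3, Translation3, UnitQuaternion, Vector3};

    fn world_to_camera(extrinsics: &Isometry3<f32>, point_world: Point3<f32>) -> Point3<f32> {
        extrinsics.inverse() * point_world
    }

    fn main() {
        // Example extrinsics: camera translated 1 unit along x, rotated 90° around z.
        let extrinsics = Isometry3::from_parts(
            Translation3::new(1.0, 0.0, 0.0),
            UnitQuaternion::from_axis_angle(&Vector3::z_axis(), std::f32::consts::FRAC_PI_2),
        );
        let point_world = Point3::new(2.0, 0.0, 0.0);
        println!("camera coordinates: {}", world_to_camera(&extrinsics, point_world));
    }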

If you are actually starting from (x, y, depth) triplets per frame, you also need the camera intrinsics (which contain the focal distances on each axis and the coordinates of the principal point, usually at the center of the image).
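As an illustration of that back projection, assuming a standard pinhole camera model (this is a generic sketch, not the exact code of the camera module above), with fx, fy the focal distances and cx, cy the principal point:

    // Back-project a pixel (u, v) with depth z into 3D camera coordinates,
    // assuming a pinhole camera model without distortion.
    fn back_project(u: f32, v: f32, z: f32, fx: f32, fy: f32, cx: f32, cy: f32) -> (f32, f32, f32) {
        ((u - cx) * z / fx, (v - cy) * z / fy, z)
    }

If you have inverse depth _z instead of depth, as in my code above, just use z = 1.0 / _z.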

For the rendering, then, very similar code should work. No need for an additional shader, in my opinion.
