Helix and Kinect

Mar 23, 2012 at 1:45 PM

I am using Helix to display a point cloud from the Kinect, and it is working very well. I have two questions. First, can I change the color of each point? Second, do you have any registration tools? I want to combine multiple clouds to create a 3D solid. Thanks in advance.

Apr 3, 2012 at 12:34 AM

I have been thinking about creating a similar demo, showing the depth image as a mesh and using the RGB image as the texture map...

Are you using the PointsVisual3D to display the point cloud? It only supports a single color, but it could be extended to support a material containing a 'palette' of colors (you could then set a different color for each point via its texture coordinates). Setting a different material for each point would be too slow.
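A rough sketch of the palette idea (the helper names below are mine, not from HelixToolkit): put the colors in a small bitmap, use it as the material, and have each point pick its color by texture coordinate.

```csharp
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Windows.Media.Media3D;

// Build a DiffuseMaterial whose ImageBrush holds an n-entry color palette;
// each point then selects its color through a texture coordinate.
static Material CreatePaletteMaterial(Color[] palette)
{
    int n = palette.Length;
    var bitmap = new WriteableBitmap(n, 1, 96, 96, PixelFormats.Bgra32, null);
    var pixels = new int[n];
    for (int i = 0; i < n; i++)
        pixels[i] = (palette[i].A << 24) | (palette[i].R << 16)
                  | (palette[i].G << 8) | palette[i].B;
    bitmap.WritePixels(new Int32Rect(0, 0, n, 1), pixels, n * 4, 0);
    return new DiffuseMaterial(new ImageBrush(bitmap));
}

// Texture coordinate selecting palette entry k of n; the same coordinate
// must be added once for every vertex of the point's quad.
static Point PaletteCoordinate(int k, int n) => new Point((k + 0.5) / n, 0.5);
```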

Apr 3, 2012 at 1:26 AM

Yes, I am using PointsVisual3D to display the point cloud, and it is working well. I am very interested in the approach you are suggesting, but I don't know where to start. I am fairly new to 3D, but I am trying to learn from your very good samples. If you decide to create your sample, I will be checking, and if there is something I can help with, please let me know.

If I display a single cloud it works well, but when I try to display all the frames coming from the Kinect it takes over the CPU.

Apr 3, 2012 at 8:54 PM

I think you could create a subclass of PointsVisual3D to get what you need.

Set Model.Material = new DiffuseMaterial(new ImageBrush(image)), where image comes from the Kinect.

Then override the UpdateGeometry method and set Mesh.TextureCoordinates for all points in your cloud (remember to copy each coordinate 6 times, once per vertex of the 2 triangles that make up each point).
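A sketch of what that subclass could look like (this assumes PointsVisual3D exposes a protected Model, Mesh and a virtual UpdateGeometry as described above; GetTextureCoordinate is a hypothetical mapping from point index to image coordinates):

```csharp
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Media3D;
using HelixToolkit.Wpf;

// Points visual that textures each point from a Kinect image instead of
// using a single color.
public class TexturedPointsVisual3D : PointsVisual3D
{
    public TexturedPointsVisual3D(ImageSource kinectImage)
    {
        // One material for the whole cloud; per-point color comes from
        // the texture coordinates, not from per-point materials.
        Model.Material = new DiffuseMaterial(new ImageBrush(kinectImage));
    }

    protected override void UpdateGeometry()
    {
        base.UpdateGeometry();  // builds the quad (2 triangles) per point
        Mesh.TextureCoordinates.Clear();
        for (int i = 0; i < Points.Count; i++)
        {
            Point uv = GetTextureCoordinate(i);  // hypothetical per-point mapping
            for (int j = 0; j < 6; j++)          // 6 vertices = 2 triangles
                Mesh.TextureCoordinates.Add(uv);
        }
    }

    private Point GetTextureCoordinate(int i)
    {
        // Placeholder: map the point index back to its pixel in the image.
        return new Point(0, 0);
    }
}
```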

Are you adding 640x480 = 307,200 points to the PointsVisual3D? That will certainly be very CPU intensive!

Apr 8, 2012 at 12:22 PM

I added a small Kinect example (Examples/Kinect/DepthSensorDemo). All code is in the MainWindow code-behind (it should be easy to refactor to a view-model). The demo reads the depth and color data and creates an image material and a triangular mesh (triangles containing too-far/too-near points are excluded, as are triangles whose depth range exceeds a given limit). The transform from depth data to 3D points can be improved; it would be interesting to get the correct scale (e.g. in meters or millimeters). Do you have a better solution for transforming the depth data to 3D?
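For the depth-to-3D transform, one common approach is a pinhole camera back-projection (a sketch only; the focal length below is an approximate published value for the Kinect v1 depth camera, not something taken from the demo):

```csharp
using System.Windows.Media.Media3D;

// Back-project a depth pixel (u, v) with depth in millimeters to a metric
// 3D point, using a pinhole model.
static Point3D DepthToPoint(int u, int v, double depthMm,
                            double width = 640, double height = 480)
{
    const double focalLengthPx = 585.0;  // approx. Kinect v1 depth focal length (assumption)
    double z = depthMm / 1000.0;         // millimeters -> meters
    double x = (u - width / 2.0) * z / focalLengthPx;
    double y = (v - height / 2.0) * z / focalLengthPx;
    return new Point3D(x, -y, z);        // flip y: image rows grow downward
}
```

This gives points in meters; calibrating the focal length (and principal point) per device would improve the scale further.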

I let the CompositionTarget.Rendering event control how often the depth and color data are updated; I am getting OK refresh rates even in the 640x480 depth mode.
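The rendering-driven update could look roughly like this (a sketch; sensor and UpdateMesh are placeholders, not the demo's actual member names):

```csharp
using System;
using System.Windows.Media;
using Microsoft.Kinect;

// Poll the Kinect once per rendered frame; skip the update when no new
// frame is available, so the UI thread is not flooded with mesh rebuilds.
CompositionTarget.Rendering += (sender, e) =>
{
    using (DepthImageFrame depthFrame = sensor.DepthStream.OpenNextFrame(0))
    using (ColorImageFrame colorFrame = sensor.ColorStream.OpenNextFrame(0))
    {
        if (depthFrame == null || colorFrame == null)
            return;  // no new data this frame
        UpdateMesh(depthFrame, colorFrame);  // placeholder for the demo's update
    }
};
```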

Nov 6, 2012 at 4:52 AM
jmontoya wrote:

Yes, I am using PointsVisual3D to display the point cloud, and it is working well. I am very interested in the approach you are suggesting, but I don't know where to start. I am fairly new to 3D, but I am trying to learn from your very good samples. If you decide to create your sample, I will be checking, and if there is something I can help with, please let me know.

If I display a single cloud it works well, but when I try to display all the frames coming from the Kinect it takes over the CPU.

 

Hi,

Can you drop in some sample code showing how to display the point cloud using PointsVisual3D, please? I am trying to display it using MeshBuilder.AddSphere; however, that freezes the UI. I have the points as x, y, z coordinates and just want to display them in a HelixViewport3D. How can I do it with PointsVisual3D? Please advise.
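For reference, the basic PointsVisual3D usage being asked about typically looks something like this (a minimal sketch; cloud and viewport stand in for your own data and HelixViewport3D instance):

```csharp
using System.Windows.Media;
using System.Windows.Media.Media3D;
using HelixToolkit.Wpf;

// One visual for the whole cloud: far cheaper than one sphere per point.
var points = new PointsVisual3D
{
    Color = Colors.Red,
    Size = 2  // point size in pixels
};
foreach (Point3D p in cloud)        // cloud: your x, y, z data
    points.Points.Add(p);
viewport.Children.Add(points);      // viewport: a HelixViewport3D
```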

Thanks