overlaying 2D controls on 3D (OverlayDemo) - where to transform and update coordinates

Aug 29, 2012 at 7:41 AM
Edited Aug 29, 2012 at 11:05 PM


I've created a MATLAB-style 3D plot with a lot of help from Helix Toolkit.

For axis labels, grid lines, tick marks, etc., I used WPF 2D geometry and text objects, and transformed the 3D points to 2D in a similar way to the OverlayDemo. (These lines look cleaner than screen-space 3D lines, especially since I can apply SnapsToDevicePixels.)

In the OverlayDemo, the transformation of coordinates is performed in the window's CompositionTarget.Rendering event. In my case I have a serious amount of layout logic and coordinates to transform, and having all of this called every frame wastes CPU. I'd like to update the coordinates only when something changes: the camera moves, the host layout control resizes, etc.

So that's cool: I can update coordinates in the CameraChanged event handler (and a few other places). That works with normal mouse interaction (rotation, zooming, etc.). But when I set the camera view another way (e.g. with HelixViewport3D.SetView()), the 3D-to-2D transformation blows up and my 2D coordinates end up as Infinity and NaN. I'm unsure, but I think what happens is that one aspect of the camera is changed (e.g. LookDirection) and this fires the CameraChanged event; since coordinates are transformed inside that event handler, the code executes while the transformation matrix isn't fully formed yet (it still needs Position and UpDirection to be set?). Specifically, the offset part of the matrix seems to be wrong. Either way, this yields incorrect values and the program fails.
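Roughly what my handler does (a sketch; GetTotalTransform is Helix Toolkit's Viewport3DHelper extension, while anchorPoints and PlaceLabel are my own names):

```csharp
private void OnCameraChanged(object sender, RoutedEventArgs e)
{
    // Total transform: world space -> screen space (Helix Toolkit extension method).
    Matrix3D transform = viewport.Viewport.GetTotalTransform();

    foreach (Point3D point3D in anchorPoints) // my list of 3D anchor positions
    {
        Point3D p = transform.Transform(point3D);
        // After HelixViewport3D.SetView(), p.X / p.Y come back here as
        // NaN or Infinity, presumably because only part of the camera
        // has been updated when the event fires.
        PlaceLabel(p.X, p.Y);
    }
}
```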

(1) Is there a more sensible place to transform 3D to 2D coordinates?

(2) Why doesn't this fail with normal mouse interaction? Are all camera properties set before the CameraChanged event is fired?

(3) Is there some way to keep this logic in CompositionTarget.Rendering but unhook the event handler when nothing is happening and hook it back up when something changes? What should the hooking depend on? Relying on CameraChanged would still cause the transformation to blow up.
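To illustrate question (3), this is roughly what I mean by hooking/unhooking (a sketch; the field and handler names are my own, and what triggers HookRendering is exactly what I'm unsure about):

```csharp
private bool renderingHooked;

private void HookRendering()
{
    if (!renderingHooked)
    {
        // Start receiving per-frame callbacks again.
        CompositionTarget.Rendering += OnCompositionTargetRendering;
        renderingHooked = true;
    }
}

private void UnhookRendering()
{
    if (renderingHooked)
    {
        // Stop per-frame work while nothing is changing.
        CompositionTarget.Rendering -= OnCompositionTargetRendering;
        renderingHooked = false;
    }
}
```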

Any ideas would be very much appreciated. I'm totally new at this stuff, so please bear with me.


Sep 2, 2012 at 9:02 PM

Sorry, I am not sure what the problem is caused by here. Can you create a small example application that reproduces this behaviour?

Sep 3, 2012 at 5:05 AM

Hi, thanks for your response. I've come up with a solution to get me by for now. I don't know where it falls on the scale of clunky, but time constraints keep me from too much elegance:

In my CompositionTarget.Rendering handler, I check whether the camera has changed (LookDirection, Position and UpDirection) or its parent panel has resized before allowing any coordinates to update. When nothing has changed, the handler does almost nothing, so CPU usage is negligible.

I thought I would need to ensure that LookDirection, Position and UpDirection had all changed together before allowing coordinates to update, so that the transform matrix (Viewport3DHelper.GetTotalTransform()) would be fully formed before any coordinates are transformed. However, this doesn't seem to matter when the update happens inside CompositionTarget.Rendering: checking that just one aspect of the camera has changed (e.g. LookDirection) is enough. I don't know why, but I'm not too concerned about it.
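The dirty check looks roughly like this (a sketch; field names and UpdateOverlayCoordinates are my own, and I'm assuming the viewport's camera is a ProjectionCamera):

```csharp
private Point3D lastPosition;
private Vector3D lastLookDirection;
private Vector3D lastUpDirection;
private Size lastViewportSize;

private void OnCompositionTargetRendering(object sender, EventArgs e)
{
    var camera = (ProjectionCamera)viewport.Camera;

    // Only recompute the 2D overlay when the camera or the layout has changed.
    bool changed =
        camera.Position != lastPosition ||
        camera.LookDirection != lastLookDirection ||
        camera.UpDirection != lastUpDirection ||
        viewport.RenderSize != lastViewportSize;

    if (!changed)
    {
        return; // nothing to do this frame; cost is negligible
    }

    lastPosition = camera.Position;
    lastLookDirection = camera.LookDirection;
    lastUpDirection = camera.UpDirection;
    lastViewportSize = viewport.RenderSize;

    UpdateOverlayCoordinates(); // runs the 3D-to-2D layout logic
}
```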

I hope this makes some amount of sense - it's probably not relevant to most users of WPF 3D anyway...