Hi @jieying, sorry for the late reply.
I’m on vacation at the moment, with little to no access to my code.
I’ll be sure to send a snippet once I’m back, by next week.
Best regards,
Omar
@JieYing, SOFA doesn’t have a component that does exactly that (yet?).
Concerning the intrinsic parameters, they are camera-specific, so I’d expect them to be the same as for your RealSense (in your case the SR300).
Extrinsics can be handled with SOFA’s TransformEngine. It binds to a mechanical object’s positions and applies scaling/rotation/translation to them on demand.
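For illustration, a minimal scene fragment might look like the following. This is an untested sketch: the `loader` reference, the component names, and the numeric values are placeholders, not something taken from an actual tracking scene.

```xml
<!-- Sketch: apply fixed extrinsics to loaded point positions.
     "loader" is a hypothetical component exposing a "position" Data. -->
<Node name="PointCloud">
    <TransformEngine name="extrinsics" template="Vec3d"
                     input_position="@loader.position"
                     translation="0 0 0.5"
                     rotation="0 90 0"
                     scale="1 1 1"/>
    <MechanicalObject name="cloud" template="Vec3d"
                      position="@extrinsics.output_position"/>
</Node>
```

The idea is that TransformEngine consumes `input_position` and exposes the transformed points as `output_position`, which the mechanical object then uses.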
Alternatively, you can encode the extrinsics in a rotation/translation matrix and compute your point cloud’s new position yourself when reprojecting from 2D to 3D.
I’ll make sure to share a code snippet later if you want.
Hope this helps.
Best regards
Omar
Hi
I currently work on interfacing realsense cameras with SOFA simulation, especially in the context of liver deformation tracking.
I’m not sure I understand your question. Do you want to reconstruct a point cloud from RGBD frames offline inside SOFA?
That should be fairly easy to do. Mainly, you would save the camera’s intrinsics along with the RGB and depth streams to a video file. Then, depending on the camera model you’re using, the 2D-to-3D reprojection implementations tend to differ slightly.
Since I work exclusively with Intel’s RealSense cameras, I can only vouch for their C++ framework, which I find top notch; I can’t speak for other RGBD camera manufacturers.
WARNING
The forum has been moved to GitHub Discussions.
Old topics and replies are kept here as an archive.