[SOLVED] Saving images and ground truth using SOFA
Tagged: 64_bits, ground-truth, Linux_ubuntu, screenshot, SOFA_1608
This topic has 4 replies, 2 voices, and was last updated 7 years, 6 months ago by Salman.
24 April 2017 at 08:37 #8990 · Salman
Hello
I am a graduate student in Robotics from Pakistan, and my lab works on surgical robotics and computer vision. We have some abdominal scenes, developed by seniors at our university, which I need to work with. The ultimate aim is to save a screenshot of the scene at the current time instant, together with a corresponding screenshot that depicts every organ/node in a different single color: e.g., the liver in red, the gall bladder in purple, the fatty deposits in yellow, the tissues in blue, etc.
Currently, we are able to save the displayed scene using Shift + V. Additionally, we are able to extract the X, Y, and Z coordinates of each node of every object and write them to a file. The screenshots and the position data are probably not synchronized, though. Even so, we are not sure how we could translate the X, Y, Z positions to pixel coordinates and color those pixels accordingly.
I would appreciate any help on an easier way (or any way) to accomplish this, since it would greatly accelerate our data collection efforts.
Regards
Salman Maqbool
24 April 2017 at 21:23 #8993 · Hugo
Dear Salman,
Thank you for your interest in SOFA and welcome to our open-source community!
About exporting the position of your object(s) during the simulation, you can use:
- a mesh exporter (like the VTKExporter): with this exporter in the scene, you will be able to save a VTK file at each step of your simulation and replay it in ParaView. Please find an example in examples/Components/misc/VTKExporter.scn (there is also a small Python sketch below)
- a state exporter: this is a SOFA export format that saves the positions, much as you already do
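For the first option, something along these lines from SofaPython should be close. This is only a rough sketch: the node name, file prefix, and the filename / exportEveryNumberOfSteps / listening data names are taken from the VTKExporter example, so double-check them against your SOFA version.

    # Rough SofaPython sketch: attach a VTKExporter to a node so that its
    # positions are written to .vtk files during the simulation.
    def createScene(rootNode):
        liver = rootNode.createChild('Liver')
        liver.createObject('MechanicalObject', name='dofs')
        # 'liver_export' is a hypothetical file prefix; one file per export step
        liver.createObject('VTKExporter', filename='liver_export',
                           exportEveryNumberOfSteps='1', listening='1')
        return rootNode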
About saving screenshots more efficiently in SOFA, I do not have experience with this. Let's see if anyone in the community has done it before.
Finally, this recently published paper focuses on a similar challenge and could be of interest to you:
Segmentation and Labelling of Intra-operative Laparoscopic Images using Structure from Point Cloud
Hope this helps,
Hugo
25 April 2017 at 19:07 #8999 · Salman
Dear Hugo
Thank you for the kind response. We are actually interested in acquiring 2D images of the scene and the corresponding ground truth in JPEG format, which is why the VTKExporter is not really useful in our case. We have made some progress on this using SofaPython: we set the view manually with the mouse, take a screenshot, and then, in the onKeyPressed method, change the colors of all the objects/organs present in the scene.
We have managed to do this; however, each organ has an associated texture file, which we also need to disable, because otherwise we don't get the colors we set. We are able to change the texturename data from the onKeyPressed method as well, and the change is reflected in the runSofa Graph tab, but there is no change in the visual appearance, i.e., the texture still remains. Even when we manually change the texture in the runSofa interface, the data gets updated but the rendered scene stays the same. Is there any way we can overcome this limitation?
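For reference, here is a stripped-down sketch of what we are doing; the node and object names ('Liver', 'VisualModel') are placeholders for our actual scene.

    # Stripped-down SofaPython controller sketch (names are placeholders).
    # Pressing 'C' clears the texturename data of one OglModel; the Graph tab
    # shows the change, but the rendered texture stays the same for us.
    import Sofa

    class GroundTruthController(Sofa.PythonScriptController):
        def createGraph(self, rootNode):
            self.rootNode = rootNode

        def onKeyPressed(self, key):
            if key == 'C':
                visual = self.rootNode.getChild('Liver').getObject('VisualModel')
                visual.findData('texturename').value = ''
            return 0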
Also, thanks a lot for sharing the paper. It is interesting and quite relevant as well.
Thank you
Salman
26 April 2017 at 19:23 #9005 · Hugo
Hi Salman,
Did you try using two visual models, like this:
    <Node name="PhysicalObject" >
        ...
        <MechanicalObject/>
        ...
        <Node name="visuTexture" activate="1">
            <OglModel texturename="mytexture.jpg" />
            <IdentityMapping/>
        </Node>
        <Node name="visuUnifiedColor" activate="1">
            <OglModel color="red" />
            <IdentityMapping/>
        </Node>
    </Node>
And still using Python, you could then activate or de-activate the visualization nodes as you want (using the activate data in Node).
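As a rough sketch, something like the following; whether activation is exposed as a setActive() call or as an activation data field depends on your SofaPython version, so treat the exact call below as an assumption to verify.

    # Rough SofaPython sketch: switch between the textured and the
    # flat-colored visual node from the keyboard.
    import Sofa

    class VisualSwitch(Sofa.PythonScriptController):
        def createGraph(self, rootNode):
            physical = rootNode.getChild('PhysicalObject')
            self.texturedNode = physical.getChild('visuTexture')
            self.coloredNode = physical.getChild('visuUnifiedColor')

        def onKeyPressed(self, key):
            if key == 'G':      # hypothetical key for the ground-truth view
                self.texturedNode.setActive(False)  # setActive is an assumption
                self.coloredNode.setActive(True)
            elif key == 'T':    # back to the textured view
                self.texturedNode.setActive(True)
                self.coloredNode.setActive(False)
            return 0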
Let me know if it fixes your problem,
Hugo
27 April 2017 at 16:50 #9017 · Salman
Dear Hugo
Thank you so much for the reply. I tried this in the Python script and it works perfectly. I created a copy of the node and added all the OglModels to it, except without the textures and with the desired colors instead. I programmed it so that the copy node (the one without the textures) gets deactivated when a particular key is pressed, and activated when another key is pressed. That works just fine.
Thank you so much. 🙂
Salman