Recording images of scenes
27 August 2019 at 00:52 #14161 | JieYing
Hi all,
I would like to compare the behaviour of a simulated phantom and a real one. As such, I captured RGBD images of interactions using the real camera and was wondering if I can place a virtual depth camera in the simulation. It seems I can set the viewpoint using the OpenGL viewer, and SOFA's GUI allows recording screenshots. Is it possible to capture screenshots from a Python script? Is there an existing method to estimate the depth map of the scene from a particular viewpoint?
Thanks,
Jie Ying

2 September 2019 at 16:55 #14175 | Hugo

Hi @jieying,
Thank you very much for this interesting question!
Coupling the fields of simulation and computer vision has been investigated and is still an active topic! I will ask the developers working in this area to get you an answer and perhaps access to their code. With their work, it should be possible to include in a SOFA simulation the point cloud coming from the RGBD camera and then make the comparison. Would this be what you need?

I have never captured a screenshot from Python, but if you mimic the keyboard action ALT+C it should work.
I’ll get back to you.
Hugo

3 September 2019 at 04:19 #14180 | JieYing

Hi Hugo,
Thanks for the info! Yes, that sounds exactly like what I need. I'm currently using the MeshExporter as a workaround, but it would be interesting if SOFA could simulate the point cloud from a particular camera position.
Thanks!
Jie Ying

5 September 2019 at 16:57 #14215 | Hugo

Hi @jieying,
The developer of this plugin is @Omar from the Mimesis team. Omar, could you explain a bit what you are currently doing with your RGBD camera?
Hugo
PS: for people simply looking to get screenshots/videos of the simulation (regarding the title of this forum topic), you can use the VideoRecorder (Edit -> Video Recorder Manager). To activate it, press "V" during the simulation.
6 September 2019 at 09:27 #14219 | omar_bo55

Hi,
I currently work on interfacing Realsense cameras with SOFA simulations, especially in the context of liver deformation tracking.
I'm not sure I understand your question. Do you want to reconstruct a point cloud from RGBD frames offline inside SOFA?
That should be fairly easy to do. Mainly, one should save the camera's intrinsics and the RGB and depth streams in a video file. Then, depending on the camera model you're using, 2D-3D reprojection implementations tend to differ slightly.

Working exclusively with Intel's Realsense cameras, I can only vouch for their C++ framework, which I find top notch. I can't speak for other RGBD camera manufacturers.
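For illustration, the 2D-3D reprojection above follows the standard pinhole model; here is a minimal NumPy sketch (fx, fy, cx, cy and the depth image are placeholders for your camera's actual intrinsics and frames):

import numpy as np

def deproject_depth(depth, fx, fy, cx, cy):
    """Back-project a depth image (H x W, in meters) into an N x 3 point
    cloud using the pinhole model. fx, fy, cx, cy are the camera intrinsics
    (e.g. read from the Realsense SDK for an SR300)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading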
6 September 2019 at 20:41 #14221 | JieYing

Thanks, Hugo, for the introduction.
Hi Omar,
I'm also working with an Intel Realsense camera (SR300). I'm trying to check the validity of my simulation by comparing the output of the depth camera with what the simulation shows. Currently, I'm reading out the positions of my mesh and comparing them to the point cloud. This is fine, but I was wondering, since SOFA does have all the information, whether it's possible to stream out RGBD frames from a particular camera viewpoint (given the camera's intrinsic and extrinsic properties).
Jie Ying
10 September 2019 at 10:28 #14222 | Hugo

I will let @omar guide you; he'll have more accurate and up-to-date replies regarding his current work.
Best,
Hugo
10 September 2019 at 11:00 #14224 | omar_bo55

@JieYing, SOFA doesn't have a component that does exactly that (yet?).
Concerning the intrinsic parameters, they are camera specific, so I guess they'll be the same as your Realsense's (in your case the SR300).
Extrinsics can be handled with SOFA's TransformEngine. It binds to a mechanical object and applies scaling/rotation/translation to it on demand.
Or you can also just map them into a rotation/translation matrix and compute your point cloud's new position when reprojecting from 2D to 3D.

I'll make sure to share a code snippet later if you want.
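In the meantime, the matrix route is straightforward in NumPy (a sketch only; R and t stand for your calibrated rotation and translation):

import numpy as np

def apply_extrinsics(points, R, t):
    """Transform an N x 3 point cloud by a 3x3 rotation R and a translation
    vector t (the extrinsics). The same values could also feed a TransformEngine."""
    return points @ R.T + np.asarray(t)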
Hope this helps.

Best regards,
Omar

18 September 2019 at 20:37 #14267 | JieYing

Thanks for the info. A code snippet would be useful if it's not too much trouble.
Jie Ying
25 September 2019 at 11:20 #14280

2 October 2019 at 09:48 #14336 | omar_bo55

Hi @jieying, sorry for the late answer.
I'm on vacation at the moment, with little to no access to my code.
I'll be sure to send a snippet once I'm back, by next week.

Best regards,
Omar
30 January 2020 at 17:37 #15173 | Rishabh

Hello @omar,
Good to see this thread. I am working on a similar robotics problem where I need the RGBD information from a virtual camera inside the simulation (possibly frame by frame while the simulation is running). Is this possible? And can I change the viewpoint as well?
Thanks,
Rishabh

4 February 2020 at 18:45 #15176 | balazs

I am very happy to have found this thread. I am trying to train machine learning models to assist doctors during surgery, and we are using SOFA to simulate the environment (i.e. the patient's body and the organs inside it).
To do this, I would like to access the rendered simulation view from within Python3. According to my understanding, this is not currently possible with the existing bindings. Given this, I want to add my own pybind11 bindings to expose the scene in Python, but I am unsure which object/method in the SOFA framework contains this information (I have little experience in C++, but I am willing to get my hands dirty). Any advice is much appreciated.
Thanks,
Balazs

7 February 2020 at 19:13 #15192 | Hugo

Hey @balazs,
Welcome to the SOFA forum!
Machine learning trained with simulations, how trendy! Interesting topic!

A plugin already exists allowing you to use SOFA and interact with the simulation within a Python3 environment: the SofaPython3 plugin. In a Python script, you can design, run and interact with a simulation and its parameters (and much more!).
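Roughly, a minimal script with the plugin looks like this (a sketch only; plugin names and data access may vary with your SOFA version):

import Sofa
import Sofa.Core
import Sofa.Simulation
import SofaRuntime

# make the common components available to the factory
SofaRuntime.importPlugin("SofaComponentAll")

root = Sofa.Core.Node("root")
# ... build your scene under root here ...

Sofa.Simulation.init(root)
for step in range(100):
    # advance the simulation by one time step
    Sofa.Simulation.animate(root, root.dt.value)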
Is this what you are looking for?
What do you mean exactly by "access the rendered simulation view from within Python3"?

Best wishes,
Hugo
5 March 2020 at 10:59 #15287 | Damien Marchal

Hi all,
At DEFROST, for grabbing the screen we are using SofaPython3 and pygame. The rendering of SOFA is done in an OpenGL canvas using pygame. As we have full control of the simulation and rendering loop, we can grab individual frames and plug them into a machine learning algorithm.
I don't have a lot of time to provide code for that, but I think Pierre Sheggs may have some, so I'll poke him.
Damien.
9 March 2020 at 18:45 #15381 | balazs

Hello @hugo and @damien-marchaluniv-lille1-fr,
@hugo, sorry for the late reply. We have indeed found the SofaPython3 plugin, but it does not quite meet our needs (it seems to be in development). Essentially, I would like to have a Python variable that contains an image of what I would see if the SOFA GUI were open. As far as I can tell, this functionality does not exist, although I can take a screenshot of the simulation and save it to a file. This would work but would be extremely slow.
@damien-marchaluniv-lille1-fr, thanks for your reply as well. We have made some progress since my last message, but we are not C++ experts and our current solution is not very performant. We have copied and repurposed the HeadlessGUI to allow access to the rendered simulation view, and have defined a pybind buffer to allow reading the image from Python. I would be very glad for any code from Pierre!

Balazs
17 March 2020 at 17:54 #15421 | Hugo

Hey @balazs,
SofaPython3 is no longer in a transient phase; it is stable. Obviously development on the project continues, but that is only a sign of good health! Not all plugins are yet compatible with SofaPython3, but this will come soon.
You would like to compare, at each time step, an image with the corresponding point of view in the simulation. Is this correct?
Hugo
1 April 2020 at 11:39 #15619 | balazs

Hey @hugo,
Yes, that is correct. Do you know which objects would be the most logical way to access these values? Right now we are using an OpenGL method called glReadPixels (https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glReadPixels.xhtml), but maybe there is a better way. Feel free to send me any resources that might help us along our way.
Thanks,
Balazs

9 April 2020 at 16:01 #15682 | Damien Marchal

Hi all,
A few months ago I was taking screenshots using Python3. For that I used pygame to open a window with an OpenGL context, then the SofaPython3 functions to build and control the simulation and trigger the SOFA rendering into this OpenGL context.
It looked more or less like this:
import pygame
import Sofa
import Sofa.Simulation
from OpenGL.GL import *
from OpenGL.GLU import *

# 'scene' and 'camera' are assumed to be created elsewhere
# (the scene graph root and its camera component)

def doRendering(display, t):
    glMatrixMode(GL_PROJECTION)
    glLoadIdentity()
    gluPerspective(45, (display[0] / display[1]), 0.1, 50.0)
    glMatrixMode(GL_MODELVIEW)
    glLoadIdentity()
    cameraMVM = camera.getOpenGLModelViewMatrix()
    glMultMatrixd(cameraMVM)
    Sofa.Simulation.draw(scene)

pygame.display.init()
display = (800, 600)
pygame.display.set_mode(display, pygame.DOUBLEBUF | pygame.OPENGL)
time = 0.0
while True:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            pygame.quit()
            quit()
    Sofa.Simulation.glewInit()
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    glEnable(GL_LIGHTING)
    glEnable(GL_DEPTH_TEST)
    doRendering(display, time)
    # here you can take a screenshot, e.g. with pygame:
    # https://stackoverflow.com/questions/17267395/how-to-take-screenshot-of-certain-part-of-screen-in-pygame
    makeAScreenShot()
    time += 0.01
    pygame.display.flip()
I have no time right now to test the code, so please don't consider it a working example, but more a source of inspiration.
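Note that makeAScreenShot is left undefined above; a hypothetical implementation reading the OpenGL color buffer directly (pygame's surface capture does not see the GL back buffer) might look like:

import numpy as np
from OpenGL.GL import glReadPixels, GL_RGB, GL_UNSIGNED_BYTE
from PIL import Image

def makeAScreenShot(width=800, height=600, filename="frame.png"):
    # hypothetical helper: read back the current OpenGL color buffer
    # and save it to disk; assumes the pygame/OpenGL context is current
    buff = glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE)
    image = np.frombuffer(buff, dtype=np.uint8).reshape(height, width, 3)
    # OpenGL's origin is bottom-left, so flip vertically before saving
    Image.fromarray(np.flipud(image), "RGB").save(filename)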
20 April 2020 at 13:01 #15852 | PierreShg

Hi everyone,
Sorry for the late reply, my code base was a mess and we had to fix an issue between OpenGL and SofaPython3.
I can provide you with a simple rendering function I use.
The code below will create a pygame display, fetch the OpenGL context and render it on the display. You can use the commented-out code and the PIL Image library to export the image to a file, for example.

I use this code while running the script with python3, not with runSofa, but I don't see why you couldn't put it in a SOFA Controller.
I don’t know if that is what you were looking for. If not, please reply to this thread.
Also, if you enhance the code, please share it; OpenGL code is very tricky and time-consuming to build and debug.
def simple_render(rootNode):
    """
    Get the OpenGL context to render an image (snapshot) of the simulation state.
    """
    pygame.display.init()
    display_size = (800, 600)
    pygame.display.set_mode(display_size, pygame.DOUBLEBUF | pygame.OPENGL)

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    glEnable(GL_LIGHTING)
    glEnable(GL_DEPTH_TEST)
    Sofa.Simulation.glewInit()

    glMatrixMode(GL_PROJECTION)
    glLoadIdentity()
    gluPerspective(45, (display_size[0] / display_size[1]), 0.1, 50.0)

    glMatrixMode(GL_MODELVIEW)
    glLoadIdentity()
    cameraMVM = rootNode.camera.getOpenGLModelViewMatrix()
    glMultMatrixd(cameraMVM)

    Sofa.Simulation.draw(rootNode)

    _, _, width, height = glGetIntegerv(GL_VIEWPORT)
    buff = glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE)
    # np.frombuffer replaces the deprecated np.fromstring
    image_array = np.frombuffer(buff, dtype=np.uint8)
    if image_array.size != 0:
        image = image_array.reshape(display_size[1], display_size[0], 3)
    else:
        image = np.zeros((display_size[1], display_size[0], 3))
    # OpenGL's origin is bottom-left, so flip vertically
    image = np.flipud(image)

    ### Debug
    # from PIL import Image
    # img = Image.fromarray(image, 'RGB')
    # img.show()

    return image
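As a follow-up for the original depth-camera question: the same glReadPixels call can also read the depth buffer, turning this into a simple RGBD grab. A sketch of the extra lines (assuming the gluPerspective setup above, i.e. zNear=0.1 and zFar=50.0):

# after Sofa.Simulation.draw(rootNode):
depth_buff = glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT)
depth = np.flipud(np.frombuffer(depth_buff, dtype=np.float32).reshape(height, width))

# depth-buffer values are non-linear in [0, 1]; convert them to metric
# eye-space distance for a standard perspective projection:
zNear, zFar = 0.1, 50.0
ndc = 2.0 * depth - 1.0
depth_metric = (2.0 * zNear * zFar) / (zFar + zNear - ndc * (zFar - zNear))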
20 April 2020 at 14:14 #15865 | Hugo

That's great, thanks @pierreshg for the input!
21 October 2020 at 03:56 #17431 | Arthur

Hi @Hugo,
How would one do this in Python, say within an animate function:
"I have never captured a screenshot from Python, but if you mimic the keyboard action ALT+C it should work."
I would like to record video using v20.06 (with Python2 scripts). And if I can mimic keyboard input, it should be as easy as mimicking a "v", so this sounds ideal.
Best,
Arthur

EDIT:
Working with SOFA v20.12 and SofaPython3, I found a solution. (Currently the pygame example code isn't compatible, but this worked for me.)
# encoding: utf-8
# !/usr/bin/python3

import Sofa
import SofaRuntime
import Sofa.Gui


class scene_interface:
    """Scene_interface provides step and reset methods"""

    def __init__(self, dt=0.01, max_steps=300):
        self.dt = dt
        # max_steps: how long the simulator should run. Total length: dt * max_steps
        self.max_steps = max_steps
        # root node in the simulator
        self.root = None
        # the current step in the simulation
        self.current_step = 0

        # Register all the common components in the factory.
        SofaRuntime.importPlugin('SofaOpenglVisual')
        SofaRuntime.importPlugin("SofaComponentAll")

        self.root = Sofa.Core.Node("myroot")

        ### create some objects to observe
        self.place_objects_in_scene(self.root)

        # place a light and a camera
        self.root.addObject("LightManager")
        self.root.addObject("SpotLight", position=[0, 10, 0], direction=[0, -1, 0])
        self.root.addObject("InteractiveCamera", name="camera", position=[0, 10, 0],
                            lookAt=[0, 0, 0], distance=37,
                            fieldOfView=45, zNear=0.63, zFar=55.69)

        # start the simulator
        Sofa.Simulation.init(self.root)
        # start the gui
        Sofa.Gui.GUIManager.Init("Recorded_Episode", "qt")
        Sofa.Gui.GUIManager.createGUI(self.root, __file__)

    def place_objects_in_scene(self, root):
        ### these are just some things that stay still and move around
        # so you know the animation is actually happening
        root.gravity = [0, -1., 0]
        root.addObject("VisualStyle", displayFlags="showWireframe showBehaviorModels showAll")
        root.addObject("MeshGmshLoader", name="meshLoaderCoarse", filename="mesh/liver.msh")
        root.addObject("MeshObjLoader", name="meshLoaderFine", filename="mesh/liver-smooth.obj")
        root.addObject("EulerImplicitSolver")
        root.addObject("CGLinearSolver", iterations="200", tolerance="1e-09", threshold="1e-09")

        liver = root.addChild("liver")
        liver.addObject("TetrahedronSetTopologyContainer", name="topo", src="@../meshLoaderCoarse")
        liver.addObject("TetrahedronSetGeometryAlgorithms", template="Vec3d", name="GeomAlgo")
        liver.addObject("MechanicalObject", template="Vec3d", name="MechanicalModel",
                        showObject="1", showObjectScale="3")
        liver.addObject("TetrahedronFEMForceField", name="fem", youngModulus="1000",
                        poissonRatio="0.4", method="large")
        liver.addObject("MeshMatrixMass", massDensity="1")
        liver.addObject("FixedConstraint", indices="2 3 50")

    def step(self):
        # step through time: this advances the simulation by one dt
        Sofa.Simulation.animate(self.root, self.dt)
        # just to keep track of where we are
        self.current_step += 1
        ### A better example would also show how to read and edit values through scripts,
        # which would likely be useful if you are running without a normal gui.
        # return True if done
        return self.current_step >= self.max_steps

    # save a screenshot from the position of where we set the camera above
    def record_frame(self, filename):
        Sofa.Gui.GUIManager.SaveScreenshot(filename)


def main():
    a = scene_interface()
    done = False
    while not done:
        factor = a.current_step
        done = a.step()
        a.record_frame(str(factor) + ".png")


if __name__ == '__main__':
    main()
For reference, I built with
SofaPython3 commit: 184206f126acf0c5d45416fc23cb37baf1971fa5
and SOFA commit: 184206f126acf0c5d45416fc23cb37baf1971fa5

30 October 2020 at 14:47 #17493 | Hugo

Hi @amackeith,
Would you like to take images at each time step?
Or would you like your Python script to save an image only at very specific moments?

I doubt that with Python 2.7 you will be able to trigger keyboard events; I never tried it myself. Could you give it a try?
The usual "v" key press does not suit your need, I guess?
Hugo
18 November 2020 at 00:40 #17709 | trannguyenle

Hi all,
I just started with SOFA 2 days ago, so I am totally new to this. I followed some tutorials to get to know SOFA; so far so good. My research is going to incorporate both vision and haptics for robotics applications with deformable objects.
I had a look at the discussion above and noted that @omar is working on interfacing Realsense cameras with SOFA simulations, especially in the context of liver deformation tracking. I want to do the same thing, but for tracking the deformation of other objects. Do you have some code or a tutorial to get on board with this? It would be really nice to have a starting point regarding this matter.
18 November 2020 at 08:22 #17713 | Hugo