Forum Replies Created
PierreShg
Hi everyone
Sorry for the late reply, my code base was a mess and we had to fix an issue between OpenGL and SofaPython3.
I can provide you with a simple rendering function I use.
The code below will create a pygame display, fetch the OpenGL context and render the simulation to the display. You can use the commented code and the PIL Image library to export the image to a file, for example. I use this code while running with python3, not with runSofa, but I don’t see why you couldn’t put it in a Sofa Controller.
I don’t know if that is what you were looking for. If not, please reply to this thread.
Also, if you enhance the code, please share it; OpenGL code is very tricky and time-consuming to build and debug.
import numpy as np
import pygame
import Sofa
from OpenGL.GL import *
from OpenGL.GLU import gluPerspective

def simple_render(rootNode):
    """Get the OpenGL context to render an image (snapshot) of the simulation state."""
    pygame.display.init()
    display_size = (800, 600)
    pygame.display.set_mode(display_size, pygame.DOUBLEBUF | pygame.OPENGL)

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    glEnable(GL_LIGHTING)
    glEnable(GL_DEPTH_TEST)
    Sofa.Simulation.glewInit()

    glMatrixMode(GL_PROJECTION)
    glLoadIdentity()
    gluPerspective(45, (display_size[0] / display_size[1]), 0.1, 50.0)

    glMatrixMode(GL_MODELVIEW)
    glLoadIdentity()
    cameraMVM = rootNode.camera.getOpenGLModelViewMatrix()
    glMultMatrixd(cameraMVM)

    Sofa.Simulation.draw(rootNode)

    _, _, width, height = glGetIntegerv(GL_VIEWPORT)
    buff = glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE)
    image_array = np.frombuffer(buff, np.uint8)  # np.fromstring is deprecated
    if image_array.size:
        image = image_array.reshape(display_size[1], display_size[0], 3)
    else:
        image = np.zeros((display_size[1], display_size[0], 3))
    image = np.flipud(image)  # OpenGL reads rows bottom-up; keep the flipped result

    ### Debug
    # from PIL import Image
    # img = Image.fromarray(image, 'RGB')
    # img.show()
    # pygame.display.flip()
    return image
PierreShg
Hi
I am working on controlling robots with RL and Sofa.
Concerning launching Sofa simulations with subprocess, there is a nice launcher in sofa/tools/sofa-launcher which might be useful if you want to start several simulations, maybe with different parameters (think A2C, for instance).
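sofa-launcher has its own templating system, but the bare pattern of fanning out several runs with different parameters can be sketched with only the standard library. The command line below is a stand-in (it just echoes the parameter); in a real setup it would be the runSofa or python3 invocation of your scene:

```python
import subprocess
import sys

# Hypothetical parameter sweep; each process would normally be one simulation run.
params = ["0.1", "0.5", "1.0"]

# Launch all runs in parallel; the placeholder command just prints its parameter.
procs = [
    subprocess.Popen([sys.executable, "-c", f"print({p})"],
                     stdout=subprocess.PIPE)
    for p in params
]

# Collect the output of each run once it finishes.
outputs = [p.communicate()[0].decode().strip() for p in procs]
print(outputs)  # ['0.1', '0.5', '1.0']
```

Each `Popen` returns immediately, so the runs overlap; `communicate()` then joins them one by one.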
As Bruno said, there is no nice way to advance the simulation step by step as of right now. We are working on it in the SofaPython3 plugin. I am using the hack he talked about: by placing a .recv() in the onBeginAnimationStep() of your controller, the controller will block there and wait for your command to reach it.
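A minimal sketch of that blocking pattern, using only the standard library (the function and command names are hypothetical; in the real setup the blocking recv() would sit inside onBeginAnimationStep() of a Sofa controller):

```python
import socket
import threading

def controller_side(conn):
    """Stands in for onBeginAnimationStep(): block until a command arrives."""
    applied = []
    while True:
        cmd = conn.recv(64).decode()  # simulation is locked here until told to step
        if cmd == "stop":
            break
        applied.append(cmd)           # apply the command, then let the step run
        conn.sendall(b"done")         # signal back that the step finished
    return applied

# The RL side drives the simulation one step at a time over the socket.
rl_end, sofa_end = socket.socketpair()
result = []
t = threading.Thread(target=lambda: result.append(controller_side(sofa_end)))
t.start()

for action in ["left", "right", "grip"]:   # illustrative action names
    rl_end.sendall(action.encode())
    rl_end.recv(64)                        # wait for the "done" ack before the next step

rl_end.sendall(b"stop")
t.join()
print(result[0])  # ['left', 'right', 'grip']
```

The ack round-trip after each command keeps the two sides in lockstep, so exactly one command is applied per animation step.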
For your second question: my neural net is housed in another program in which I do the learning, and I only pass the commands via socket to the PythonScriptController in Sofa. That way your net and your simulation are somewhat independent.
I have actually wrapped my simulation in gym and it works nicely. I don’t know how much that helps; maybe if you tell us in more detail what you are trying to do I can help you some more.
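As a sketch of what such a wrapper looks like, here is a toy class following the classic gym reset()/step() interface, without importing gym itself. The class name, the horizon, and the "good" action are all illustrative; a real wrapper would send the action over the socket and read the new state back from Sofa:

```python
class SofaLikeEnv:
    """Toy gym-style environment; a real one would talk to the Sofa controller."""

    def __init__(self, horizon=5):
        self.horizon = horizon  # episode length, illustrative
        self.t = 0

    def reset(self):
        self.t = 0
        return self._observe()

    def step(self, action):
        # A real wrapper would send `action` over the socket here and wait
        # for the controller's ack before reading the new observation.
        self.t += 1
        obs = self._observe()
        reward = 1.0 if action == "good" else 0.0
        done = self.t >= self.horizon
        return obs, reward, done, {}

    def _observe(self):
        return [self.t]  # placeholder observation

# Standard gym-style rollout loop.
env = SofaLikeEnv()
obs = env.reset()
total = 0.0
done = False
while not done:
    obs, reward, done, info = env.step("good")
    total += reward
print(total)  # 5.0
```

Because the wrapper exposes only reset() and step(), any RL library that speaks the gym interface can drive the simulation without knowing about the socket underneath.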
Cheers
Pierre
PierreShg
Hi
I see you solved your problem so this may not be such a useful comment but still…
I ran into the same problem a couple of months ago with Sofa and an image-processing plugin. CMake was trying to import libpng from Anaconda in one case and from the pip installation in the other, and that was messing things up.
I solved the problem by banning Anaconda altogether and using only pip. This might be quite an extreme solution, but I never ran into that problem again. The problem was on Ubuntu 18.04, in case this is useful.
Cheers
Pierre