Using SOFA in the context of reinforcement learning
- This topic has 6 replies, 6 voices, and was last updated 4 years, 6 months ago by Hugo.
1 July 2019 at 19:53 #13881 | Aboveallinsanity
Dear SOFA community,
I just recently discovered this amazing project and am trying to use it to train a soft-robot controller via RL. (Apologies in advance for the naivety of the questions...)
I am currently using SOFA with the Soft Robotics plugin. I am running the executable as a subprocess in a Python script, something like subprocess.call("runSofa myscene.py -g batch"). I use batch because I don't want to visualize the simulation. My goal is to wrap the simulator in a Python reinforcement learning environment like Gym. In the process, there are some things I couldn't figure out:
1) What is a good way to advance SOFA one step at a time programmatically, without using the GUI? Currently the program quits after finishing the specified number of steps. I would like it to stand by after each step until I advance it manually.
2) Is there a way to modify the PythonScriptController while the simulation is running? Currently my policy is hard-coded in myscene.py, but I would like a neural network to do the job, and the network might be updated after each step. Say I already have the module that calculates the output of the network; how can I pass it to the Python controller at run time?
Sorry if these questions are somewhat trivial. I am just starting to use SOFA, and I'd be hugely grateful for any insight you can provide. Thank you very much!
Thomas
2 July 2019 at 17:04 #13892 | Bruno Marques
Hi and welcome to SOFA!
Sadly, the runSofa GUI is not meant to allow stepping programmatically.
Luckily, there are some workarounds:
1. SofaPython3 is a plugin in development. Its purpose is to provide Python packages for SOFA that will let you define your own simulation loop in Python, by calling SOFA's animate/step methods manually. However, I would not encourage you to use it for now, as it is still in active development and the API is still likely to change a lot.
2. A (less elegant) alternative is to keep using SofaPython, place a PythonScriptController in your SOFA scene, and hook a socket into its onBeginAnimationStep() method.
This method is called at each step of the simulation. While you can't really pause the simulation, you could, through a socket, send a packet (over localhost) to a separate process (your learning algorithm) with all the useful data from the current simulation step, then wait for your other process to send an acknowledgement packet before exiting the function.
    # something like that (pseudo-code)
    def onBeginAnimationStep(self, dt):
        sock.send(some_data)
        ack = sock.recv()
        return
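For illustration, here is a self-contained, runnable sketch of that handshake outside SOFA, with both ends in one script (the host, port, and payload are made up; in a real setup the learner side would live in your separate RL process):

```python
import socket
import threading

HOST = "127.0.0.1"

# Set up the listening side first so the connect below cannot race it.
# Port 0 lets the OS pick a free port.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind((HOST, 0))
srv.listen(1)
port = srv.getsockname()[1]

def learner():
    # Stands in for the external RL process.
    conn, _ = srv.accept()
    state = conn.recv(1024)   # data sent from the simulation step
    conn.sendall(b"ACK")      # the acknowledgement unblocks the controller
    conn.close()

t = threading.Thread(target=learner)
t.start()

# This part mirrors what onBeginAnimationStep() would do each step.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect((HOST, port))
sock.sendall(b"some_step_data")
ack = sock.recv(1024)         # the simulation effectively pauses here
sock.close()
t.join()
srv.close()
```

Because recv() blocks, the controller cannot leave onBeginAnimationStep() until the learner replies, which is exactly the lock-step behaviour asked about in the first question.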
——————
Concerning your second question, I'm not 100% sure I understood what you mean by "modifying the PythonScriptController", but I assume that your learning algorithm sends you updated data that you need to insert into your simulation (let's say the Young's modulus of an FEM ForceField, for instance).
Using the approach I suggested, your ACK packet could contain the updated values you'd like to apply to your simulation, and your PythonScriptController could take those new values and set them on your scene's components.
    # something like that (pseudo-code)
    def onBeginAnimationStep(self, dt):
        sock.send(some_data)
        ack = sock.recv()
        y = parse(ack)
        self.rootNode.myForceField.youngModulus = y
        # might be necessary to call reinit() on the component
        # to update the internal values...
        self.rootNode.myForceField.reinit()
        return
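One illustrative way to structure that ACK payload (the field name here just mirrors the Young's modulus example; everything else is an assumption) is to serialise it as JSON:

```python
import json

def make_ack(young_modulus):
    # Learner side: pack the updated parameter values into the ACK bytes.
    return json.dumps({"youngModulus": young_modulus}).encode()

def parse(ack_bytes):
    # Controller side: decode the ACK back into a dict of values.
    return json.loads(ack_bytes.decode())

values = parse(make_ack(18000.0))
# In the controller you would then do, roughly:
#   self.rootNode.myForceField.youngModulus = values["youngModulus"]
#   self.rootNode.myForceField.reinit()
```

JSON keeps the wire format human-readable and lets you add more fields (several parameters, a stop flag, etc.) without changing the parsing code.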
I hope my answer helps, good luck with SOFA, the first time is the worst 😉
5 July 2019 at 09:51 #13908 | faichele
Hello!
As an alternative, you might want to consider using ROS to couple your RL environment with SOFA, using the SOFA ROS connector plugin. I have worked on the Neurorobotics Platform (https://neurorobotics.net/) in the past, where ROS served as middleware to integrate Spiking Neural Network simulators with robotics simulations. The ROS connector for SOFA offers a similar possibility, in that you have a convenient way to exchange data between a SOFA simulation and an external (Python-based) framework.
With best regards,
Fabian
15 July 2019 at 16:31 #13956 | PierreShg
Hi,
I am working on controlling robots with RL and Sofa.
Concerning launching Sofa simulations with subprocess there is a nice launcher in sofa/tools/sofa-launcher which might be useful if you want to start several simulations maybe with different parameters (think A2C for instance).
As Bruno said, there is no nice way to advance the simulation step by step right now; we are working on it in the SofaPython3 plugin. I am using the hack he described: by placing a .recv() in the onBeginAnimationStep() of your controller, the controller gets blocked there and waits for your command to reach it.
For your second question: my neural net is housed in another program, in which I do the learning, and I only pass the commands via a socket to the PythonScriptController in Sofa. That way your net and your simulation stay somewhat independent.
I have actually wrapped my simulation in Gym and it works nicely. I don't know how much that helps; if you tell us in more detail what you are trying to do, I may be able to help you some more.
Cheers
Pierre
17 May 2020 at 08:23 #16302 | wrf
Hi,
I am also now using SOFA and deep reinforcement learning to simulate the control of a soft robot. In the SOFA scene, I use Python's multiprocessing module to receive data from other processes; the code looks like the following:

    from multiprocessing.connection import Listener

    serv = Listener(address, authkey=authkey)
    clients = serv.accept()
    msg = clients.recv()
    ...
However, there is a problem with clients.recv(): the connection with the other process is established, but nothing is ever received.
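For comparison, here is a minimal Listener/Client pair that does receive, with the client in a thread so it runs as one script (the authkey is made up, but it must be identical on both ends; a mismatched authkey, or a client that never calls send(), are two common reasons recv() appears to hang):

```python
import threading
from multiprocessing.connection import Client, Listener

authkey = b"secret"  # illustrative; must match on both sides

# Port 0 lets the OS pick a free port; serv.address holds the real one.
serv = Listener(("localhost", 0), authkey=authkey)

def other_process():
    # Stands in for the external process feeding data into the SOFA scene.
    conn = Client(serv.address, authkey=authkey)
    conn.send({"action": [0.1, -0.2]})
    conn.close()

t = threading.Thread(target=other_process)
t.start()
client = serv.accept()  # blocks until the client connects
msg = client.recv()     # blocks until the client actually send()s
t.join()
client.close()
serv.close()
```

If this pattern works standalone but not inside the scene, the sending process is the first place to check.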
17 May 2020 at 09:08 #16303 | wrf
Hi,
I have a question about SofaPython: is SofaPython using Python 2?
20 May 2020 at 22:47 #16332 | Hugo
Hi @ruofeng,
The SofaPython plugin is based on the Python 2.7 library. However, this plugin does NOT let you use SOFA from a Python (2.7) environment; it only lets you write Python scripts that are run by SOFA, thus providing more interactivity.
The SofaPython3 plugin not only links against the Python 3 library, but also allows, for the first time, running SOFA simulations from a native Python environment. This was not possible with the previous SofaPython (2.7) plugin.
I hope this answers your point.
Best,
Hugo