Question about inverse kinematics in Mocaps in Mujoco #1965
Replies: 4 comments 2 replies
-
@kevinzakka did a bunch of work on this and might be best positioned to reply.
-
Hello @Yingfan99327, I want to revive this conversation, since I had similar thoughts not long ago. I am also trying to train an RL agent to control the end-effector position of a manipulator, and then use that policy to control the actual hardware (so I am basically working on sim2real transfer). I will try to answer your questions, but keep in mind that I am a rookie myself. That being said:
-
I have, however, been considering how to do a sim2real transfer of a policy that uses "mocap control" like Gymnasium-Robotics does, since I believe training would run faster than with an IK module implemented from scratch. The idea, in rough pseudocode:

```python
import time

import mujoco

model = mujoco.MjModel.from_xml_path("...")
data = mujoco.MjData(model)

# Instantiate some kind of entity/object that lets us interact with the
# real robot, and make sure communication is ready (e.g. a ROS 2 node).
real_robot = create_real_robot()
real_robot.initialize()

# Load a trained/hard-coded policy.
policy = load_policy(...)

while True:
    # Read the current joint states of the real robot
    # (probably broadcast on /joint_states if using ROS 2 and ros2_control).
    qpos, qvel = real_robot.read_joint_states()

    # Based on the current joint state, predict the next action of the policy.
    obs = make_obs(qpos, qvel)
    action = policy.predict(obs)

    # Match the state of the simulation with that of the real robot.
    set_joint_state(model, data, qpos, qvel)

    # Advance the digital twin in simulation and record the joint trajectory
    # generated by the mocap displacement (this is what implicitly solves IK).
    joint_trajectory = get_joint_trajectory(model, data, action)

    # Send the joint trajectory to the real robot,
    # e.g. as a ROS 2 JointTrajectory message.
    real_robot.set_joint_trajectory(joint_trajectory)

    time.sleep(...)  # depends on the control frequency
```

I think this could work as a way to transfer a "mocap-based controller", but I still have not been able to implement it properly. I also haven't found anyone out there doing something like this, so the idea might be wrong for a reason I do not understand. Any feedback or further references are highly appreciated!
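To make the `get_joint_trajectory` idea above concrete: its core is just "roll the simulation forward for a fixed horizon while recording joint positions". Here is a minimal sketch of that bookkeeping, written against generic callables so it runs without a simulator; in a real implementation `step_fn` would write the mocap target and call `mujoco.mj_step`, and `read_qpos` would copy `data.qpos`. All names here are my own, not MuJoCo API:

```python
from typing import Callable, List, Sequence


def rollout_joint_trajectory(
    step_fn: Callable[[], None],
    read_qpos: Callable[[], Sequence[float]],
    n_steps: int,
) -> List[List[float]]:
    """Advance the simulation n_steps times, recording qpos after each step.

    step_fn is assumed to already know the mocap target (the policy action);
    with MuJoCo it would set data.mocap_pos and call mujoco.mj_step.
    """
    trajectory = []
    for _ in range(n_steps):
        step_fn()
        trajectory.append(list(read_qpos()))
    return trajectory


# Tiny stub "simulator": a single joint that moves 1.0 rad per step.
state = {"q": 0.0}

def stub_step():
    state["q"] += 1.0

traj = rollout_joint_trajectory(stub_step, lambda: [state["q"]], n_steps=3)
print(traj)  # [[1.0], [2.0], [3.0]]
```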
-
There was actually an open discussion about this topic: #378
-
Hi,
I'm a student trying to use MuJoCo for robot simulation, and I'm looking for help with inverse kinematics.
After reading some relevant issues and blog posts about how MuJoCo handles inverse kinematics, I have the following questions:
I read some projects that use a mocap body to solve the inverse kinematics problem. For instance, if you weld a Panda robot's end effector to a mocap body ("you could also weld end effectors to mocap bodies, or use other equality constraints to implicitly define the kinematic chain with relation to a reference") and use the mocap APIs such as set_mocap_pos, you can make the robot follow the desired posture. Does this mean that some inverse kinematics algorithm (I guess a numerical one) is implemented inside these mocap functions?
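For reference, the pattern I mean looks roughly like this in MJCF (a sketch only; the body names are placeholders, not from any specific model). A free-floating mocap body is welded to the gripper, and the constraint solver pulls the arm toward the mocap pose at every step:

```xml
<worldbody>
  <!-- Mocap target: driven externally via data.mocap_pos / data.mocap_quat. -->
  <body name="target" mocap="true" pos="0.5 0 0.5">
    <geom type="sphere" size="0.02" contype="0" conaffinity="0" rgba="1 0 0 0.3"/>
  </body>
  <!-- ... robot kinematic tree containing a body named "hand" ... -->
</worldbody>

<equality>
  <!-- The weld makes the hand track the mocap body's pose. -->
  <weld body1="target" body2="hand"/>
</equality>
```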
Building on my first question: could a policy learned with mocap-based Cartesian motion generalize to a real-world robot that has to solve IK itself, i.e., would it match the robot I use in the real experiment? For example, to implement a joint-space admittance or impedance controller for a 7-DoF Panda arm in MuJoCo, I need to solve the arm's inverse kinematics and feed the resulting qpos into the controller. Can I use the joint positions produced by the mocap mechanism as the controller input and then solve for the joint torques, or is it better to build the KDL chain of the robot and implement the inverse kinematics algorithm myself to compute the joint positions? Which path is better?
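For context, the kind of numerical IK I would otherwise implement by hand is an iterative Jacobian-based update such as damped least squares. A minimal sketch on a toy 2-link planar arm (entirely my own illustration, not MuJoCo code):

```python
import numpy as np


def fk(q, l1=1.0, l2=1.0):
    """Forward kinematics of a 2-link planar arm: joint angles -> end-effector xy."""
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    return np.array([x, y])


def jacobian(q, l1=1.0, l2=1.0):
    """Analytic Jacobian d(fk)/dq of the 2-link arm."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])


def ik_dls(target, q0, damping=0.1, iters=200):
    """Damped least squares: q <- q + J^T (J J^T + lambda^2 I)^-1 (x* - x)."""
    q = np.array(q0, dtype=float)
    for _ in range(iters):
        err = target - fk(q)
        if np.linalg.norm(err) < 1e-6:
            break
        J = jacobian(q)
        dq = J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(2), err)
        q += dq
    return q


q = ik_dls(np.array([1.0, 1.0]), q0=[0.1, 0.1])
print(fk(q))  # should be approximately [1. 1.]
```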
Since "set_mocap_pos" seems to be the API that does "inverse kinematics" for me in MuJoCo, does using the mocap-based method require the actuators defined for my robot in the XML file to be position-type? If I want to drive the robot directly with joint torques, i.e., define the robot with motor-type actuators, does that mean I cannot use the mocap-based method and instead need to write my own inverse kinematics and then compute the joint torques to control the motor-type robot in simulation?
Regarding actuator types: would it be possible to define both position-type and motor-type actuators for one robot, so that it could receive either joint position commands or joint torque commands in simulation? I would just choose which kind of control signal to write into data.ctrl.
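For concreteness, what I have in mind is something like this (a sketch; the joint name is a placeholder). As far as I understand, MJCF does allow several actuators on the same joint, though their forces add, so one of them would need to be kept effectively inert while the other is in use:

```xml
<actuator>
  <!-- Joint-torque actuator: its data.ctrl entry is a torque. -->
  <motor joint="joint1" ctrlrange="-50 50"/>
  <!-- Position servo on the same joint: its data.ctrl entry is a target angle. -->
  <position joint="joint1" kp="100" ctrlrange="-3.14 3.14"/>
</actuator>
```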
Sorry for so many "rookie" questions about MuJoCo; I would really appreciate it if you could answer them. Thanks very much for your time and help.
Wishing you all the best.