In my research, I need an image of the goal. The current code for the environments generates goals randomly. To automatically get the agent to the goal, you can use the following example code:
from gym.envs import robotics


class Settable(robotics.FetchPushEnv):
    def __init__(self, reward_type='sparse'):
        super().__init__(reward_type)

    def _set_to_goal(self, goal):
        """Goals are always xyz coordinates, either of the gripper end effector or of the object."""
        if self.has_object:
            # Teleport the object so that its xyz position matches the goal.
            object_qpos = self.sim.data.get_joint_qpos('object0:joint')
            assert object_qpos.shape == (7,)
            object_qpos[:3] = goal
            self.sim.data.set_joint_qpos('object0:joint', object_qpos)
        else:
            # No object (e.g. FetchReach): move the gripper end effector via the mocap body.
            self.sim.data.set_mocap_pos('robot0:mocap', goal)
        self.sim.forward()
        # Let the simulation settle so the rendered state matches the new positions.
        for _ in range(100):
            self.sim.step()
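Rendering a goal image could then look roughly like this (a minimal sketch, assuming a mujoco-py based gym install with off-screen rendering available; Settable and _set_to_goal are defined above):

env = Settable(reward_type='sparse')
env.reset()

# Move the object to the sampled goal and render the resulting frame.
env._set_to_goal(env.goal)
goal_image = env.render(mode='rgb_array')  # numpy array of shape (height, width, 3)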
In general it only needs to inherit from FetchEnv, but for simplicity I inherit from a subclass here. This could also easily be implemented instead as a wrapper.
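For reference, the wrapper variant might look roughly like this (a minimal, untested sketch; it assumes a mujoco-py based Fetch environment and reaches into env.unwrapped, and the method name set_to_goal is my own):

import gym


class SetToGoalWrapper(gym.Wrapper):
    """Adds a set_to_goal() method to a mujoco-py based Fetch environment."""

    def set_to_goal(self, goal):
        env = self.env.unwrapped
        if env.has_object:
            # Teleport the object so its xyz position matches the goal.
            object_qpos = env.sim.data.get_joint_qpos('object0:joint')
            assert object_qpos.shape == (7,)
            object_qpos[:3] = goal
            env.sim.data.set_joint_qpos('object0:joint', object_qpos)
        else:
            # No object: move the gripper end effector via the mocap body.
            env.sim.data.set_mocap_pos('robot0:mocap', goal)
        env.sim.forward()
        for _ in range(100):
            env.sim.step()

Usage would then be something like wrapped = SetToGoalWrapper(gym.make('FetchPush-v1')), followed by wrapped.reset() and wrapped.set_to_goal(wrapped.unwrapped.goal).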
PR #2762 is about to be merged, introducing V4 MuJoCo environments that use the new bindings and a dramatically newer version of the engine. If this issue still persists with the V4 environments, please open a new issue for it.