
[python/gym] Add termination conditions and reward components toolboxes. #671

Closed
duburcqa opened this issue Dec 8, 2023 · 2 comments
Labels: enhancement (New feature or request), gym, P1 (Mid priority issue), python

Comments

@duburcqa
Owner

duburcqa commented Dec 8, 2023

Currently, there is no toolbox with various pre-implemented termination conditions and reward components. It would be nice to provide highly optimized yet modular implementations of the most common cases. Here is a template of what could be done for the reward:

from abc import ABCMeta, abstractmethod
from typing import Dict, Sequence, Tuple

import numpy as np

# Note: `BaseJiminyEnv` and `InfoType` are assumed to be provided by `gym_jiminy`
# and were not imported in the original snippet.


class AbstractRewardCatalog(metaclass=ABCMeta):
    def __init__(self, env: BaseJiminyEnv, reward_mixture: Dict[str, float]) -> None:
        self.env = env
        # Map each named reward component (a method of the catalog) to its weight.
        self.reward_mixture = {
            getattr(self, name): weight for name, weight in reward_mixture.items()}
        self._initialize_buffers()

    @abstractmethod
    def _initialize_buffers(self) -> None:
        pass

    @abstractmethod
    def _refresh_buffers(self) -> None:
        pass

    def compute_reward(self,
                       terminated: bool,
                       truncated: bool,
                       info: InfoType) -> float:
        reward_total = 0.0
        self._refresh_buffers()
        for reward_fun, weight in self.reward_mixture.items():
            reward_total += weight * reward_fun(terminated, truncated, info)
        return reward_total


class WalkerRewardCatalog(AbstractRewardCatalog):
    def _initialize_buffers(self) -> None:
        self.foot_placements: Sequence[Tuple[np.ndarray, np.ndarray]] = ()

    def _refresh_buffers(self) -> None:
        pass

    def foot_placement(self,
                       terminated: bool,
                       truncated: bool,
                       info: InfoType) -> float:
        (left_foot_pos, _), (right_foot_pos, _) = self.foot_placements
        return np.linalg.norm(left_foot_pos - right_foot_pos)
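For illustration, here is a hypothetical way such a catalog could be driven from an environment; the `env` instance and the empty `info` dict are placeholders, not part of the proposal:

# Hypothetical usage: `env` is assumed to be some walker BaseJiminyEnv instance.
reward_catalog = WalkerRewardCatalog(env, reward_mixture={"foot_placement": 1.0})

# Typically evaluated once per step, after the simulation state has been refreshed.
reward = reward_catalog.compute_reward(terminated=False, truncated=False, info={})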
@duburcqa added the enhancement, gym, python, and P1 (Mid priority issue) labels on Dec 8, 2023
@duburcqa
Owner Author

duburcqa commented Feb 17, 2024

After thinking twice, I think it makes more sense to provide a QuantityManager. This quantity manager can then be forwarded to independent reward components satisfying some callable protocol (meaning a lambda, a function, or a class defining __call__, i.e. a functor). It would be more modular and easier to extend this way. To be computationally efficient, this quantity manager should rely heavily on caching. The cache must be cleared manually before computing any reward component; quantities would then be computed only the first time they are requested, or never if not used. Here is a snippet:

from typing import Any, Callable, Dict

import numpy as np
import pinocchio as pin

# Note: `BaseJiminyRobot` is assumed to be provided by `gym_jiminy` and was not
# imported in the original snippet.


class QuantityManager:
    def __init__(self,
                 robot: BaseJiminyRobot,
                 quantities: Dict[str, Callable[[], Any]]) -> None:
        self.robot = robot
        self.quantities = quantities
        self._cache: Dict[str, Any] = {}

    def __getattr__(self, name: str) -> Any:
        # Evaluate the quantity on first access only, then serve it from the cache.
        if name not in self._cache:
            self._cache[name] = self.quantities[name]()
        return self._cache[name]

    def __getitem__(self, name: str) -> Any:
        return getattr(self, name)

    def reset(self) -> None:
        self._cache.clear()


class RelativePose:
    def __init__(self, robot: BaseJiminyRobot, first_name: str, second_name: str) -> None:
        first_index = robot.pinocchio_model.getFrameId(first_name)
        second_index = robot.pinocchio_model.getFrameId(second_name)
        # Frame placements live in the pinocchio data of the robot. The SE3 objects
        # are stored by reference, so they stay up-to-date as the simulation advances.
        self.first_pose = robot.pinocchio_data.oMf[first_index]
        self.second_pose = robot.pinocchio_data.oMf[second_index]

    def __call__(self) -> pin.SE3:
        return self.first_pose.actInv(self.second_pose)


def foot_placement_reward(quantities: QuantityManager) -> float:
    return np.linalg.norm(quantities.foot_pose_rel.translation)

[...]

foot_pose_rel_qty = RelativePose(env.robot, "LeftSole", "RightSole")
quantities = QuantityManager(env.robot, {"foot_pose_rel": foot_pose_rel_qty})
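
To illustrate the intended caching protocol described above (continuing the snippet):

# Clear the cache at the beginning of each step, before evaluating any reward.
quantities.reset()

# The relative foot pose is computed on first access, then served from the cache,
# so several reward components can share it within the same step at no extra cost.
reward = foot_placement_reward(quantities)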

There would be a reward manager, taking a quantity manager and a set of reward components as input. It would expose a single compute_reward method that first calls reset on the quantity manager and then evaluates all reward components individually, as sketched below. It would also feature a reset method that calls the reset method of each reward component, if any. It may be beneficial to keep termination conditions and reward evaluation together, to avoid computing quantities twice. If so, it would be trickier to determine when to reset cached quantities automatically.
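
As a rough sketch of what such a reward manager could look like under these assumptions (the name RewardManager and the exact signatures are illustrative, not a committed API):

from typing import Callable, Dict


class RewardManager:
    def __init__(self,
                 quantities: QuantityManager,
                 components: Dict[Callable[[QuantityManager], float], float]) -> None:
        # Map each reward component (any callable taking the quantity manager and
        # returning a float) to its weight in the mixture.
        self.quantities = quantities
        self.components = components

    def reset(self) -> None:
        # Propagate the reset to the components that define one (e.g. stateful functors).
        for component in self.components:
            reset_fn = getattr(component, "reset", None)
            if reset_fn is not None:
                reset_fn()

    def compute_reward(self) -> float:
        # Invalidate the cached quantities once, then evaluate every component.
        # Shared quantities are computed lazily on first access and reused afterwards.
        self.quantities.reset()
        return sum(weight * component(self.quantities)
                   for component, weight in self.components.items())


reward_manager = RewardManager(quantities, {foot_placement_reward: 1.0})
total_reward = reward_manager.compute_reward()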

@duburcqa
Owner Author

duburcqa commented May 14, 2024

This issue has been addressed. Closing. #784 #786 #787 #792

github-project-automation bot moved this from To do to Done in Jiminy 1.9 on May 14, 2024