
Is it possible to customize the logger to record some specific parameters? #977

Open · zichunxx opened this issue Oct 24, 2023 · 4 comments
Labels: enhancement (Feature that is not a new algorithm or an algorithm enhancement), minor (Requires small changes to be fixed)

zichunxx commented Oct 24, 2023

  • I have marked all applicable categories:
    • exception-raising bug
    • RL algorithm bug
    • documentation request (i.e. "X is missing from the documentation.")
    • new feature request
  • I have visited the source website
  • I have searched through the issue tracker for duplicates
  • I have mentioned version numbers, operating system and environment, where applicable:
    import tianshou, gymnasium as gym, torch, numpy, sys
    print(tianshou.__version__, gym.__version__, torch.__version__, numpy.__version__, sys.version, sys.platform)
0.5.1 0.29.1 1.12.1 1.24.4 3.9.17 | packaged by conda-forge | (main, Aug 10 2023, 07:02:31) 
[GCC 12.3.0] linux

Hi!

I want to customize a logger to record some desired parameters, like success rate.

I'm trying to get familiar with the Tianshou API, but I'm not sure whether this is easy to implement. Are there any examples I could refer to?

I have trained some MuJoCo example models, but only parameters such as env_step, length, length_std, and test/train reward are recorded.

If this is unavailable in the current version of Tianshou, would it be considered a new feature to record the variation of the success rate during training?

Thanks in advance!
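
For reference, a minimal sketch of a workaround that is already possible with tianshou 0.5.1: TensorboardLogger wraps a plain torch SummaryWriter, so custom scalars can be written to the same writer alongside the trainer's built-in metrics. The tag test/success_rate and the helper log_success_rate below are illustrative names, not Tianshou API.

    from torch.utils.tensorboard import SummaryWriter
    from tianshou.utils import TensorboardLogger

    # One SummaryWriter can back both Tianshou's logger and custom scalars.
    writer = SummaryWriter("log/my_experiment")
    logger = TensorboardLogger(writer)  # pass this to the trainer as usual

    def log_success_rate(success_rate: float, env_step: int) -> None:
        # Records a custom scalar next to the built-in test/train metrics;
        # the tag name is arbitrary.
        writer.add_scalar("test/success_rate", success_rate, global_step=env_step)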

MischaPanch (Collaborator) commented

This will likely be addressed by closing #895 and #933. Would you like to help with these? Otherwise, I think I will start working on them quite soon (maybe around the end of November), as they are rather high on the priority list.

MischaPanch added the duplicate (This issue or pull request already exists) label on Oct 25, 2023.
MischaPanch (Collaborator) commented

Could you specify what exactly you mean by "variation of success rate"? @STAY-Melody

MischaPanch added the enhancement (Feature that is not a new algorithm or an algorithm enhancement), blocked (Can't be worked on for now), and minor (Requires small changes to be fixed) labels, and removed the duplicate (This issue or pull request already exists) label, on Oct 25, 2023.
zichunxx (Author) commented Oct 26, 2023

I want to test the success rate of the current model at certain intervals during training, e.g., every 1e4 environment steps, which can be achieved with several random test rollouts. In this way, the variation of the success rate during training can be recorded. Here is the function that encapsulates what I mean; a sketch of the idea follows below.
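
For concreteness, a minimal sketch of such a periodic evaluation, assuming flat (non-dict) observations and an environment that reports success via an info["is_success"] flag on the final step (the gymnasium-robotics convention). estimate_success_rate and the info key are illustrative, not existing Tianshou API.

    import gymnasium as gym
    import numpy as np
    import torch
    from tianshou.data import Batch, to_numpy

    def estimate_success_rate(policy, env: gym.Env, n_episodes: int = 20) -> float:
        # Roll out n_episodes test episodes and return the fraction whose
        # final info dict carries is_success == True.
        policy.eval()
        successes = 0
        with torch.no_grad():
            for _ in range(n_episodes):
                obs, info = env.reset()
                done = False
                while not done:
                    batch = Batch(obs=np.expand_dims(obs, 0), info={})
                    act = to_numpy(policy(batch).act)[0]
                    obs, _, terminated, truncated, info = env.step(act)
                    done = terminated or truncated
                successes += bool(info.get("is_success", False))
        return successes / n_episodes

The returned rate could then be written to TensorBoard with writer.add_scalar (or logger.write) at whatever env_step interval the training loop exposes.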

MischaPanch removed the blocked (Can't be worked on for now) label on Jan 24, 2024.
MischaPanch (Collaborator) commented

We now have dataclasses as return values almost everywhere instead of the previous dicts, so callbacks can now be written. @maxhuettenrauch and I will have a look at this and will likely implement it soon.
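
To illustrate the direction only: with dataclass returns, a user callback could receive evaluation stats and reuse the 0.5.x BaseLogger.write(step_type, step, data) interface to emit custom scalars. Everything named below (EvalStats, on_test_end) is hypothetical, not an existing Tianshou interface.

    from dataclasses import dataclass
    from tianshou.utils import BaseLogger

    @dataclass
    class EvalStats:
        # Hypothetical: stands in for a future collect-stats dataclass.
        env_step: int
        success_rate: float

    def on_test_end(stats: EvalStats, logger: BaseLogger) -> None:
        # Hypothetical hook: a trainer could invoke this after each test phase.
        logger.write("test/env_step", stats.env_step,
                     {"test/success_rate": stats.success_rate})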
