Thank you for your work on single-task reinforcement learning and transfer learning!

But I think the most meaningful part of this paper is transfer learning in the multi-task reinforcement learning setting.

If the teacher models are trained in different environments (for example, one in CartPole-v1 and another in Acrobot-v1), their state spaces and action spaces are completely different, so careful thought is needed about how to design the input layer and output layer of the student model.

The paper says: "About 90% of parameters are shared, with only 3 small MLP "controllers" on top which are task specific and allow for different action sets between different games." But I don't know the details of how this is implemented.