# Stable Baselines/User Guide/Getting Started
> Chinese translation of the official Stable Baselines documentation: [Github](https://github.com/DBWangML/stable-baselines-zh) [CSDN](https://blog.csdn.net/The_Time_Runner/article/details/97392656)
Most of the reinforcement learning algorithms in the library try to follow a scikit-learn-style syntax.
Below is a simple example showing how to train and run PPO2 on the CartPole environment.
```python
import gym

from stable_baselines.common.policies import MlpPolicy
from stable_baselines.common.vec_env import DummyVecEnv
from stable_baselines import PPO2

env = gym.make('CartPole-v1')
env = DummyVecEnv([lambda: env])  # the algorithms require a vectorized environment to run

# train a PPO2 agent with a multilayer-perceptron policy
model = PPO2(MlpPolicy, env, verbose=1)
model.learn(total_timesteps=10000)

# run the trained agent
obs = env.reset()
for i in range(1000):
    action, _states = model.predict(obs)
    obs, rewards, dones, info = env.step(action)
    env.render()
```
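As the comment above notes, the algorithms require a vectorized environment. A minimal sketch (not from the original document; the number of environments `n_envs` is illustrative) of wrapping several copies of the environment in one `DummyVecEnv`:

```python
import gym
from stable_baselines.common.vec_env import DummyVecEnv

n_envs = 4  # hypothetical number of parallel copies of the environment
env = DummyVecEnv([lambda: gym.make('CartPole-v1') for _ in range(n_envs)])
```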
Alternatively, if the environment is already registered in Gym and the policy is registered as well, you can define and train a model with a single one-liner:
```python
# define and train an RL agent in one line of code
from stable_baselines import PPO2
model = PPO2('MlpPolicy', 'CartPole-v1').learn(10000)
```
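The model trained by the one-liner can then be run just as in the first example. A minimal sketch, assuming the one-liner above has been executed (`get_env()` returns the vectorized environment the model was trained on):

```python
# run the agent trained by the one-liner above
env = model.get_env()
obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs)
    obs, rewards, dones, info = env.step(action)
    env.render()
```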
![](https://github.com/DBWangML/stable-baselines-zh/blob/master/%E7%94%A8%E6%88%B7%E5%90%91%E5%AF%BC/%E5%9B%BE%E7%89%87/RL%20agent.gif)