Reinforcement Learning Course Materials


Lecture notes, tutorial tasks including solutions, and online videos for the reinforcement learning course hosted by Paderborn University. The source code for the entire course material is open, and everyone is cordially invited to use it for self-learning (students) or to set up their own course (lecturers).

Lecture Content

  1. Introduction to Reinforcement Learning
  2. Markov Decision Processes
  3. Dynamic Programming
  4. Monte Carlo Methods
  5. Temporal-Difference Learning
  6. Multi-Step Bootstrapping
  7. Planning and Learning with Tabular Methods
  8. Function Approximation with Supervised Learning
  9. On-Policy Prediction with Function Approximation
  10. Value-Based Control with Function Approximation
  11. Stochastic Policy Gradient Methods
  12. Deterministic Policy Gradient Methods
  13. Further Contemporary RL Algorithms (TRPO, PPO)
  14. Outlook and Research Insights
  • Summary of Part One: Reinforcement Learning in Finite State and Action Spaces
  • Summary of Part Two: Reinforcement Learning Using Function Approximation
  • Full course slides

Exercise Content

All exercises are based on Python 3.9 and the site packages pinned in requirements.txt:

>>> pip install setuptools==65.5.0
>>> pip install -r requirements.txt
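Since the pinned packages target Python 3.9, it can help to confirm the interpreter version before installing. A minimal check, purely illustrative and not part of the official course material:

    import sys

    # The exercises are based on Python 3.9; other minor releases may not
    # match the pinned requirements.
    if sys.version_info[:2] != (3, 9):
        print(f"Warning: running Python {sys.version.split()[0]}, "
              "but the exercises are based on Python 3.9.")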
  1. Basics of Python for Scientific Computing
  2. Manually Solving Basic Markov Chain, Reward and Decision Problems
  3. The Beer-Bachelor and Dynamic Programming (the Shortest Beer Problem)
  4. Drive Through the Race Track with Monte Carlo Learning
  5. Drive even Faster Using Temporal-Difference Learning
  6. Stabilizing the Inverted Pendulum by Tabular Multi-Step Methods
  7. Boosting the Inverted Pendulum by Integrating Learning & Planning (Dyna Framework)
  8. Predicting the Operating Behavior of a Real Electric Drive System with Supervised Learning
  9. Evaluate the Performance of Given Agents in the Mountain Car Problem Using Function Approximation
  10. Escape from the Mountain Car Valley Using Semi-Gradient Sarsa & Least-Squares Policy Iteration
  11. Landing on the Moon with REINFORCE and Actor-Critic Methods
  12. Shoot for the Moon with DDPG & PPO
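Several of the later exercises (e.g., the Mountain Car and lunar landing tasks) build on pre-packaged Gymnasium environments (see Credits). The following minimal sketch shows the basic agent-environment interaction loop those tasks rely on; the environment ID MountainCar-v0 is the standard Gymnasium ID for the classic Mountain Car problem, and the random action choice is purely illustrative rather than an exercise solution:

    import gymnasium as gym

    # Create the classic Mountain Car environment.
    env = gym.make("MountainCar-v0")

    observation, info = env.reset(seed=42)
    for _ in range(200):
        # A real agent would pick actions from its policy; here we sample randomly.
        action = env.action_space.sample()
        observation, reward, terminated, truncated, info = env.step(action)
        if terminated or truncated:
            observation, info = env.reset()
    env.close()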

Contributions

We highly appreciate any feedback and input on the course material, e.g.:

  • typos or content-related discussions (please raise an issue)
  • adding new contents (please provide a pull request)

If you would like to contribute to the repo to a larger extent, please do not hesitate to contact us directly.

Credits

The lecture notes are inspired by

The tutorials partly use pre-packaged environments from

  • Gymnasium (the maintained fork of OpenAI's Gym)

Citation

See "Cite this repository" on top
