Fixed Wing Flight Simulation Environment for Reinforcement Learning
This repository is being developed as part of my master's thesis, in which I am building a fixed-wing attitude control system using Reinforcement Learning algorithms. As of right now the code works with X-Plane 11 and JSBSim, using Q-Learning as well as Deep Q-Learning.
This project is built with the frameworks, libraries, repositories, and software listed in the version table below.
Simply clone this repository to your local filesystem:
git clone https://github.com/JDatPNW/QPlane
Tested and running with:
| Software | Version |
|---|---|
| X-Plane 11 | 11.50r3 (build 115033 64-bit, OpenGL) |
| JSBSim | 1.1.5 (GitHub build 277) |
| FlightGear | 2020.3.6 |
| XPlaneConnect | 1.3-rc.2 |
| Python | 3.8.2 |
| numpy | 1.19.4 |
| tensorflow | 2.3.0 |
| Anaconda | 4.9.2 |
| Windows | 1909 |
- Clone the repo:
  git clone https://github.com/JDatPNW/QPlane
- Install the software listed above (other versions might work)
- For JSBSim, clone the JSBSim repo into src/environments/jsbsim
- For visualizing JSBSim, download the c172r plane model in the FlightGear menu
Once downloaded and installed, simply execute the QPlane.py file to run and test the code.
- For the X-Plane environment, X-Plane (the game) needs to be running (a minimal XPlaneConnect usage sketch follows after this list)
- For JSBSim with rendering, FlightGear needs to run with the following flags:
  --fdm=null --native-fdm=socket,in,60,localhost,5550,udp --aircraft=c172r --airport=RKJJ
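The X-Plane environment talks to the simulator through the XPlaneConnect plugin listed in the version table. As a rough illustration only (this is not QPlane's own code; it assumes NASA's xpc Python client from the XPlaneConnect repository is importable and the plugin is installed in X-Plane), reading the aircraft attitude and sending a control command looks roughly like this:

```python
# Illustrative sketch using NASA's XPlaneConnect Python client (xpc.py).
# Assumes X-Plane 11 is running with the XPlaneConnect plugin installed
# and that xpc.py is on the Python path; this is not QPlane's own code.
import xpc

with xpc.XPlaneConnect() as client:
    # getPOSI() returns (lat, lon, alt, pitch, roll, true heading, gear)
    position = client.getPOSI()
    pitch, roll = position[3], position[4]
    print(f"pitch: {pitch:.1f} deg, roll: {roll:.1f} deg")

    # sendCTRL() takes [elevator, aileron, rudder, throttle, gear, flaps];
    # surface deflections are in [-1, 1] and -998 means "leave unchanged"
    client.sendCTRL([0.1, -0.05, 0.0, 0.8, -998, -998])
```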
This GIF shows an attitude agent (trained with Q-Learning) in action and compares it to the baseline random agent.
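The Q-Learning agent maintains a table of state-action values and updates it with the standard off-policy temporal-difference rule. The sketch below shows that update in isolation; the state/action discretization and hyperparameters are illustrative assumptions, not QPlane's actual configuration:

```python
# Minimal tabular Q-Learning update (illustrative; the discretization and
# hyperparameters here are placeholders, not QPlane's actual settings).
import numpy as np

n_states, n_actions = 10, 4             # e.g. binned attitude errors / discrete control actions
alpha, gamma, epsilon = 0.1, 0.99, 0.1  # learning rate, discount factor, exploration rate
Q = np.zeros((n_states, n_actions))

def choose_action(state):
    """Epsilon-greedy action selection over the Q-table."""
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)
    return int(np.argmax(Q[state]))

def update(state, action, reward, next_state, done):
    """Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))"""
    target = reward if done else reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (target - Q[state, action])
```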
Planned future features are:
- Double Deep Q-Learning (a brief sketch of the idea follows below)
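Double Deep Q-Learning differs from plain Deep Q-Learning mainly in how the bootstrap target is built: the online network selects the greedy next action and a separate target network evaluates it, which reduces the overestimation bias of the max operator. Below is a hedged sketch of that target computation with tensorflow/Keras; the network architecture, input dimension, and action count are illustrative assumptions, not code from this repository:

```python
# Sketch of the Double DQN target computation (planned feature, not yet in QPlane).
# Network size, input dimension, and action count are illustrative placeholders.
import numpy as np
import tensorflow as tf

n_inputs, n_actions, gamma = 4, 4, 0.99

def build_net():
    return tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(n_inputs,)),
        tf.keras.layers.Dense(n_actions),
    ])

online_net, target_net = build_net(), build_net()
target_net.set_weights(online_net.get_weights())   # target net is periodically synced in practice

def double_dqn_targets(rewards, next_states, dones):
    """r + gamma * Q_target(s', argmax_a Q_online(s', a)) for non-terminal s'."""
    best_actions = np.argmax(online_net.predict(next_states), axis=1)  # online net selects
    next_q = target_net.predict(next_states)                           # target net evaluates
    chosen = next_q[np.arange(len(best_actions)), best_actions]
    return rewards + gamma * chosen * (1.0 - dones)
```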
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
- Fork the Project
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the Branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
Distributed under the MIT License. See misc/LICENSE for more information.
GitHub Pages: JDatPNW
Please cite QPlane if you use it in your research.
@inproceedings{richter2021qplane,
title={QPlane: An Open-Source Reinforcement Learning Toolkit for Autonomous Fixed Wing Aircraft Simulation},
author={Richter, David J and Calix, Ricardo A},
booktitle={Proceedings of the 12th ACM Multimedia Systems Conference},
pages={261--266},
year={2021}
}
or
Richter, D. J., & Calix, R. A. (2021, June). QPlane: An Open-Source Reinforcement Learning Toolkit for Autonomous Fixed Wing Aircraft Simulation. In Proceedings of the 12th ACM Multimedia Systems Conference (pp. 261-266).
- Readme Template
- Python Programming - DeepRL DQN
- Deeplizard - DeepRL DQN
- NeuralNetAI - DDQN (Video found on the linked YouTube, not on the site)
- Python Lessons - DeepRL PPO
- adderbyte
- XPlane Forum
- JSBSim