This repository contains the code for a series of posts that follow up on running neural networks on an Arduino and push the concept further: the neural network is first trained in a simulator, then loaded onto an Arduino, and finally the training is refined in vivo (my living room).
... to do ...
This project relies on the following resources:
- http://deeplizard.com/learn/video/nyjbcRQ-uQ8
- https://github.com/keon/deep-q-learning
- https://github.com/harvitronix/reinforcement-learning-car
- https://www.youtube.com/playlist?list=PL1P11yPQAo7pH9SWZtWdmmLumbp_r19Hs
- https://subscription.packtpub.com/book/big_data_and_business_intelligence/9781788834247
And uses:
- https://github.com/viblo/pymunk
- https://bitbucket.org/pyglet/pyglet/wiki/Home
- https://github.com/keras-rl/keras-rl
- https://github.com/openai/gym
A 3D-printed servo mount for the ultrasonic sensor, and the code to drive it.
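The sweep logic for the servo-mounted sensor can be sketched as follows (the real code runs on the Arduino in C; the triangular scan pattern, the `sweep_angles` name, and the angle range/step are illustrative assumptions):

```python
def sweep_angles(lo=0, hi=180, step=30):
    """One full back-and-forth servo sweep: lo..hi then back down,
    without repeating the endpoint angles."""
    forward = list(range(lo, hi + 1, step))
    return forward + forward[-2:0:-1]


# One scan cycle: take an ultrasonic reading at each of these angles.
print(sweep_angles())  # [0, 30, 60, 90, 120, 150, 180, 150, 120, 90, 60, 30]
```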
I tried to control the motor speed with a PID controller. However, the mechanics are too wobbly to produce any quality output.
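For reference, the control scheme that was attempted looks roughly like this: a minimal discrete PID sketch with made-up gains and a toy first-order motor model, not the actual Arduino code.

```python
class PID:
    """Minimal discrete PID controller (illustrative sketch only)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Drive a crude motor model toward a 100 ticks/s setpoint.
pid = PID(kp=0.8, ki=0.5, kd=0.05, dt=0.02)
speed = 0.0
for _ in range(1000):
    command = pid.update(setpoint=100.0, measured=speed)
    speed += 0.02 * (command - 0.1 * speed)  # toy first-order dynamics
```

On the real robot the loop would read `speed` from the wheel encoder and send `command` as a PWM duty cycle; with a wobbly drivetrain the encoder signal is too noisy for the derivative term to help much.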
Create a simulator in which a simulated robot moves around.
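A minimal sketch of such a simulator, assuming a differential-drive-style robot with a single forward distance sensor in a square arena (the project itself uses pymunk for the physics and pyglet for the rendering; none of that is shown here):

```python
import math


class RobotSim:
    """Toy kinematic simulator: a point robot in a square arena with a
    forward-facing distance sensor (illustrative sketch only)."""

    ARENA = 2.0  # arena is [0, ARENA] x [0, ARENA] metres

    def __init__(self):
        self.x, self.y, self.heading = 1.0, 1.0, 0.0  # start at the centre

    def step(self, v, omega, dt=0.1):
        """Advance the pose given linear speed v and turn rate omega."""
        self.heading += omega * dt
        self.x += v * math.cos(self.heading) * dt
        self.y += v * math.sin(self.heading) * dt

    def distance_to_wall(self):
        """Ray-cast from the robot along its heading to the nearest wall."""
        dx, dy = math.cos(self.heading), math.sin(self.heading)
        ts = []
        if dx > 1e-9:
            ts.append((self.ARENA - self.x) / dx)
        if dx < -1e-9:
            ts.append(-self.x / dx)
        if dy > 1e-9:
            ts.append((self.ARENA - self.y) / dy)
        if dy < -1e-9:
            ts.append(-self.y / dy)
        return min(t for t in ts if t >= 0)


sim = RobotSim()
print(sim.distance_to_wall())  # facing +x from the centre: 1.0 m to the wall
sim.step(v=0.5, omega=0.0)     # drive forward 5 cm
```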
Use the simulator and Keras-rl to train a neural network to drive the robot according to its environment.
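Keras-rl's `DQNAgent` takes care of the replay memory, target network, and exploration schedule; the underlying Q-learning update it applies can be illustrated with a tabular toy example (a 5-cell corridor instead of the robot environment, and a lookup table instead of the neural network):

```python
import random

# Tabular Q-learning on a corridor: start in cell 0, reward +1 for reaching
# cell 4. DQN applies the same Bellman update, but with a neural network
# approximating Q and minibatches sampled from a replay memory.
N, GOAL = 5, 4
Q = [[0.0, 0.0] for _ in range(N)]  # Q[state][action]; actions: 0=left, 1=right
alpha, gamma, eps = 0.5, 0.9, 0.2
random.seed(0)

for _ in range(300):  # training episodes
    s = 0
    while s != GOAL:
        # epsilon-greedy action selection
        a = random.randrange(2) if random.random() < eps else (0 if Q[s][0] > Q[s][1] else 1)
        s2 = max(0, s - 1) if a == 0 else min(N - 1, s + 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Bellman update: bootstrap from the best next-state value
        target = r if s2 == GOAL else r + gamma * max(Q[s2])
        Q[s][a] += alpha * (target - Q[s][a])
        s = s2

policy = [0 if q[0] > q[1] else 1 for q in Q[:GOAL]]
print(policy)  # learned greedy policy: [1, 1, 1, 1] (always go right)
```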
Run the training locally on the computer, but use the real robot, over Bluetooth, to take actions and observe the results.
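This can be sketched as a Gym-style environment whose `step()` talks to the robot instead of a simulator. Everything below is an assumption for illustration: the line protocol, the `RemoteRobotEnv`/`FakeLink` names, and the fake transport standing in for a real pyserial Bluetooth connection.

```python
class RemoteRobotEnv:
    """Gym-style wrapper: step() forwards the chosen action to the real robot
    over a Bluetooth serial link and parses the observation it sends back.
    The line protocol ("A<n>\\n" out, "dist,reward,done\\n" back) is a
    hypothetical example and would have to match the Arduino sketch."""

    def __init__(self, link):
        self.link = link  # anything with write(bytes) and readline() -> bytes

    def step(self, action):
        self.link.write(f"A{action}\n".encode())
        dist, reward, done = self.link.readline().decode().strip().split(",")
        return [float(dist)], float(reward), bool(int(done)), {}


class FakeLink:
    """Stand-in transport so the wrapper can be exercised without hardware."""

    def write(self, data):
        self.last = data

    def readline(self):
        return b"0.42,1.0,0\n"


env = RemoteRobotEnv(FakeLink())
obs, reward, done, info = env.step(2)  # obs=[0.42], reward=1.0, done=False
```

Because the wrapper exposes the same `step()` interface as the simulator, the agent trained in simulation can keep learning against it unchanged.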
- Write the README description and add GitHub tags to the project
- Add references to the source code used and to the repository license
- Organize folders (Python, Dataset, Tinn, Articles Steps, Pictures, etc.)
- Split the Python simulator from the DQN code (moved into a Gym env)
- Fine-tune hyperparameters
- Regularly save models
- Implement training refinement in vivo
- Write how to use it
- Improve simulator
- Improve the DQN algorithm (moved to Keras-rl and its built-in RL algorithms)
- Implement ReLU for Genann
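The last item above, adding ReLU to Genann, boils down to supplying the activation function and its derivative for backpropagation; Genann is written in C, but the two functions are tiny (sketched here in Python):

```python
def relu(x):
    """Rectified linear unit: pass positives through, clamp negatives to 0."""
    return x if x > 0.0 else 0.0


def relu_derivative(x):
    """Needed by backprop; the derivative at exactly 0 is conventionally 0."""
    return 1.0 if x > 0.0 else 0.0
```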