MDPs and POMDPs in Julia - An interface for defining, solving, and simulating fully and partially observable Markov decision processes on discrete and continuous spaces.
Updated Oct 6, 2024 - Julia
A JuMP extension for Stochastic Dual Dynamic Programming
Concise and friendly interfaces for defining MDP and POMDP models for use with POMDPs.jl solvers
Reinforcement Learning in Julia (Experimental)
Compressed belief-state MDPs in Julia compatible with POMDPs.jl
Grids, mountains, and mysterious problems. Solved with Partially Observable Markov Decision Processes. Created at Stanford University by Pablo Rodriguez Bertorello
Computes the value function for a shale driller's problem
A comparison of reinforcement learning approaches to the rock-paper-scissors game using different Markov decision processes.
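Several of the repositories above build on the POMDPs.jl interface for defining models. A minimal sketch of what that looks like, assuming the `QuickMDP` convenience constructor from QuickPOMDPs.jl and the `Deterministic` distribution from POMDPTools (the two-state toy problem itself is hypothetical, chosen only for illustration):

```julia
# Sketch: defining a tiny MDP via the QuickPOMDPs.jl convenience interface.
using POMDPs
using QuickPOMDPs: QuickMDP
using POMDPTools: Deterministic

# Hypothetical two-state problem: being in :right yields reward 1,
# :switch deterministically moves between the two states.
m = QuickMDP(
    states = [:left, :right],
    actions = [:stay, :switch],
    discount = 0.95,
    transition = (s, a) -> a == :switch ?
        Deterministic(s == :left ? :right : :left) :
        Deterministic(s),
    reward = (s, a) -> s == :right ? 1.0 : 0.0,
    initialstate = Deterministic(:left),
)
```

A model defined this way can then be handed to any compatible POMDPs.jl solver (for example, discrete value iteration) or simulated with the ecosystem's simulation tools.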