Generals-bots is a fast-paced strategy environment where players compete to conquer their opponents' generals on a 2D grid. While the goal is simple – capture the enemy general – the gameplay combines strategic depth with real-time pressure, challenging players to balance micro- and macro-level decision-making. The combination of these elements makes the game highly engaging and complex.
Highlights:
- ⚡ blazing-fast simulator: run thousands of steps per second with numpy-powered efficiency
- 🤝 seamless integration: fully compatible with the RL standards 🤸 Gymnasium and 🦁 PettingZoo
- 🔧 extensive customization: easily tailor environments to your specific needs
- 🚀 effortless deployment: launch your agents to generals.io
- 🔬 analysis tools: leverage features like replays for deeper insights
Note
This repository is based on the generals.io game (check it out, it's a lot of fun!). The one and only goal of this project is to provide a bot development platform, especially for Machine Learning-based agents.
You can install the latest stable version via pip for reliable performance:

```bash
pip install generals-bots
```

or clone the repo for the most up-to-date features:

```bash
git clone https://github.com/strakam/generals-bots
cd generals-bots
pip install -e .
```
Note
Under the hood, `make install` installs Poetry and then installs the package using Poetry.
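As a quick sanity check that the installation worked, you can try the imports used throughout the examples below (this snippet is just a sketch; it only verifies that the package and its bundled agents are importable):

```python
# Verify that the generals package and its bundled agents can be imported
import generals
from generals.agents import RandomAgent, ExpanderAgent

print("generals-bots is installed")
```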
Creating an agent is very simple. Start by subclassing the `Agent` class, just like `RandomAgent` or `ExpanderAgent`. You can specify your agent's `id` (name) and `color`; the only thing remaining is to implement the `act` function, whose signature is explained in the sections below.
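As a rough sketch, a custom agent might look like the following. The exact base-class constructor arguments and required methods are assumptions based on the description above, so check `RandomAgent` or `ExpanderAgent` for the authoritative interface:

```python
from generals.agents import Agent  # base-class import path is an assumption

class MyAgent(Agent):
    def __init__(self, id="MyAgent", color=(67, 99, 216)):
        super().__init__(id, color)  # id (name) and color, as described above

    def act(self, observation):
        # Return an action in the format described in the Actions section below;
        # this placeholder simply passes every turn.
        return [1, 0, 0, 0, 0]

    def reset(self):
        # Reset any per-game state; this agent keeps none.
        pass
```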
The example loop for running the game looks like this:

```python
import gymnasium as gym
from generals.agents import RandomAgent, ExpanderAgent

# Initialize agents
agent = RandomAgent()
npc = ExpanderAgent()

# Create environment
env = gym.make("gym-generals-v0", agent=agent, npc=npc, render_mode="human")

observation, info = env.reset()
terminated = truncated = False
while not (terminated or truncated):
    action = agent.act(observation)
    observation, reward, terminated, truncated, info = env.step(action)
    env.render()
```
Tip
Check out the Wiki for more commented examples to get a better idea of how to start 🤖.
Grids on which the game is played are generated via `GridFactory`. You can instantiate the class with the desired grid properties, and it will generate a grid with these properties for each run.
```python
import gymnasium as gym
from generals import GridFactory

grid_factory = GridFactory(
    min_grid_dims=(10, 10),   # Grid height and width are randomly selected
    max_grid_dims=(15, 15),
    mountain_density=0.2,     # Probability of a mountain in a cell
    city_density=0.05,        # Probability of a city in a cell
    general_positions=[(0, 3), (5, 7)],  # Positions of generals (i, j)
)

# Create environment
env = gym.make(
    "gym-generals-v0",
    grid_factory=grid_factory,
    ...
)
```
You can also specify grids manually, as a string passed via the `options` dict:

```python
import gymnasium as gym

env = gym.make("gym-generals-v0", ...)

grid = """
.3.#
#..A
#..#
.#.B
"""

options = {"grid": grid}

# Pass the new grid to the environment (for the next game)
env.reset(options=options)
```
Grids are created using a string format where:
- `.` represents passable terrain
- `#` indicates impassable mountains
- `A`, `B` mark the positions of generals
- numbers `0-9` and `x` (where `x=10`) represent cities; the number specifies the amount of neutral army in the city, calculated as `40 + number`. For example, the `3` in the grid above is a city holding 43 neutral units. The reason for `x=10` is that the official game has cities in the range `[40, 50]`.
We can store replays and then analyze them in an interactive fashion. The `Replay` class handles replay-related functionality.
```python
import gymnasium as gym

env = gym.make("gym-generals-v0", ...)
options = {"replay_file": "my_replay"}
env.reset(options=options)  # The next game will be encoded in my_replay.pkl
```

```python
from generals import Replay

# Initialize Replay instance
replay = Replay.load("my_replay")
replay.play()
```
You can control your replays to your liking! Currently, we support these controls:
- `q` – quit/close the replay
- `r` – restart the replay from the beginning
- `←`/`→` – increase/decrease the replay speed
- `h`/`l` – move backward/forward by one frame in the replay
- `spacebar` – toggle play/pause
- `mouse` click on a player's row – toggle the FoV (Field of View) of that player
Warning
We are using the `pickle` module, which is not safe! Only open replays you trust.
An agent's observation contains a broad swath of information about its position in the game. Values are either `numpy` matrices with shape `(N, M)` or `int` constants:
| Key | Shape | Description |
| --- | --- | --- |
| `armies` | `(N,M)` | Number of units in a visible cell regardless of the owner |
| `generals` | `(N,M)` | Mask indicating visible cells containing a general |
| `cities` | `(N,M)` | Mask indicating visible cells containing a city |
| `mountains` | `(N,M)` | Mask indicating visible cells containing mountains |
| `neutral_cells` | `(N,M)` | Mask indicating visible cells that are not owned by any agent |
| `owned_cells` | `(N,M)` | Mask indicating visible cells owned by the agent |
| `opponent_cells` | `(N,M)` | Mask indicating visible cells owned by the opponent |
| `fog_cells` | `(N,M)` | Mask indicating fog cells that are not mountains or cities |
| `structures_in_fog` | `(N,M)` | Mask showing cells containing either cities or mountains in fog |
| `owned_land_count` | – | Number of cells the agent owns |
| `owned_army_count` | – | Total number of units owned by the agent |
| `opponent_land_count` | – | Number of cells owned by the opponent |
| `opponent_army_count` | – | Total number of units owned by the opponent |
| `timestep` | – | Current timestep of the game |
| `priority` | – | 1 if your move is evaluated first, 0 otherwise |
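As a rough sketch, an agent could inspect a few of these values like this; note that treating the observation as a plain dict keyed by the names above is an assumption about the exact data structure:

```python
import numpy as np

def summarize(observation):
    # Count the agent's currently visible owned cells
    owned_visible = int(np.sum(observation["owned_cells"]))
    # Total units sitting on visible cells, regardless of owner
    visible_army = int(np.sum(observation["armies"]))
    return owned_visible, visible_army, observation["timestep"]
```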
Actions are lists of 5 values `[pass, cell_i, cell_j, direction, split]`, where:
- `pass` indicates whether you want to `1 (pass)` or `0 (play)`
- `cell_i` is the `i` index of the source cell (height)
- `cell_j` is the `j` index of the source cell (width)
- `direction` indicates whether you want to move `0 (up)`, `1 (down)`, `2 (left)`, or `3 (right)`
- `split` indicates whether you want to `1 (split)` units and send only half, or `0 (no split)` and send all units to the next cell
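For illustration, an action that plays a move from cell `(3, 4)` one cell to the right with all units (no pass, no split) would be constructed as:

```python
# [pass, cell_i, cell_j, direction, split]
action = [0, 3, 4, 3, 0]  # play, source cell (3, 4), direction 3 = right, send everything
```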
A convenience function `compute_valid_action_mask` is also provided for detailing the set of legal moves an agent can make based on its observation. The `valid_action_mask` is a 3D array with shape `(N, M, 4)`, where each element corresponds to whether a move is valid from cell `[i, j]` in one of four directions: `0 (up)`, `1 (down)`, `2 (left)`, or `3 (right)`.
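A hedged sketch of how the mask might be used to sample a legal move, assuming `observation` comes from `env.reset()` or `env.step()` as in the loop above; the import path for `compute_valid_action_mask` is an assumption, so check the package for its actual location:

```python
import numpy as np
from generals import compute_valid_action_mask  # import path is an assumption

mask = compute_valid_action_mask(observation)  # boolean array of shape (N, M, 4)
valid = np.argwhere(mask)                      # each row is [i, j, direction]
i, j, direction = valid[np.random.randint(len(valid))]
action = [0, int(i), int(j), int(direction), 0]  # play this move, no split
```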
Tip
You can see what actions and observations look like by printing a sample from the environment:

```python
print(env.observation_space.sample())
print(env.action_space.sample())
```
It is possible to implement a custom reward function. The default reward is awarded only at the end of a game: `1` for the winner, `-1` for the loser, and `0` otherwise.
```python
def custom_reward_fn(observation, action, done, info):
    # Give the agent a reward based on the number of cells it owns
    return observation["owned_land_count"]

env = gym.make(..., reward_fn=custom_reward_fn)
observations, info = env.reset()
```
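Another sketch of a shaped reward, using only keys from the observation table above (whether these keys are available inside the reward function's observation is an assumption):

```python
def land_advantage_reward_fn(observation, action, done, info):
    # Reward the difference between the agent's land and the opponent's land
    return observation["owned_land_count"] - observation["opponent_land_count"]

env = gym.make(..., reward_fn=land_advantage_reward_fn)
```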
Complementary to local development, it is possible to run agents online against other agents and players. We use `socketio` for communication, and you can either use our `autopilot` to run an agent in a specified lobby indefinitely, or create your own connection workflow. Our implementations expect that your agent inherits from the `Agent` class and implements the required methods.
```python
from generals.remote import autopilot
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--user_id", type=str, default=...)  # Register yourself at generals.io and use this id
parser.add_argument("--lobby_id", type=str, default=...)  # The last part of the lobby url
parser.add_argument("--agent_id", type=str, default="Expander")  # agent_id should be "registered" in AgentFactory

if __name__ == "__main__":
    args = parser.parse_args()
    autopilot(args.agent_id, args.user_id, args.lobby_id)
```
This script will run the `ExpanderAgent` in the specified lobby.
You can contribute to this project in multiple ways:
- 🤖 If you implement ANY non-trivial agent, send it to us! We will publish it so others can play against it.
- 💡 If you have an idea on how to improve the game, submit an issue or create a PR; we are happy to improve! We also have some ideas (see the issues), so you can see what we plan to work on.
Tip
Check out the Wiki to learn in more detail how to contribute.