
Creating your first agent


🤖 Agent Implementation

Creating your first simple agent is easy!

We recommend subclassing the Agent class and implementing its act method; we will do the rest for you! 🤗

You can also specify your own id, which serves two purposes:

  1. our simulator calls for agent actions by id
  2. it is your display name that others see (choose wisely)

The act signature is

def act(self, observation: Observation) -> Action:

and you can read about the Observation and Action types in the README.md.

Both are of type dict, but you can design your own formats and submit them to us so we can provide them too!
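
Putting this together, a minimal subclass could look like the following sketch. Treat the details as assumptions: we assume the base Agent constructor accepts an id keyword, and we leave the body of act as a stub because the exact Action format lives in the README.md.

from generals.agents import Agent

class MyAgent(Agent):
    def __init__(self):
        # The id is how the simulator addresses you and also your display name
        super().__init__(id="MyAgent")  # assumption: the base constructor takes id

    def act(self, observation):
        # Both the observation and the returned action are dicts;
        # see README.md for their exact formats and build your action here
        ...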

Note

Part of the observation is an action_mask, which is particularly useful for ML algorithms.

It is a 3D array of shape (height, width, 4): each cell [i, j] holds four binary values indicating the available directions (up, down, left, right).

If, for example, obs["action_mask"][3, 7, 2] == 1, it means that the cell [3, 7] holds at least 2 army and you can move it to the left (direction index 2), i.e., there is no mountain in the way.
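
As a quick sketch of how you might consume the mask (assuming only the layout described above, and that at least one valid move exists):

import numpy as np

mask = observation["action_mask"]  # binary array of shape (height, width, 4)
valid = np.argwhere(mask == 1)     # each row is a valid (i, j, direction) triple
i, j, direction = valid[np.random.randint(len(valid))]  # e.g. pick a random valid move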

Tip

For example implementations, check our current agents, RandomAgent and ExpanderAgent.

🏃 Running your agent

Our simulator is compliant with two Reinforcement Learning (RL) API standards: 🤸Gymnasium and 🦁PettingZoo.

They specify how environments should be organized and how they should run. Complying with such standards ensures

  • experiment reproducibility,
  • developer familiarity, and most importantly,
  • the ability to use RL frameworks such as Stable-Baselines3 or RLlib for easy out-of-the-box access to RL algorithms.

You can choose either of them based on your needs; an explanation of each follows.

🤸 Gymnasium

Gymnasium is intended for single-agent environments. In our case this means that

  1. you can play only against agents that are in this repository, and
  2. you submit actions for only one agent.

This is useful if you want to beat one specific agent or want to control only a single agent.

Code example

import gymnasium as gym
from generals.agents import RandomAgent, ExpanderAgent # also import your agent

# Initialize agents
agent = ... # instantiate your agent here (ideally subclassed from the Agent class)
npc = ExpanderAgent() # initialize NPC you wish to play against

# Create environment with default setting and tell it who will play
# For RL training, we recommend render_mode=None to run much faster
env = gym.make("gym-generals-v0", agent=agent, npc=npc, render_mode="human")

observation, info = env.reset() # reset basically creates a new game
terminated = truncated = False
while not (terminated or truncated):
    action = agent.act(observation) # your agent receives its observation and replies with an action
    observation, reward, terminated, truncated, info = env.step(action) # run simulator for one step
    env.render() # show what happened

If you want custom map properties, or even a specific map, you can set that up easily! Check our README.md for more detail.
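
For illustration, a custom map setup might look like the sketch below. GridFactory and its parameter names follow the README.md at the time of writing; treat them as assumptions and defer to the README if they differ (agent and npc are reused from the example above).

from generals import GridFactory  # assumption: import path per README.md

grid_factory = GridFactory(
    grid_dims=(10, 10),                  # (height, width) of the map
    mountain_density=0.2,                # probability that a cell is a mountain
    city_density=0.05,                   # probability that a cell is a city
    general_positions=[(1, 2), (7, 8)],  # starting cells of the generals
)

env = gym.make("gym-generals-v0", grid_factory=grid_factory, agent=agent, npc=npc, render_mode="human")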

🦁 PettingZoo

PettingZoo is intended for multi-agent play. In our case this means that

  • you control all agents at once, and
  • you get an easy setup for self-play.

Code example

from generals.agents import RandomAgent, ExpanderAgent
from generals.envs import PettingZooGenerals

# Initialize agents
random = RandomAgent()
expander = ExpanderAgent()

# Store agents in a dictionary - they are called by id, which will come in handy
agents = {
    random.id: random,
    expander.id: expander,
}

# Create environment
env = PettingZooGenerals(agents=agents, render_mode="human")
observations, info = env.reset()

done = False
while not done:
    actions = {}
    for agent in env.agents: # go over agent ids
        # Ask each agent for its action
        actions[agent] = agents[agent].act(observations[agent])
    # All agents perform their actions
    observations, rewards, terminated, truncated, info = env.step(actions)
    done = any(terminated.values()) or any(truncated.values())
    env.render()

Tip

Check here for more examples 🚀!