🪡 Orra (✨Alpha✨)

Move beyond simple Crews and Agents. Use Orra to build production-ready multi-agent applications that handle complex real-world interactions.

Orra coordinates tasks across your existing stack, with agents and any tools running as services, using intelligent reasoning - in any language, agent framework or deployment platform.

  • 🧠 Smart pre-evaluated execution plans
  • 🎯 Domain grounded
  • 🗿 Durable execution
  • 🚀 Go fast with tools as services
  • ↩️ Revert state to handle failures
  • ⛑️ Automatic service health monitoring
  • 🔮 Real-time status tracking
  • 🪝 Webhook result delivery

Read the launch blog post.

Coming Soon

  • Agent replay and multi-LLM consensus planning
  • Continuous adjustment of Agent workflows during runtime
  • Additional language SDKs - Ruby, .NET and Go coming very soon!

Installation

Prerequisites

  • Docker and Docker Compose - For running the control plane server (powers the Plan Engine)
  • Set up Reasoning and Embedding Models to power task planning and execution plan caching/validation

Setup Reasoning Models

Choose between Groq's deepseek-r1-distill-llama-70b model and OpenAI's o1-mini / o3-mini models.

Update the .env file with one of these:

Groq

# GROQ Reasoning
REASONING_PROVIDER=groq
REASONING_MODEL=deepseek-r1-distill-llama-70b
REASONING_API_KEY=xxxx

O1-mini

# OpenAI Reasoning
REASONING_PROVIDER=openai
REASONING_MODEL=o1-mini
REASONING_API_KEY=xxxx

O3-mini

# OpenAI Reasoning
REASONING_PROVIDER=openai
REASONING_MODEL=o3-mini
REASONING_API_KEY=xxxx

Setup Embedding Models

Update the .env file with:

# OpenAI API key for execution plan caching and validation
PLAN_CACHE_OPENAI_API_KEY=xxxx
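The embedding model lets the Plan Engine reuse previously validated execution plans for semantically similar actions. As an illustrative sketch only (not Orra's actual implementation - the function names, cache shape and threshold here are all hypothetical), a cache lookup by cosine similarity over action embeddings could look like this:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def lookup_cached_plan(query_embedding, cache, threshold=0.95):
    """Return the cached plan whose action embedding is most similar to
    the query, if it clears the threshold; otherwise None (cache miss)."""
    best_plan, best_score = None, threshold
    for cached_embedding, plan in cache:
        score = cosine_similarity(query_embedding, cached_embedding)
        if score >= best_score:
            best_plan, best_score = plan, score
    return best_plan

# Toy 3-dimensional vectors standing in for real embedding-model output
cache = [([1.0, 0.0, 0.2], {"steps": ["research", "summarize"]})]
print(lookup_cached_plan([0.98, 0.05, 0.21], cache))  # near-duplicate action: cache hit
print(lookup_cached_plan([0.0, 1.0, 0.0], cache))     # unrelated action: cache miss
```

A near-duplicate request returns the cached plan and skips re-planning; anything below the threshold falls through to the reasoning model.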

1. Install Orra CLI

Download the latest CLI binary for your platform from our releases page:

# macOS
curl -L https://github.com/ezodude/orra/releases/download/v0.2.1/orra-macos -o /usr/local/bin/orra
chmod +x /usr/local/bin/orra

# Linux
curl -L https://github.com/ezodude/orra/releases/download/v0.2.1/orra-linux -o /usr/local/bin/orra
chmod +x /usr/local/bin/orra

# Verify installation
orra version

→ Full CLI documentation

2. Get Orra Running

Clone the repository and start the control plane:

git clone https://github.com/ezodude/orra.git
cd orra/controlplane

# Start the control plane
docker compose up --build

How The Plan Engine Works

The Plan Engine powers your multi-agent applications through intelligent planning and reliable execution:

Progressive Planning Levels

1. Base Planning

Your agents stay clean and simple (wrapped in the Orra SDK):

Python

from orra import OrraAgent, Task
from pydantic import BaseModel

class ResearchInput(BaseModel):
    topic: str
    depth: str

class ResearchOutput(BaseModel):
    summary: str

agent = OrraAgent(
    name="research-agent",
    description="Researches topics using web search and knowledge base",
    url="https://api.orra.dev",
    api_key="sk-orra-..."
)

@agent.handler()
async def research(task: Task[ResearchInput]) -> ResearchOutput:
    results = await run_research(task.input.topic, task.input.depth)
    return ResearchOutput(summary=results.summary)

JavaScript

import { initAgent } from '@orra.dev/sdk';

const agent = initAgent({
  name: 'research-agent',
  orraUrl: process.env.ORRA_URL,  
  orraKey: process.env.ORRA_API_KEY
});

await agent.register({
  description: 'Researches topics using web search and knowledge base',
  schema: {
    input: {
      type: 'object',
      properties: {
        topic: { type: 'string' },
        depth: { type: 'string' }
      }
    },
    output: {
      type: 'object',
      properties: {
        summary: { type: 'string' }
      }
    }
  }
});

agent.start(async (task) => {
  const results = await runResearch(task.input.topic, task.input.depth);
  return { summary: results.summary };
});

Features:

  • AI analyzes intent and creates execution plans that target your components
  • Automatic service discovery and coordination
  • Parallel execution where possible
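The parallel-execution idea can be sketched with plain asyncio: steps in the same stage have no dependencies on each other, so they can be dispatched concurrently, while stages run in order. This is a conceptual illustration, not the Plan Engine's internals; `run_step` is a stand-in for a real service call:

```python
import asyncio

async def run_step(name: str) -> str:
    """Stand-in for dispatching one execution-plan step to a service."""
    await asyncio.sleep(0.01)  # simulate network latency
    return f"{name}: done"

async def run_plan(plan):
    """Run independent steps of each stage concurrently, stages in order."""
    results = []
    for stage in plan:  # each stage is a list of mutually independent steps
        results.extend(await asyncio.gather(*(run_step(s) for s in stage)))
    return results

# "web-search" and "kb-lookup" run in parallel; "synthesize" waits for both
plan = [["web-search", "kb-lookup"], ["synthesize"]]
print(asyncio.run(run_plan(plan)))
```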

2. Production Planning with Domain Grounding

# Define domain constraints
name: research-workflow
domain: content-generation
use-cases:
  - action: "Research topic {topic}"
    capabilities: 
      - "Web search access"
      - "Knowledge synthesis"
constraints:
  - "Verify sources before synthesis"
  - "Maximum research time: 10 minutes"

Features:

  • Full semantic validation of execution plans
  • Capability matching and verification
  • Safety constraints enforcement
  • State transition validation
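Capability matching can be pictured as a set check: every capability a use-case step declares must be offered by a registered service. This is a minimal, hypothetical sketch - the Plan Engine's real validation is semantic rather than exact string matching:

```python
def validate_step(step_capabilities, service_capabilities):
    """Return the capabilities the step requires but the service lacks.
    An empty set means the service can satisfy the step."""
    return set(step_capabilities) - set(service_capabilities)

step = {"action": "Research topic {topic}",
        "capabilities": ["Web search access", "Knowledge synthesis"]}
service = {"name": "research-agent",
           "capabilities": ["Web search access", "Knowledge synthesis"]}

missing = validate_step(step["capabilities"], service["capabilities"])
print("valid" if not missing else f"missing: {missing}")  # prints "valid"
```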

3. Reliable Execution

# Execute an action with the Plan Engine
orra verify run "Research and summarize AI trends" \
  --data topic:"AI in 2024" \
  --data depth:"comprehensive"

The Plan Engine ensures:

  • Automatic service health monitoring
  • Stateful execution tracking
  • Built-in retries and recovery
  • Real-time status updates
  • Webhook result delivery
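Built-in retries and recovery can be illustrated with a generic exponential-backoff loop. This is a conceptual sketch, not the Plan Engine's actual retry policy, and `flaky_service_call` is a hypothetical stand-in for a task hitting a transiently unhealthy service:

```python
import time

def with_retries(operation, max_attempts=3, base_delay=0.01):
    """Retry a flaky operation with exponential backoff, re-raising the
    last error once attempts are exhausted."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

calls = {"n": 0}
def flaky_service_call():
    """Fails twice, then recovers - mimicking a transient outage."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("service unavailable")
    return "ok"

print(with_retries(flaky_service_call))  # succeeds on the third attempt
```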

Explore Examples

Docs and Guides

Self Hosting

  1. Storage: We use BadgerDB to persist all state
  2. Deployment: Single-instance only, designed for development and self-hosted deployments

Join Our Alpha Testing Community

We're looking for developers who:

  • Are building multi-agent applications
  • Want to help shape Orra's development
  • Are comfortable working with Alpha software
  • Can provide feedback on real-world use cases

Connect With Us:

  • GitHub Discussions - Share your experience and ideas
  • Office Hours - Weekly calls with the team

License

Orra is MPL-2.0 licensed.