Move beyond simple Crews and Agents. Use Orra to build production-ready multi-agent applications that handle complex real-world interactions.
Orra coordinates tasks across your existing stack: agents and any tools run as services, orchestrated with intelligent reasoning across any language, agent framework, or deployment platform.
- Smart pre-evaluated execution plans
- Domain grounded
- Durable execution
- Go fast with tools as services
- Revert state to handle failures
- Automatic service health monitoring
- Real-time status tracking
- Webhook result delivery
Read the launch blog post.
- Agent replay and multi-LLM consensus planning
- Continuous adjustment of Agent workflows at runtime
- Additional language SDKs - Ruby, .NET and Go coming soon!
- Docker and Docker Compose - For running the control plane server (powers the Plan Engine)
- Set up Reasoning and Embedding Models to power task planning and execution plan caching/validation
Choose between Groq's deepseek-r1-distill-llama-70b model and OpenAI's o1-mini / o3-mini models.
Update the .env file with one of these:
Groq:

```
# GROQ Reasoning
REASONING_PROVIDER=groq
REASONING_MODEL=deepseek-r1-distill-llama-70b
REASONING_API_KEY=xxxx
```

O1-mini:

```
# OpenAI Reasoning
REASONING_PROVIDER=openai
REASONING_MODEL=o1-mini
REASONING_API_KEY=xxxx
```

O3-mini:

```
# OpenAI Reasoning
REASONING_PROVIDER=openai
REASONING_MODEL=o3-mini
REASONING_API_KEY=xxxx
```
Update the .env file with:
```
# Execution Plan Cache and validation OPENAI API KEY
PLAN_CACHE_OPENAI_API_KEY=xxxx
```
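The key above powers embedding-based caching and validation of execution plans. As a rough illustration of how semantic plan caching can work (the vectors, threshold, and cache structure here are assumptions for the sketch, not Orra's internal implementation), an incoming action can be embedded and a previously validated plan reused when cosine similarity clears a threshold:

```python
import math

# Illustrative sketch of embedding-based plan caching; the embeddings,
# threshold, and cache structure are assumptions, not Orra internals.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def lookup_plan(action_embedding, cache, threshold=0.9):
    """Return the cached plan with the most similar action embedding, if close enough."""
    best_plan, best_score = None, 0.0
    for cached_embedding, plan in cache:
        score = cosine_similarity(action_embedding, cached_embedding)
        if score > best_score:
            best_plan, best_score = plan, score
    return best_plan if best_score >= threshold else None

# Toy embeddings: in practice these would come from an OpenAI embedding model.
cache = [([0.9, 0.1, 0.0], {"plan": "research-then-summarize"})]
print(lookup_plan([0.89, 0.12, 0.01], cache))  # similar action: cache hit
print(lookup_plan([0.0, 0.0, 1.0], cache))     # unrelated action: no hit, returns None
```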
Download the latest CLI binary for your platform from our releases page:
```shell
# macOS
curl -L https://github.com/ezodude/orra/releases/download/v0.2.1/orra-macos -o /usr/local/bin/orra
chmod +x /usr/local/bin/orra

# Linux
curl -L https://github.com/ezodude/orra/releases/download/v0.2.1/orra-linux -o /usr/local/bin/orra
chmod +x /usr/local/bin/orra

# Verify installation
orra version
```
Clone the repository and start the control plane:
```shell
git clone https://github.com/ezodude/orra.git
cd orra/controlplane

# Start the control plane
docker compose up --build
```
The Plan Engine powers your multi-agent applications through intelligent planning and reliable execution:
Your agents stay clean and simple (wrapped in the Orra SDK):
Python

```python
from orra import OrraAgent, Task
from pydantic import BaseModel

class ResearchInput(BaseModel):
    topic: str
    depth: str

class ResearchOutput(BaseModel):
    summary: str

agent = OrraAgent(
    name="research-agent",
    description="Researches topics using web search and knowledge base",
    url="https://api.orra.dev",
    api_key="sk-orra-..."
)

@agent.handler()
async def research(task: Task[ResearchInput]) -> ResearchOutput:
    results = await run_research(task.input.topic, task.input.depth)
    return ResearchOutput(summary=results.summary)
```
JavaScript

```javascript
import { initAgent } from '@orra.dev/sdk';

const agent = initAgent({
  name: 'research-agent',
  orraUrl: process.env.ORRA_URL,
  orraKey: process.env.ORRA_API_KEY
});

await agent.register({
  description: 'Researches topics using web search and knowledge base',
  schema: {
    input: {
      type: 'object',
      properties: {
        topic: { type: 'string' },
        depth: { type: 'string' }
      }
    },
    output: {
      type: 'object',
      properties: {
        summary: { type: 'string' }
      }
    }
  }
});

agent.start(async (task) => {
  const results = await runResearch(task.input.topic, task.input.depth);
  return { summary: results.summary };
});
```
Features:
- AI analyzes intent and creates execution plans that target your components
- Automatic service discovery and coordination
- Parallel execution where possible
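"Parallel execution where possible" means independent steps in a plan can run concurrently while dependent steps wait. A minimal sketch of that scheduling idea, assuming a hypothetical plan shape (a task-to-dependencies map, not Orra's internal plan format):

```python
# Illustrative sketch: group plan tasks into waves where every task in a wave
# has all its dependencies satisfied by earlier waves, so a wave can run in
# parallel. The plan shape is a hypothetical example, not Orra's format.

def parallel_waves(tasks):
    """Return lists of task names; tasks within the same list can run concurrently."""
    done, waves = set(), []
    remaining = dict(tasks)
    while remaining:
        wave = [t for t, deps in remaining.items() if set(deps) <= done]
        if not wave:
            raise ValueError("cyclic dependencies in plan")
        waves.append(sorted(wave))
        done.update(wave)
        for t in wave:
            del remaining[t]
    return waves

# Hypothetical plan: research and an inventory check are independent,
# summarize needs research, notify needs both.
plan = {
    "research": [],
    "check-inventory": [],
    "summarize": ["research"],
    "notify": ["check-inventory", "summarize"],
}
print(parallel_waves(plan))
# → [['check-inventory', 'research'], ['summarize'], ['notify']]
```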
```yaml
# Define domain constraints
name: research-workflow
domain: content-generation
use-cases:
  - action: "Research topic {topic}"
capabilities:
  - "Web search access"
  - "Knowledge synthesis"
constraints:
  - "Verify sources before synthesis"
  - "Maximum research time: 10 minutes"
```
Features:
- Full semantic validation of execution plans
- Capability matching and verification
- Safety constraints enforcement
- State transition validation
```shell
# Execute an action with the Plan Engine
orra verify run "Research and summarize AI trends" \
  --data topic:"AI in 2024" \
  --data depth:"comprehensive"
```
The Plan Engine ensures:
- Automatic service health monitoring
- Stateful execution tracking
- Built-in retries and recovery
- Real-time status updates
- Webhook result delivery
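Webhook result delivery means any HTTP endpoint you control can consume orchestration results. A minimal sketch of parsing such a delivery, assuming a hypothetical payload shape (not Orra's documented webhook schema):

```python
import json

# Minimal sketch of handling a webhook result delivery; the payload shape
# below is a hypothetical example, not Orra's documented webhook schema.

def summarize_delivery(raw_body: str) -> str:
    """Parse a delivery body and return a short status line."""
    payload = json.loads(raw_body)
    status = payload.get("status", "unknown")
    results = payload.get("results", {})
    return f"action {payload.get('id', '?')}: status={status}, {len(results)} result field(s)"

# Simulate a delivery with a made-up payload.
body = json.dumps({
    "id": "o_123",
    "status": "completed",
    "results": {"summary": "AI trends in 2024 ..."},
})
print(summarize_delivery(body))  # → action o_123: status=completed, 1 result field(s)
```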
- E-commerce AI Assistant (JavaScript) - E-commerce customer service with a delivery-specialized agent
- Ghostwriters (Python) - Content generation example showcasing how to use Orra with CrewAI
- Echo Service (JavaScript) - Simple example showing core concepts using JS
- Echo Service (Python) - Simple example showing core concepts using Python
- Rapid Multi-Agent App Development with Orra
- What is an Agent in Orra?
- Orchestrating Actions with Orra
- Domain Grounding Execution
- Core Topics & Internals
- Storage: We use BadgerDB to persist all state
- Deployment: Single-instance only, designed for development and self-hosted deployments
We're looking for developers who:
- Are building multi-agent applications
- Want to help shape Orra's development
- Are comfortable working with Alpha software
- Can provide feedback on real-world use cases
Connect With Us:
- GitHub Discussions - Share your experience and ideas
- Office Hours - Weekly calls with the team
Orra is MPL-2.0 licensed.