
πŸ¦‰ OWL: Optimized Workforce Learning for General Multi-Agent Assistance in Real-World Task Automation


πŸ† OWL achieves 58.18 average score on GAIA benchmark and ranks πŸ…οΈ #1 among open-source frameworks! πŸ†

πŸ¦‰ OWL is a cutting-edge framework for multi-agent collaboration that pushes the boundaries of task automation, built on top of the CAMEL-AI Framework.

Our vision is to revolutionize how AI agents collaborate to solve real-world tasks. By leveraging dynamic agent interactions, OWL enables more natural, efficient, and robust task automation across diverse domains.


πŸ”₯ News

🌟🌟🌟 COMMUNITY CALL FOR USE CASES! 🌟🌟🌟

We're inviting the community to contribute innovative use cases for OWL!
The top ten submissions will receive special community gifts and recognition.

Learn More & Submit

Submission deadline: March 31, 2025

  • [2025.03.12]: Launched our Community Call for Use Cases initiative! See the highlighted announcement above.
  • [2025.03.11]: We added MCPToolkit, FileWriteToolkit, and TerminalToolkit to enhance OWL agents with MCP tool calling, file writing capabilities, and terminal command execution.
  • [2025.03.09]: We added a web-based user interface that makes it easier to interact with the system.
  • [2025.03.07]: We open-sourced the codebase of the πŸ¦‰ OWL project.
  • [2025.03.03]: OWL achieved the #1 position among open-source frameworks on the GAIA benchmark with a score of 58.18.

🎬 Demo Video

Demo video: OWL.main.mp4

✨️ Core Features

  • Real-time Information Retrieval: Leverage Wikipedia, Google Search, and other online sources for up-to-date information.
  • Multimodal Processing: Support for handling internet or local videos, images, and audio data.
  • Browser Automation: Utilize the Playwright framework for simulating browser interactions, including scrolling, clicking, input handling, downloading, navigation, and more.
  • Document Parsing: Extract content from Word, Excel, PDF, and PowerPoint files, converting them into text or Markdown format.
  • Code Execution: Write and execute Python code using a built-in interpreter.
  • Built-in Toolkits: Access to a comprehensive set of built-in toolkits including ArxivToolkit, AudioAnalysisToolkit, CodeExecutionToolkit, DalleToolkit, DataCommonsToolkit, ExcelToolkit, GitHubToolkit, GoogleMapsToolkit, GoogleScholarToolkit, ImageAnalysisToolkit, MathToolkit, NetworkXToolkit, NotionToolkit, OpenAPIToolkit, RedditToolkit, SearchToolkit, SemanticScholarToolkit, SymPyToolkit, VideoAnalysisToolkit, WeatherToolkit, WebToolkit, and many more for specialized tasks (see the minimal usage sketch after this list).
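
Each toolkit exposes its functions both as directly callable methods and as a tools list via get_tools(); these are the same entry points used in the toolkit configuration examples later in this README. A minimal usage sketch (the Wikipedia query string is purely illustrative):

# Minimal sketch: using a built-in toolkit directly.
# search_wiki and get_tools are the same entry points used in the
# toolkit configuration examples below; the query string is illustrative.
from camel.toolkits import SearchToolkit

toolkit = SearchToolkit()

# Call a single tool function directly
print(toolkit.search_wiki("CAMEL-AI"))

# Or collect all of the toolkit's functions to pass to an agent
tools = toolkit.get_tools()
print(f"{len(tools)} search tools available")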

πŸ› οΈ Installation

OWL supports multiple installation methods to fit your workflow preferences. Choose the option that works best for you.

Option 1: Using uv (Recommended)

# Clone github repo
git clone https://github.com/camel-ai/owl.git

# Change directory into project directory
cd owl

# Install uv if you don't have it already
pip install uv

# Create a virtual environment and install dependencies
# We support using Python 3.10, 3.11, 3.12
uv venv .venv --python=3.10

# Activate the virtual environment
# For macOS/Linux
source .venv/bin/activate
# For Windows
.venv\Scripts\activate

# Install OWL and all its dependencies (including CAMEL)
uv pip install -e .

# Exit the virtual environment when done
deactivate

Option 2: Using venv and pip

# Clone github repo
git clone https://github.com/camel-ai/owl.git

# Change directory into project directory
cd owl

# Create a virtual environment
# For Python 3.10 (also works with 3.11, 3.12)
python3.10 -m venv .venv

# Activate the virtual environment
# For macOS/Linux
source .venv/bin/activate
# For Windows
.venv\Scripts\activate

# Install from requirements.txt
pip install -r requirements.txt

Option 3: Using conda

# Clone github repo
git clone https://github.com/camel-ai/owl.git

# Change directory into project directory
cd owl

# Create a conda environment
conda create -n owl python=3.10

# Activate the conda environment
conda activate owl

# Option 1: Install as a package (recommended)
pip install -e .

# Option 2: Install from requirements.txt
pip install -r requirements.txt

# Exit the conda environment when done
conda deactivate

Setup Environment Variables

OWL requires various API keys to interact with different services. The owl/.env_template file contains placeholders for all necessary API keys along with links to the services where you can register for them.

Option 1: Using a .env File (Recommended)

  1. Copy and Rename the Template:

    cd owl
    cp .env_template .env
  2. Configure Your API Keys: Open the .env file in your preferred text editor and insert your API keys in the corresponding fields.

    Note: For the minimal example (run_mini.py), you only need to configure the LLM API key (e.g., OPENAI_API_KEY).
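
For example, a minimal .env for the OpenAI-only setup could look like the sketch below (the key value is a placeholder; consult owl/.env_template for the authoritative list of supported variables):

# Minimal .env sketch for run_mini.py -- only the LLM key is required.
# The value is a placeholder; paste your real key.
OPENAI_API_KEY="your-openai-api-key-here"
# Other services are configured the same way; see owl/.env_template
# for the full set of supported variables.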

Option 2: Setting Environment Variables Directly

Alternatively, you can set environment variables directly in your terminal:

  • macOS/Linux (Bash/Zsh):

    export OPENAI_API_KEY="your-openai-api-key-here"
  • Windows (Command Prompt):

    set OPENAI_API_KEY=your-openai-api-key-here
  • Windows (PowerShell):

    $env:OPENAI_API_KEY = "your-openai-api-key-here"

Note: Environment variables set directly in the terminal will only persist for the current session.
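
If you want to load the .env file explicitly in your own script, here is a minimal sketch using the python-dotenv package (assumed to be available in your environment; the path is illustrative):

# Sketch: loading API keys from the .env file in your own script.
# Assumes the python-dotenv package is installed; the path is illustrative.
import os
from dotenv import load_dotenv

load_dotenv(dotenv_path="owl/.env")  # adjust to wherever your .env lives
assert os.getenv("OPENAI_API_KEY"), "OPENAI_API_KEY is not set"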

Running with Docker

# Clone the repository
git clone https://github.com/camel-ai/owl.git
cd owl

# Configure environment variables
cp owl/.env_template owl/.env
# Edit the .env file and fill in your API keys


# Option 1: Using docker-compose directly
cd .container
docker-compose up -d
# Run OWL inside the container
docker-compose exec owl bash -c "xvfb-python run.py"

# Option 2: Build and run using the provided scripts
cd .container
chmod +x build_docker.sh
./build_docker.sh
# Run OWL inside the container
./run_in_docker.sh "your question"

For more detailed Docker usage instructions, including cross-platform support, optimized configurations, and troubleshooting, please refer to DOCKER_README.md.

πŸš€ Quick Start

After installation and setting up your environment variables, you can start using OWL right away:

python owl/run.py

Running with Different Models

Model Requirements

  • Tool Calling: OWL requires models with robust tool calling capabilities to interact with various toolkits. Models must be able to understand tool descriptions, generate appropriate tool calls, and process tool outputs.

  • Multimodal Understanding: For tasks involving web interaction, image analysis, or video processing, models with multimodal capabilities are required to interpret visual content and context.

Supported Models

For information on configuring AI models, please refer to our CAMEL models documentation.

Note: For optimal performance, we strongly recommend using OpenAI models (GPT-4 or later versions). Our experiments show that other models may result in significantly lower performance on complex tasks and benchmarks, especially those requiring advanced multi-modal understanding and tool use.
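
As a rough illustration of what model configuration looks like with CAMEL's ModelFactory (the platform and model enum values shown are examples; check the CAMEL models documentation for the options your version supports):

# Sketch: creating a model backend with CAMEL's ModelFactory.
# The enum values are illustrative; see the CAMEL models documentation.
from camel.models import ModelFactory
from camel.types import ModelPlatformType, ModelType

model = ModelFactory.create(
    model_platform=ModelPlatformType.OPENAI,
    model_type=ModelType.GPT_4O,
)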

OWL supports various LLM backends, though capabilities may vary depending on the model's tool calling and multimodal abilities. You can use the following scripts to run with different models:

# Run with Qwen model
python owl/run_qwen_zh.py

# Run with Deepseek model
python owl/run_deepseek_zh.py

# Run with other OpenAI-compatible models
python owl/run_openai_compatiable_model.py

# Run with Ollama
python owl/run_ollama.py

For a simpler version that only requires an LLM API key, you can try our minimal example:

python owl/run_mini.py

You can run the OWL agent on your own task by modifying the run.py script:

# Define your own task
question = "Task description here."

society = construct_society(question)
answer, chat_history, token_count = run_society(society)

print(f"\033[94mAnswer: {answer}\033[0m")

For uploading files, simply provide the file path along with your question:

# Task with a local file (e.g., file path: `tmp/example.docx`)
question = "What is in the given DOCX file? Here is the file path: tmp/example.docx"

society = construct_society(question)
answer, chat_history, token_count = run_society(society)
print(f"\033[94mAnswer: {answer}\033[0m")

OWL will then automatically invoke document-related tools to process the file and extract the answer.

Example Tasks

Here are some tasks you can try with OWL:

  • "Find the latest stock price for Apple Inc."
  • "Analyze the sentiment of recent tweets about climate change"
  • "Help me debug this Python code: [your code here]"
  • "Summarize the main points from this research paper: [paper URL]"
  • "Create a data visualization for this dataset: [dataset path]"

🧰 Toolkits and Capabilities

Important: Effective use of toolkits requires models with strong tool calling capabilities. For multimodal toolkits (Web, Image, Video), models must also have multimodal understanding abilities.

OWL supports various toolkits that can be customized by modifying the tools list in your script:

# Configure toolkits
tools = [
    *WebToolkit(headless=False).get_tools(),  # Browser automation
    *VideoAnalysisToolkit(model=models["video"]).get_tools(),
    *AudioAnalysisToolkit().get_tools(),  # Requires OpenAI Key
    *CodeExecutionToolkit(sandbox="subprocess").get_tools(),
    *ImageAnalysisToolkit(model=models["image"]).get_tools(),
    SearchToolkit().search_duckduckgo,
    SearchToolkit().search_google,  # Comment out if unavailable
    SearchToolkit().search_wiki,
    *ExcelToolkit().get_tools(),
    *DocumentProcessingToolkit(model=models["document"]).get_tools(),
    *FileWriteToolkit(output_dir="./").get_tools(),
]

Available Toolkits

Key toolkits include:

Multimodal Toolkits (Require multimodal model capabilities)

  • WebToolkit: Browser automation for web interaction and navigation
  • VideoAnalysisToolkit: Video processing and content analysis
  • ImageAnalysisToolkit: Image analysis and interpretation

Text-Based Toolkits

  • AudioAnalysisToolkit: Audio processing (requires OpenAI API)
  • CodeExecutionToolkit: Python code execution and evaluation
  • SearchToolkit: Web searches (Google, DuckDuckGo, Wikipedia)
  • DocumentProcessingToolkit: Document parsing (PDF, DOCX, etc.)

Additional specialized toolkits: ArxivToolkit, GitHubToolkit, GoogleMapsToolkit, MathToolkit, NetworkXToolkit, NotionToolkit, RedditToolkit, WeatherToolkit, and more. For a complete list, see the CAMEL toolkits documentation.

Customizing Your Configuration

To customize available tools:

# 1. Import toolkits
from camel.toolkits import WebToolkit, SearchToolkit, CodeExecutionToolkit

# 2. Configure tools list
tools = [
    *WebToolkit(headless=True).get_tools(),
    SearchToolkit().search_wiki,
    *CodeExecutionToolkit(sandbox="subprocess").get_tools(),
]

# 3. Pass to assistant agent
assistant_agent_kwargs = {"model": models["assistant"], "tools": tools}

Selecting only necessary toolkits optimizes performance and reduces resource usage.

🌐 Web Interface

OWL includes an intuitive web-based user interface that makes it easier to interact with the system.

Starting the Web UI

# Start the Chinese version
python run_app_zh.py

# Start the English version
python run_app.py

Features

  • Easy Model Selection: Choose between different models (OpenAI, Qwen, DeepSeek, etc.)
  • Environment Variable Management: Configure your API keys and other settings directly from the UI
  • Interactive Chat Interface: Communicate with OWL agents through a user-friendly interface
  • Task History: View the history and results of your interactions

The web interface is built using Gradio and runs locally on your machine. No data is sent to external servers beyond what's required for the model API calls you configure.

πŸ§ͺ Experiments

To reproduce OWL's GAIA benchmark score of 58.18:

  1. Switch to the gaia58.18 branch:

    git checkout gaia58.18
  2. Run the evaluation script:

    python run_gaia_roleplaying.py

This will execute the same configuration that achieved our top-ranking performance on the GAIA benchmark.

⏱️ Future Plans

We're continuously working to improve OWL. Here's what's on our roadmap:

  • Write a technical blog post detailing our exploration and insights in multi-agent collaboration in real-world tasks
  • Enhance the toolkit ecosystem with more specialized tools for domain-specific tasks
  • Develop more sophisticated agent interaction patterns and communication protocols
  • Improve performance on complex multi-step reasoning tasks

πŸ“„ License

The source code is licensed under Apache 2.0.

πŸ–ŠοΈ Cite

If you find this repo useful, please cite:

@misc{owl2025,
  title        = {OWL: Optimized Workforce Learning for General Multi-Agent Assistance in Real-World Task Automation},
  author       = {{CAMEL-AI.org}},
  howpublished = {\url{https://github.com/camel-ai/owl}},
  note         = {Accessed: 2025-03-07},
  year         = {2025}
}

🀝 Contributing

We welcome contributions from the community! Here's how you can help:

  1. Read our Contribution Guidelines
  2. Check open issues or create new ones
  3. Submit pull requests with your improvements

Current Issues Open for Contribution:

To take on an issue, simply leave a comment stating your interest.

πŸ”₯ Community

Join us on Discord or WeChat for further discussion and to help push the boundaries of agent scaling laws.

❓ FAQ

Q: Why don't I see Chrome running locally after starting the example script?

A: If OWL determines that a task can be completed using non-browser tools (such as search or code execution), the browser will not be launched. The browser window will only appear when OWL determines that browser-based interaction is necessary.

Q: Which Python version should I use?

A: OWL supports Python 3.10, 3.11, and 3.12.

Q: How can I contribute to the project?

A: See our Contributing section for details on how to get involved. We welcome contributions of all kinds, from code improvements to documentation updates.

πŸ“š Exploring CAMEL Dependency

OWL is built on top of the CAMEL Framework. Here's how you can explore the CAMEL source code and understand how it works with OWL:

Accessing CAMEL Source Code

# Clone the CAMEL repository
git clone https://github.com/camel-ai/camel.git
cd camel

⭐ Star History

Star History Chart
