Forge: AI-Enhanced Terminal Development Environment
Forge is a comprehensive coding agent that integrates AI capabilities with your development environment, offering sophisticated assistance while maintaining the efficiency of your existing workflow.
- Advanced AI coding assistant with comprehensive understanding, planning, and execution of complex development tasks
- Lightning-fast performance with sub-50ms startup times
- Seamless integration with existing Unix tools and workflows
- Context-aware assistance that understands your development environment and workflows
- Natural language interface to powerful system operations
- Enhanced security features with optional restricted shell mode
- Multi-agent architecture that orchestrates specialized AI agents to solve complex problems collaboratively
- Powered by Claude 3.7 Sonnet for state-of-the-art AI capabilities
Table of Contents
- Installation
- Get Started
- Features
- Custom Workflows and Multi-Agent Systems
- Provider Configuration
- Why Shell?
- Community
- Support Us
Using Homebrew (macOS package manager):
# Add Code-Forge's package repository to Homebrew
brew tap antinomyhq/code-forge
# Install Code-Forge
brew install code-forge
Choose either method to install:
# Using curl (common download tool)
curl -L https://raw.githubusercontent.com/antinomyhq/forge/main/install.sh | bash
# Or using wget (alternative download tool)
wget -qO- https://raw.githubusercontent.com/antinomyhq/forge/main/install.sh | bash
- Create a .env file in your home directory with your API credentials:
# Your API key for accessing AI models (see Environment Configuration section)
OPENROUTER_API_KEY=<Enter your Open Router Key>
# Optional: Set a custom URL for OpenAI-compatible providers
#OPENAI_URL=https://custom-openai-provider.com/v1
You can get a key at Open Router.
- Launch Code Forge:
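Running the binary with no flags starts the standard interactive mode (the same forge command used in the security examples below):
forge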
Code Forge functions as a comprehensive development assistant with capabilities to:
- Write, refactor, and optimize code based on specifications
- Debug complex issues through systematic error analysis
- Generate test suites for existing codebases
- Document code and generate technical specifications
- Propose architectural improvements and optimizations
Transform your command-line experience with natural language interaction while maintaining the power and flexibility of traditional shell commands.
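As a sketch of what this looks like in practice, you launch Forge and describe the task in plain English; the "> " prompt and the request below are purely illustrative:
# Start an interactive session, then type a natural-language request
forge
> Find every TODO comment under src/ and summarize them by file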
Code-Forge prioritizes security by providing a restricted shell mode (rbash) that limits potentially dangerous operations:
- Flexible Security Options: Choose between standard and restricted modes based on your needs
- Restricted Mode: Enable with the -r flag to prevent potentially harmful operations
- Standard Mode: Uses the regular shell by default (bash on Unix/Mac, cmd on Windows)
- Security Controls: Restricted mode prevents:
- Changing directories
- Setting/modifying environment variables
- Executing commands with absolute paths
- Modifying shell options
Example:
# Standard mode (default)
forge
# Restricted secure mode
forge -r
Additional security features include:
- Direct API connection to Open Router without intermediate servers
- Local terminal operation for maximum control and data privacy
Forge offers several built-in commands to enhance your interaction:
- /new - Start a new task when you've completed your current one
- /info - View environment summary, logs folder location, and command history
- /models - List all available AI models with capabilities and context limits
- /dump - Save the current conversation in JSON format to a file for reference
- /act - Switch to ACT mode (default), allowing Forge to execute commands and implement changes
- /plan - Switch to PLAN mode, where Forge analyzes and plans but doesn't modify files
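For instance, a short housekeeping sequence inside an interactive session might look like this (output omitted; commands are typed at the Forge prompt):
/info      # check the environment summary and log file location
/models    # list the models available to the current provider
/dump      # save the conversation so far as JSON
/new       # clear the context and start the next task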
Boost your productivity with intelligent command completion:
- Type @ and press Tab for contextual file/path completion
- Use Right Arrow to complete previously executed commands
- Access command history with Up Arrow
- Quick history search with Ctrl+R
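As a hypothetical illustration of @ completion (the file path is made up), typing a partial path and pressing Tab expands it in place:
# Before pressing Tab
@src/ma
# After pressing Tab
@src/main.rs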
Enhance your interactive shell experience with WYSIWYG (What You See Is What You Get) integration. Forge visualizes each command it executes, complete with colorful formatting, so you see command output just as if you had typed it directly into your terminal. Every command and its output remain visible in rich detail.
Stay in control of your shell environment with intuitive command handling:
- Cancel with CTRL+C: Gracefully interrupt ongoing operations and halt processes that no longer need to run.
- Exit with CTRL+D: Exit the shell session cleanly when you're done.
Forge operates in two distinct modes to provide flexible assistance based on your needs:
In ACT mode, which is the default when you start Forge, the assistant is empowered to directly implement changes to your codebase and execute commands:
- Full Execution: Forge can modify files, create new ones, and execute shell commands
- Implementation: Directly implements the solutions it proposes
- Verification: Performs verification steps to ensure changes work as intended
- Best For: When you want Forge to handle implementation details and fix issues directly
Example:
# Switch to ACT mode within a Forge session
/act
In PLAN mode, Forge analyzes and plans but doesn't modify your codebase:
- Read-Only Operations: Can only read files and run non-destructive commands
- Detailed Analysis: Thoroughly examines code, identifies issues, and proposes solutions
- Structured Planning: Provides step-by-step action plans for implementing changes
- Best For: When you want to understand what changes are needed before implementing them yourself
Example:
# Switch to PLAN mode within a Forge session
/plan
You can easily switch between modes during a session using the /act and /plan commands. PLAN mode is especially useful for reviewing potential changes before they're implemented, while ACT mode streamlines the development process by handling implementation details for you.
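A typical review-then-implement loop might look like this (the "> " lines stand in for natural-language requests and are illustrative):
# Inside a Forge session
/plan                      # read-only: analyze and propose a plan
> How should the failing integration tests be fixed?
/act                       # switch back to execution mode
> Apply the plan you just proposed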
Forge generates detailed JSON-formatted logs that help with troubleshooting and understanding the application's behavior. These logs provide valuable insights into system operations and API interactions.
Log Location and Access
Logs are stored in your application support directory with date-based filenames. The typical path looks like:
/Users/username/Library/Application Support/forge/logs/forge.log.YYYY-MM-DD
You can easily locate log files using the built-in /info command, which displays system information including the exact path to your log files.
Viewing and Filtering Logs
To view logs in real time with automatic updates, use the tail command (quote the path, since it contains a space):
tail -f "/Users/tushar/Library/Application Support/forge/logs/forge.log.2025-03-07"
Formatted Log Viewing with jq
Since Forge logs are in JSON format, you can pipe them through jq for better readability:
tail -f "/Users/tushar/Library/Application Support/forge/logs/forge.log.2025-03-07" | jq
This displays the logs in a nicely color-coded structure that's much easier to analyze, helping you quickly identify patterns, errors, or specific behavior during development and debugging.
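You can also use jq expressions to filter the stream. For example, assuming each entry carries a level field (field names may vary between Forge versions), errors alone could be isolated with:
tail -f "/Users/tushar/Library/Application Support/forge/logs/forge.log.2025-03-07" | jq 'select(.level == "ERROR")'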
Forge supports multiple AI providers and allows custom configuration to meet your specific needs.
Forge automatically detects and uses your API keys from environment variables in the following priority order:
- FORGE_KEY - Antinomy's provider (OpenAI-compatible)
- OPENROUTER_API_KEY - Open Router provider (aggregates multiple models)
- OPENAI_API_KEY - Official OpenAI provider
- ANTHROPIC_API_KEY - Official Anthropic provider
To use a specific provider, set the corresponding environment variable in your .env file.
# Examples of different provider configurations (use only one)
# For Open Router (recommended, provides access to multiple models)
OPENROUTER_API_KEY=your_openrouter_key_here
# For official OpenAI
OPENAI_API_KEY=your_openai_key_here
# For official Anthropic
ANTHROPIC_API_KEY=your_anthropic_key_here
# For Antinomy's provider
FORGE_KEY=your_forge_key_here
For OpenAI-compatible providers (including Open Router), you can customize the API endpoint URL by setting the OPENAI_URL environment variable:
# Custom OpenAI-compatible provider
OPENAI_API_KEY=your_api_key_here
OPENAI_URL=https://your-custom-provider.com/v1
# Or with Open Router but custom endpoint
OPENROUTER_API_KEY=your_openrouter_key_here
OPENAI_URL=https://alternative-openrouter-endpoint.com/v1
This is particularly useful when:
- Using self-hosted models with OpenAI-compatible APIs
- Connecting to enterprise OpenAI deployments
- Using proxy services or API gateways
- Working with regional API endpoints
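As a sketch, a self-hosted model served through an OpenAI-compatible endpoint might be configured like this; the URL, port, and key value are placeholders, not defaults shipped with Forge:
# .env for a self-hosted OpenAI-compatible server (illustrative values)
OPENAI_API_KEY=local-placeholder-key
OPENAI_URL=http://localhost:8000/v1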
For complex tasks, a single agent may not be sufficient. Forge allows you to create custom workflows with multiple specialized agents working together to accomplish sophisticated tasks.
You can configure your own workflows by creating a YAML file and pointing to it with the -w flag:
forge -w /path/to/your/workflow.yaml
A workflow consists of agents connected via events. Each agent has specific capabilities and can perform designated tasks.
Agents communicate through events which they can publish and subscribe to:
Built-in Events
- user_task_init - Published when a new task is initiated
- user_task_update - Published when follow-up instructions are provided by the user
Each agent needs tools to perform tasks, configured in the tools field:
Built-in Tools
- tool_forge_fs_read - Read from the filesystem
- tool_forge_fs_create - Create or overwrite files
- tool_forge_fs_remove - Remove files
- tool_forge_fs_search - Search for patterns in files
- tool_forge_fs_list - List files in a directory
- tool_forge_fs_info - Get file metadata
- tool_forge_process_shell - Execute shell commands
- tool_forge_process_think - Perform internal reasoning
- tool_forge_net_fetch - Fetch data from the internet
- tool_forge_event_dispatch - Dispatch events to other agents
- tool_forge_fs_patch - Patch existing files
Each agent is configured with the following fields:
- id - Unique identifier for the agent
- model - AI model to use (from the /models list)
- tools - List of tools the agent can use
- subscribe - Events the agent listens to
- ephemeral - If true, the agent is destroyed after task completion
- tool_supported - (Optional) Boolean flag that determines whether tools defined in the agent configuration are actually made available to the LLM. When set to false, tools are listed in the configuration but not included in AI model requests, causing the agent to format tool calls in XML rather than in the model's native format. Default: true.
- system_prompt - (Optional) Instructions for how the agent should behave. While optional, it's recommended to provide clear instructions for best results.
- user_prompt - (Optional) Format for user inputs. If not provided, the raw event value is used.
Forge provides templates to simplify system prompt creation:
- system-prompt-engineer.hbs - Template for engineering tasks
- system-prompt-title-generator.hbs - Template for generating descriptive titles
- system-prompt-advocate.hbs - Template for user advocacy and explanation
- partial-tool-information.hbs - Tool documentation for agents
- partial-tool-examples.hbs - Usage examples for tools
Use these templates with the syntax: {{> name-of-the-template.hbs }}
variables:
  models:
    advanced_model: &advanced_model anthropic/claude-3.7-sonnet
    efficiency_model: &efficiency_model anthropic/claude-3.5-haiku

agents:
  - id: title_generation_worker
    model: *efficiency_model
    tools:
      - tool_forge_event_dispatch
    subscribe:
      - user_task_init
    tool_supported: false # Force XML-based tool call formatting
    system_prompt: "{{> system-prompt-title-generator.hbs }}"
    user_prompt: <technical_content>{{event.value}}</technical_content>

  - id: developer
    model: *advanced_model
    tools:
      - tool_forge_fs_read
      - tool_forge_fs_create
      - tool_forge_fs_remove
      - tool_forge_fs_patch
      - tool_forge_process_shell
      - tool_forge_net_fetch
      - tool_forge_fs_search
    subscribe:
      - user_task_init
      - user_task_update
    ephemeral: false
    tool_supported: true # Use model's native tool call format (default)
    system_prompt: "{{> system-prompt-engineer.hbs }}"
    user_prompt: |
      <task>{{event.value}}</task>
This example workflow creates two agents:
- A title generation worker that creates meaningful titles for user conversations
- A developer agent that can perform comprehensive file and system operations
There's a reason the shell has stood the test of time and remains a cornerstone of development environments across the globe: it's fast, versatile, and seamlessly integrated with the system. The shell is where developers navigate code, run tests, manage processes, and orchestrate their environments, providing an unmatched level of control and productivity.
Why a shell-based AI assistant like Code-Forge makes sense:
- Rich Tool Ecosystem: The shell gives you immediate access to powerful Unix tools (grep, awk, sed, find) that LLMs already understand deeply. This means the AI can leverage ripgrep for code search, jq for JSON processing, git for version control, and hundreds of other battle-tested tools without reinventing them.
- Context is Everything: Your shell session already has your complete development context - current directory, project structure, environment variables, installed tools, and system state. This rich context makes AI interactions more accurate and relevant.
- Speed Matters: Unlike IDEs and web UIs, Code Forge's shell is extremely lightweight. That speed directly boosts your productivity: get in and out of workflows seamlessly, manage multiple feature developments in parallel, coordinate across git worktrees, and access AI assistance instantly from any directory.
- Tool Composition: Unix philosophy teaches us to make tools that compose well. The AI can pipe commands together, combining tools like find | xargs forge -p | grep "foo" in ways that solve complex problems elegantly (see the sketch after this list).
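A concrete sketch of such a pipeline, assuming -p passes a one-shot prompt and that forge accepts file paths as trailing arguments, as the snippet above suggests (the prompt text and filter are illustrative):
# Ask Forge to review a set of files, then filter its output for lines mentioning errors
find src -name "*.rs" | xargs forge -p "Review these files for unhandled errors" | grep -i "error"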
Join our vibrant Discord community to connect with other Code-Forge users and contributors, get help with your projects, share ideas, and provide feedback!
Your support drives Code-Forge's continued evolution! By starring our GitHub repository, you:
- Help others discover this powerful tool
- Motivate our development team
- Enable us to prioritize new features
- Strengthen our open-source community
Recent community feedback has helped us implement features like improved autocomplete, cross-platform optimization, and enhanced security features. Join our growing community of developers who are reshaping the future of AI-powered development!