A simple interface for interacting with multiple LLM providers during a single conversation.
Try the interactive demos to see Ensemble in action:
npm run demo
This opens a unified demo interface at http://localhost:3000 with access to all demos.
See the demo README for detailed information about each demo.
- Unified Streaming Interface - Consistent event-based streaming across all providers
- Model/Provider Rotation - Automatic model selection and rotation
- Advanced Tool Calling - Parallel/sequential execution, timeouts, and background tracking
- Automatic History Compaction - Handle unlimited conversation length with intelligent summarization
- Agent-Oriented - Advanced agent capabilities with verification and tool management
- Multi-Provider Support - OpenAI, Anthropic, Google, DeepSeek, xAI, OpenRouter, ElevenLabs
- Multi-Modal - Support for text, images, embeddings, and voice generation
- Cost & Quota Tracking - Built-in usage monitoring and cost calculation
- Smart Result Processing - Automatic summarization and truncation for long outputs
npm install @just-every/ensemble
Copy .env.example to .env and add your API keys:
cp .env.example .env
Available API keys (add only the ones you need):
# LLM Providers
OPENAI_API_KEY=your-openai-key
ANTHROPIC_API_KEY=your-anthropic-key
GOOGLE_API_KEY=your-google-key
XAI_API_KEY=your-xai-key
DEEPSEEK_API_KEY=your-deepseek-key
OPENROUTER_API_KEY=your-openrouter-key
# Voice & Audio Providers
ELEVENLABS_API_KEY=your-elevenlabs-key
# Search Providers
BRAVE_API_KEY=your-brave-key
Note: You only need to configure API keys for the providers you plan to use. The system will automatically select available providers based on configured keys.
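If you run Ensemble from Node without exporting the keys in your shell, load the .env file before importing the library. A minimal sketch, assuming you use the dotenv package (not bundled with Ensemble):

// Load .env into process.env before Ensemble reads the keys
import 'dotenv/config';
import { ensembleRequest } from '@just-every/ensemble';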
import { ensembleRequest, ensembleResult } from '@just-every/ensemble';
const messages = [
{ type: 'message', role: 'user', content: 'How many of the letter "e" are there in "Ensemble"?' }
];
// Perform initial request
for await (const event of ensembleRequest(messages)) {
if (event.type === 'response_output') {
// Save the output so the conversation can continue
messages.push(event.message);
}
}
// Create a validator agent
const validatorAgent = {
instructions: 'Please validate that the previous response is correct',
modelClass: 'code',
};
// Continue conversation with new agent
const stream = ensembleRequest(messages, validatorAgent);
// Alternatively, collect the full response in one call
const result = await ensembleResult(stream);
console.log('Validation Result:', {
message: result.message,
cost: result.cost,
completed: result.completed,
duration: result.endTime
? result.endTime.getTime() - result.startTime.getTime()
: 0,
messageIds: Array.from(result.messageIds),
});
- Tool Execution Guide - Advanced tool calling features
- Interactive Demos - Web-based demos for core features
- Generated API Reference - Run npm run docs to regenerate the HTML documentation.
Define tools that LLMs can call:
const agent = {
model: 'o3',
tools: [{
definition: {
type: 'function',
function: {
name: 'get_weather',
description: 'Get weather for a location',
parameters: {
type: 'object',
properties: {
location: { type: 'string' }
},
required: ['location']
}
}
},
function: async (location: string) => {
return `Weather in ${location}: Sunny, 72°F`;
}
}]
};
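With the tool registered, a request can let the model call it. A minimal sketch reusing the Quick Start pattern; it assumes the agent is passed as the second argument to ensembleRequest (as in the validator example above) and that Ensemble executes the tool and feeds the result back to the model:

const messages = [
  { type: 'message', role: 'user', content: "What's the weather in Tokyo?" }
];

for await (const event of ensembleRequest(messages, agent)) {
  // The model may emit tool events here before the final answer
  if (event.type === 'response_output') {
    messages.push(event.message);
  }
}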
All providers emit standardized events:
- message_start / message_delta / message_complete - Message streaming
- tool_start / tool_delta / tool_done - Tool execution
- cost_update - Token usage and cost tracking
- error - Error handling
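A sketch of consuming these events; the payload field names (content on message_delta, usage on cost_update) are assumptions here, so check the generated API reference for the concrete event shapes:

for await (const event of ensembleRequest(messages)) {
  switch (event.type) {
    case 'message_delta':
      process.stdout.write(event.content ?? ''); // assumed field name
      break;
    case 'cost_update':
      console.log('usage so far:', event.usage); // assumed field name
      break;
    case 'error':
      console.error('provider error:', event);
      break;
  }
}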
Configure agent behavior with these optional properties:
const agent = {
model: 'claude-4-sonnet',
maxToolCalls: 200, // Maximum total tool calls (default: 200)
maxToolCallRoundsPerTurn: 5, // Maximum sequential rounds of tool calls (default: Infinity)
tools: [...], // Available tools for the agent
modelSettings: { // Provider-specific settings
temperature: 0.7,
max_tokens: 4096
}
};
Key configuration options:
- maxToolCalls - Limits the total number of tool calls across all rounds
- maxToolCallRoundsPerTurn - Limits sequential rounds; each round can include multiple parallel tool calls
- modelSettings - Provider-specific parameters such as temperature, max_tokens, etc.
- Parallel Tool Execution - Tools run concurrently by default within each round
- Sequential Mode - Enforce one-at-a-time execution
- Timeout Handling - Automatic timeout with background tracking
- Result Summarization - Long outputs are intelligently summarized
- Abort Signals - Graceful cancellation support
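Cancellation can be driven by a standard AbortController. Where the signal is handed to Ensemble is an assumption in this sketch (shown as a hypothetical abortSignal property on the agent); see the Tool Execution Guide for the actual parameter:

const controller = new AbortController();
setTimeout(() => controller.abort(), 30_000); // give up after 30 seconds

for await (const event of ensembleRequest(messages, {
  model: 'claude-4-sonnet',
  abortSignal: controller.signal, // hypothetical property name
})) {
  // stream ends early once the controller aborts
}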
Generate natural-sounding speech from text using Text-to-Speech models:
import { ensembleVoice } from '@just-every/ensemble';
// Simple voice generation
const audioData = await ensembleVoice('Hello, world!', {
model: 'tts-1' // or 'gemini-2.5-flash-preview-tts'
});
// Voice generation with options
const audioWithOptions = await ensembleVoice('Welcome to our service', {
model: 'tts-1-hd'
}, {
voice: 'nova', // Voice selection
speed: 1.2, // Speech speed (0.25-4.0)
response_format: 'mp3' // Audio format
});
// Streaming voice generation
for await (const event of ensembleVoice('Long text...', {
model: 'gemini-2.5-pro-preview-tts'
})) {
if (event.type === 'audio_stream') {
// Process audio chunk
processAudioChunk(event.data);
}
}
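To persist streamed audio, write the chunks to a file as they arrive. A sketch assuming event.data is a binary chunk (Uint8Array/Buffer); if your provider emits base64 strings instead, decode with Buffer.from(event.data, 'base64') first:

import { createWriteStream } from 'node:fs';

const out = createWriteStream('speech.mp3');
for await (const event of ensembleVoice('Long text...', { model: 'tts-1' })) {
  if (event.type === 'audio_stream') {
    out.write(event.data); // assumed binary chunk
  }
}
out.end();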
Supported Voice Models:
- OpenAI: tts-1, tts-1-hd
- Google Gemini: gemini-2.5-flash-preview-tts, gemini-2.5-pro-preview-tts
- ElevenLabs: eleven_multilingual_v2, eleven_turbo_v2_5
# Install dependencies
npm install
# Run tests
npm test
# Build
npm run build
# Generate docs
npm run docs
# Lint
npm run lint
Ensemble provides a unified interface across multiple LLM providers:
- Provider Abstraction - All providers extend BaseModelProvider
- Event Streaming - Consistent events across all providers
- Tool System - Automatic parameter mapping and execution
- Message History - Intelligent conversation management
- Cost Tracking - Built-in usage monitoring
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch
- Add tests for new features
- Submit a pull request
- Ensure API keys are set correctly
- Check rate limits for your provider
- Verify model names match provider expectations
- Tools must follow the OpenAI function schema
- Ensure tool functions are async
- Check timeout settings for long-running tools
- Verify network connectivity
- Check for provider-specific errors in events
- Enable debug logging with DEBUG=ensemble:*
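For example, with a Node entry point (app.js here is a placeholder):

DEBUG=ensemble:* node app.js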
MIT