- Overview
- Features
- Installation
- Quick Start
- Core Concepts
- Supported Models
- Examples
- Documentation
- Articles & Tutorials
- Contributing
- License
PromptLab is a free, lightweight, open-source experimentation tool for Gen AI applications. It streamlines prompt engineering, making it easy to set up experiments, evaluate prompts, and track them in production - all without requiring any cloud services or complex infrastructure.
With PromptLab, you can:
- Create and manage prompt templates with versioning
- Build and maintain evaluation datasets
- Run experiments with different models and prompts
- Evaluate results using built-in or custom metrics
- Compare experiment results side-by-side
- Deploy optimized prompts to production
- Truly Lightweight: No cloud subscription, no additional servers, not even Docker - just a simple Python package
- Easy to Adopt: No ML or Data Science expertise required
- Self-contained: No need for additional cloud services for tracking or collaboration
- Seamless Integration: Works within your existing web, mobile, or backend project
- Flexible Evaluation: Use built-in metrics or bring your own custom evaluators
- Visual Studio: Compare experiments and track assets through a local web interface
- Multiple Model Support: Works with Azure OpenAI, Ollama, DeepSeek and more
- Version Control: Automatic versioning of all assets for reproducibility
Install the package from PyPI:

```bash
pip install promptlab
```

It's recommended to use a virtual environment:

```bash
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install promptlab
```
Here's a minimal end-to-end example:

```python
from promptlab import PromptLab
from promptlab.types import PromptTemplate, Dataset

# Initialize PromptLab with SQLite storage
tracer_config = {
    "type": "sqlite",
    "db_file": "./promptlab.db"
}
pl = PromptLab(tracer_config)

# Create a prompt template
prompt_template = PromptTemplate(
    name="essay_feedback",
    description="A prompt for generating feedback on essays",
    system_prompt="You are a helpful assistant who can provide feedback on essays.",
    user_prompt="The essay topic is - <essay_topic>.\n\nThe submitted essay is - <essay>\nNow write feedback on this essay."
)
pt = pl.asset.create(prompt_template)

# Create a dataset
dataset = Dataset(
    name="essay_samples",
    description="Sample essays for evaluation",
    file_path="./essays.jsonl"
)
ds = pl.asset.create(dataset)

# Run an experiment
experiment_config = {
    "model": {
        "type": "ollama",
        "inference_model_deployment": "llama2",
        "embedding_model_deployment": "llama2"
    },
    "prompt_template": {
        "name": pt.name,
        "version": pt.version
    },
    "dataset": {
        "name": ds.name,
        "version": ds.version
    },
    "evaluation": [
        {
            "type": "custom",
            "metric": "length",
            "column_mapping": {
                "response": "$inference"
            }
        }
    ]
}
pl.experiment.run(experiment_config)

# Start the PromptLab Studio to view results
pl.studio.start(8000)
```
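The dataset referenced above (`./essays.jsonl`) is a plain JSONL file with one evaluation record per line. As a minimal sketch, assuming the field names match the `<essay_topic>` and `<essay>` placeholders in the prompt template (check the documentation for the exact schema your setup expects), it could be generated like this:

```python
import json

# Hypothetical evaluation records; the field names are assumed to line up
# with the <essay_topic> and <essay> placeholders in the prompt template.
records = [
    {
        "essay_topic": "The importance of renewable energy",
        "essay": "Renewable energy is becoming increasingly important because..."
    },
    {
        "essay_topic": "The impact of social media on society",
        "essay": "Social media has transformed the way people communicate..."
    }
]

with open("essays.jsonl", "w", encoding="utf-8") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")
```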
- Tracer: Storage that maintains assets and experiments. Currently uses SQLite for simplicity and portability.
- Assets: Immutable artifacts used in experiments, with automatic versioning:
  - Prompt Templates: Prompts with optional placeholders for dynamic content
  - Datasets: JSONL files containing evaluation data
- Experiments: Evaluate prompts against datasets using specified models and metrics (see the comparison sketch after this list).
- Studio: A local web interface for visualizing experiments and comparing results.
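For example, to compare two locally hosted models on the same prompt template and dataset, you can run one experiment per model and then open the Studio to inspect the runs side by side. This sketch reuses `pl`, `pt`, and `ds` from the Quick Start; the Ollama deployment names (`llama2`, `mistral`) are placeholders for whichever models you have pulled locally.

```python
# Re-use the PromptLab instance (pl), prompt template (pt), and dataset (ds)
# from the Quick Start; only the model block changes between runs.
for model_name in ["llama2", "mistral"]:  # placeholder Ollama model names
    experiment_config = {
        "model": {
            "type": "ollama",
            "inference_model_deployment": model_name,
            "embedding_model_deployment": model_name
        },
        "prompt_template": {"name": pt.name, "version": pt.version},
        "dataset": {"name": ds.name, "version": ds.version},
        "evaluation": [
            {
                "type": "custom",
                "metric": "length",
                "column_mapping": {"response": "$inference"}
            }
        ]
    }
    pl.experiment.run(experiment_config)

# Compare both runs side by side in the local Studio
pl.studio.start(8000)
```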
- Azure OpenAI: Connect to Azure-hosted OpenAI models (see the configuration sketch after this list)
- Ollama: Run experiments with locally-hosted models
- DeepSeek: Use DeepSeek's AI models
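The model block of the experiment config is what selects the provider; the Quick Start shows the Ollama form. For Azure OpenAI the shape is presumably similar, with a different `type` value and your Azure deployment names. The provider identifier below is an assumption rather than a confirmed schema, and credential/endpoint handling is omitted, so treat this as a sketch and check the documentation for the exact keys:

```python
# Sketch of an Azure OpenAI model block, mirroring the Ollama example in the
# Quick Start. The "type" string is an assumption, and endpoint/API-key
# configuration is omitted -- follow the documentation for how PromptLab
# expects Azure credentials to be supplied.
azure_experiment_config = {
    "model": {
        "type": "azure_openai",                   # assumed provider identifier
        "inference_model_deployment": "gpt-4o",   # your Azure inference deployment name
        "embedding_model_deployment": "text-embedding-ada-002"  # your embedding deployment name
    },
    "prompt_template": {"name": pt.name, "version": pt.version},
    "dataset": {"name": ds.name, "version": ds.version},
    "evaluation": [
        {
            "type": "custom",
            "metric": "length",
            "column_mapping": {"response": "$inference"}
        }
    ]
}
pl.experiment.run(azure_experiment_config)
```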
- Quickstart: Getting started with PromptLab
- Asset versioning: Versioning Prompts and Datasets
- Custom Metric: Creating custom evaluation metrics (a rough, purely illustrative sketch follows below)
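The Quick Start's evaluation block refers to a metric by name (`"metric": "length"`) and routes the model output into it through `column_mapping`, which suggests evaluators are small, named components that score a response. The linked Custom Metric example documents the real interface; the class below is purely hypothetical (its name, lack of a base class, and method signature are illustrative only, not PromptLab's actual API) and just shows the general shape of a length-style evaluator:

```python
# Purely illustrative sketch of a custom evaluator. The class name, the
# absence of a PromptLab base class, and the method signature are all
# hypothetical -- see the Custom Metric example for the actual interface.
class ResponseLength:
    """Scores a response by its character count."""

    name = "length"

    def evaluate(self, response: str) -> int:
        # The experiment's column_mapping routes "$inference" (the model
        # output) into the `response` argument.
        return len(response)
```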
For comprehensive documentation, visit our Documentation Page.
- Evaluating prompts locally with Ollama and PromptLab
- Creating custom prompt evaluation metrics with PromptLab
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add some amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.