# chad

An automated system that reviews GitHub pull requests with a focus on database performance and potential hotspots. It uses a local LLM to analyze code changes and provide detailed feedback on database-related concerns.
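At a high level, the service polls the repository for open pull requests, runs each diff through the local model, and posts the feedback back to GitHub. Below is a minimal sketch of that loop, not the actual `src/reviewer.py` implementation; it skips the SQLite bookkeeping, deduplication, and error handling the real service performs:

```python
import os
import time

import requests
from llama_cpp import Llama  # provided by llama-cpp-python

# Illustrative outline of the review loop; the real logic lives in src/reviewer.py.
OWNER = os.environ["REPO_OWNER"]
REPO = os.environ["REPO_NAME"]
HEADERS = {"Authorization": f"token {os.environ['GITHUB_TOKEN']}"}

llm = Llama(model_path=os.environ["MODEL_PATH"], n_ctx=2048)

while True:
    # List open pull requests via the GitHub REST API.
    prs = requests.get(
        f"https://api.github.com/repos/{OWNER}/{REPO}/pulls", headers=HEADERS
    ).json()
    for pr in prs:
        # Fetch the diff and ask the model for database-focused feedback.
        diff = requests.get(pr["diff_url"], headers=HEADERS).text
        review = llm(
            "Review this diff for database performance issues:\n" + diff[:4000],
            max_tokens=512,  # the crude truncation above keeps us inside n_ctx
        )["choices"][0]["text"]
        # Post the feedback as a PR comment.
        requests.post(
            f"https://api.github.com/repos/{OWNER}/{REPO}/issues/{pr['number']}/comments",
            headers=HEADERS,
            json={"body": review},
        )
    time.sleep(int(os.getenv("CHECK_INTERVAL", "300")))
```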
## Features

- 🔄 Automated PR monitoring and review
- 💾 Database performance focused analysis
- 🤖 Local LLM processing (no API costs)
- 📊 Real-time metrics and visualization
- 🐳 Containerized deployment
- 🔍 Detailed performance tracking
## Prerequisites

- Docker and Docker Compose
- Git
- 8GB RAM minimum
- 5GB free disk space
- macOS (including M1/M2), Linux, or Windows with WSL2
## Model Selection

| RAM Available | Recommended Model | Memory Usage |
|---|---|---|
| 8GB | TinyLlama (1.1B) or Phi-2 (2.7B) | 2-3GB |
| 16GB | Mistral 7B | 4-5GB |
## Installation

- Clone the repository:

```bash
git clone https://github.com/mitchelldyer01/chad
cd chad
```
- Create environment file:

```bash
cp .env.example .env
```
- Configure `.env` with your settings:

```
GITHUB_TOKEN=your_github_token
REPO_PATH=/app/repos/your-repo
REPO_OWNER=your-github-username
REPO_NAME=your-repo-name
MODEL_PATH=/app/models/tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf  # Adjust based on your model choice
CHECK_INTERVAL=300
```
- Download the appropriate model for your system (a smoke-test sketch for verifying the model loads follows these steps):

For 8GB Systems:

```bash
# Option 1: TinyLlama (Recommended for 8GB RAM)
mkdir -p models
curl -L https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF/resolve/main/tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf -o models/tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf

# Option 2: Phi-2
curl -L https://huggingface.co/TheBloke/phi-2-GGUF/resolve/main/phi-2.Q4_K_M.gguf -o models/phi-2.Q4_K_M.gguf
```

For 16GB Systems:

```bash
# Mistral 7B
curl -L https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GGUF/resolve/main/mistral-7b-instruct-v0.2.Q4_K_M.gguf -o models/mistral-7b-instruct-v0.2.Q4_K_M.gguf
```
- Configure Docker resources in `docker-compose.yml`:

For 8GB Systems:

```yaml
services:
  pr-reviewer:
    deploy:
      resources:
        limits:
          memory: 4G
```

For 16GB Systems:

```yaml
services:
  pr-reviewer:
    deploy:
      resources:
        limits:
          memory: 8G
```
- Start the service:

```bash
docker compose up --build
```
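After downloading a model, you can sanity-check that it loads and generates on your hardware before bringing the whole stack up. A minimal sketch using llama-cpp-python; adjust `model_path` to whichever file you downloaded:

```python
from llama_cpp import Llama

# Smoke test: load the downloaded GGUF file and generate a few tokens.
llm = Llama(
    model_path="models/tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf",  # or your chosen model
    n_ctx=512,
)
out = llm("Does `SELECT * FROM users` have any obvious performance concerns?", max_tokens=64)
print(out["choices"][0]["text"])
```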
## Project Structure

```
chad/
├── Dockerfile
├── docker-compose.yml
├── requirements.txt
├── .env.example
├── .gitignore
├── README.md
├── models/              # LLM model storage
├── data/                # SQLite database storage
└── src/
    ├── reviewer.py      # Main PR review logic
    ├── config.py        # Configuration management
    ├── metrics_tui.py   # Metrics visualization
    └── utils/
        ├── github.py    # GitHub API interactions
        └── database.py  # Database operations
```
## Configuration

### Environment Variables

| Variable | Description | Default |
|---|---|---|
| `GITHUB_TOKEN` | GitHub Personal Access Token | Required |
| `REPO_PATH` | Path to local repo clone | Required |
| `REPO_OWNER` | GitHub repository owner | Required |
| `REPO_NAME` | GitHub repository name | Required |
| `MODEL_PATH` | Path to LLM model file | Required |
| `CHECK_INTERVAL` | PR check interval (seconds) | 300 |
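These variables are read once at startup. A minimal sketch of how `src/config.py` might expose them via python-dotenv (the class layout here is an assumption for illustration, not the actual module):

```python
import os

from dotenv import load_dotenv  # provided by python-dotenv

load_dotenv()  # read .env into the process environment

class Config:
    # Required settings raise KeyError at import time if missing.
    GITHUB_TOKEN = os.environ["GITHUB_TOKEN"]
    REPO_PATH = os.environ["REPO_PATH"]
    REPO_OWNER = os.environ["REPO_OWNER"]
    REPO_NAME = os.environ["REPO_NAME"]
    MODEL_PATH = os.environ["MODEL_PATH"]
    CHECK_INTERVAL = int(os.getenv("CHECK_INTERVAL", "300"))  # default: 300s
```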
### LLM Settings

Adjust the `Llama` constructor parameters to match your hardware:

For 8GB Systems:

```python
from llama_cpp import Llama
from src.config import Config  # Config is defined in src/config.py

llm = Llama(
    model_path=Config.MODEL_PATH,
    n_ctx=512,    # Smaller context window
    n_batch=4,    # Smaller batch size
    n_threads=2,  # Fewer threads
)
```

For 16GB Systems:

```python
llm = Llama(
    model_path=Config.MODEL_PATH,
    n_ctx=2048,   # Larger context window
    n_batch=8,    # Larger batch size
    n_threads=4,  # More threads
)
```
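Once constructed, the instance is called like a function and returns an OpenAI-style completion dict. A brief illustrative call (the prompt wording is just an example):

```python
result = llm(
    "Review this SQL for performance issues:\n"
    "SELECT * FROM orders WHERE customer_id NOT IN (SELECT id FROM customers);",
    max_tokens=256,
    temperature=0.2,  # low temperature keeps reviews focused and repeatable
)
print(result["choices"][0]["text"])
```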
## Metrics Dashboard

Run the TUI metrics dashboard:

```bash
# Inside the container
docker exec -it pr-reviewer-container python -m src.metrics_tui

# Or locally, with direct database access
python -m src.metrics_tui --db-path ./data/pr_tracker.db
```
The dashboard provides:

- Real-time updates (default: 5 seconds)
- Historical trend graphs
- PR processing statistics
- Token usage tracking
- Success/failure rates
- Processing time analysis
Controls:

- Arrow keys: Navigate panels
- `q`: Quit
- `h`: Show help
## Database Schema

The system uses SQLite with the following main tables:

- `processed_prs`: Tracks processed pull requests
- `review_history`: Stores review feedback history
- `pr_metrics`: Performance metrics per PR
- `llm_metrics`: LLM usage statistics
- `daily_metrics`: Aggregated daily statistics
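Because the store is plain SQLite, you can inspect it directly with the standard library; a quick check that the tables above exist:

```python
import sqlite3

conn = sqlite3.connect("data/pr_tracker.db")
# List every table in the database to verify the schema.
for (name,) in conn.execute("SELECT name FROM sqlite_master WHERE type = 'table'"):
    print(name)
conn.close()
```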
## Local Development

For local development without Docker:

- Create a virtual environment:

```bash
python -m venv venv
source venv/bin/activate  # or `venv\Scripts\activate` on Windows
```

- Install dependencies:

```bash
pip install -r requirements.txt
```

- Run the service:

```bash
python -m src.reviewer
```
## Dependencies

- llama-cpp-python
- GitPython
- requests
- python-dotenv
- sqlite3 (Python standard library; not installed via pip)
- textual
- pandas
- asciichartpy

Development tools:

- pytest
- black
- flake8