EduPlannerBotAI is a Telegram bot built with aiogram 3.x and powered by a multi-level LLM architecture. It generates personalized study plans, exports them to PDF/TXT, and sends reminders as Telegram messages. All data is stored with TinyDB (no other databases are supported).

Note: all code comments and docstrings are in English for international collaboration and code clarity. All user-facing messages and buttons are automatically translated into the user's selected language.
- Multi-Level LLM Architecture: OpenAI → Groq → Local LLM → Fallback Plan
- Local LLM Integration: TinyLlama 1.1B model for offline operation
- Guaranteed Availability: bot works even without an internet connection
- Enhanced Fallback System: robust error handling and service switching
- Improved Plan Quality: professional-grade study plan templates
- Offline Translation: local LLM supports offline text translation
- Multi-Level LLM Architecture: generate personalized study plans using OpenAI, Groq, a local LLM, or fallback templates
- Export Options: save plans as PDF or TXT files
- Smart Reminders: receive Telegram notifications for each study step
- Lightweight Storage: TinyDB-based data storage (no SQL required)
- Multilingual Support: English, Russian, Spanish with real-time LLM translation
- Reliable UI: all keyboards displayed with proper messages (no Telegram errors)
- Smart Language Handling: language selection works correctly without translation
- Graceful Fallbacks: original text sent if translation fails
- Extensible Codebase: clean, maintainable code ready for extensions
- Offline Operation: local LLM ensures 100% availability
- Privacy First: local processing keeps your data secure
The bot features a sophisticated 4-tier fallback system that ensures reliable service even during complete internet outages:
| Priority | Service | Description | Use Case |
|---|---|---|---|
| 1 | OpenAI GPT | Primary model for high-quality plans | Best quality, when available |
| 2 | Groq | Secondary model, OpenAI alternative | Fast fallback, reliable service |
| 3 | Local LLM | TinyLlama 1.1B local model | Offline operation, privacy |
| 4 | Fallback Plan | Predefined professional template | Guaranteed availability |
The bot automatically attempts to generate study plans using available services in order of priority:
- Primary: OpenAI API (if `OPENAI_API_KEY` is set and quota is available)
- Fallback 1: Groq (if `GROQ_API_KEY` is set)
- Fallback 2: Local LLM (TinyLlama 1.1B model)
- Last Resort: Local plan generator (comprehensive template)
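The priority chain above boils down to trying provider callables in order and falling through on any failure. A minimal sketch (the actual interface of `services/llm.py` may differ; the stand-in providers below are hypothetical):

```python
from typing import Callable, Optional

def generate_plan(topic: str,
                  providers: list[Callable[[str], Optional[str]]],
                  fallback_template: Callable[[str], str]) -> str:
    """Try each LLM provider in priority order; fall back to the template."""
    for provider in providers:
        try:
            plan = provider(topic)
            if plan:  # a provider may return None on quota/parse errors
                return plan
        except Exception:
            continue  # network error, bad key, etc. -> try the next tier
    return fallback_template(topic)  # last resort: always succeeds offline

# Usage with stand-in providers (OpenAI/Groq/local would slot in here):
flaky = lambda topic: None                       # simulates an exhausted quota
local = lambda topic: f"Local plan for {topic}"  # simulates the local LLM
print(generate_plan("algebra", [flaky, local], lambda t: f"Template: {t}"))
# -> Local plan for algebra
```

Because the template tier never raises, the chain as a whole is guaranteed to return a plan.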
The same multi-level system applies to text translation:
- OpenAI for high-quality translations
- Groq as secondary translation service
- Local LLM for offline translation capability
- Original Text if all translation services fail
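The translation chain follows the same pattern, with one key difference: the final tier returns the input unchanged, so a translation failure never blocks the user. A hedged sketch (function names are illustrative, not the project's actual API):

```python
def translate(text: str, lang: str, services: list) -> str:
    """Return the first successful translation, or the original text."""
    for service in services:
        try:
            result = service(text, lang)
            if result:
                return result
        except Exception:
            continue
    return text  # graceful fallback: show the original rather than nothing

def broken_service(text: str, lang: str) -> str:
    raise RuntimeError("service unavailable")

# Even with every service down, the user still sees the message:
print(translate("Hello", "es", [broken_service]))  # prints "Hello"
```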
- Offline Operation: works without an internet connection
- Fast Response: no network latency (0.5-2 seconds)
- Privacy: all processing happens locally on your server
- Guaranteed Availability: always accessible as a fallback
- High Quality: professional-grade study plan generation
- Cost Effective: no API costs for local operations
| Metric | OpenAI | Groq | Local LLM | Fallback |
|---|---|---|---|---|
| Response Time | 2-5s | 1-3s | 0.5-2s | 0.1s |
| Availability | 99% | 99% | 100% | 100% |
| Cost | Per token | Per token | Free | Free |
| Privacy | External | External | Local | Local |
If the OpenAI API is unavailable, out of quota, or not configured, the bot automatically uses Groq as a fallback LLM provider.
- Fast and reliable generations
- No strict quotas for most users
- OpenAI-compatible API
- Always available fallback
- Register and get your API key at Groq Console
- Add to your `.env` file: `GROQ_API_KEY=your_groq_api_key`
- No other changes needed; automatic fallback is enabled
Choose your preferred language for all bot interactions! Use `/language` to select from:
| Language | Code | Status |
|---|---|---|
| English | `en` | Primary |
| Русский (Russian) | `ru` | Full support |
| Español (Spanish) | `es` | Full support |
- Real-time translation using multi-level LLM system
- Context-aware results for better accuracy
- Automatic fallback through available services
- Original text preservation if translation fails
```bash
git clone https://github.com/AlexTkDev/EduPlannerBotAI.git
cd EduPlannerBotAI
python -m venv .venv
source .venv/bin/activate  # Windows: .venv\Scripts\activate
pip install -r requirements.txt
```
The bot includes a local TinyLlama 1.1B model for offline operation:
- Model: TinyLlama 1.1B Chat v1.0 (Q4_K_M quantized)
- Format: GGUF format
- Size: ~1.1GB
- Requirements: ~2GB RAM for optimal performance
Important: The model file is not included in the repository due to size limitations. You must download it separately:
```bash
# Download the model (choose one method)

# Option 1: Using wget
wget -O models/tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf \
  "https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF/resolve/main/tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf"

# Option 2: Using curl
curl -L -o models/tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf \
  "https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF/resolve/main/tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf"
```
See `models/README.md` for detailed download instructions and troubleshooting.
The model is automatically loaded at startup and provides offline fallback capability.
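Since the GGUF file ships separately, it is worth verifying it exists (and is not a truncated download) before enabling the local tier. A sketch under the assumption that the startup code checks the default model path; the real logic in `services/local_llm.py` may differ:

```python
from pathlib import Path

MODEL_PATH = Path("models/tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf")

def local_llm_available(path: Path = MODEL_PATH) -> bool:
    """True if the quantized model file is present and plausibly complete."""
    # A truncated download is a common failure mode; the full file is ~1.1 GB.
    return path.is_file() and path.stat().st_size > 500 * 1024 * 1024

if local_llm_available():
    print("Local LLM tier enabled")
else:
    print("Model missing - offline requests will use the template tier only")
```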
Create a `.env` file in the root directory (or rename `.env.example` to `.env`) and fill in your tokens:
```env
BOT_TOKEN=your_telegram_bot_token
OPENAI_API_KEY=your_openai_api_key
GROQ_API_KEY=your_groq_api_key
```

All environment variables are loaded from `.env` automatically.
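A minimal config reader along these lines (a hedged sketch, not necessarily the project's `config.py`; in the real project `python-dotenv` first populates `os.environ` from `.env`) fails fast when the one mandatory token is missing, while the LLM keys stay optional tiers:

```python
import os
from typing import Mapping

def load_config(env: Mapping[str, str] = os.environ) -> dict:
    """Collect tokens; only BOT_TOKEN is mandatory, LLM keys are optional."""
    token = env.get("BOT_TOKEN")
    if not token:
        raise RuntimeError("BOT_TOKEN is required - set it in .env")
    return {
        "bot_token": token,
        "openai_api_key": env.get("OPENAI_API_KEY"),  # optional: tier 1
        "groq_api_key": env.get("GROQ_API_KEY"),      # optional: tier 2
    }

print(load_config({"BOT_TOKEN": "123:abc"})["openai_api_key"])  # None -> tier skipped
```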
```bash
python bot.py
```
Run the bot in a container:

```bash
docker-compose up --build
```

Environment variables are loaded from `.env`.
When you choose to schedule reminders, the bot sends a separate Telegram message for each study plan step, ensuring timely notifications directly in your chat.
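The one-message-per-step behavior can be sketched as a simple async loop (an illustration with a stand-in send function in place of aiogram's `bot.send_message`; the project's `services/reminders.py` may be structured differently):

```python
import asyncio

async def send_step_reminders(steps: list[str], send, delay: float = 0.0) -> int:
    """Send one reminder message per study-plan step; return the count sent."""
    for i, step in enumerate(steps, start=1):
        await send(f"Reminder {i}/{len(steps)}: {step}")
        if delay:
            await asyncio.sleep(delay)  # real reminders would be scheduled, not slept
    return len(steps)

# Stand-in for bot.send_message(chat_id, text):
sent: list[str] = []
async def fake_send(text: str) -> None:
    sent.append(text)

asyncio.run(send_step_reminders(["Read chapter 1", "Do exercises"], fake_send))
print(sent[0])  # Reminder 1/2: Read chapter 1
```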
- 100% test coverage for core logic and all handlers
- Pylint score: 10.00/10 (Perfect)
- Code style: PEP8 and pylint compliant
- Run tests: `pytest`
```
EduPlannerBotAI/
├── bot.py               # Bot entry point
├── config.py            # Load tokens from .env
├── handlers/            # Command and message handlers
│   ├── __init__.py
│   ├── start.py         # /start and greeting
│   ├── planner.py       # Study plan generation flow
│   └── language.py      # Language selection and filter
├── services/            # Core logic and helper functions
│   ├── llm.py           # Multi-level LLM integration (OpenAI → Groq → Local LLM → Fallback)
│   ├── local_llm.py     # Local TinyLlama model integration
│   ├── pdf.py           # PDF export
│   ├── txt.py           # TXT export
│   ├── reminders.py     # Reminder simulation
│   └── db.py            # TinyDB database
├── models/              # Local LLM model storage
│   ├── README.md        # Model download instructions
│   └── .gitkeep         # Preserve directory structure
├── .env                 # Environment variables
├── requirements.txt     # Dependencies list
└── README.md            # Project documentation
```
| Component | Purpose | Version |
|---|---|---|
| Python | Programming language | 3.10+ |
| aiogram | Telegram Bot Framework | 3.x |
| OpenAI API | Primary LLM provider | Latest |
| Groq API | Secondary LLM provider | Latest |
| Local LLM | TinyLlama 1.1B offline | GGUF |
| llama-cpp-python | Local LLM inference | Latest |
| fpdf | PDF file generation | Latest |
| TinyDB | Lightweight NoSQL database | Latest |
| python-dotenv | Environment variable management | Latest |
| aiofiles | Asynchronous file operations | Latest |
- GitHub Actions: Automated Pylint analysis and testing
- Python Compatibility: 3.10, 3.11, 3.12, 3.13
- Code Quality: Custom `.pylintrc` configuration
- Testing: pytest with 100% coverage
- Style: PEP8 compliant
- Multi-Level LLM Architecture: OpenAI → Groq → Local LLM → Fallback Plan
- Local LLM Integration: TinyLlama 1.1B model for offline operation
- Guaranteed Availability: Bot works even without internet connection
- Enhanced Fallback System: Robust error handling and service switching
- Improved Plan Quality: Professional-grade study plan templates
- Offline Translation: Local LLM supports offline text translation
- Performance Optimization: Efficient model loading and inference
- Comprehensive Logging: Detailed monitoring of LLM service transitions
- Eliminated Single Points of Failure: No more dependency on single API
- Reduced Response Times: Local operations provide instant results
- Better Resource Management: Optimized model loading and cleanup
- Production Ready: Enterprise-grade stability and monitoring
- Pylint Score: 10.00/10 (Perfect)
- Test Coverage: 100% for all core logic and handlers
- Style Compliance: PEP8 and pylint compliant
- Documentation: Comprehensive inline documentation
If you experience too many `429 Too Many Requests` errors:
- Increase delays: adjust `BASE_RETRY_DELAY` and `MAX_RETRIES`
- Use lighter models: consider `gpt-3.5-turbo` instead of `gpt-4`
- Upgrade plan: consider a higher-quota OpenAI plan
- Automatic fallback: the bot will use Groq and the local LLM automatically
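The retry settings mentioned above correspond to a standard exponential-backoff pattern, sketched here with constant names mirroring those settings (the surrounding code is illustrative, not the bot's actual implementation):

```python
import time

BASE_RETRY_DELAY = 1.0  # seconds before the first retry
MAX_RETRIES = 3

def call_with_retries(fn, *, retries: int = MAX_RETRIES,
                      base_delay: float = BASE_RETRY_DELAY, sleep=time.sleep):
    """Retry fn() on rate-limit errors with exponentially growing delays."""
    for attempt in range(retries):
        try:
            return fn()
        except RuntimeError:  # stand-in for an HTTP 429 error from the client
            if attempt == retries - 1:
                raise  # out of retries -> caller falls through to Groq/local LLM
            sleep(base_delay * 2 ** attempt)  # waits 1s, 2s, 4s, ...

# Simulate a provider that returns 429 twice, then succeeds:
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("429 Too Many Requests")
    return "ok"

print(call_with_retries(flaky, sleep=lambda s: None))  # ok
```

Injecting `sleep` keeps the sketch testable; the real code would simply wait.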
We welcome contributions! To improve this bot:
- Fork the repository
- Create a feature branch (`git checkout -b feature-name`)
- Commit your changes (all code and comments must be in English)
- Push to your fork
- Submit a pull request
- Response Time: 0.1s - 5s depending on service used
- Uptime: 99.9%+ with fallback system
- Offline Capability: 100% (local LLM)
- Service Recovery: Automatic (intelligent fallback)
- Service Health: Real-time status tracking
- Performance Metrics: Response time monitoring
- Error Tracking: Comprehensive error logging
- Resource Usage: Memory and CPU monitoring
Created with ❤️. For feedback and collaboration:
- Telegram: @Aleksandr_Tk
- GitHub Issues: Report bugs
- Documentation: README.md
MIT License - see LICENSE file for details.
EduPlannerBotAI v4.0.0 represents a significant milestone, transforming the bot from a simple OpenAI-dependent service into a robust, enterprise-grade system with guaranteed availability and offline operation capabilities. This release sets the foundation for future enhancements while maintaining backward compatibility and improving overall user experience.