Here are the features you get out of the box:
- Fully dockerized bot
- Response streaming without hitting rate limits, using the SentenceBySentence method (see the sketch below)
- Mention the bot [@] in a group chat to receive an answer
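The idea behind sentence-by-sentence streaming is to buffer the model's token stream and only update the Telegram message once a full sentence has arrived, which keeps the number of message edits well under Telegram's rate limits. A minimal sketch of the idea, assuming a generic iterable of text chunks; the generator and the name `sentences_from_stream` are illustrative, not this repo's actual code:

```python
import re

def sentences_from_stream(token_stream):
    """Buffer streamed text chunks and yield whole sentences.

    token_stream: any iterable of text chunks (a hypothetical
    stand-in for the Ollama streaming response).
    """
    buffer = ""
    for token in token_stream:
        buffer += token
        # Split on sentence-ending punctuation followed by whitespace.
        parts = re.split(r"(?<=[.!?])\s+", buffer)
        # Everything except the last part is a complete sentence.
        for sentence in parts[:-1]:
            yield sentence
        buffer = parts[-1]
    if buffer.strip():
        yield buffer  # flush whatever remains at the end
```

Each yielded sentence then triggers one message send/edit instead of one per token, so the edit rate stays low regardless of how fast tokens arrive.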
Roadmap:
- Proper Docker config
- Add more API-related functions (system prompt editor, Ollama version fetcher, etc.)
- Redis DB integration
- Implement chat history (the bot currently can't remember more than one prompt)
Installation (non-Docker):
- Install the latest Python
- Clone the repository:
  ```
  git clone https://github.com/ruecat/ollama-telegram
  ```
- Install the requirements from requirements.txt (optionally inside a virtual environment; see the sketch after these steps):
  ```
  pip install -r requirements.txt
  ```
- Enter all values in .env.example (each parameter is documented in the table below, with an example .env after it)
- Rename .env.example to .env
- Launch the bot:
  ```
  python3 run.py
  ```
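If you want to keep the dependencies isolated, the same steps work inside a standard Python virtual environment. A minimal sketch, assuming the stock `venv` module; the directory name `.venv` is just a convention:

```bash
# Create and activate a virtual environment (directory name is arbitrary)
python3 -m venv .venv
source .venv/bin/activate   # on Windows: .venv\Scripts\activate

# Install dependencies and launch the bot
pip install -r requirements.txt
python3 run.py
```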
Installation (Docker):
- Clone the repository:
  ```
  git clone https://github.com/ruecat/ollama-telegram
  ```
- Enter all values in .env.example (each parameter is documented in the table below, with an example .env after it)
- Rename .env.example to .env
- Run ONE of the following Docker Compose commands to start:
  - To run Ollama in a Docker container (optionally, uncomment the GPU section of docker-compose.yml to enable Nvidia GPU acceleration; see the Compose sketch after this list):
    ```
    docker compose up --build -d
    ```
  - To run Ollama from a locally installed instance (mainly for macOS, since the Docker image doesn't support Apple GPU acceleration yet):
    ```
    docker compose up --build -d ollama-telegram
    ```
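For reference, GPU access in Compose files is usually granted with a device reservation like the one below. This is a generic sketch of the standard Docker Compose GPU syntax, not a verbatim copy of this repo's commented-out block; the service name and image are placeholders:

```yaml
# Generic sketch - service name and image are placeholders,
# following the standard Docker Compose device-reservation syntax.
services:
  ollama:
    image: ollama/ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```

Note that this requires the NVIDIA Container Toolkit to be installed on the host.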
| Parameter | Description | Required? | Default Value | Example |
|---|---|---|---|---|
| TOKEN | Your Telegram bot token. [How to get a token?] | Yes | yourtoken | MTA0M****.GY5L5F.****g*****5k |
| ADMIN_IDS | Telegram user IDs of admins; they can change the model and control the bot. | Yes | | 1234567890 OR 1234567890,0987654321, etc. |
| USER_IDS | Telegram user IDs of regular users; they can only chat with the bot. | Yes | | 1234567890 OR 1234567890,0987654321, etc. |
| INITMODEL | Default LLM | No | llama2 | mistral:latest, mistral:7b-instruct |
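Putting the table together, a filled-in .env might look like the sketch below. All values are placeholders taken from the table above, and the real .env.example may contain additional parameters not listed here:

```
# Placeholder values - replace with your own
TOKEN=yourtoken
ADMIN_IDS=1234567890
USER_IDS=1234567890,0987654321
INITMODEL=llama2
```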