Kazushin, a fork of this project, is a Twitch chat bot that reads chat and generates text-to-speech responses using the OpenAI API and the Google Cloud API. It comes with built-in profanity detection and more.
For a more comprehensive guide, check out the documentation.
You will need pip installed. To install the prerequisites, run the following command in a command line:

```sh
pip install -r requirements.txt
```
- Clone the repo or fork it:
  ```sh
  git clone https://github.com/TheSoftDiamond/Kazushin.git
  ```
- Populate the creds.py file with your info (see the sketch after this list).
- Adjust the settings.py file to your needs.
- Run main_usercontext.py
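As a rough sketch of what populating creds.py might look like (the variable names below are assumptions for illustration only; the creds.py template shipped with the repo is the source of truth):

```python
# creds.py (hypothetical sketch; the real file in the repo defines its own fields)

# Twitch chat credentials
TWITCH_OAUTH_TOKEN = "oauth:xxxxxxxxxxxxxxxxxxxx"  # OAuth token for the bot account
TWITCH_CHANNEL = "your_channel_name"               # channel whose chat the bot reads

# OpenAI API key, used to generate responses
OPENAI_API_KEY = "sk-..."

# Google Cloud service-account JSON file, used for text-to-speech
GOOGLE_APPLICATION_CREDENTIALS = "path/to/service-account.json"
```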
Make sure you have downloaded Ollama.
- Create a Modelfile in your project, pointing it to your model gguf:
  ```
  FROM llama-2-7b.Q2_K.gguf
  ```
  You can download model data from sites such as HuggingFace.
- Create the Ollama model from your existing template using PowerShell/bash:
  ```console
  ❯ ollama create llama2 -f Modelfile
  transferring model data
  creating model layer
  using already created layer sha256:a630f354771cf25496e079a49656730858712315cc71aee4adf9b97aceb251f8
  writing layer sha256:9d07cddc325f2abd269514a29cb3165eac0b06accd018a1b4da9982d6b986647
  writing manifest
  success
  ```
- Serve the Ollama instance:
  ```console
  ❯ ollama serve
  time=2024-07-10T20:20:46.364-04:00 level=INFO source=images.go:710 msg="total blobs: 0"
  time=2024-07-10T20:20:46.364-04:00 level=INFO source=images.go:717 msg="total unused blobs removed: 0"
  time=2024-07-10T20:20:46.364-04:00 level=INFO source=routes.go:1021 msg="Listening on 127.0.0.1:11434 (version 0.1.28)"
  time=2024-07-10T20:20:46.364-04:00 level=INFO source=payload_common.go:107 msg="Extracting dynamic libraries..."
  time=2024-07-10T20:20:47.967-04:00 level=INFO source=payload_common.go:146 msg="Dynamic LLM libraries [rocm_v5 cpu_avx2 cpu rocm_v6 cpu_avx cuda_v11]"
  time=2024-07-10T20:20:47.967-04:00 level=INFO source=gpu.go:94 msg="Detecting GPU type"
  time=2024-07-10T20:20:47.967-04:00 level=INFO source=gpu.go:265 msg="Searching for GPU management library libnvidia-ml.so"
  time=2024-07-10T20:20:47.980-04:00 level=INFO source=gpu.go:311 msg="Discovered GPU libraries: []"
  time=2024-07-10T20:20:47.980-04:00 level=INFO source=gpu.go:265 msg="Searching for GPU management library librocm_smi64.so"
  time=2024-07-10T20:20:47.980-04:00 level=INFO source=gpu.go:311 msg="Discovered GPU libraries: []"
  time=2024-07-10T20:20:47.980-04:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
  time=2024-07-10T20:20:47.980-04:00 level=INFO source=routes.go:1044 msg="no GPU detected"
  ```
  This is a foreground process; you will need to run it as a service in order to utilize it as a daemon. For example, on Linux systems:
  ```sh
  systemctl enable --now ollama
  ```
  This will enable Ollama on boot.
- In settings.py, adjust the localAI_ModelName to match your model name (a sanity-check sketch follows this list):
  ```python
  ### Local AI SETTINGS ###
  # Model Name
  localAI_ModelName = "llama2"
  ```
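Once `ollama serve` is running and `localAI_ModelName` matches your model, you can sanity-check the local endpoint outside the bot. Here is a minimal sketch against Ollama's REST API on its default port; the model name assumes the `llama2` example above:

```python
# Quick sanity check for a locally served Ollama model.
# Requires: pip install requests
import requests

OLLAMA_URL = "http://127.0.0.1:11434/api/generate"

payload = {
    "model": "llama2",  # must match localAI_ModelName in settings.py
    "prompt": "Say hello in one short sentence.",
    "stream": False,    # return one JSON object instead of a token stream
}

resp = requests.post(OLLAMA_URL, json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["response"])
```

If this prints a reply, the bot's local AI settings should work against the same endpoint.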
- Separate conversations and prompts per user
- Profanity Filter
- Detect Cheers, Keywords, and more
- Have the bot speak out loud and/or post messages to chat
- and many more!
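To give a sense of what cheer detection involves (a purely illustrative sketch, not Kazushin's actual implementation), Twitch bits show up in chat as `cheer<amount>` tokens that a regex can pick up:

```python
# Illustrative sketch only; not Kazushin's actual detection code.
import re

# Twitch bits arrive in chat messages as tokens like "Cheer100" or "cheer50".
CHEER_PATTERN = re.compile(r"\bcheer(\d+)\b", re.IGNORECASE)

def total_bits(message: str) -> int:
    """Sum the bits cheered within a single chat message."""
    return sum(int(amount) for amount in CHEER_PATTERN.findall(message))

print(total_bits("Cheer100 great stream! cheer50"))  # prints 150
```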
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!
- Fork the Project
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the Branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
Distributed under the MIT License.