ChatLLM

A streamlined and versatile chatbot interface supporting multiple language models, including OpenAI GPT and Llama models served through Ollama. Designed for flexibility and ease of use, the project provides a Gradio-based frontend for seamless interaction.



Features ⚙️

  • Multi-model support: Compatible with OpenAI GPT, Ollama Llama, Mistral, and Qwen models.
  • Dynamic model management: Automatically verifies and downloads models when needed.
  • Streamed responses: Real-time streamed output for enhanced user experience.
  • Custom prompts: System prompt management through prompts/system_prompt.txt (a minimal sketch combining these features follows this list).
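
To make the flow concrete, here is a minimal, hypothetical sketch (not the notebook's actual code) of how these features fit together: a Gradio chat frontend that loads prompts/system_prompt.txt and streams replies from an OpenAI-compatible endpoint. The model name gpt-4o-mini is a placeholder; the packages are the gradio and openai libraries from requirements.txt.

    # Sketch only: Gradio chat UI + system prompt file + streamed OpenAI replies.
    from pathlib import Path

    import gradio as gr
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    system_prompt = Path("prompts/system_prompt.txt").read_text(encoding="utf-8")

    def chat(message, history):
        # Rebuild the conversation: system prompt first, prior turns, then the new message.
        messages = [{"role": "system", "content": system_prompt}]
        messages += [{"role": m["role"], "content": m["content"]} for m in history]
        messages.append({"role": "user", "content": message})

        # Stream tokens back to the UI as they arrive.
        stream = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder: any chat model available to your key
            messages=messages,
            stream=True,
        )
        partial = ""
        for chunk in stream:
            partial += chunk.choices[0].delta.content or ""
            yield partial

    gr.ChatInterface(chat, type="messages").launch()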

Requirements 🛠️

Ensure you have the following installed:

  • Python 3.8+
  • Required Python libraries (see requirements.txt)
  • Ollama (if using Llama models): download it from the Ollama website, https://ollama.com

Installation 💻

  1. Clone the repository:

    git clone https://github.com/GiulioRusso/ChatLLM.git
    cd ChatLLM
  2. Install dependencies:

    pip install -r requirements.txt
  3. Ensure Ollama is installed and running if using Llama models (a quick check is sketched after these steps).
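
If you use Ollama, the "dynamic model management" feature amounts to checking the local Ollama server and pulling any missing model. A rough sketch against Ollama's standard local REST API (default port 11434); the model name llama3 is only an example:

    # Sketch only: verify the local Ollama server and pull a model if it is missing.
    import json
    import requests

    OLLAMA = "http://localhost:11434"
    MODEL = "llama3"  # example model name

    # Is the server reachable, and is the model already present?
    tags = requests.get(f"{OLLAMA}/api/tags", timeout=5).json()
    available = {m["name"] for m in tags.get("models", [])}

    if not any(name.startswith(MODEL) for name in available):
        # Pull the model; Ollama streams progress as one JSON object per line.
        with requests.post(f"{OLLAMA}/api/pull", json={"name": MODEL}, stream=True) as resp:
            for line in resp.iter_lines():
                if line:
                    print(json.loads(line).get("status", ""))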


Usage 🦾

  1. Open the Jupyter Notebook:

    jupyter-notebook main.ipynb
  2. Place your system prompt file in the prompts folder as system_prompt.txt (an example is shown after these steps).

  3. Run the notebook and interact with the chatbot interface.
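
The content of system_prompt.txt is entirely up to you. An illustrative example (not shipped with the repository):

    You are a concise, helpful assistant. Answer in the same language as the user,
    and ask a clarifying question when the request is ambiguous.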


To Do 📋

  • Add support for Claude models.
  • Implement a file upload section for interactive file processing.
  • Add functionality for image generation using integrated models.
