
A powerful Flask-based web application that enables an LLM to interact with multiple tools, performing complex tasks through intelligent function calling.


LLM Tool Calling Web Application πŸ€–πŸ’¬

Overview

This is a web application that enables a Large Language Model (LLM) to interact with multiple tools and perform tasks using function calling. Built with Flask and backed by an LM Studio server, this application provides an interactive chat experience with multi-tool support.

🌟 Key Features

Multi-Tool Integration

  • 🐍 Python Code Execution: Run and analyze Python code directly in the chat
  • 🌐 Web Search: Perform real-time web searches and extract citations
  • πŸ“š Wikipedia Lookup: Instantly retrieve Wikipedia article introductions
  • πŸ–ΌοΈ Image Search: Find and display images related to your queries
  • πŸŽ₯ YouTube Search: Search and retrieve YouTube video information
  • πŸ”— Web Scraping: Extract content from specific websites
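Each of the tools above is exposed to the model through function-calling schemas in the OpenAI-compatible format that LM Studio's server accepts. As an illustrative sketch only — the tool names and parameters this application actually registers may differ — a web-search tool definition looks like:

```python
import json

# Illustrative tool schema in the OpenAI function-calling format
# accepted by LM Studio's /v1/chat/completions endpoint.
# The name "web_search" and its parameters are assumptions for this
# sketch, not necessarily what the application registers.
web_search_tool = {
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Perform a real-time web search and return results with citations.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "The search query"},
                "max_results": {"type": "integer", "description": "How many results to return"},
            },
            "required": ["query"],
        },
    },
}

# The full list of tool schemas is sent with each chat completion
# request; the model then decides when to emit a tool call.
tools = [web_search_tool]
print(json.dumps(tools, indent=2))
```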

Advanced Conversation Management

  • πŸ’¬ Persistent Conversations: Save and load chat histories
  • ♻️ Conversation Regeneration: Easily regenerate or delete last messages
  • 🏷️ Automatic Conversation Naming: Smart, context-aware conversation titles
  • πŸ“œ Conversation Listing: Browse and manage your chat history
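Saving and loading a conversation amounts to serializing its message history. A minimal sketch of the pattern — the storage location and helper names here are assumptions, not this repo's actual implementation:

```python
import json
from pathlib import Path

# Assumed storage location for saved chats; the real app may differ.
CONVERSATIONS_DIR = Path("conversations")

def save_conversation(conv_id: str, messages: list) -> Path:
    """Write one chat's message history to a JSON file."""
    CONVERSATIONS_DIR.mkdir(exist_ok=True)
    path = CONVERSATIONS_DIR / f"{conv_id}.json"
    path.write_text(json.dumps(messages, indent=2), encoding="utf-8")
    return path

def load_conversation(conv_id: str) -> list:
    """Read a saved message history back into memory."""
    path = CONVERSATIONS_DIR / f"{conv_id}.json"
    return json.loads(path.read_text(encoding="utf-8"))

messages = [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi! How can I help?"},
]
save_conversation("demo", messages)
print(load_conversation("demo"))
```

Regenerating the last response then just means truncating the loaded list before the final assistant message and re-querying the model.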

User Experience

  • 🎨 Responsive Design: Clean, modern UI with Tailwind CSS
  • </> Code Highlighting: Syntax highlighting with Prism.js
  • πŸ“Š Markdown & Math Support: Render markdown and mathematical equations
  • 🌈 Tool Results Visualization: Interactive, collapsible tool result sections

πŸ–₯️ Screenshots

(Screenshots: Python execution, web and image search, YouTube, URL scraping, Wikipedia lookup, and conversation history)

πŸ”§ Prerequisites

  • LM Studio installed, with its local server running
  • Python 3 and pip (for running without Docker)
  • Docker (optional, for the containerized setup)

πŸ“¦ Installation

Clone the Repository

```bash
git clone https://github.com/yossifibrahem/LLM-Tool-Calling-Web-Application.git
cd LLM-Tool-Calling-Web-Application
```

πŸš€ Running the Application

🐳 Option 1: Using Docker

  1. Install Docker on your machine (if you haven't already).

  2. Ensure LM Studio is running on your machine with the server started.

  3. Build the Docker image:

```bash
docker build -t llm_tool_app .
```

  4. On Linux, run (the `--add-host` flag maps `host.docker.internal` to the host gateway, which Linux does not do automatically):

```bash
docker run --name Tools-UI -p 8080:8080 --add-host=host.docker.internal:host-gateway -e LMSTUDIO_BASE_URL="http://host.docker.internal:1234/v1" -e LMSTUDIO_API_KEY="lm-studio" -e LMSTUDIO_MODEL="lmstudio-community/qwen2.5-7b-instruct" llm_tool_app
```

     On Windows/macOS, Docker Desktop provides `host.docker.internal` automatically, so run:

```bash
docker run --name Tools-UI -p 8080:8080 -e LMSTUDIO_BASE_URL="http://host.docker.internal:1234/v1" -e LMSTUDIO_API_KEY="lm-studio" -e LMSTUDIO_MODEL="lmstudio-community/qwen2.5-7b-instruct" llm_tool_app
```

If your LMSTUDIO_BASE_URL, LMSTUDIO_API_KEY, and/or LMSTUDIO_MODEL values differ on your setup, change them in the docker run command.

🐍 Option 2: Using Python

  1. Install dependencies:

```bash
pip install -r requirements.txt
```

  2. Set environment variables:

```bash
# For Windows
set FLASK_ENV=production
set FLASK_DEBUG=0

# For Unix/macOS
export FLASK_ENV=production
export FLASK_DEBUG=0
```

If you need to change the server host, port, LM Studio URL, API key, or model, set the following environment variables to the desired values like so:

```bash
# For Windows
set LMSTUDIO_BASE_URL=YOUR_VALUE_HERE
set LMSTUDIO_API_KEY=YOUR_VALUE_HERE
set LMSTUDIO_MODEL=YOUR_VALUE_HERE

# For Unix/macOS
export LMSTUDIO_BASE_URL=YOUR_VALUE_HERE
export LMSTUDIO_API_KEY=YOUR_VALUE_HERE
export LMSTUDIO_MODEL=YOUR_VALUE_HERE
```
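Inside the application these variables are presumably read with fallbacks. A sketch of the pattern — the default values shown mirror the docker run examples above and are assumptions, not necessarily the app's real defaults:

```python
import os

# Read LM Studio connection settings from the environment, falling
# back to defaults when unset. These defaults are assumed for this
# sketch; the application's actual defaults may differ.
LMSTUDIO_BASE_URL = os.environ.get("LMSTUDIO_BASE_URL", "http://localhost:1234/v1")
LMSTUDIO_API_KEY = os.environ.get("LMSTUDIO_API_KEY", "lm-studio")
LMSTUDIO_MODEL = os.environ.get(
    "LMSTUDIO_MODEL", "lmstudio-community/qwen2.5-7b-instruct"
)

print(LMSTUDIO_BASE_URL, LMSTUDIO_MODEL)
```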

  3. Set up crawl4ai:

```bash
# Run post-installation setup
crawl4ai-setup

# Verify your installation
crawl4ai-doctor
```

  4. Run the server:

```bash
python server.py
```

     or double-click serverstart.bat.

Access the Application

Open your browser and navigate to http://localhost:8080
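The server listens on port 8080. As an illustration of the pattern only — this is not the contents of this repo's server.py — a Flask entry point that binds that port looks like:

```python
from flask import Flask

# Minimal sketch of a Flask entry point serving on port 8080.
# The route and response body are placeholders for illustration.
app = Flask(__name__)

@app.route("/")
def index():
    return "LLM Tool Calling Web Application"

if __name__ == "__main__":
    # Bind 0.0.0.0 so the server is reachable from outside a container
    app.run(host="0.0.0.0", port=8080)
```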

🀝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

πŸ™ Acknowledgements

πŸ“ž Support

For issues or questions, please open a GitHub issue or contact the maintainer.

This README was written by Anthropic's Claude.
