This is a web application that enables a Large Language Model (LLM) to interact with multiple tools and perform tasks through function calling. Built with Flask and backed by an LM Studio server, it provides an interactive chat experience with multi-tool support.
- Python Code Execution: Run and analyze Python code directly in the chat
- Web Search: Perform real-time web searches and extract citations
- Wikipedia Lookup: Instantly retrieve Wikipedia article introductions
- Image Search: Find and display images related to your queries
- YouTube Search: Search and retrieve YouTube video information
- Web Scraping: Extract content from specific websites
- Persistent Conversations: Save and load chat histories
- Conversation Regeneration: Easily regenerate or delete the last messages
- Automatic Conversation Naming: Smart, context-aware conversation titles
- Conversation Listing: Browse and manage your chat history
- Responsive Design: Clean, modern UI with Tailwind CSS
- Code Highlighting: Syntax highlighting with Prism.js
- Markdown & Math Support: Render markdown and mathematical equations
- Tool Results Visualization: Interactive, collapsible tool result sections
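The tool-calling loop behind these features can be sketched in a few lines. This is a minimal illustration, not the project's actual code: the tool names (`run_python`, `wikipedia_lookup`) and the dispatcher are hypothetical stand-ins for how a model-issued, OpenAI-style tool call is routed to a local Python function.

```python
import json

# Hypothetical local tools; the real app wires these to code execution,
# search, scraping, and so on.
def run_python(code: str) -> str:
    # The real app would execute the code; here we just echo it.
    return f"executed: {code}"

def wikipedia_lookup(title: str) -> str:
    return f"intro for: {title}"

TOOLS = {"run_python": run_python, "wikipedia_lookup": wikipedia_lookup}

def dispatch(tool_call: dict) -> str:
    """Route one model-issued tool call (name + JSON arguments) to
    the matching Python function."""
    fn = TOOLS[tool_call["name"]]
    args = json.loads(tool_call["arguments"])
    return fn(**args)

# Simulate a tool call as the model would emit it
result = dispatch({"name": "wikipedia_lookup",
                   "arguments": json.dumps({"title": "Flask"})})
print(result)  # intro for: Flask
```

In the real loop, the returned string is appended to the conversation as a tool message and the model is queried again, until it answers without requesting another tool.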

- Python 3.8+
- LM Studio
- Recommended Model: Qwen2.5 7B Instruct
```bash
git clone https://github.com/yossifibrahem/LLM-Tool-Calling-Web-Application.git
cd LLM-Tool-Calling-Web-Application
```
Install using Docker

1. Install Docker on your machine (if you haven't already).
2. Ensure LM Studio is running on your machine with its server started.
3. Build the Docker image:

```bash
docker build -t llm_tool_app .
```

4. Run the container. On Windows/macOS:

```bash
docker run --name Tools-UI -p 8080:8080 -e LMSTUDIO_BASE_URL="http://host.docker.internal:1234/v1" -e LMSTUDIO_API_KEY="lm-studio" -e LMSTUDIO_MODEL="lmstudio-community/qwen2.5-7b-instruct" llm_tool_app
```

On Linux, where `host.docker.internal` must be mapped to the host explicitly:

```bash
docker run --name Tools-UI -p 8080:8080 --add-host=host.docker.internal:host-gateway -e LMSTUDIO_BASE_URL="http://host.docker.internal:1234/v1" -e LMSTUDIO_API_KEY="lm-studio" -e LMSTUDIO_MODEL="lmstudio-community/qwen2.5-7b-instruct" llm_tool_app
```

If your LMSTUDIO_BASE_URL, LMSTUDIO_API_KEY, and/or LMSTUDIO_MODEL values differ on your setup, change them in the `docker run` command.
Run using Python

1. Install dependencies:

```bash
pip install -r requirements.txt
```

2. Set environment variables:

```bash
# For Windows
set FLASK_ENV=production
set FLASK_DEBUG=0

# For Unix/macOS
export FLASK_ENV=production
export FLASK_DEBUG=0
```
If you need to change the server host, port, LM Studio URL, API key, or model, set the following environment variables to the desired values:

```bash
# For Windows
set LMSTUDIO_BASE_URL=YOUR_VALUE_HERE
set LMSTUDIO_API_KEY=YOUR_VALUE_HERE
set LMSTUDIO_MODEL=YOUR_VALUE_HERE

# For Unix/MacOS
export LMSTUDIO_BASE_URL=YOUR_VALUE_HERE
export LMSTUDIO_API_KEY=YOUR_VALUE_HERE
export LMSTUDIO_MODEL=YOUR_VALUE_HERE
```
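The application presumably reads these variables at startup; a sketch of that pattern, with fallbacks mirroring the defaults shown in the `docker run` commands above (the exact defaults in the project may differ):

```python
import os

# Fall back to the LM Studio defaults when the variables are unset.
LMSTUDIO_BASE_URL = os.environ.get("LMSTUDIO_BASE_URL",
                                   "http://localhost:1234/v1")
LMSTUDIO_API_KEY = os.environ.get("LMSTUDIO_API_KEY", "lm-studio")
LMSTUDIO_MODEL = os.environ.get("LMSTUDIO_MODEL",
                                "lmstudio-community/qwen2.5-7b-instruct")

print(LMSTUDIO_BASE_URL)
```

Because `os.environ.get` takes a default, the app runs unconfigured against a local LM Studio instance, and any exported variable overrides the default without code changes.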
Set up crawl4ai:

```bash
# Run post-installation setup
crawl4ai-setup

# Verify your installation
crawl4ai-doctor
```
Run the server with:

```bash
python server.py
```

or double-click `serverstart.bat` (Windows).

Open your browser and navigate to `http://localhost:8080`.
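If the chat loads but the model never responds, the usual culprit is an unreachable LM Studio server. A quick standard-library check can confirm the server is up; the helper names below are illustrative, not part of the project:

```python
import json
import urllib.request

def models_url(base_url: str) -> str:
    """Build the OpenAI-compatible /models endpoint URL that LM Studio exposes."""
    return base_url.rstrip("/") + "/models"

def list_models(base_url: str = "http://localhost:1234/v1") -> list:
    """Return the model IDs the running LM Studio server reports.
    Requires the server to actually be up."""
    with urllib.request.urlopen(models_url(base_url), timeout=5) as resp:
        payload = json.load(resp)
    return [m["id"] for m in payload.get("data", [])]

print(models_url("http://localhost:1234/v1"))  # http://localhost:1234/v1/models
```

Calling `list_models()` should include your loaded model's ID; a connection error instead means LM Studio's server is not running or the base URL is wrong.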
Contributions are welcome! Please feel free to submit a Pull Request.
- llama.cpp
- LM Studio
- Open-source libraries and tools used in this project
- crawl4ai
- azure-openai-function-calling
For issues or questions, please open a GitHub issue or contact the maintainer.
This README is written by Anthropic's Claude