
Local AI Development Environment

This project sets up a local AI development environment with Ollama, OpenWebUI, and Bolt DIY, supporting both NVIDIA and AMD GPUs.

Getting Started

  1. Clone the repository:
git clone https://github.com/leex279/bolt-diy-full-stack.git
  2. Navigate to the project directory:
cd bolt-diy-full-stack

Prerequisites

  • Docker with the Compose plugin (the commands below use docker compose); on Windows, Docker Desktop
  • For NVIDIA GPUs: a current NVIDIA driver (nvidia-smi should work) and GPU access enabled in Docker
  • For AMD GPUs: ROCm support (on Windows only via WSL2; see the Troubleshooting and WSL2 sections below)

Quick Start (Windows)

For Windows users, we provide an automated installation script:

  1. Double-click install.bat or run it from a command prompt:
.\install.bat

This script will:

  • Start the appropriate services based on your GPU (NVIDIA or AMD)
  • Pull the Qwen 7B model
  • Open your browser to the Bolt DIY interface

Note: The initial model download may take several minutes depending on your internet connection.
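
If the script fails or you prefer to run the steps yourself, the manual equivalent on an NVIDIA system looks roughly like this (use docker-compose-amd.yml for AMD GPUs), then open http://localhost:3000 in your browser:

docker compose -f docker-compose-nvidia.yml up -d
docker exec -it ollama ollama pull qwen:7b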

Project Structure

project/
├── docker-compose-amd.yml
├── docker-compose-nvidia.yml
├── Dockerfile
├── .env.local
└── README.md

Setup Instructions

1. Environment Setup

Create a .env.local file for Bolt DIY configuration (if needed).
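
A minimal sketch of what .env.local might contain, assuming Bolt DIY reads its Ollama endpoint from an OLLAMA_API_BASE_URL variable (check the bolt.diy documentation for the exact variable names your version expects):

# Point Bolt DIY at the local Ollama instance (variable name is an assumption)
OLLAMA_API_BASE_URL=http://host.docker.internal:11434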

2. Starting the Services

For NVIDIA GPUs:

docker compose -f docker-compose-nvidia.yml up -d

Note: This will automatically build the custom Ollama image on first run.
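
To confirm the container actually sees the GPU, you can run nvidia-smi inside it (the container name ollama matches the exec commands used later in this README); if the NVIDIA Container Toolkit is set up correctly, this prints the same table as on the host:

docker exec -it ollama nvidia-smi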

For AMD GPUs:

docker compose -f docker-compose-amd.yml up -d

Note: The AMD version uses pre-built images.
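
Before starting, you can check that the ROCm device nodes exist on the Linux side, since ROCm containers typically need /dev/kfd and /dev/dri passed through (whether the compose file maps them is an assumption to verify there):

ls -l /dev/kfd /dev/dri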

3. Accessing the Services

  • Ollama API: http://localhost:11434
  • OpenWebUI: http://localhost:8080
  • Bolt DIY: http://localhost:3000
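
A quick smoke test that all three services respond (Ollama's root endpoint normally answers with a short "Ollama is running" message; the other two should return an HTTP status code):

curl http://localhost:11434
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:3000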

Common Commands

Stop services:

docker compose -f docker-compose-[nvidia/amd].yml down

View logs:

docker compose -f docker-compose-[nvidia/amd].yml logs -f

Rebuild and restart:

docker compose -f docker-compose-[nvidia/amd].yml up -d --build

Service Details

Ollama

  • Port: 11434
  • GPU-enabled for both AMD and NVIDIA
  • Persistent storage for models

OpenWebUI

  • Port: 8080
  • Web interface for Ollama
  • Persistent data storage

Bolt DIY

  • Port: 3000
  • Development environment
  • Requires .env.local configuration

Volumes

The following persistent volumes are created:

  • ollama: For Ollama model storage
  • openwebui-data: For OpenWebUI data
  • bolt-diy-data: For Bolt DIY data
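
After the first start you can confirm the volumes exist and see where Docker stores them; note that Compose may prefix the names with the project name, depending on how the volumes are declared, so adjust the name accordingly:

docker volume ls
docker volume inspect ollama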

Troubleshooting

  1. GPU Issues:

    • For NVIDIA: Run nvidia-smi to verify GPU detection
    • For AMD on Windows:
      • ROCm containers are not supported natively on Windows
      • Use WSL2 with Ubuntu for AMD GPU support
      • Or use CPU-only mode by removing GPU configurations
    • For AMD: Check ROCm installation and compatibility
  2. Container Issues:

    • Check logs: docker compose -f docker-compose-[nvidia/amd].yml logs [service-name]
    • Verify port availability
    • Ensure Docker has GPU access
  3. Network Issues:

    • Verify that host.docker.internal resolves from inside the containers
    • Check that the required ports (11434, 8080, 3000) are not already in use
    • Ensure the services are on the same Docker network (see the quick checks below)
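
A few quick checks for the container and network issues above (substitute amd for nvidia as needed); running docker network inspect on the network listed for this project shows which containers are attached to it:

docker compose -f docker-compose-nvidia.yml ps
docker network ls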

Additional Resources

Using WSL2 for AMD GPU Support

1. Install and Set Up WSL2

  1. Open PowerShell as Administrator and run:
wsl --install
  2. Restart your computer when prompted.

  3. Install Ubuntu from the Microsoft Store or via PowerShell:

wsl --install -d Ubuntu

2. Set Up ROCm in WSL2

  1. Open Ubuntu in WSL2:
wsl -d Ubuntu
  2. Update the system:
sudo apt update && sudo apt upgrade -y
  3. Install ROCm:

First, remove any existing ROCm installations:

sudo apt purge rocm-* hip-* rocminfo
sudo apt autoremove

Add the ROCm repository:

sudo mkdir --parents --mode=0755 /etc/apt/keyrings
wget https://repo.radeon.com/rocm/rocm.gpg.key -O - | \
    gpg --dearmor | sudo tee /etc/apt/keyrings/rocm.gpg > /dev/null

echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/rocm.gpg] https://repo.radeon.com/rocm/apt/debian jammy main" | \
    sudo tee /etc/apt/sources.list.d/rocm.list

echo -e 'Package: *\nPin: release o=repo.radeon.com\nPin-Priority: 600' | \
    sudo tee /etc/apt/preferences.d/rocm-pin-600

Install ROCm packages:

sudo apt update
sudo apt install rocm-hip-runtime rocm-hip-sdk
  4. Add your user to the video and render groups:
sudo usermod -aG video $LOGNAME
sudo usermod -aG render $LOGNAME
  5. Set up environment variables:
echo 'export PATH=$PATH:/opt/rocm/bin' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/rocm/lib' >> ~/.bashrc
source ~/.bashrc
  6. Verify installation:
rocminfo
rocminfo

If successful, you should see information about your AMD GPU.
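
For a shorter check, you can filter the output for the GPU name; rocminfo normally prints a "Marketing Name" field for each agent:

rocminfo | grep -i "marketing name"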

3. Running Docker Compose in WSL2

  1. Navigate to your project directory in WSL2:
cd /mnt/c/Users/YourUsername/Documents/GitHub/bolt-diy-full-stack
  2. Start the services:
docker compose -f docker-compose-amd.yml up -d

4. Accessing Services

The services will be available at the same ports as before:

  • Ollama API: http://localhost:11434
  • OpenWebUI: http://localhost:8080
  • Bolt DIY: http://localhost:3000

WSL2 Useful Commands

  • List installed WSL distributions:
wsl --list --verbose
  • Set Ubuntu as default WSL distribution:
wsl --set-default Ubuntu
  • Access WSL2 Ubuntu directly:
wsl
  • Shutdown WSL:
wsl --shutdown

Using Ollama Models

After starting the services, you can pull and use models through either the CLI or OpenWebUI.

Pull Models via CLI

docker exec -it ollama ollama pull qwen:7b

Note: Initial model download may take several minutes depending on your internet connection and hardware.
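
Once the pull finishes, you can send a quick one-off prompt to the model from the same container to confirm it loads and responds:

docker exec -it ollama ollama run qwen:7b "Say hello in one sentence."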

Alternative: Using OpenWebUI

  1. Open OpenWebUI in your browser: http://localhost:8080
  2. Click on "Create New Chat"
  3. Select "Download New Model"
  4. Search for "qwen" and select "qwen:7b"
  5. Click "Download"

Verify Model Installation

Check if the model was downloaded successfully:

docker exec -it ollama ollama list

You should see qwen:7b in the list of available models.
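
The same check is available over the REST API; Ollama's /api/tags endpoint returns the locally available models as JSON:

curl http://localhost:11434/api/tags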

Start Chatting

  • Via OpenWebUI: Navigate to http://localhost:8080 and start a new chat with qwen:7b
  • Via Bolt DIY: Navigate to http://localhost:3000 and connect to your local Ollama instance
