This project sets up a local AI development environment with Ollama, OpenWebUI, and Bolt DIY, supporting both NVIDIA and AMD GPUs.
- Clone the repository:
git clone https://github.com/leex279/bolt-diy-full-stack.git
- Navigate to the project directory:
cd bolt-diy-full-stack
- Docker and Docker Compose
- For NVIDIA GPUs:
- For AMD GPUs:
  - Windows:
    ⚠️ Important Note: ROCm containers are currently not supported natively on Windows. You have two options:
    - Use WSL2 with Ubuntu and follow the ROCm Installation Guide for Linux
    - Use the CPU-only version by removing the GPU-related configurations from the docker-compose file
  - Linux:
    - ROCm Installation Guide for Linux
    - Follow the Quick Start guide for your specific Linux distribution
    - Supported GPUs List
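If you are unsure whether these prerequisites are in place, a quick sanity check from a terminal might look like the sketch below; the exact output will vary by system, and only the check matching your GPU vendor applies.

```bash
# Check that Docker and the Compose plugin are installed
docker --version
docker compose version

# NVIDIA: the driver should list your GPU
nvidia-smi

# AMD (Linux/WSL2): ROCm should report your GPU
rocminfo
```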
- Windows:
For Windows users, we provide an automated installation script:
- Double-click install.bat or run it from the command prompt:
.\install.bat
This script will:
- Start the appropriate services based on your GPU (NVIDIA or AMD)
- Pull the Qwen 7B model
- Open your browser to the Bolt DIY interface
Note: The initial model download may take several minutes depending on your internet connection.
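If you prefer to run the steps yourself (or the script fails), the commands below approximate what install.bat automates, based on the list above. This is a sketch rather than the script itself, shown for the NVIDIA compose file; swap in docker-compose-amd.yml on AMD systems.

```bash
# Start the stack (use docker-compose-amd.yml on AMD systems)
docker compose -f docker-compose-nvidia.yml up -d

# Pull the Qwen 7B model into the Ollama container
docker exec -it ollama ollama pull qwen:7b

# Then open the Bolt DIY interface at http://localhost:3000 in your browser
```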
project/
├── docker-compose-amd.yml
├── docker-compose-nvidia.yml
├── Dockerfile
├── .env.local
└── README.md
Create a .env.local file for Bolt DIY configuration (if needed).
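As an illustration only, a minimal .env.local pointing Bolt DIY at the local Ollama instance could look like the following. The variable name OLLAMA_API_BASE_URL is an assumption here, so check the .env.example shipped with your Bolt DIY version for the names it actually expects.

```bash
# Hypothetical example — verify the variable name against Bolt DIY's .env.example
OLLAMA_API_BASE_URL=http://host.docker.internal:11434
```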
For NVIDIA GPUs:
docker compose -f docker-compose-nvidia.yml up -d
Note: This will automatically build the custom Ollama image on first run.
For AMD GPUs:
docker compose -f docker-compose-amd.yml up -d
Note: The AMD version uses pre-built images.
- Ollama API: http://localhost:11434
- OpenWebUI: http://localhost:8080
- Bolt DIY: http://localhost:3000
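Once the containers are up, a quick way to confirm each service is reachable is shown below; Ollama's root endpoint replies with a short status message, while the exact responses from OpenWebUI and Bolt DIY may differ.

```bash
curl http://localhost:11434      # Ollama typically answers "Ollama is running"
curl -I http://localhost:8080    # OpenWebUI should return an HTTP response
curl -I http://localhost:3000    # Bolt DIY should return an HTTP response
```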
Stop services:
docker compose -f docker-compose-[nvidia/amd].yml down
View logs:
docker compose -f docker-compose-[nvidia/amd].yml logs -f
Rebuild and restart:
docker compose -f docker-compose-[nvidia/amd].yml up -d --build
- Ollama:
  - Port: 11434
  - GPU-enabled for both AMD and NVIDIA
  - Persistent storage for models
- OpenWebUI:
  - Port: 8080
  - Web interface for Ollama
  - Persistent model storage
- Bolt DIY:
  - Port: 3000
  - Development environment
  - Requires .env.local configuration
The following persistent volumes are created:
- ollama: For Ollama model storage
- openwebui-data: For OpenWebUI data
- bolt-diy-data: For Bolt DIY data
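You can confirm the volumes exist with the standard Docker commands below. Note that Docker Compose usually prefixes volume names with the project name, so the actual names on your system may differ slightly from those listed above.

```bash
# List all volumes; look for the ollama, openwebui-data and bolt-diy-data entries
docker volume ls

# Inspect one volume to see where its data lives on disk (adjust the name if it is prefixed)
docker volume inspect ollama
```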
- GPU Issues:
  - For NVIDIA: Run nvidia-smi to verify GPU detection
  - For AMD on Windows:
    - ROCm containers are not supported natively on Windows
    - Use WSL2 with Ubuntu for AMD GPU support
    - Or use CPU-only mode by removing GPU configurations
  - For AMD: Check ROCm installation and compatibility
- Container Issues:
  - Check logs: docker compose -f docker-compose-[nvidia/amd].yml logs [service-name]
  - Verify port availability
  - Ensure Docker has GPU access
- Network Issues:
  - Verify host.docker.internal resolution
  - Check that required ports are not already in use
  - Ensure services are on the same network
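Several of these checks can be run from a terminal. The commands below are one possible set of diagnostics on Linux/WSL2; the docker --gpus test assumes the NVIDIA Container Toolkit is installed, and ss requires the iproute2 package.

```bash
# GPU: host-level check, then confirm Docker itself can reach the GPU
nvidia-smi
docker run --rm --gpus all ubuntu nvidia-smi

# Ports: see whether 11434, 8080 or 3000 are already taken
ss -tlnp | grep -E ':(11434|8080|3000)'

# Network: the compose services should appear on a shared network
docker network ls
```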
- Open PowerShell as Administrator and run:
wsl --install
- Restart your computer when prompted.
- Install Ubuntu from Microsoft Store or via PowerShell:
wsl --install -d Ubuntu
- Open Ubuntu in WSL2:
wsl -d Ubuntu
- Update the system:
sudo apt update && sudo apt upgrade -y
- Install ROCm:
First, remove any existing ROCm installations:
sudo apt purge rocm-* hip-* rocminfo
sudo apt autoremove
Add the ROCm repository:
sudo mkdir --parents --mode=0755 /etc/apt/keyrings
wget https://repo.radeon.com/rocm/rocm.gpg.key -O - | \
gpg --dearmor | sudo tee /etc/apt/keyrings/rocm.gpg > /dev/null
echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/rocm.gpg] https://repo.radeon.com/rocm/apt/debian jammy main" | \
sudo tee /etc/apt/sources.list.d/rocm.list
echo -e 'Package: *\nPin: release o=repo.radeon.com\nPin-Priority: 600' | \
sudo tee /etc/apt/preferences.d/rocm-pin-600
Install ROCm packages:
sudo apt update
sudo apt install rocm-hip-runtime rocm-hip-sdk
- Add user to video group:
sudo usermod -aG video $LOGNAME
sudo usermod -aG render $LOGNAME
- Set up environment variables:
echo 'export PATH=$PATH:/opt/rocm/bin' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/rocm/lib' >> ~/.bashrc
source ~/.bashrc
- Verify installation:
rocminfo
If successful, you should see information about your AMD GPU.
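If rocminfo does not show your GPU, a few additional checks may help; this assumes a standard ROCm setup as described in the steps above.

```bash
# Group changes only take effect after a new login; run 'wsl --shutdown' from Windows and reopen if needed
groups

# Grep the reported GPU architecture (a gfx* name) from the rocminfo output
rocminfo | grep -i gfx
```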
- Navigate to your project directory in WSL2:
cd /mnt/c/Users/YourUsername/Documents/GitHub/bolt-diy-full-stack
- Start the services:
docker compose -f docker-compose-amd.yml up -d
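A quick way to confirm all three containers came up before opening the web interfaces:

```bash
# Shows the state and published ports of each service in the AMD stack
docker compose -f docker-compose-amd.yml ps

# Follow the logs if a container is restarting or unhealthy
docker compose -f docker-compose-amd.yml logs -f
```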
The services will be available at the same ports as before:
- Ollama API: http://localhost:11434
- OpenWebUI: http://localhost:8080
- Bolt DIY: http://localhost:3000
- List installed WSL distributions:
wsl --list --verbose
- Set Ubuntu as default WSL distribution:
wsl --set-default Ubuntu
- Access WSL2 Ubuntu directly:
wsl
- Shutdown WSL:
wsl --shutdown
After starting the services, you can pull and use models through either the CLI or OpenWebUI.
docker exec -it ollama ollama pull qwen:7b
Note: Initial model download may take several minutes depending on your internet connection and hardware.
- Open OpenWebUI in your browser: http://localhost:8080
- Click on "Create New Chat"
- Select "Download New Model"
- Search for "qwen" and select "qwen:7b"
- Click "Download"
Check if the model was downloaded successfully:
docker exec -it ollama ollama list
You should see qwen:7b in the list of available models.
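Beyond listing the model, you can exercise it directly through Ollama's HTTP API, which is the same endpoint OpenWebUI and Bolt DIY connect to:

```bash
# One-off generation request against the local Ollama API
curl http://localhost:11434/api/generate -d '{
  "model": "qwen:7b",
  "prompt": "Reply with a short greeting.",
  "stream": false
}'
```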
- Via OpenWebUI: Navigate to http://localhost:8080 and start a new chat with qwen:7b
- Via Bolt DIY: Navigate to http://localhost:3000 and connect to your local Ollama instance