# Retrieval-Augmented Generation (RAG) API Template with Frontend and Backend

This repository provides a template to set up and build a GPU-accelerated RAG API with FastAPI and HuggingFace Transformers.
```
|-- LICENSE
|-- README.md
|-- backend
|   |-- Dockerfile
|   |-- app
|   `-- requirements.txt
|-- docker-compose.yml
|-- frontend
|   |-- Dockerfile
|   |-- README.md
|   |-- index.html
|   |-- package-lock.json
|   |-- package.json
|   |-- public
|   |-- src
|   |-- tsconfig.app.json
|   |-- tsconfig.json
|   |-- tsconfig.node.json
|   `-- vite.config.ts
|-- install-docker.sh
|-- install-nvidia-container-toolkit.sh
|-- models_loader.ipynb
`-- nginx
    `-- nginx.conf
```
- File Upload: Easily upload PDF files to the server.
- Document Loading: Load and process documents using the langchain PyPDFLoader.
- Text Splitting: Split documents into manageable chunks using the CharacterTextSplitter.
- Vector Store: Create a FAISS vector store from document chunks for efficient retrieval.
- Embeddings: Use HuggingFace embeddings to transform text data.
- Text Generation: Generate answers to questions using a pre-trained language model.
- Asynchronous Streaming: Stream responses asynchronously for efficient and responsive querying.
- Customizable Pipeline: Easily customize the text generation pipeline with quantization and other settings.
- CORS Support: Full CORS support for cross-origin requests.
- Logging: Detailed logging for monitoring and debugging.
- Memory Management: Efficient GPU memory management with garbage collection.
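The text-splitting step above can be sketched in plain Python. This is a simplified stand-in for langchain's `CharacterTextSplitter`, not the template's actual implementation; the chunk size and overlap values are illustrative only:

```python
def split_text(text: str, chunk_size: int = 100, chunk_overlap: int = 20) -> list[str]:
    """Split text into overlapping chunks, mimicking a character-based splitter."""
    if chunk_overlap >= chunk_size:
        raise ValueError("chunk_overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - chunk_overlap  # advance by chunk_size minus the overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
    return chunks

# Toy input: 250 characters
text = "word " * 50
chunks = split_text(text, chunk_size=100, chunk_overlap=20)
print(len(chunks))     # 4 chunks
print(len(chunks[0]))  # 100 characters each (except possibly the last)
```

Each chunk shares its first `chunk_overlap` characters with the end of the previous chunk, which helps the retriever avoid cutting relevant passages in half.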
- Operating System: Windows, macOS, Linux
- Minimum Disk Space: 10GB
- Minimum Memory: 4GB RAM
- GPU Characteristics: NVIDIA RTX A2000 GPU with 4096MiB total memory
- NVIDIA drivers
- Visual Studio Code (optional, for development)
- Jupyter Notebook (for downloading pre-trained models)
Clone the repository:
```sh
git clone https://github.com/Perpetue237/rag-api-template.git
```
After cloning the repository, follow these steps to set up the project:
- Install Docker and the NVIDIA Container Toolkit using the provided shell scripts.
  Note: Executing these shell scripts reboots the system. To avoid this, you may want to comment out the corresponding lines.
Navigate to the project directory:
```sh
cd rag-api-template
```
- Docker Desktop:

  ```sh
  echo "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu lunar stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
  sudo bash install-docker.sh
  ```

  Reboot the system after a successful installation.
- NVIDIA Container Toolkit:

  ```sh
  sudo bash install-nvidia-container-toolkit.sh
  ```
- Create Directories

  Create directories to store the models, tokenizers, and data. You can create these directories anywhere on your file system. Here is an example of how to create them in the project's root directory:

  ```sh
  mkdir -p ~/rag-template/models/models
  mkdir -p ~/rag-template/models/tokenizers
  mkdir -p ~/rag-template/rag-uploads
  ```
- Update `docker-compose.yml`

  Modify the `docker-compose.yml` file to mount these directories:

  ```yml
  services:
    backend:
      ...
      volumes:
        - /home/perpetue/rag-template/models/models:/app/models # Mount the models directory
        - /home/perpetue/rag-template/models/tokenizers:/app/tokenizers # Mount the tokenizers directory
        - /home/perpetue/rag-template/rag-uploads:/app/rag-uploads # Mount the uploads directory
      ...
  ```

  Replace `/home/perpetue/rag-template` with the path where you created the directories.
- Update `.devcontainer/devcontainer.json`

  If you are using VSCode for development, mount these paths in the `.devcontainer/devcontainer.json` file:

  ```json
  ...
  "mounts": [
      "source=/home/perpetue/rag-template/models/models,target=/app/models,type=bind,consistency=cached",
      "source=/home/perpetue/rag-template/models/tokenizers,target=/app/tokenizers,type=bind,consistency=cached",
      "source=/home/perpetue/rag-template/rag-uploads,target=/app/rag-uploads,type=bind,consistency=cached"
  ],
  ...
  ```

  Replace `/home/perpetue/rag-template` with the path where you created the directories.
- Download Pre-trained Models

  Use the `models_loader.ipynb` notebook to download the pre-trained models you want to use. Open the notebook in Jupyter Notebook or JupyterLab and follow the instructions to download the necessary models. You can put your HuggingFace token and OpenAI keys in a `.env` file in the project root folder, following the sample in `.env.sample`.
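As a rough sketch of what reading that `.env` file involves, here is a dependency-free loader. The key names `HF_TOKEN` and `OPENAI_API_KEY` are assumptions for illustration; check `.env.sample` for the actual names, and note that in practice a library such as python-dotenv does this for you:

```python
import os

def load_env(path: str = ".env") -> dict[str, str]:
    """Parse simple KEY=VALUE lines from a .env file and export them."""
    env = {}
    try:
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                # Skip blank lines, comments, and malformed entries
                if not line or line.startswith("#") or "=" not in line:
                    continue
                key, _, value = line.partition("=")
                env[key.strip()] = value.strip().strip('"')
    except FileNotFoundError:
        pass  # fall back to whatever is already in the environment
    os.environ.update(env)
    return env

# Usage: call once at startup, then read keys via os.environ
# load_env()
# token = os.environ.get("HF_TOKEN")
```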
```sh
cd rag-api-template
```
```sh
docker compose down
docker volume prune
docker system prune
docker compose up --build -d
```
Once the API is successfully built, you can visit the frontend at http://localhost/.
To stop and clean up the application:
```sh
docker compose down
docker system prune
docker volume prune
```
This Docker Compose file sets up a multi-container application with three services: `frontend`, `backend`, and `nginx`.
- Frontend Service
  - Build Context: `./frontend`
  - Port: 80
  - Network: `app-network`
- Backend Service
  - Build Context: `./backend`
  - Port: 8000
  - Network: `app-network`
  - Volumes: Mounts local directories for models and uploads
  - GPU Configuration: Configured to use NVIDIA GPUs
- Nginx Service
  - Image: `nginx:latest`
  - Port: 8081
  - Configuration: Uses a custom Nginx configuration file
  - Dependencies: Depends on `frontend` and `backend`
  - Network: `app-network`
Docker Compose builds, configures, and runs the specified services in isolated containers. Services communicate over the defined `app-network`, ensuring connectivity and proper resource allocation.
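The service layout described above corresponds roughly to a compose file like the following sketch. Image names, ports, and host paths are illustrative; consult the repository's `docker-compose.yml` for the real values:

```yml
services:
  frontend:
    build: ./frontend
    ports:
      - "80:80"
    networks:
      - app-network

  backend:
    build: ./backend
    ports:
      - "8000:8000"
    volumes:
      - /home/perpetue/rag-template/models/models:/app/models
      - /home/perpetue/rag-template/models/tokenizers:/app/tokenizers
      - /home/perpetue/rag-template/rag-uploads:/app/rag-uploads
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    networks:
      - app-network

  nginx:
    image: nginx:latest
    ports:
      - "8081:80"
    depends_on:
      - frontend
      - backend
    networks:
      - app-network

networks:
  app-network:
    driver: bridge
```

The `deploy.resources.reservations.devices` block is the standard Compose way to expose NVIDIA GPUs to a container; it requires the NVIDIA Container Toolkit installed earlier.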
This Nginx configuration file sets up a basic web server with proxying capabilities and custom error handling.
- Worker Processes
  - `worker_processes 1;` uses one worker process.
- Events
  - `worker_connections 1024;` allows up to 1024 connections per worker.
- HTTP Server
  - Port
    - `listen 80;` listens on port 80 for HTTP requests.
  - Server Name
    - `server_name localhost;` uses `localhost` as the server name.
  - Root Path (`/`)
    - Serves static files from `/usr/share/nginx/html`.
    - Defaults to `index.html` and falls back to it for single-page applications.
  - Proxy Endpoints
    - `/upload`
      - Forwards requests to `http://backend:8000/upload`.
      - Includes CORS headers for cross-origin requests.
    - `/retrieve_from_path`
      - Forwards requests to `http://backend:8000/retrieve_from_path`.
      - Includes CORS headers.
  - Error Handling
    - `error_page 500 502 503 504 /50x.html;` serves a custom error page for server errors.
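Put together, the directives above correspond to a configuration along these lines. This is a sketch, not the shipped file: the CORS header value and the `/50x.html` location are assumptions, so refer to `nginx/nginx.conf` for the actual configuration:

```nginx
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    server {
        listen 80;
        server_name localhost;

        # Static frontend files with single-page-application fallback
        location / {
            root /usr/share/nginx/html;
            index index.html;
            try_files $uri $uri/ /index.html;
        }

        # Proxy API calls to the backend service
        location /upload {
            proxy_pass http://backend:8000/upload;
            add_header Access-Control-Allow-Origin *;
        }

        location /retrieve_from_path {
            proxy_pass http://backend:8000/retrieve_from_path;
            add_header Access-Control-Allow-Origin *;
        }

        # Custom error page for server errors
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }
}
```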
Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
- Fork the Project
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the Branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request

Distributed under the Apache License. See `LICENSE` for more information.