feat: Enable GPU acceleration
maozdemir committed May 23, 2023
1 parent 573c436 commit 091c06e
Showing 3 changed files with 100 additions and 27 deletions.
88 changes: 65 additions & 23 deletions README.md
# privateGPT

Ask questions to your documents without an internet connection, using the power of LLMs. 100% private: no data leaves your execution environment at any point. You can ingest documents and ask questions entirely offline!

Built with [LangChain](https://github.com/hwchase17/langchain), [GPT4All](https://github.com/nomic-ai/gpt4all), [LlamaCpp](https://github.com/ggerganov/llama.cpp), [Chroma](https://www.trychroma.com/) and [SentenceTransformers](https://www.sbert.net/).

<img width="902" alt="demo" src="https://user-images.githubusercontent.com/721666/236942256-985801c9-25b9-48ef-80be-3acbb4575164.png">

## Environment Setup

In order to set your environment up to run the code here, first install all requirements:

```shell
pip3 install -r requirements.txt
```

Then, download the LLM model and place it in a directory of your choice:

- LLM: defaults to [ggml-gpt4all-j-v1.3-groovy.bin](https://gpt4all.io/models/ggml-gpt4all-j-v1.3-groovy.bin). If you prefer a different GPT4All-J compatible model, just download it and reference it in your `.env` file.

Rename `example.env` to `.env` and edit the variables appropriately.

```ini
MODEL_TYPE: supports LlamaCpp or GPT4All
PERSIST_DIRECTORY: is the folder you want your vectorstore in
MODEL_PATH: Path to your GPT4All or LlamaCpp supported LLM
EMBEDDINGS_MODEL_NAME: SentenceTransformers embeddings model name (see https://www.sbert.net/)
```
Note: because of the way `langchain` loads the `SentenceTransformers` embeddings, the first time you run the script it will require internet connection to download the embeddings model itself.
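
For concrete values, the `example.env` shipped in this commit (shown in full further down this diff) looks like:

```ini
PERSIST_DIRECTORY=db
MODEL_TYPE=GPT4All
MODEL_PATH=models/ggml-gpt4all-j-v1.3-groovy.bin
EMBEDDINGS_MODEL_NAME=all-MiniLM-L6-v2
MODEL_N_CTX=1000
IS_GPU_ENABLED=False
```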

## Test dataset

This repo uses a [state of the union transcript](https://github.com/imartinez/privateGPT/blob/main/source_documents/state_of_the_union.txt) as an example.

## Instructions for ingesting your own dataset
Put any and all your files into the `source_documents` directory

The supported extensions are:

- `.csv`: CSV,
- `.docx`: Word Document,
- `.doc`: Word Document,
- `.enex`: EverNote,
- `.eml`: Email,
- `.epub`: EPub,
- `.html`: HTML File,
- `.md`: Markdown,
- `.msg`: Outlook Message,
- `.odt`: Open Document Text,
- `.pdf`: Portable Document Format (PDF),
- `.pptx` : PowerPoint Document,
- `.ppt` : PowerPoint Document,
- `.txt`: Text file (UTF-8),

Run the following command to ingest all the data.
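```shell
python ingest.py
```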

If you want to start from an empty database, delete the `db` folder.

Note: during the ingest process no data leaves your local environment. You could ingest without an internet connection, except for the first time you run the ingest script, when the embeddings model is downloaded.

## Ask questions to your documents, locally

In order to ask a question, run a command like:

```shell
python privateGPT.py
```

Note: you could turn off your internet connection, and the script inference would still work.

Type `exit` to finish the script.


### CLI

The script also supports optional command-line arguments to modify its behavior. You can see a full list of these arguments by running `python privateGPT.py --help` in your terminal.
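
For example, judging from the `args.hide_source` and `args.mute_stream` flags referenced in `privateGPT.py` further down this diff (and assuming argparse's usual dash-separated spelling for them), a run that hides the source documents and mutes the streamed output would look like:

```shell
python privateGPT.py --hide-source --mute-stream
```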

## How does it work?

By selecting the right local models and leveraging the power of `LangChain`, you can run the entire pipeline locally, without any data leaving your environment, and with reasonable performance.

- `ingest.py` uses `LangChain` tools to parse the document and create embeddings locally using `HuggingFaceEmbeddings` (`SentenceTransformers`). It then stores the result in a local vector database using `Chroma` vector store.
- `privateGPT.py` uses a local LLM based on `GPT4All-J` or `LlamaCpp` to understand questions and create answers. The context for the answers is extracted from the local vector store using a similarity search to locate the right piece of context from the docs.
- `GPT4All-J` wrapper was introduced in LangChain 0.0.162.
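
As a rough sketch of how those pieces fit together (assuming the LangChain 0.0.16x-era imports that `privateGPT.py` itself uses, and the default model paths from `example.env`):

```python
from langchain.chains import RetrievalQA
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.llms import GPT4All
from langchain.vectorstores import Chroma

# Embed queries with the same local model used at ingest time.
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
# Reopen the persisted Chroma vector store written by ingest.py.
db = Chroma(persist_directory="db", embedding_function=embeddings)
# Local LLM; no data leaves the machine.
llm = GPT4All(model="models/ggml-gpt4all-j-v1.3-groovy.bin", backend="gptj")
# "stuff" chain: retrieved chunks are stuffed into the prompt as context.
qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=db.as_retriever())
print(qa("What did the president say about the state of the union?")["result"])
```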

## System Requirements

### Python Version

To use this software, you must have Python 3.10 or later installed. Earlier versions will not work, since `privateGPT.py` relies on `match` statements, which require Python 3.10.
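
You can verify which version your environment resolves with:

```shell
python3 --version
```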

### C++ Compiler

If you encounter an error while building a wheel during the `pip install` process, you may need to install a C++ compiler on your computer.

#### For Windows 10/11

To install a C++ compiler on Windows 10/11, follow these steps:

1. Install Visual Studio 2022.
2. Make sure the following components are selected:
   - Universal Windows Platform development
   - C++ CMake tools for Windows
3. Download the MinGW installer from the [MinGW website](https://sourceforge.net/projects/mingw/).
4. Run the installer and select the `gcc` component.

### Mac Running Intel

When running a Mac with Intel hardware (not M1), you may run into `clang: error: the clang compiler does not support '-march=native'` during pip install.

If so, set your archflags during pip install, e.g.: `ARCHFLAGS="-arch x86_64" pip3 install -r requirements.txt`

## Using GPU acceleration

1. Install [NVidia CUDA 11.8](https://developer.nvidia.com/cuda-11-8-0-download-archive)
2. Build the `llama-cpp-python` package with cuBLAS enabled. Run the commands below in the directory where you want to build the package.
   - PowerShell:

```powershell
git clone https://github.com/abetlen/llama-cpp-python
cd llama-cpp-python/vendor
git clone https://github.com/ggerganov/llama.cpp
cd ../..
pip3 install scikit-build
$Env:CMAKE_ARGS="-DLLAMA_CUBLAS=on"; $Env:FORCE_CMAKE=1; py ./setup.py install
```

- Bash:

```bash
git clone https://github.com/abetlen/llama-cpp-python
cd llama-cpp-python/vendor
git clone https://github.com/ggerganov/llama.cpp
cd ../..
pip3 install scikit-build
CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 python3 ./setup.py install
```

3. Enable GPU acceleration by setting `IS_GPU_ENABLED` to `True` in the `.env` file
4. Run `ingest.py` and `privateGPT.py` as usual
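
Once enabled, `privateGPT.py` decides how many layers to offload by querying free VRAM through `nvidia-smi` (see `calculate_layer_count()` in the diff below), so a quick way to sanity-check that your GPU is visible is:

```shell
nvidia-smi --query-gpu=memory.free --format=csv
```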

## Disclaimer

This is a test project to validate the feasibility of a fully private solution for question answering using LLMs and vector embeddings. It is not production ready, and it is not meant to be used in production. The model selection is optimized for privacy rather than performance, but it is possible to use different models and vectorstores to improve performance.
3 changes: 2 additions & 1 deletion example.env
PERSIST_DIRECTORY=db
MODEL_TYPE=GPT4All
MODEL_PATH=models/ggml-gpt4all-j-v1.3-groovy.bin
EMBEDDINGS_MODEL_NAME=all-MiniLM-L6-v2
MODEL_N_CTX=1000
IS_GPU_ENABLED=False
36 changes: 33 additions & 3 deletions privateGPT.py
model_type = os.environ.get('MODEL_TYPE')
model_path = os.environ.get('MODEL_PATH')
model_n_ctx = os.environ.get('MODEL_N_CTX')
is_gpu_enabled = os.environ.get('IS_GPU_ENABLED')

from constants import CHROMA_SETTINGS

import subprocess as sp


def get_gpu_memory() -> list:
    """
    Returns the amount of free memory in MB for each GPU.
    """
    command = "nvidia-smi --query-gpu=memory.free --format=csv"
    # Drop the trailing empty line, skip the CSV header, and parse the leading
    # number out of each "N MiB" row.
    memory_free_info = sp.check_output(command.split()).decode('ascii').split('\n')[:-1][1:]
    memory_free_values = [int(x.split()[0]) for x in memory_free_info]
    return memory_free_values


def calculate_layer_count() -> int | None:
    """
    Calculates the number of layers that can be offloaded to the GPU.
    """
    if is_gpu_enabled == "False":
        return None
    LAYER_SIZE_MB = 120.6  # This is the size of a single layer on VRAM, and is an approximation.
    LAYERS_TO_REDUCE = 6  # About 700 MB is needed for the LLM to run, so we reduce the layer count by 6 to be safe.
    # Example: with 8192 MB free, 8192 // 120.6 = 67 layers; 67 - 6 = 61 > 32, so the count is capped at 32.
    free_memory = get_gpu_memory()[0]
    if (free_memory // LAYER_SIZE_MB) - LAYERS_TO_REDUCE > 32:
        return 32
    else:
        return int(free_memory // LAYER_SIZE_MB) - LAYERS_TO_REDUCE

def main():
    # Parse the command line arguments
    args = parse_arguments()
    # Run the embeddings model on the GPU only when GPU support is enabled.
    embeddings_kwargs = {'device': 'cuda'} if is_gpu_enabled == "True" else {}
    embeddings = HuggingFaceEmbeddings(model_name=embeddings_model_name, model_kwargs=embeddings_kwargs)
    db = Chroma(persist_directory=persist_directory, embedding_function=embeddings, client_settings=CHROMA_SETTINGS)
    retriever = db.as_retriever()
    # Activate/deactivate the streaming StdOut callback for LLMs
    callbacks = [] if args.mute_stream else [StreamingStdOutCallbackHandler()]
    # Prepare the LLM
    match model_type:
        case "LlamaCpp":
            llm = LlamaCpp(model_path=model_path, n_ctx=model_n_ctx, callbacks=callbacks, verbose=False, n_gpu_layers=calculate_layer_count())
        case "GPT4All":
            if is_gpu_enabled == "True":
                print("GPU is enabled, but GPT4All does not support GPU acceleration. Please use LlamaCpp instead.")
                exit(1)
            llm = GPT4All(model=model_path, n_ctx=model_n_ctx, backend='gptj', callbacks=callbacks, verbose=False)
        case _:
            print(f"Model {model_type} not supported!")
            exit(1)
    qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=retriever, return_source_documents=True)
    # Interactive questions and answers
    while True:
        query = input("\nEnter a query: ")
