This project provides a Streamlit interface for querying Ollama models, allowing users to send questions to a locally running model and receive responses.
- LangChain: Orchestrates prompts, document retrieval, and model calls.
- Ollama: Runs the local language model used for generating responses.
- RAG: Retrieval-Augmented Generation over PDF documents.
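The RAG step above can be sketched without any library: split a document into chunks, score each chunk against the question, and prepend the best match to the prompt. This is a toy, library-free illustration of the idea (real pipelines, including this project's, use embeddings and a vector store rather than word overlap), not the project's actual code.

```python
def split_into_chunks(text: str, size: int = 8) -> list[str]:
    """Split text into chunks of roughly `size` words each."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score(chunk: str, question: str) -> int:
    """Naive relevance score: count of question words appearing in the chunk."""
    q = set(question.lower().split())
    return len(q & set(chunk.lower().split()))

def retrieve(chunks: list[str], question: str, k: int = 1) -> list[str]:
    """Return the k highest-scoring chunks for the question."""
    return sorted(chunks, key=lambda c: score(c, question), reverse=True)[:k]

def build_prompt(context: str, question: str) -> str:
    """Prepend the retrieved context to the user's question."""
    return f"Answer using this context:\n{context}\n\nQuestion: {question}"

# Toy "document" standing in for parsed PDF text.
doc = ("Ollama runs large language models locally. "
       "Streamlit builds simple web interfaces in Python. "
       "RAG retrieves relevant text and adds it to the prompt.")
chunks = split_into_chunks(doc, size=8)
best = retrieve(chunks, "What does RAG do?")[0]
print(build_prompt(best, "What does RAG do?"))
```

The same shape (chunk, retrieve, augment the prompt) is what LangChain automates over real PDF content.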
- Clone the repository:

  git clone https://github.com/ersinaksar/LangChain-Streamlit-RAG.git
  cd LangChain-Streamlit-RAG
- Create a virtual environment and activate it:

  python3 -m venv .venv
  source .venv/bin/activate  # On Windows use `.venv\Scripts\activate`
- Install the dependencies:

  pip install -r requirements.txt
- Start the Streamlit application:

  streamlit run main.py
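Behind the Streamlit interface, queries reach the model through Ollama's local REST API (by default at `http://localhost:11434`). A minimal sketch of such a call, using only the standard library and Ollama's documented `/api/generate` endpoint (the model name `llama3` here is just an example, not necessarily the one this project configures):

```python
import json
import urllib.request

# Default endpoint of a locally running `ollama serve` instance.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    """Build a non-streaming generate request body for Ollama's REST API."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return its reply text."""
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Calling `ask("llama3", "Why is the sky blue?")` requires an Ollama server running with that model pulled.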
This project is licensed under the MIT License.