LocalAssistantPy is a simple Streamlit-based Python application that uses the GPT4All library to provide a chat interface to a local language model. Users interact with the model by entering messages and receiving responses.
To run the LocalAssistantPy application, make sure you have the required dependencies installed. You can install them using the following command:
pip install streamlit gpt4all
Clone this repository with git and navigate to the project directory:
git clone https://github.com/GrahamboJangles/LocalAssistantPy.git
cd LocalAssistantPy
Run the Streamlit app with the command below, or run the batch file _run.bat:
streamlit run main.py
Enter your message in the text area labeled "Enter your message below." Click the "Submit" button to send your message to the language model. The assistant's response will be displayed below the text area.
Clear Context: Use the "Clear Context" button to reset the conversation context to the initial state.
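The submit/clear flow above can be sketched in plain Python, independent of Streamlit. This is an illustrative sketch of how a chat app might keep and reset conversation context; the helper names and the system prompt are assumptions, not taken from main.py:

```python
# Illustrative conversation-context handling; names and prompt are assumptions.
SYSTEM_PROMPT = "You are a helpful assistant."

def initial_context():
    """Return a fresh context -- what a 'Clear Context' button resets to."""
    return [{"role": "system", "content": SYSTEM_PROMPT}]

def add_message(context, role, content):
    """Append one message to the running conversation."""
    context.append({"role": role, "content": content})
    return context

def build_prompt(context):
    """Flatten the context into a single prompt string for the model."""
    return "\n".join(f"{m['role']}: {m['content']}" for m in context)

# Flow mirroring the UI: submit a message, then clear the context.
ctx = initial_context()
add_message(ctx, "user", "Hello!")
prompt = build_prompt(ctx)
ctx = initial_context()  # equivalent of pressing "Clear Context"
```

In the real app the prompt string would be passed to the GPT4All model and the reply appended back into the context before the next turn.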
Make sure to set the correct path to your local language model by updating the model_name variable in the script. My favorite model to use with it is Mistral 7B.
model_name = r"your/local/model/path"
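One way to catch a wrong path early is to check it before handing it to GPT4All. The `load_model` helper below is hypothetical (not from main.py); it assumes the gpt4all package's `GPT4All(model_name, model_path=..., allow_download=...)` constructor, and sets `allow_download=False` so a typo in the path fails fast instead of triggering a download:

```python
from pathlib import Path

def load_model(model_name: str):
    """Load a local GPT4All model, failing fast if the path is wrong.

    Hypothetical helper; assumes the gpt4all package is installed.
    """
    path = Path(model_name)
    if not path.is_file():
        raise FileNotFoundError(f"No model file at {path}")
    from gpt4all import GPT4All  # imported here so the path check runs first
    # Point GPT4All at the file's directory and disable auto-download.
    return GPT4All(model_name=path.name, model_path=str(path.parent),
                   allow_download=False)
```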
Dependencies:
- Streamlit
- GPT4All
To-do:
- Code execution with output/error passed automatically to the LLM
- AutoGPT-style mode: give it a goal and let it run until it figures it out
- Internet searching capabilities