# anw10/memory-llama

To run this project, install Ollama first, then install the Ollama Python library so the code can access the Ollama API: `pip install ollama`.

Next, download a model of your choice to run locally, e.g. `ollama pull qwen3:14b`. Remember to set this exact model name in the `llama.py` file.
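The idea of short-term memory here is to keep the recent conversation turns and resend them with each request, so the model sees context. Below is a minimal sketch of how such a buffer might look with the Ollama Python library; the `ShortTermMemory` class, the window size, and the loop structure are illustrative assumptions, not the actual contents of `llama.py`.

```python
from collections import deque

MODEL = "qwen3:14b"  # must match the model you pulled with `ollama pull`

class ShortTermMemory:
    """Keeps only the most recent messages so the prompt stays bounded."""

    def __init__(self, max_messages: int = 10):
        # deque with maxlen silently drops the oldest entries when full
        self.messages = deque(maxlen=max_messages)

    def add(self, role: str, content: str) -> None:
        self.messages.append({"role": role, "content": content})

    def as_list(self) -> list:
        return list(self.messages)

if __name__ == "__main__":
    import ollama  # requires a running Ollama server and a pulled model

    memory = ShortTermMemory(max_messages=10)
    while True:
        user_input = input("> ")
        memory.add("user", user_input)
        # Send the whole buffer, not just the last message,
        # so the model has the recent conversation as context.
        response = ollama.chat(model=MODEL, messages=memory.as_list())
        reply = response["message"]["content"]
        memory.add("assistant", reply)
        print(reply)
```

Because older turns fall out of the deque automatically, the prompt never grows past `max_messages` entries, which keeps latency and context usage predictable for long chat sessions.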

## About

Adding short-term memory to a local Ollama chat bot.
