A local agent platform with generative AI models and tools, making AI helpful for everyone.
If you are using LLM API calls, feel free to download Argo directly.
When running LLMs locally, make sure your machine meets the following minimum system requirements (quick check commands follow the list):
- CPU >= 2 Core
- RAM >= 16 GB
- Disk >= 50 GB
- GPU >= 8 GB VRAM (Mac M1 or later, Windows 10 or later)
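If you are unsure whether your machine meets these requirements, the following commands (a rough sketch for Linux and macOS) report cores, memory, and free disk space:

```bash
# Linux: CPU cores, memory, and free disk space
nproc
free -h
df -h .

# macOS: CPU cores and total memory in bytes (df -h . works the same way)
sysctl -n hw.ncpu
sysctl -n hw.memsize
```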
Download, Click and Install.
- macOS Apple silicon (M1 and above): argo-0.1.5-osx-installer.dmg
- Windows 64-bit (Win 10 and above): argo-0.1.5-windows-x64-installer.exe
Quick start with Docker 🐳
Extra software requirements when using Docker:
- Docker >= 24.0.0
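To confirm the installed Docker Engine meets this requirement, you can check the version:

```bash
# Should print 24.0.0 or newer
docker version --format '{{.Server.Version}}'
```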
Warning
To enable CUDA in Docker, you must install the NVIDIA Container Toolkit on your Linux/WSL system.
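A minimal sketch for Ubuntu/Debian, assuming the NVIDIA apt repository is already configured (see NVIDIA's documentation for other distributions):

```bash
# Install the toolkit and wire it into the Docker runtime
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# Verify that containers can see the GPU (the CUDA image tag is only an example)
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```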
- If you are using Linux, Ollama is included in the image by default.
- If you are using macOS (Monterey or later), Ollama is deployed on the host machine by default.
- If you are using Windows, please install Docker and the WSL environment manually, then follow the Linux instructions.
- We use brew to install Docker and Ollama; if anything goes wrong, you can install them yourself (see the example commands below).
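If you install them manually, a minimal sketch using Homebrew on macOS (the cask and formula names below are the standard Homebrew ones) looks like this:

```bash
# Docker Desktop is distributed as a Homebrew cask; Ollama has a regular formula
brew install --cask docker
brew install ollama
```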
Tip
Recommended Ollama models: qwen2.5:7b for chat, and shaw/dmeta-embedding-zh for Chinese knowledge bases.
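Assuming Ollama is already installed and running, these models can be pulled ahead of time:

```bash
# Chat model
ollama pull qwen2.5:7b
# Embedding model for Chinese knowledge bases
ollama pull shaw/dmeta-embedding-zh
```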
# Usage: {run [-n name] [-p port] | stop [-n name] | update}
# default name: argo
# default port: 38888
# Download image, create a container and start
sh argo_run_docker.sh run
# Stop the container (data will be retained)
sh argo_run_docker.sh stop
# Update the image to the latest version (the original image will be deleted)
sh argo_run_docker.sh update
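For example, to run a second instance under a different container name and port (the name and port below are arbitrary examples):

```bash
sh argo_run_docker.sh run -n argo-dev -p 40000
```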
Feel free to join us and chat on Discord: https://discord.gg/79AD9RQnHF
WeChat group: