Agent that performs actions on your system by executing code.
Emplode Agent performs actions on your system by executing code locally. It can also serve as an agentic framework for your disposable sandbox projects. You can chat with Emplode in your terminal by running `emplode` after installing.
This provides a natural-language interface to your system's general-purpose capabilities:
- Create, edit, and arrange files.
- Control a browser to perform research.
- Plot, clean, and analyze large datasets.
- ...etc.
```shell
pip install emplode
```

After installation, simply run `emplode`:

```shell
emplode
```
```python
import emplode

emplode.chat("Organize all images in my downloads folder into subfolders by year, naming each folder after the year.")  # Executes a single command
emplode.chat()  # Starts an interactive chat
```
For `gpt-3.5-turbo`, use fast mode:

```shell
emplode --fast
```
In Python, you will need to set the model manually:

```python
emplode.model = "gpt-3.5-turbo"
```
You can run `emplode` in local mode from the command line to use Code Llama:

```shell
emplode --local
```
Or run any Hugging Face model locally by using its repo ID (e.g. `tiiuae/falcon-180B`):

```shell
emplode --model nvidia/Llama-3.1-Nemotron-70B-Instruct
emplode --model meta-llama/Llama-3.2-11B-Vision-Instruct
```
Emplode allows you to set default behaviors using a .env file. This provides a flexible way to configure it without changing command-line arguments every time.
Here's a sample `.env` configuration:

```shell
EMPLODE_CLI_AUTO_RUN=False
EMPLODE_CLI_FAST_MODE=False
EMPLODE_CLI_LOCAL_RUN=False
EMPLODE_CLI_DEBUG=False
```
You can modify these values in the `.env` file to change Emplode's default behavior.
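To make the shape of this configuration concrete, here is a minimal, stdlib-only sketch of how such a `.env` file could be parsed into defaults. Emplode's own loader may work differently; `load_env` and `flag` are illustrative names, not part of its API:

```python
import os
import tempfile

def load_env(path: str) -> dict:
    """Parse simple KEY=VALUE lines from a .env file into a dict,
    skipping blank lines and # comments."""
    settings = {}
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                settings[key.strip()] = value.strip()
    return settings

def flag(settings: dict, key: str) -> bool:
    # Values are stored as the strings "True"/"False";
    # anything missing defaults to False.
    return settings.get(key, "False") == "True"

# Demo with a throwaway .env file:
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write("EMPLODE_CLI_AUTO_RUN=False\nEMPLODE_CLI_DEBUG=False\n")
    env_path = f.name

settings = load_env(env_path)
print(flag(settings, "EMPLODE_CLI_AUTO_RUN"))
os.remove(env_path)
```

Keeping the values as strings until they are read lets the same parser handle any future keys without special-casing booleans.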
Emplode equips a function-calling model with an `exec()` function, which accepts a `language` (like "Python" or "JavaScript") and `code` to run.
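The dispatch described above can be sketched as follows. This is an illustration of the general technique, not Emplode's actual internals; the `exec_code` name and the interpreter table are assumptions:

```python
import subprocess
import sys
import tempfile

# Hypothetical table mapping a language name to an interpreter command.
INTERPRETERS = {
    "python": [sys.executable],
    "javascript": ["node"],
}

def exec_code(language: str, code: str) -> str:
    """Write `code` to a temp file, run it with the interpreter
    matching `language`, and return combined stdout/stderr."""
    with tempfile.NamedTemporaryFile("w", suffix=".code", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run(
        INTERPRETERS[language.lower()] + [path],
        capture_output=True,
        text=True,
    )
    return result.stdout + result.stderr

print(exec_code("Python", "print(2 + 2)"))
```

The model never touches the interpreter directly: it emits a function call with `language` and `code` arguments, and the host process decides how (and whether) to execute them.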