LLM-Evaluation-s-Always-Fatiguing
Repositories
- smolagents (Public, forked from huggingface/smolagents)
  🤗 smolagents: a barebones library for agents. Agents write Python code to call tools and orchestrate other agents. A minimal usage sketch follows this list.
- aider-solver-template (Public)
- open-webui (Public, forked from open-webui/open-webui)
  User-friendly WebUI for LLMs (formerly Ollama WebUI).
- leaf-playground (Public)
  A framework for building scenario-simulation projects in which both human and LLM-based agents can participate, with a user-friendly web UI for visualizing simulations and support for automatic evaluation at the agent-action level.
- ChainForge (Public, forked from ianarawjo/ChainForge)
  An open-source visual programming environment for battle-testing prompts to LLMs.
- node-event-source (Public)
  A better API for making EventSource (SSE) requests in Node.js, with all the features of axios.
- leaf-eval-tools (Public)
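The smolagents description above promises agents that write Python to call tools. As an illustration, here is a minimal sketch in the style of the upstream huggingface/smolagents quickstart, assuming this fork keeps the upstream API; the exact class names have shifted between releases (earlier versions used `HfApiModel` where newer ones use `InferenceClientModel`), so treat the imports as an assumption to check against the fork's pinned version.

```python
# Minimal sketch based on the upstream smolagents quickstart -- an assumption
# about this fork's API; earlier releases named the model wrapper HfApiModel
# rather than InferenceClientModel.
from smolagents import CodeAgent, DuckDuckGoSearchTool, InferenceClientModel

# Wraps a hosted chat model behind the Hugging Face Inference API.
model = InferenceClientModel()

# A CodeAgent solves tasks by writing and executing Python snippets that call
# the tools it was given, rather than emitting structured tool-call JSON.
agent = CodeAgent(tools=[DuckDuckGoSearchTool()], model=model)

agent.run("Summarize the latest release notes for the smolagents library.")
```

The code-as-actions design is also what enables the "orchestrate other agents" part of the description: in upstream smolagents, a managed agent is exposed to the parent as just another callable that the generated Python can invoke.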