TensorZero creates a feedback loop for optimizing LLM applications — turning production data into smarter, faster, and cheaper models.
Updated Mar 6, 2025 · Rust
A git prepare-commit-msg hook for authoring commit messages with GPT-3.
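A `prepare-commit-msg` hook like the one above relies on a simple git contract: git invokes the hook with the path to the commit message file as its first argument, and the hook may rewrite that file before the editor opens. The sketch below illustrates that flow; the `draft_message` function is a hypothetical stand-in for the project's GPT-3 call, replaced here with a trivial diff summary so the sketch runs without an API key.

```python
#!/usr/bin/env python3
# Sketch of a prepare-commit-msg hook. Git passes the path to the
# commit message file as the first argument; we prepend a drafted
# message based on the staged diff.
import subprocess
import sys


def draft_message(diff: str) -> str:
    # Hypothetical stand-in for a language-model call: just counts
    # added/removed lines so the example is self-contained.
    changed = sum(1 for line in diff.splitlines() if line.startswith(("+", "-")))
    return f"Update {changed} changed lines"


def main() -> None:
    msg_path = sys.argv[1]  # git supplies the commit message file path here
    diff = subprocess.run(
        ["git", "diff", "--cached"], capture_output=True, text=True
    ).stdout
    with open(msg_path, "r+") as f:
        existing = f.read()
        f.seek(0)
        f.write(draft_message(diff) + "\n" + existing)


if __name__ == "__main__" and len(sys.argv) > 1:
    main()
```

Installed as `.git/hooks/prepare-commit-msg` (and made executable), this runs before every commit message prompt.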
A fast Rust-based tool that serializes text-based files in a repository or directory for LLM consumption.
Believes in AI democratization: llama for Node.js, backed by llama-rs, llama.cpp, and rwkv.cpp. Works locally on your laptop CPU and supports llama/alpaca/gpt4all/vicuna/rwkv models.
Terminal UI to chat with large language models (LLMs) using different model backends, with integrations for your favourite editors!
Production-ready Inference, Ingestion and Indexing built in Rust 🦀
Edge full-stack LLM platform, written in Rust.
Open-source tooling for AI search and understanding.
Simple, Composable, High-Performance, Safe and Web3 Friendly AI Agents for Everyone
A project spanning machine learning and decentralized file sharing, aiming to improve how files are accessed and managed while contributing to progress in artificial intelligence.
A Rust project for loading small LLMs offline, specifically for RAG (Retrieval-Augmented Generation) on mobile devices.
MilkyTeadrop: Voice-Activated IoT with a touch of tea magic. Enjoy IoT interactions infused with tea while conversing with a charming voice assistant.
AskUgin is a tool for understanding and interpreting Magic: The Gathering rulings for players new and old.
Build Simple, Composable and High-Performance On-Chain AI Agents