🧠 Autonomous Reflexive LLM Agent

An end-to-end autonomous agent system built using LangChain, FastAPI, Streamlit, and ChromaDB. The agent uses the ReAct pattern to reason and take actions, stores lessons in long-term memory, and applies the Reflexion pattern to self-evaluate and improve over time.

🚀 Features

  • 🔄 ReAct Agent Loop: Thought → Action → Observation → Repeat
  • 🧠 Short-term Memory: Tracks conversation context
  • 📚 Long-term Memory (ChromaDB): Stores reflections for future use
  • 🪞 Reflexion Pattern: Self-evaluation after task execution
  • 🔍 Web Search Tool: Real-time search with Tavily API
  • 🧮 Calculator Tool: Simple math solver
  • 🧾 FastAPI: /agent/chat endpoint for agent queries
  • 📊 Streamlit Dashboard: Submit queries, view reflections & traces
  • 🧠 Structured Trace Logs: Full run history in JSONL files
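The ReAct loop in the first bullet can be sketched in miniature. Everything below — the scripted stand-in "LLM", the toy tool registry, the stop condition — is illustrative only, not the project's actual `react_agent.py`:

```python
# Minimal illustrative ReAct loop: Thought -> Action -> Observation -> Repeat.
# The scripted fake_llm and the toy tool registry are stand-ins, not the real agent.

def fake_llm(history):
    """Scripted stand-in for the model: picks the next step from the history."""
    if "Observation: 4" in history:
        return "Final Answer: 4"
    return "Action: calculator[2 + 2]"

TOOLS = {"calculator": lambda expr: str(eval(expr))}  # toy tool registry

def react_loop(question, max_steps=5):
    history = f"Question: {question}"
    for _ in range(max_steps):
        step = fake_llm(history)
        if step.startswith("Final Answer:"):
            return step.removeprefix("Final Answer:").strip()
        tool, _, arg = step.removeprefix("Action: ").partition("[")
        observation = TOOLS[tool](arg.rstrip("]"))  # run the chosen tool
        history += f"\n{step}\nObservation: {observation}"
    return None

print(react_loop("What is 2 + 2?"))  # -> 4
```

The real agent replaces `fake_llm` with an LLM call and appends each Thought/Action/Observation triple to the prompt, but the control flow is the same.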

🧰 Tech Stack

  • Python 3.12
  • LangChain
  • FastAPI
  • Streamlit
  • ChromaDB
  • SentenceTransformers
  • Tavily API
  • LLaMA 3 via DeepInfra

📂 Project Structure

.
├── agents/
│   └── react_agent.py
├── reflexion/
│   ├── evaluator.py
│   ├── reflector.py
│   ├── reflection_manager.py
│   ├── embedding.py
│   └── vector_store.py
├── tools/
│   ├── calculator.py
│   └── web_search.py
├── memory/
│   ├── short_term.py
│   └── trace_logger.py
├── api/
│   └── routes.py
├── logs/
│   └── trace_*.jsonl
├── dashboard/
│   └── app.py           # Streamlit dashboard
├── main.py              # FastAPI entrypoint
├── config.py            # API key loading
├── .env.example         # Env vars sample
├── requirements.txt
└── Dockerfile           # (optional, for deployment)

🔧 Setup Instructions

1. Clone the repo

git clone https://github.com/rtj1/reflexive-llm-agent.git
cd reflexive-llm-agent

2. Create and activate virtual environment

python3 -m venv venv
source venv/bin/activate  # or .\venv\Scripts\activate on Windows

3. Install dependencies

pip install -r requirements.txt

4. Configure environment variables

Create a .env file based on .env.example and add:

DEEPINFRA_API_KEY=your_deepinfra_key
TAVILY_API_KEY=your_tavily_key
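As an illustration of what `config.py` might do with these variables, here is a hedged, stdlib-only sketch: read the two keys from the environment, with a tiny fallback that parses a local `.env` file. The real project may use python-dotenv instead; only the variable names come from `.env.example` above.

```python
# Illustrative sketch of config.py: load API keys from the environment,
# falling back to a simple .env parser. (The actual project's loader may differ.)
import os
from pathlib import Path

def load_env(path=".env"):
    """Merge KEY=value lines from a .env file into os.environ (env vars win)."""
    p = Path(path)
    if not p.exists():
        return
    for line in p.read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

load_env()
DEEPINFRA_API_KEY = os.environ.get("DEEPINFRA_API_KEY", "")
TAVILY_API_KEY = os.environ.get("TAVILY_API_KEY", "")
```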

5. Start the backend

uvicorn main:app --reload

6. Start the Streamlit dashboard

streamlit run dashboard/app.py

📈 Example Query

“What’s the latest earnings report for Apple?”

The agent will reason step by step, call tools as needed, reflect on its answer, and return a structured result together with its self-evaluation.

🧠 Reflection Example

The agent evaluates itself after each task and stores the resulting insight in ChromaDB for reuse in future queries.
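To make the store-and-reuse idea concrete, here is a minimal in-memory analogue of a reflection store. The real project uses ChromaDB with SentenceTransformers embeddings; the toy bag-of-words vectors and cosine similarity below are stand-ins, and the example reflections are invented for illustration:

```python
# Toy in-memory reflection store: embed reflections, retrieve the most similar
# one for a new query. Stand-in for the project's ChromaDB-backed store.
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding' (the real store uses SentenceTransformers)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ReflectionStore:
    def __init__(self):
        self.entries = []  # list of (embedding, reflection text)

    def add(self, reflection):
        self.entries.append((embed(reflection), reflection))

    def most_relevant(self, query):
        if not self.entries:
            return None
        return max(self.entries, key=lambda e: cosine(embed(query), e[0]))[1]

store = ReflectionStore()
store.add("For earnings questions, search the latest quarter explicitly.")
store.add("For math questions, prefer the calculator tool over estimation.")
print(store.most_relevant("What is Apple's latest earnings report?"))
```

At query time the agent would prepend the retrieved reflection to its prompt, so a lesson learned on one task can steer the next one.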


📄 License

MIT


By Tharun Jagarlamudi (https://github.com/rtj1)
