Farfalle is an open-source AI-powered search engine.
- 🛠️ Tech Stack
- 🏃🏿‍♂️ Getting Started
- 🚀 Deploy
## Roadmap
- [ ] Add support for local LLMs
- [ ] Docker deployment setup
## 🛠️ Tech Stack
- Frontend: Next.js
- Backend: FastAPI
- Search API: Tavily
- Logging: Logfire
- Rate Limiting: Redis
- Components: shadcn/ui
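For a rough sense of how these pieces fit together, here is a minimal sketch of a search endpoint in this style of stack. It is illustrative only — the `/search` route and response shape are assumptions, not farfalle's actual code:

```python
# Illustrative sketch of the stack's shape -- not farfalle's actual backend.
# Assumes `fastapi` and `tavily-python` are installed and TAVILY_API_KEY is set.
import os

from fastapi import FastAPI
from tavily import TavilyClient

app = FastAPI()
tavily = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])

@app.get("/search")
def search(query: str) -> dict:
    # Fetch web results for the query; in the full app an LLM (via OpenAI
    # or Groq) would summarize these results into an answer for the UI.
    return tavily.search(query)
```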
## 🏃🏿‍♂️ Getting Started

Clone the repo:

```bash
git clone git@github.com:rashadphz/farfalle.git
```

Install frontend dependencies:

```bash
cd farfalle/src/frontend
pnpm install
```

Install backend dependencies:

```bash
cd farfalle/src/backend
poetry install
```
Create a `.env` file in the root of the project and add these variables:
```
TAVILY_API_KEY=...
OPENAI_API_KEY=...
GROQ_API_KEY=...

# Everything below is optional

# Logfire
LOGFIRE_TOKEN=

# (True | False)
RATE_LIMIT_ENABLED=

# Redis URL
REDIS_URL=
```
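For reference, here is one hedged sketch of how a FastAPI backend could load these variables with pydantic-settings; the `Settings` class and field names are illustrative, mirroring the `.env` keys above rather than farfalle's actual config code:

```python
# Illustrative config loader -- mirrors the .env keys above, not farfalle's code.
# Assumes the `pydantic-settings` package is installed.
from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file=".env")

    # Required keys
    tavily_api_key: str
    openai_api_key: str
    groq_api_key: str

    # Optional keys (the commented section of the .env file)
    logfire_token: str | None = None
    rate_limit_enabled: bool = False
    redis_url: str | None = None

settings = Settings()  # reads .env; env var names are matched case-insensitively
```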
Run the frontend:

```bash
cd farfalle/src/frontend
pnpm dev
```

Run the backend:

```bash
cd farfalle/src/backend
poetry shell
uvicorn backend.main:app --reload
```
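Uvicorn serves on port 8000 by default, and FastAPI automatically exposes interactive API docs at http://localhost:8000/docs. A quick way to confirm the backend is up (a sketch assuming the default port):

```python
# Quick smoke test for the backend -- assumes uvicorn's default port 8000.
import httpx

resp = httpx.get("http://localhost:8000/docs")
print(resp.status_code)  # expect 200 once the backend is running
```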
Visit http://localhost:3000 to view the app.
## 🚀 Deploy

### Backend
After the backend is deployed, copy the web service URL to your clipboard. It should look something like: https://some-service-name.onrender.com.

### Frontend
Use the copied backend URL as the `NEXT_PUBLIC_API_URL` environment variable when deploying with Vercel. (Next.js inlines variables prefixed with `NEXT_PUBLIC_` into the browser bundle at build time, which is how the frontend learns the backend's address.)

And you're done! 🥳