A real-time AI chat example that uses Server-Sent Events (SSE) to stream responses from OpenAI's API. Built with the Hono framework and powered by the Bun runtime.
- 🚀 Real-time streaming responses using Server-Sent Events (SSE)
- ⚡️ High-performance backend powered by Bun runtime
- 🔄 Seamless integration with OpenAI API
- 🛠️ Built with Hono - a lightweight, ultrafast web framework
- 📱 Modern and responsive chat interface
Before you begin, ensure you have the following:
- Bun (latest version)
- An OpenAI API key
Install the dependencies:

bun install
Create a `.env` file in the project root and add your OpenAI API key:
OPENAI_BASE_URL=openai_base_url
OPENAI_API_KEY=your_api_key_here
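Bun loads `.env` files automatically, so these values are available on `process.env` at runtime. As a rough sketch (not the project's actual code), the OpenAI client could be initialized like this; recent versions of the official `openai` SDK also fall back to these same variable names on their own:

```ts
import OpenAI from "openai";

// Bun reads .env automatically; no dotenv package is needed.
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,   // your API key
  baseURL: process.env.OPENAI_BASE_URL, // custom endpoint or proxy, if any
});
```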
Start the development server:

bun run dev
Open your browser and navigate to http://localhost:3000
project-root/
├── src/
│   └── index.ts    # Entry point
├── index.html      # Website template
└── package.json
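For orientation, here is a rough sketch of what an entry point like `src/index.ts` could look like with this stack. It is not the actual source; the `/api/chat` path, the model name, and the handler details are assumptions, and only the general Hono + Bun + SSE pattern is shown:

```ts
import { Hono } from "hono";
import { streamSSE } from "hono/streaming";
import OpenAI from "openai";

const app = new Hono();
const openai = new OpenAI(); // reads OPENAI_API_KEY / OPENAI_BASE_URL from the environment

// Serve the chat page at the root.
app.get("/", () => new Response(Bun.file("index.html")));

// Hypothetical chat endpoint: accepts a prompt and streams the reply as SSE.
app.post("/api/chat", async (c) => {
  const { prompt } = await c.req.json<{ prompt: string }>();

  return streamSSE(c, async (stream) => {
    const completion = await openai.chat.completions.create({
      model: "gpt-4o-mini", // illustrative model name
      messages: [{ role: "user", content: prompt }],
      stream: true,
    });

    // Forward each token delta to the client as its own SSE event.
    for await (const chunk of completion) {
      const delta = chunk.choices[0]?.delta?.content;
      if (delta) await stream.writeSSE({ data: delta });
    }
  });
});

// Bun picks up this default export and serves the app on port 3000.
export default {
  port: 3000,
  fetch: app.fetch,
};
```

Hono's `streamSSE` helper sets the `text/event-stream` headers and handles the `data:` framing, so each delta from OpenAI can be flushed to the browser as soon as it arrives.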
Initiates a chat session with the AI model.

Request body:

{
  "prompt": "Your prompt here"
}

Response: a Server-Sent Events (SSE) stream that delivers the model's reply as it is generated.
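On the client side the stream can be read with `fetch` plus a streaming reader (a plain `EventSource` only supports GET, so it cannot send this POST body). A browser-side sketch, again assuming a hypothetical `/api/chat` path and plain-text `data:` payloads:

```ts
async function streamChat(prompt: string, onToken: (token: string) => void) {
  const res = await fetch("/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });

  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = "";

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });

    // SSE events are separated by a blank line; payload lines start with "data: ".
    const events = buffer.split("\n\n");
    buffer = events.pop() ?? ""; // keep any incomplete event for the next chunk
    for (const event of events) {
      for (const line of event.split("\n")) {
        if (line.startsWith("data: ")) onToken(line.slice(6));
      }
    }
  }
}

// Usage: append tokens to the chat window as they arrive.
streamChat("Hello!", (token) => console.log(token));
```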