# ioint-chat-vercel

A simple, clean chat application that integrates with the IO Intelligence API and is compatible with the OpenAI API format. Built for easy deployment on Vercel.
## Features

- Clean, modern chat interface
- Real-time messaging with AI responses
- Loading indicators and error handling
- Mobile-responsive design
- RESTful API endpoint for chat functionality
- Vercel-ready deployment configuration
## Tech Stack

- **Frontend:** Vanilla HTML, CSS, JavaScript
- **Backend:** Node.js serverless functions
- **AI Model:** `meta-llama/Llama-3.3-70B-Instruct` via the IO Intelligence API
- **Deployment:** Vercel
## Project Structure

```
ioint-chat-vercel/
├── api/
│   └── chat.js          # API endpoint for chat requests
├── public/
│   ├── index.html       # Main chat interface
│   ├── style.css        # Styling
│   └── script.js        # Frontend functionality
├── package.json         # Dependencies and configuration
├── vercel.json          # Vercel deployment settings
├── .gitignore           # Git ignore rules
└── README.md            # This file
```
## API

### `POST /api/chat`

**Headers:**

```
Content-Type: application/json
x-api-key: your_api_key_here   (optional if using the environment variable)
```

**Request body:**

```json
{
  "prompt": "Hello AI"
}
```

**Success response:**

```json
{
  "response": "AI response text here"
}
```

**Error response:**

```json
{
  "error": "Error message"
}
```
## Prerequisites

- Node.js (version 18.x or higher)
- npm or yarn
- Vercel CLI (optional but recommended)
## Local Setup

1. **Clone the repository**

   ```bash
   git clone <your-repo-url>
   cd ioint-chat-vercel
   ```

2. **Install dependencies**

   ```bash
   npm install
   ```

3. **Set up environment variables**

   Create a `.env` file in the root directory:

   ```
   IO_INTELLIGENCE_API_KEY=your_actual_api_key_here
   ```

4. **Start the local development server**

   ```bash
   vercel dev
   ```

   The app will be available at `http://localhost:3000`.
If you prefer not to use the Vercel CLI locally, you can test the API with:

```bash
# Test with curl (replace YOUR_API_KEY)
curl -X POST http://localhost:3000/api/chat \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -d '{"prompt": "Hello AI"}'
```
## Deployment

### Deploy via GitHub

1. **Push to GitHub**

   ```bash
   git add .
   git commit -m "Initial commit"
   git push origin main
   ```

2. **Connect to Vercel**

   - Go to [vercel.com](https://vercel.com)
   - Import your GitHub repository
   - Vercel will automatically detect the configuration

3. **Set environment variables**

   - In the Vercel dashboard, go to Project Settings → Environment Variables
   - Add `IO_INTELLIGENCE_API_KEY=your_actual_api_key_here`
   - Set it for all environments (Production, Preview, Development)

4. **Deploy**

   - Vercel will automatically deploy on every push to the main branch
### Deploy via Vercel CLI

1. **Install the Vercel CLI**

   ```bash
   npm i -g vercel
   ```

2. **Log in to Vercel**

   ```bash
   vercel login
   ```

3. **Deploy**

   ```bash
   vercel --prod
   ```

4. **Set environment variables**

   ```bash
   vercel env add IO_INTELLIGENCE_API_KEY
   ```
## Testing with Postman

- Create a new POST request
- URL: `http://localhost:3000/api/chat` (local) or `https://your-app.vercel.app/api/chat` (deployed)
- Headers:

  ```
  Content-Type: application/json
  x-api-key: your_api_key_here
  ```

- Body:

  ```json
  {
    "prompt": "Explain quantum computing in simple terms"
  }
  ```
## Environment Variables

| Variable | Description | Required |
|---|---|---|
| `IO_INTELLIGENCE_API_KEY` | Your IO Intelligence API key | Yes |
## Configuration

The API is configured to use:

- **Model:** `meta-llama/Llama-3.3-70B-Instruct`
- **Max tokens:** 1000
- **Temperature:** 0.7
- **Endpoint:** `https://api.intelligence.io.solutions/api/v1/chat/completions`
## Troubleshooting

- **"API key is required" error**
  - Ensure `IO_INTELLIGENCE_API_KEY` is set in your environment variables, or pass the key in the `x-api-key` header
- **"Method not allowed" error**
  - Ensure you're using the POST method for `/api/chat`
- **Recursive invocation error**
  - Don't include `"dev": "vercel dev"` in the `package.json` scripts
- **CORS issues**
  - The API automatically handles CORS for the frontend
  - Use `vercel dev` for the most accurate local environment
  - Check the browser console for JavaScript errors
  - Verify API responses in the Network tab
## Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Test locally
5. Submit a pull request
## License

This project is open source and available under the MIT License.