This Go project lets you interact with an AI assistant in a chat thread format. It integrates the OpenAI API to provide chat completions for engaging and informative conversations.
Steps:

- Clone the Repository: `git clone git@github.com:pi-prakhar/go-openai.git`
- Navigate to the Project Directory: `cd go-openai`
Running the Project
A. Using Docker Image
- Pull the image: `docker pull 16181181418/go-openai:latest`
- Run the container: `docker run -d -p 8000:8000 -e "OPENAI_API_KEY={YOUR_KEY}" 16181181418/go-openai:latest`
B. Running Locally
Prerequisites:
- Ensure you have Git and Golang installed on your system.
- Replace `OPENAI_API_KEY` with your actual OpenAI API key in the project's configuration file (refer to the OpenAI documentation for obtaining an API key).
- Consider using a `.env` file to securely store sensitive information like your API key, preventing accidental exposure in version control (see the sketch after this list).
- You can customize the chat experience by configuring the OpenAI model used for chat completion and other parameters available in the OpenAI API.
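As a rough sketch of the `.env` approach, the snippet below loads the key before starting the application. The use of `github.com/joho/godotenv` is an assumption for illustration; the project may wire its configuration differently.

```go
// Minimal sketch: load OPENAI_API_KEY from a .env file.
// Assumption: github.com/joho/godotenv is available; the project's
// actual configuration loading may differ.
package main

import (
	"log"
	"os"

	"github.com/joho/godotenv"
)

func main() {
	// Load variables from a local .env file into the process environment.
	// If no .env file exists, fall back to the existing environment
	// (e.g. a variable set in the shell or via Docker's -e flag).
	if err := godotenv.Load(); err != nil {
		log.Println(".env file not found, relying on existing environment")
	}

	apiKey := os.Getenv("OPENAI_API_KEY")
	if apiKey == "" {
		log.Fatal("OPENAI_API_KEY is not set")
	}
	log.Println("OpenAI API key loaded")
}
```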
1. Using Docker
- Build and start the application in detached mode: `docker-compose up --build -d`
2. Without Docker
- Build the executable: `go build -o main.exe ./cmd/go-openai`
- Run the application: `./main.exe`
Once the application is running, open it in your web browser at http://localhost:8000/
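If you prefer to verify from code rather than a browser, here is a minimal Go sketch; it assumes the root path answers plain HTTP GET requests.

```go
// Minimal sketch: check that the server at http://localhost:8000/ is reachable.
// Assumption: the root path responds to plain GET requests.
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	resp, err := http.Get("http://localhost:8000/")
	if err != nil {
		log.Fatalf("server not reachable: %v", err)
	}
	defer resp.Body.Close()
	fmt.Println("server responded with status:", resp.Status)
}
```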
For detailed instructions on configuration and further customization, consult the project's source code and any additional documentation provided within the repository.
The application exposes the following API endpoints:

- `/api/chat/send` (POST): Send messages to the AI assistant. The request body must be JSON with a `message` field.
- `/api/chat/messages` (GET): Retrieve all conversation messages exchanged between you and the AI assistant during the current session.
The application provides an API endpoint for sending messages to the AI assistant:

- Endpoint: `/api/chat/send`
- Method: POST
- Body: JSON (containing a `message` field)

JSON Example:

    {
      "message": "Hi! How can I help you today?"
    }
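As a rough illustration, the sketch below sends a message to this endpoint from Go. It assumes the server is running locally on port 8000; the project's own client code may look different.

```go
// Minimal sketch: send a message to the assistant via POST /api/chat/send.
// Assumption: the server is running locally on port 8000.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Build the JSON body with the required "message" field.
	payload, err := json.Marshal(map[string]string{
		"message": "Hi! How can I help you today?",
	})
	if err != nil {
		log.Fatal(err)
	}

	resp, err := http.Post(
		"http://localhost:8000/api/chat/send",
		"application/json",
		bytes.NewReader(payload),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	fmt.Println("status:", resp.Status)
}
```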
The response from the AI assistant is displayed in the chat thread format by retrieving the conversation from:

- Endpoint: `/api/chat/messages`
- Method: GET
- Body: none
API Response Formats
- Success Response (All Messages; a Go sketch for fetching and decoding this payload follows this section):

      {
        "code": 200,    // HTTP status code for success
        "message": "Successfully fetched all messages",
        "data": [       // Required data in the response
          {
            "role": "user",
            "content": "What new model of chat gpt is coming?"
          },
          {
            "role": "assistant",
            "content": "OpenAI recently announced the release of GPT-4, the latest iteration of their chatbot model. GPT-4 is said to have even more advanced capabilities, including improved natural language processing and the ability to generate more coherent and contextually relevant responses. This new model is expected to further push the boundaries of AI-powered conversational agents."
          }
        ]
      }
- Error Response Schema:

      {
        "code": 400,              // Example error code (can vary)
        "message": "Bad request"  // Human-readable error description
      }
Please refer to the project's source code for the exact error response format and details on the specific error codes used.
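To round out the API reference, here is a minimal Go sketch that fetches the conversation from `/api/chat/messages` and decodes it into structs mirroring the response formats above. The struct names `MessagesResponse` and `Message` are illustrative only; the project's own types may differ.

```go
// Minimal sketch: fetch the conversation via GET /api/chat/messages and
// decode it into structs mirroring the documented response envelope.
// The struct names are illustrative, not the project's own types.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

type Message struct {
	Role    string `json:"role"`    // "user" or "assistant"
	Content string `json:"content"` // message text
}

type MessagesResponse struct {
	Code    int       `json:"code"`    // HTTP status code, e.g. 200
	Message string    `json:"message"` // human-readable status text
	Data    []Message `json:"data"`    // conversation messages (absent on errors)
}

func main() {
	resp, err := http.Get("http://localhost:8000/api/chat/messages")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var out MessagesResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		log.Fatal(err)
	}

	if out.Code != 200 {
		// Error responses share the code/message fields but carry no data.
		log.Fatalf("request failed: %d %s", out.Code, out.Message)
	}
	for _, m := range out.Data {
		fmt.Printf("%s: %s\n", m.Role, m.Content)
	}
}
```

Error responses reuse the same `code` and `message` fields without `data`, as shown in the schema above.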