How difficult would it be to use another open source project as a chatbot front end? #2409
Replies: 2 comments
-
I wasn't aware of the project you mentioned earlier, but if it involves handling API requests to a server and maintaining certain middleware, then integrating or replacing any functionality should not be a problem. I'm not sure exactly what you need, but in any case, you will require three things:
Your link is broken. Here is the correct one: https://github.com/mckaywrigley/chatbot-ui
-
I installed chatbot-ui using Docker and asked ChatGPT some questions about https://github.com/mckaywrigley/chatbot-ui/blob/main/pages/api/chat.ts. The answers may not cover everything, but I think they serve as a good starting point, especially if you use GPT-4 and ask follow-up questions.

================== GPT answers below this line ==================

If your own server is responsible for handling all the functions previously performed by the tiktoken library, then you wouldn't need to use tiktoken in this case. Instead, you would implement an appropriate tokenization method on your server and send the tokenized input to your AI model. You would receive a tokenized response from your AI model, which you could then decode and format to match the expected output format.

The specifics of how you implement tokenization on your server will depend on the technology stack you're using, but one common approach is to use a natural language processing (NLP) library that supports tokenization, such as the Natural Language Toolkit (NLTK) in Python. In this case, the client sends the user's input text to your server, where it is tokenized using whatever method you have chosen. The resulting tokenized text is then sent to your AI model, which generates a tokenized response. This tokenized response is sent back to your server, where it is decoded and formatted as needed, and then returned to the client.

By implementing your own tokenization method and using your own server to handle input and output for the ChatGPT model, you have full control over the entire process and can ensure that it meets your specific needs and requirements.

To use your own server instead of OpenAI's, you will need to make some changes to the existing code. Here's how you can do it:
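As a rough illustration of the tokenize → model → decode round trip described above, here is a minimal sketch in Python. It deliberately uses a naive regex tokenizer instead of NLTK or tiktoken, and `fake_model` is a hypothetical stand-in for whatever model actually runs on your server; every name here is illustrative, not part of chatbot-ui.

```python
import re

def tokenize(text: str) -> list[str]:
    # Naive regex tokenizer: splits out words and punctuation.
    # A real server might use NLTK's word_tokenize or a BPE tokenizer instead.
    return re.findall(r"\w+|[^\w\s]", text)

def detokenize(tokens: list[str]) -> str:
    # Crude inverse: join with spaces, then remove the space before punctuation.
    text = " ".join(tokens)
    return re.sub(r"\s+([^\w\s])", r"\1", text)

def fake_model(tokens: list[str]) -> list[str]:
    # Hypothetical stand-in for the model running on your server:
    # it just echoes the input tokens back with a prefix.
    return ["Echo", ":"] + tokens

def handle_request(user_input: str) -> str:
    # The full round trip: tokenize on the server, run the model,
    # then decode the tokenized response before returning it.
    input_tokens = tokenize(user_input)
    output_tokens = fake_model(input_tokens)
    return detokenize(output_tokens)

print(handle_request("Hello, world!"))  # → Echo: Hello, world!
```

Swapping `tokenize`/`detokenize` for a real tokenizer and `fake_model` for your model is all the surrounding text's architecture requires; the client never needs to know which tokenizer the server chose.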
That's it! With these changes, your ChatGPT code will now use your own server instead of OpenAI's API to generate responses. Remember to test your updated code thoroughly to make sure everything is working as expected.
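To make "use your own server" concrete, the sketch below builds the same kind of JSON payload a chat frontend sends, but aims it at a self-hosted endpoint instead of api.openai.com. The endpoint URL, model name, and payload shape here are assumptions for illustration, not the actual chatbot-ui code.

```python
import json

# Hypothetical self-hosted endpoint replacing
# https://api.openai.com/v1/chat/completions.
MY_SERVER_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(messages: list[dict], model: str = "my-local-model") -> dict:
    # Assumed payload shape, mirroring the OpenAI chat-completions format
    # so a frontend like chatbot-ui needs minimal changes.
    return {
        "url": MY_SERVER_URL,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"model": model, "messages": messages}),
    }

req = build_chat_request([{"role": "user", "content": "Hi"}])
print(req["url"])  # → http://localhost:8000/v1/chat/completions
```

Keeping the payload compatible with the OpenAI format means the frontend change can be as small as swapping the base URL.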
-
This project is picking up some steam and it's easy to see why. It replicates the ChatGPT interface nicely, has a couple other cool features and has a low barrier to entry:
Chatbot UI
What kind of obstacles would there be to leveraging this as a front-end experience for users, with langchain opening up opportunities to add basically everything else?
I'm new to langchain, but could you run it on a headless server and connect it to a Chatbot UI instance for the front end?
edit: fixed repo link for chatbot-ui
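In principle that split works: run the langchain logic behind a small HTTP API on the headless server and point the Chatbot UI instance at it. Below is a minimal stdlib-only Python sketch of such an endpoint; `run_chain` is a placeholder for an actual langchain pipeline, and the route and JSON shape are assumptions rather than what chatbot-ui actually expects.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def run_chain(prompt: str) -> str:
    # Placeholder for your actual langchain pipeline
    # (e.g. a chain with tools, memory, retrieval, etc.).
    return f"(chain output for: {prompt})"

class ChatHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Assumed request shape: {"prompt": "..."} posted as JSON.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        answer = run_chain(payload.get("prompt", ""))
        body = json.dumps({"answer": answer}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Silence per-request logging for this sketch.
        pass

# To run standalone on the headless box:
# HTTPServer(("0.0.0.0", 8000), ChatHandler).serve_forever()
```

The frontend would then be configured to POST to this server instead of OpenAI; whether Chatbot UI can do that with only a base-URL change or needs a small adapter depends on how closely the response format matches what it parses.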