
Exposed API Routes #12

Open
Patrity opened this issue May 6, 2023 · 12 comments

Comments


Patrity commented May 6, 2023

While routing all requests through the Nitro server does a great job of keeping our API keys private, it also has a potentially unintended effect for some users.

If your ChatGPT user-interface component is behind authentication (because who wants to expose unlimited use of their API key to the public?), the routes exposed by this module are always public.

I could be missing some intended configuration for this use case, but as far as I can tell the routes /api/chat and /api/chat-completion are always exposed to the internet. Again, this could be intended functionality; I just wanted to make sure this is not a bug. For my particular use case, I will have to remove this module for this reason.
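For readers hitting the same problem, one common way to guard such routes is a Nitro server middleware. The sketch below is purely illustrative and not part of this module; `isGuardedRoute` and the commented wiring are hypothetical names, assuming the two routes mentioned above.

```typescript
// Hypothetical sketch: decide whether a request path is one of the module's
// exposed chat routes. The pure function below can be unit-tested on its own
// and then reused from a Nitro server middleware (e.g. server/middleware/auth.ts).
const GUARDED_ROUTES = ["/api/chat", "/api/chat-completion"];

function isGuardedRoute(path: string): boolean {
  // Ignore any query string before matching the pathname.
  const pathname = path.split("?")[0];
  return GUARDED_ROUTES.includes(pathname);
}

// Possible wiring inside a Nuxt app (illustrative only, not from this module;
// isAuthenticated is a placeholder for your own session check):
//
// export default defineEventHandler((event) => {
//   if (isGuardedRoute(event.path) && !isAuthenticated(event)) {
//     throw createError({ statusCode: 401, statusMessage: "Unauthorized" })
//   }
// })
```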

Thank you for the great work and contribution nonetheless!


dosstx commented May 12, 2023

Thanks for the comment, @Patrity. It would be interesting to hear whether others have thoughts on this. In the meantime, make sure you at least set up your billing budget to cut off the API after a certain amount of money is hit.


Patrity commented May 12, 2023

> Make sure you at least set up your billing budget to cut off the API after a certain amount of money is hit.

Certainly! That would be best practice regardless. However, a bad actor could exhaust your budget in a matter of minutes or seconds with something as simple as Postman or any other HTTP client.
Just something to think about.


dosstx commented May 12, 2023

Yes, it's the risk you take. I bet 80% of the apps out there are not secure, but then those folks would never have gotten their app out in the first place!


Patrity commented May 12, 2023

I don't know that "Most apps aren't secure, so mine doesn't need to be" is very good practice.
Again, I thoroughly appreciate your contribution to open source and think it's great for some users, but maybe just add a disclaimer to your README that it exposes use of your API key to the internet.

Thank you for your replies!


dosstx commented May 12, 2023

Oh, I am not the creator of this library; I was just browsing the issues and saw your comment. I agree with it, but there are tradeoffs to make if you want to get an app out. I've had several app ideas never see the light of day because of the "what if this happens" mindset.

I think as long as you have the budget set and the apiKey on the server side, you should be fine. If someone is going to rate-limit your app, you can deal with it at that point.

@danielroe What do you think? I think in one of the OpenAI chatbot VueForge courses that was released, Daniel showed something about setting up a sessionStorage option for the backend to prevent this (but I admit I did not watch the entire course, just skimmed it).


zenflow commented May 27, 2023

> If someone is going to rate limit your app then you can deal with it at that point

@dosstx It wouldn't be a bad thing if we could avoid that situation altogether.

I'm really not sure why you are so dismissive of a security issue when the whole value-add of this package is hiding your API key for security. If you don't care about security, you can just use the openai package directly and expose your API key.

You can work on getting your app out and still recognize that this is an issue worth fixing. You don't have to stop working on your app to fix it.


Patrity commented May 28, 2023

> > If someone is going to rate limit your app then you can deal with it at that point
>
> @dosstx It wouldn't be a bad thing if we could avoid that situation altogether.
>
> I'm really not sure why you are so dismissive of a security issue when the whole value-add of this package is hiding your API key for security. If you don't care about security, you can just use the openai package directly and expose your API key.
>
> You can work on getting your app out and still recognize that this is an issue worth fixing. You don't have to stop working on your app to fix it.

He is not the library creator, just another user giving another perspective. But yes, I agree with you thoroughly.
I've gone ahead and created my own solution for my app with a backend system that passes a JWT for verification.

But I think this issue should be addressed and fixed, or at the bare minimum, a disclaimer added to the README.

@kyng-cytro

> I've gone ahead and just created my own solution for my app with a backend system that passes a JWT for verification.

Hi, any way you could share your solution?


Patrity commented Jul 27, 2023

> > I've gone ahead and just created my own solution for my app with a backend system that passes a JWT for verification.
>
> Hi, any way you could share your solution?

I am using Supabase in my project, but even if you are using another BaaS or even custom auth, the solution would be similar.
There is also some stuff you may not want or need, such as custom responses for unauthorized users and a 1-second delay when the user is unauthorized.

Let me know if you have any questions.

https://gist.github.com/Patrity/d84e7ebe02ca24824cbd4d0505baadbd
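For anyone who cannot open the gist, the general shape of such a guard might look like the sketch below. This is not the gist's actual code; `extractBearerToken` and the Supabase wiring in the comments are illustrative assumptions only.

```typescript
// Hypothetical helper: pull the JWT out of an Authorization header so it can
// be handed to the auth backend for verification.
function extractBearerToken(authHeader: string | undefined): string | null {
  if (!authHeader) return null;
  const [scheme, token] = authHeader.split(" ");
  return scheme === "Bearer" && token ? token : null;
}

// Inside server/middleware/auth.ts the token would then be verified by the
// auth provider, e.g. (Supabase, illustrative wiring only):
//
// const token = extractBearerToken(getHeader(event, "authorization"))
// const { error } = await supabase.auth.getUser(token ?? "")
// if (!token || error) {
//   await new Promise((r) => setTimeout(r, 1000)) // 1s delay for unauthorized users
//   throw createError({ statusCode: 401, statusMessage: "Unauthorized" })
// }
```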

@kyng-cytro

> I am using Supabase in my project, but even if you are using another BaaS or even custom auth, the solution would be similar.

I use Supabase too. I'll take a look at the gist.

@ishaan-jaff

@kyng-cytro @Patrity @zenflow

You can use LiteLLM to fix your issue: https://github.com/BerriAI/litellm

LiteLLM lets you use any LLM as a drop-in replacement for gpt-3.5-turbo, and you can set a $ budget per user or per session:

```python
from litellm import BudgetManager, completion

budget_manager = BudgetManager(project_name="test_project")
user = "1234"

# create a budget if this is a new user
if not budget_manager.is_valid_user(user):
    budget_manager.create_budget(total_budget=10, user=user)

# check if a given call can be made
if budget_manager.get_current_cost(user=user) <= budget_manager.get_total_budget(user):
    # call gpt-3.5
    response = completion(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hey, how's it going?"}])
    budget_manager.update_cost(completion_obj=response, user=user)
else:
    response = "Sorry - no budget!"
```


mubaidr commented May 7, 2024

But is this issue solvable by writing a custom server middleware where you can verify the user's account:

- to prevent spamming (using a rate limiter)
- to verify the account's usage limit
