Can I use the GET method for HTTP requests? Currently it seems only POST is supported #569

Closed
SeaOfOcean opened this issue Jul 30, 2020 · 9 comments
Labels: triaged_wait (waiting for the reporter's response)

@SeaOfOcean

Is your feature request related to a problem? Please describe.

Can I use the GET method for HTTP requests? Currently it seems only POST is supported.


harshbafna self-assigned this Jul 30, 2020
@harshbafna (Contributor)

@SeaOfOcean: Could you please elaborate on which API you are trying the GET method with?

You can use the API descriptions to view the full list of inference/management APIs and their respective details.
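(If memory serves, the description is fetched with an OPTIONS request, e.g. curl -X OPTIONS http://localhost:8080 for the inference APIs and curl -X OPTIONS http://localhost:8081 for the management APIs.)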

harshbafna added the triaged_wait (waiting for the reporter's response) label Jul 30, 2020
@SeaOfOcean (Author)

The Inference API. For example, a call like

curl "http://localhost:8080/predictions/pipeline?arg1=1&arg2=2"

would pass arguments arg1 and arg2 via the GET method, to be handled in a user-defined handler.

@harshbafna (Contributor)

Unfortunately, the prediction/inference API currently only supports the POST method.

@SeaOfOcean (Author)

Do you have any plans to support the GET method? Most serving frameworks, like Flask and Tornado, support GET.
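
For reference, the kind of GET handling I have in mind looks roughly like this in Flask (a minimal sketch; the route matches my earlier example, and echoing the arguments back stands in for real inference):

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

# A GET endpoint that reads its inputs from the query string,
# e.g. /predictions/pipeline?arg1=1&arg2=2
@app.route("/predictions/pipeline", methods=["GET"])
def predict():
    arg1 = request.args.get("arg1")
    arg2 = request.args.get("arg2")
    # Real inference on (arg1, arg2) would go here; echo them back instead.
    return jsonify({"arg1": arg1, "arg2": arg2})

if __name__ == "__main__":
    app.run(port=8080)
```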

@harshbafna (Contributor)

There is no immediate plan for this. The serving framework (Netty) supports all HTTP methods; it's the inference API that only supports the POST method.

@dhaniram-kshirsagar (Contributor)

There is no direct relation between Flask and TorchServe, except that both handle HTTP requests, albeit with different engines (Netty vs. Python APIs). Naturally, those engines can support all HTTP methods.
In the TorchServe context, we need input for inference, hence the POST method is needed; GET cannot be supported unless you have a model that doesn't need any input!
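
For instance, the two arguments from the earlier GET example can simply travel in the POST body instead (a sketch using the requests library, assuming a custom handler that parses a JSON body):

```python
import requests

# POST the same two inputs in the request body instead of the query string.
resp = requests.post(
    "http://localhost:8080/predictions/pipeline",
    json={"arg1": 1, "arg2": 2},
)
print(resp.status_code, resp.text)
```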

@SeaOfOcean (Author)

We can also get input from the arguments of a GET request, e.g. curl "http://localhost:8080/predictions/pipeline?arg1=1&arg2=2".

@MichaelMMeskhi commented Aug 12, 2020

@SeaOfOcean One method I am working on is doing the following:

curl http://localhost:8080/predictions/model -F "imgid=555" -F "image=@file" -F "task=classify" -F "result=prob"

This way I can handle specific conditions that I need inside handler.py. I don't know if this is the "correct" way, but it works for my needs.
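
Inside handler.py, the fields can be pulled out along these lines (a rough sketch, not my exact code; it assumes TorchServe's module-level handle(data, context) entry point and that each multipart form field shows up as a key in the per-request dict):

```python
# handler.py -- a minimal sketch, not a drop-in implementation.

def _text(value):
    # Multipart text fields may arrive as bytes; normalize to str.
    if isinstance(value, (bytes, bytearray)):
        return value.decode("utf-8")
    return value

def handle(data, context):
    if data is None:
        return None

    responses = []
    for row in data:
        img_id = _text(row.get("imgid"))         # e.g. "555"
        task = _text(row.get("task"))            # e.g. "classify"
        result_type = _text(row.get("result"))   # e.g. "prob"
        image = row.get("image")                 # raw bytes of the uploaded file

        if task == "classify":
            # Real classification of `image` would go here.
            responses.append({"imgid": img_id, "result_type": result_type})
        else:
            responses.append({"error": "unsupported task: %s" % task})
    return responses
```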

@SeaOfOcean (Author)

Thanks, that is a great workaround.
