503 when uploading a lot of small files through transfer_manager.upload_many_from_filenames() #1205
Comments
This is an issue in the latest release of google-auth, v2.26.0. There is a fix up at googleapis/google-auth-library-python#1447 and we'll try to cut a new release ASAP. In the meantime you can pin the dependency to the previous release.
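For example, one way to pin below the affected release until the fixed google-auth ships (a sketch; adapt it to whatever dependency management your project uses):

```
pip install "google-auth<2.26.0"
```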
Hi @JeremyKeusters, depending on your workload, there are ways to opt in to the exponential retry that is supported by the underlying upload methods. I'm linking the documentation on upload_from_filename and the Retry Strategy guide for more details. In general, two points are worth calling out.
For your current workload, are ALL the destination blobs new objects that do not yet exist? If so, you could modify your code to something like the sketch below.
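A minimal sketch of that approach (not a snippet from this thread; the bucket name, file list, and local directory are placeholders, and it assumes `upload_kwargs` forwards keyword arguments to each underlying upload call). Setting `if_generation_match=0` makes every upload conditionally idempotent, so the client's default retry policy will retry transient errors such as 503s:

```python
from google.cloud import storage
from google.cloud.storage import transfer_manager

client = storage.Client()
bucket = client.bucket("my-bucket")  # placeholder bucket name

# Placeholder list of many small local files to upload.
filenames = [f"chunk_{i:05d}.json" for i in range(10_000)]

results = transfer_manager.upload_many_from_filenames(
    bucket,
    filenames,
    source_directory="/path/to/local/dir",  # placeholder path
    # Thread workers keep the sketch runnable without a __main__ guard;
    # the default is process-based workers.
    worker_type=transfer_manager.THREAD,
    # Forwarded to each underlying upload call. The precondition only
    # succeeds when the destination object does not exist yet, which makes
    # the upload conditionally idempotent and therefore retryable.
    upload_kwargs={"if_generation_match": 0},
)

# With raise_exception left at its default (False), per-file failures come
# back as exception objects in the results list.
for name, result in zip(filenames, results):
    if isinstance(result, Exception):
        print(f"upload of {name} failed: {result}")
```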
However, if the destination objects already exist or are a mix of new and existing objects, you cannot easily use the generation-match precondition. In that case you can instead configure the retry behavior explicitly, as described in the Retry Strategy documentation linked above; see the sketch below.
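A sketch of that alternative, assuming you are comfortable retrying uploads that are not guarded by a precondition; `DEFAULT_RETRY` from `google.cloud.storage.retry` can be forwarded through the same `upload_kwargs` mechanism as in the previous sketch:

```python
from google.cloud.storage.retry import DEFAULT_RETRY

# Same call shape as the previous sketch, but opting in to retries
# explicitly instead of relying on the if_generation_match precondition.
# DEFAULT_RETRY retries transient errors such as 503 with exponential
# backoff, even for uploads that are not conditionally idempotent.
upload_kwargs = {"retry": DEFAULT_RETRY}
```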
Hope this clarifies your question. Closing due to inactivity. Please feel free to reopen if you have further questions.
Thank you both for your replies, @cojenco and @tritone. So if I understand correctly:
Please correct me if my assumptions above are wrong. Two follow-up questions:
Environment details
- Python version: 3.9.15
- pip version: 23.3.1
- google-cloud-storage version: 2.14.0
Steps to reproduce
When uploading a lot of small files through the new `transfer_manager.upload_many_from_filenames()` function, a 503 error is thrown. I would expect the function to take the rate limits into account, or at least use an exponential retry.

Code example
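A minimal sketch of the kind of call described above (the bucket name, directory, and file names are hypothetical, not the reporter's actual values):

```python
from google.cloud import storage
from google.cloud.storage import transfer_manager

client = storage.Client()
bucket = client.bucket("my-bucket")  # placeholder bucket name

# Thousands of small local files, uploaded concurrently.
filenames = [f"chunk_{i:05d}.json" for i in range(10_000)]  # placeholder names

transfer_manager.upload_many_from_filenames(
    bucket,
    filenames,
    source_directory="/path/to/local/dir",  # placeholder path
    raise_exception=True,  # raise the first failure instead of returning it
)
```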
Stack trace
Thanks!