Multipart uploads of large files produce partially corrupted data when the upload chunk size does not match the LeoFS chunk size #190
A possible solution would be for LeoFS to refuse multipart uploads whose part size does not match its configured chunk size.
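Until such a check exists, a client-side workaround is to make the uploader's part size equal to LeoFS's chunk size (5MB by default). A minimal sketch using boto3 against a LeoFS gateway; the endpoint URL, credentials, bucket, and file names are assumptions, not values from this thread:

```python
# Sketch: force the multipart part size to match LeoFS's 5MB chunk size.
# Endpoint, credentials, bucket and key are placeholders (assumptions).
import boto3
from boto3.s3.transfer import TransferConfig

CHUNK = 5 * 1024 * 1024  # should equal the gateway's configured chunk length

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:8080",   # LeoFS gateway (assumed address)
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# upload_file splits the file into multipart parts of exactly CHUNK bytes.
config = TransferConfig(multipart_threshold=CHUNK, multipart_chunksize=CHUNK)
s3.upload_file("bigfile.bin", "test-bucket", "bigfile.bin", Config=config)
```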
Thank you for your report. We'll check this issue. Related issue: #177
We're planning to fix this issue with v1.2, since we need to improve the internal architecture first.
This issue is related to range GET queries; see issues #376 and #382.
File MD5 (50MB from /dev/urandom):
Multipart upload (1MB chunk size):
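The MD5 values from the original comment were not preserved. A minimal sketch of how that comparison could be reproduced with boto3's low-level multipart API; the endpoint, credentials, bucket, and key names are assumptions:

```python
# Sketch: upload 50MB of random data in 1MB multipart parts and record the
# source MD5 for later comparison. All endpoint/credential values are assumed.
import hashlib
import os

import boto3

PART = 1 * 1024 * 1024              # 1MB parts, deliberately smaller than LeoFS's 5MB chunks
data = os.urandom(50 * 1024 * 1024)  # 50MB of random data
with open("random-50m.bin", "wb") as f:
    f.write(data)                    # keep a local copy for later md5 checks
print("source md5:", hashlib.md5(data).hexdigest())

s3 = boto3.client("s3", endpoint_url="http://localhost:8080",
                  aws_access_key_id="ACCESS_KEY", aws_secret_access_key="SECRET_KEY")

mpu = s3.create_multipart_upload(Bucket="test-bucket", Key="random-50m")
parts = []
for i in range(0, len(data), PART):
    num = i // PART + 1
    resp = s3.upload_part(Bucket="test-bucket", Key="random-50m",
                          PartNumber=num, UploadId=mpu["UploadId"],
                          Body=data[i:i + PART])
    parts.append({"PartNumber": num, "ETag": resp["ETag"]})

s3.complete_multipart_upload(Bucket="test-bucket", Key="random-50m",
                             UploadId=mpu["UploadId"],
                             MultipartUpload={"Parts": parts})
```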
OT: Hi, what is the maximum number of parts in an upload job?
@shooding you can configure the max number of parts in leo_gateway.conf as below.
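The configuration snippet from the original comment was not preserved. A sketch of the relevant leo_gateway.conf section, assuming the large_object.* settings; the exact keys and defaults may differ between LeoFS versions:

```
## Large-object handling (values shown are illustrative)
## Maximum number of chunks per object, i.e. the upper bound on multipart parts
large_object.max_chunked_objs = 1000

## Chunk length in bytes (5MB); the client's part size should match this
large_object.chunked_obj_len = 5242880
```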
Related Report from Google Group https://groups.google.com/forum/#!topic/leoproject_leofs/k7jAppwuovs |
From the mailing list, I recognized we still have an issue with retrieving a high byte range of an object, as below:
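The range request from the original comment was not preserved. A sketch of a high-offset ranged GET against the object uploaded above, using boto3; endpoint, credentials, and names are assumptions:

```python
# Sketch: fetch the tail of the 50MB object with an HTTP Range header so the
# result can be compared against the corresponding slice of the source data.
import hashlib

import boto3

s3 = boto3.client("s3", endpoint_url="http://localhost:8080",
                  aws_access_key_id="ACCESS_KEY", aws_secret_access_key="SECRET_KEY")

# Last 10MB of the 50MB object (bytes 41943040 through the end).
resp = s3.get_object(Bucket="test-bucket", Key="random-50m",
                     Range="bytes=41943040-")
tail = resp["Body"].read()
print("range length:", len(tail))
print("range md5:", hashlib.md5(tail).hexdigest())
```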
I've checked this issue with this script, but I could not reproduce the same situation. It seems the version of Leo's libraries is not correct.
To make sure we cover the case reported by Vansh, we've checked this issue with the current development version, 1.2.18-dev.
LeoFS is configured with 5MB chunks.
Upload a file via multipart upload, split into 1MB parts.
Upload a file with 5MB chunks.
Uploading:
Checking md5:
Cross-check with s3cmd (s3cmd does not use multipart):
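The command transcripts and md5sum output for these steps were not preserved. The original test referenced fifo_s3 [1] for uploading and s3cmd for the cross-check; purely as an illustration, a sketch of the download-and-verify step with boto3 (endpoint, credentials, bucket, key, and file names are assumptions):

```python
# Sketch: download the uploaded object in full and compare its MD5 with the
# MD5 of the original local file. Placeholder endpoint/credentials/names.
import hashlib

import boto3

s3 = boto3.client("s3", endpoint_url="http://localhost:8080",
                  aws_access_key_id="ACCESS_KEY", aws_secret_access_key="SECRET_KEY")

body = s3.get_object(Bucket="test-bucket", Key="random-50m")["Body"].read()
remote_md5 = hashlib.md5(body).hexdigest()

with open("random-50m.bin", "rb") as f:
    local_md5 = hashlib.md5(f.read()).hexdigest()

print("local :", local_md5)
print("remote:", remote_md5)
print("match :", local_md5 == remote_md5)
```

If the MD5s differ only for the upload made with 1MB parts, that reproduces the corruption described above.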
[1] fifo_s3: https://github.com/project-fifo/fifo_s3