S3Store: Concurrently write upload parts to S3 while reading from client #402

Merged: 22 commits merged on Jul 29, 2020

Changes from 1 commit (22 commits total):
87045a8  Allow empty metadata values (JenSte, Jun 20, 2020)
6c45ff2  Make tests less fragile by allowing loose call ordering (acj, Feb 28, 2020)
2b3fb49  Add s3ChunkProducer (acj, Mar 16, 2020)
54819d8  Integrate s3ChunkProducer to support chunk buffering (acj, Mar 16, 2020)
b72a4d4  Remove completed chunk files inline to reduce disk space usage (acj, Mar 16, 2020)
8d8046c  Add tests for chunk producer (acj, Mar 20, 2020)
ec5f500  docs: Use value from Host header to forward to tusd (Acconut, Jun 24, 2020)
08a72a5  Use int64 for MaxBufferedParts field (acj, Jul 14, 2020)
c51afa1  Default to 20 buffered parts (acj, Jul 14, 2020)
ff63ab4  Rename s3ChunkProducer -> s3PartProducer (acj, Jul 14, 2020)
46e0e9c  Document s3PartProducer struct (acj, Jul 14, 2020)
5dd7c3b  Clarify misleading comment (acj, Jul 14, 2020)
be6cf54  Revert "Remove completed chunk files inline to reduce disk space usage" (acj, Jul 14, 2020)
74c5c0c  Remove redundant seek (acj, Jul 14, 2020)
d014a3e  Clean up any remaining files in the channel when we return (acj, Jul 14, 2020)
9cb1385  Make putPart* functions responsible for cleaning up temp files (acj, Jul 14, 2020)
59c3d42  Merge branch 'metadata' of https://github.com/JenSte/tusd (Acconut, Jul 15, 2020)
26b84bc  handler: Add tests for empty metadata pairs (Acconut, Jul 15, 2020)
a23a1af  Merge branch 's3store-buffered-chunks' of https://github.com/acj/tusd… (Acconut, Jul 18, 2020)
b79c64f  Factor out cleanUpTempFile func (acj, Jul 20, 2020)
2cb30a4  Merge branch 's3store-buffered-chunks' of https://github.com/acj/tusd… (Acconut, Jul 22, 2020)
6984744  Add test to ensure that temporary files get cleaned up (Acconut, Jul 22, 2020)
18 changes: 9 additions & 9 deletions pkg/s3store/s3store.go
@@ -396,8 +396,7 @@ func (upload s3Upload) WriteChunk(ctx context.Context, offset int64, src io.Read
 	// we may leak file descriptors. Let's ensure that those are cleaned up.
 	defer func() {
 		for file := range fileChan {
-			os.Remove(file.Name())
-			file.Close()
+			cleanUpTempFile(file)
 		}
 	}()

@@ -445,9 +444,13 @@ func (upload s3Upload) WriteChunk(ctx context.Context, offset int64, src io.Read
 	return bytesUploaded - incompletePartSize, partProducer.err
 }

+func cleanUpTempFile(file *os.File) {
+	file.Close()
+	os.Remove(file.Name())
+}
+
 func (upload *s3Upload) putPartForUpload(ctx context.Context, uploadPartInput *s3.UploadPartInput, file *os.File) error {
-	defer os.Remove(file.Name())
-	defer file.Close()
+	defer cleanUpTempFile(file)

 	_, err := upload.store.Service.UploadPartWithContext(ctx, uploadPartInput)
 	return err
@@ -724,9 +727,7 @@ func (upload *s3Upload) concatUsingDownload(ctx context.Context, partialUploads
 	if err != nil {
 		return err
 	}
-	fmt.Println(file.Name())
-	defer os.Remove(file.Name())
-	defer file.Close()
+	defer cleanUpTempFile(file)

 	// Download each part and append it to the temporary file
 	for _, partialUpload := range partialUploads {
@@ -902,8 +903,7 @@ func (store S3Store) getIncompletePartForUpload(ctx context.Context, uploadId st
 }

 func (store S3Store) putIncompletePartForUpload(ctx context.Context, uploadId string, file *os.File) error {
-	defer os.Remove(file.Name())
-	defer file.Close()
+	defer cleanUpTempFile(file)

 	_, err := store.Service.PutObjectWithContext(ctx, &s3.PutObjectInput{
 		Bucket: aws.String(store.Bucket),