* Add first draft of parallel upload queue
* s3store: Use queue for parallel uploads
* Revert "Add first draft of parallel upload queue"
This reverts commit 86a329cef2.
* Revert "s3store: Use queue for parallel uploads"
This reverts commit 29b59a2c90.
* s3store: Cache results from listing parts and checking incomplete object
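A minimal sketch of the caching idea, with illustrative names (the store's real field layout may differ): the part listing and the incomplete-object check are fetched once per request and their results kept on the upload value.

```go
// Hedged sketch: memoize the results of listing parts and checking the
// incomplete object so a single request does not repeat those S3 calls.
// All names here are illustrative.
type s3Part struct {
	number int64
	size   int64
}

type s3Upload struct {
	id string

	// parts caches the result of listing the multipart upload's parts;
	// nil means it has not been fetched for this request yet.
	parts []s3Part

	// incompletePartSize caches the size of the incomplete-part object,
	// if one exists; nil means the check has not run yet.
	incompletePartSize *int64
}
```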
* s3store: Remove debugging output
* s3store: Fetch upload info with concurrent requests
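A sketch of that change, assuming hypothetical fetch helpers: the requests behind fetching an upload's info run in parallel goroutines instead of back to back.

```go
package s3store

import "sync"

// The three fetchers are hypothetical stand-ins for the store's internal
// helpers; only the concurrency pattern is the point here.
func fetchInfoObject(id string) error       { return nil }
func listParts(id string) error             { return nil }
func checkIncompleteObject(id string) error { return nil }

// fetchInfoConcurrently issues the three requests in parallel and
// returns the first error, if any.
func fetchInfoConcurrently(uploadID string) error {
	var wg sync.WaitGroup
	var infoErr, partsErr, incompleteErr error

	wg.Add(3)
	go func() { defer wg.Done(); infoErr = fetchInfoObject(uploadID) }()
	go func() { defer wg.Done(); partsErr = listParts(uploadID) }()
	go func() { defer wg.Done(); incompleteErr = checkIncompleteObject(uploadID) }()
	wg.Wait()

	for _, err := range []error{infoErr, partsErr, incompleteErr} {
		if err != nil {
			return err
		}
	}
	return nil
}
```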
* s3store: Make parallel uploads work and tests pass
* s3store: Add semaphore package
* s3store: Add comments to semaphore package
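A minimal sketch of such a semaphore package, assuming a buffered-channel implementation (the actual package may differ):

```go
// Package semaphore limits how many goroutines may work concurrently.
package semaphore

// Semaphore is a counting semaphore built on a buffered channel: the
// channel's capacity is the number of slots.
type Semaphore chan struct{}

// New creates a semaphore that admits up to max concurrent holders.
func New(max int) Semaphore {
	return make(Semaphore, max)
}

// Acquire blocks until a slot is free, then claims it.
func (s Semaphore) Acquire() {
	s <- struct{}{}
}

// Release returns a previously acquired slot.
func (s Semaphore) Release() {
	<-s
}
```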
* s3store: Encapsulate more logic into s3PartProducer
* s3store: Refactor WriteChunk
* s3store: Remove TODO
* s3store: Acquire lock before uploading
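How that might look in use, building on the Semaphore type sketched above (import qualification omitted; putPart is a hypothetical stand-in for the code that uploads one part):

```go
// putPart is a hypothetical stand-in for uploading one part to S3.
func putPart(file *os.File) error { return nil }

// uploadPartGated claims a slot before starting a part upload and frees
// it afterwards, capping how many part uploads run at once.
func uploadPartGated(sem Semaphore, file *os.File) error {
	sem.Acquire()       // block until an upload slot is free
	defer sem.Release() // free the slot even if the upload fails

	return putPart(file)
}
```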
* cli: Add flag for setting concurrency limit
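A sketch of wiring such a limit through Go's standard flag package; the flag name and default below are hypothetical, not necessarily what tusd uses:

```go
package main

import (
	"flag"
	"fmt"
)

// The flag name and default value here are hypothetical.
var concurrentUploads = flag.Int("s3-concurrent-uploads", 10,
	"maximum number of concurrent part uploads to S3")

func main() {
	flag.Parse()
	fmt.Println("limiting concurrent part uploads to", *concurrentUploads)
}
```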
* s3store: Add one more comment
* Allow empty metadata values
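A sketch of parsing an Upload-Metadata header under this rule: a key may now appear without a value and maps to an empty string instead of being rejected (error handling is simplified for brevity):

```go
package main

import (
	"encoding/base64"
	"fmt"
	"strings"
)

// parseMetadata splits the comma-separated pairs of an Upload-Metadata
// header; each pair is a key optionally followed by a base64 value.
func parseMetadata(header string) map[string]string {
	meta := make(map[string]string)
	for _, pair := range strings.Split(header, ",") {
		fields := strings.Fields(pair)
		if len(fields) == 0 {
			continue
		}
		value := ""
		if len(fields) > 1 {
			if decoded, err := base64.StdEncoding.DecodeString(fields[1]); err == nil {
				value = string(decoded)
			}
		}
		meta[fields[0]] = value
	}
	return meta
}

func main() {
	// "filename" carries a value, "is_confidential" has none.
	fmt.Println(parseMetadata("filename ZXhhbXBsZS50eHQ=, is_confidential"))
}
```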
* Make tests less fragile by allowing loose call ordering
* Add s3ChunkProducer
* Integrate s3ChunkProducer to support chunk buffering
* Remove completed chunk files inline to reduce disk space usage
* Add tests for chunk producer
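A sketch of the producer idea with illustrative names: one goroutine reads the request body, buffers each chunk into a temporary file, and hands the files to the uploading goroutines over a channel.

```go
package s3store

import (
	"io"
	"os"
)

// s3ChunkProducer turns the request body into chunk-sized temp files;
// all names here are illustrative.
type s3ChunkProducer struct {
	r     io.Reader     // the upload's request body
	files chan *os.File // completed chunk files, ready to upload
	done  chan struct{} // closed by the consumer to stop production
	err   error         // first error hit while producing, if any
}

// produce reads chunks until the body is drained, the consumer signals
// done, or an error occurs.
func (p *s3ChunkProducer) produce(chunkSize int64) {
	defer close(p.files)
	for {
		file, err := p.nextChunk(chunkSize)
		if err != nil {
			p.err = err
			return
		}
		if file == nil { // reader exhausted
			return
		}
		select {
		case p.files <- file:
		case <-p.done: // consumer gave up; discard and stop
			file.Close()
			os.Remove(file.Name())
			return
		}
	}
}

// nextChunk copies up to chunkSize bytes into a fresh temp file and
// rewinds it; it returns nil when no data is left.
func (p *s3ChunkProducer) nextChunk(chunkSize int64) (*os.File, error) {
	file, err := os.CreateTemp("", "tusd-s3-chunk-")
	if err != nil {
		return nil, err
	}
	n, err := io.Copy(file, io.LimitReader(p.r, chunkSize))
	if err != nil || n == 0 {
		// On error or an empty read, the temp file is not needed.
		file.Close()
		os.Remove(file.Name())
		if err != nil {
			return nil, err
		}
		return nil, nil
	}
	if _, err := file.Seek(0, io.SeekStart); err != nil {
		file.Close()
		os.Remove(file.Name())
		return nil, err
	}
	return file, nil
}
```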
* docs: Use value from Host header to forward to tusd
* Use int64 for MaxBufferedParts field
* Default to 20 buffered parts
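Sketch of how the limit can bound disk usage, assuming the channel-based hand-off sketched above: with a buffered channel, the cap falls out of the channel's capacity.

```go
// Hedged sketch; only the MaxBufferedParts name is taken from the
// commits above, the surrounding layout is illustrative.
type S3Store struct {
	// MaxBufferedParts is how many completed parts may wait on disk
	// for an upload slot before the producer blocks. Defaults to 20.
	MaxBufferedParts int64
}

// newFileChannel sizes the hand-off channel from the limit, so a full
// buffer blocks the producer instead of filling the disk.
func (s S3Store) newFileChannel() chan *os.File {
	return make(chan *os.File, s.MaxBufferedParts)
}
```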
* Rename s3ChunkProducer -> s3PartProducer
* Document s3PartProducer struct
* Clarify misleading comment
* Revert "Remove completed chunk files inline to reduce disk space usage"
This reverts commit b72a4d43d6.
* Remove redundant seek
This is already being done in s3PartProducer.
* Clean up any remaining files in the channel when we return
* Make putPart* functions responsible for cleaning up temp files
* handler: Add tests for empty metadata pairs
* Factor out cleanUpTempFile func
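A sketch of the factored-out helper, together with draining the channel on return (matching the earlier "clean up any remaining files" commit):

```go
// cleanUpTempFile closes and removes a temporary chunk file; errors are
// deliberately ignored because the file is disposable.
func cleanUpTempFile(file *os.File) {
	file.Close()
	os.Remove(file.Name())
}

// drainFiles disposes of any chunk files still buffered in the channel,
// so an early return does not leak them onto disk. It relies on the
// producer closing the channel when it stops.
func drainFiles(files <-chan *os.File) {
	for file := range files {
		cleanUpTempFile(file)
	}
}
```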
* Add test to ensure that temporary files get cleaned up
Co-authored-by: Jens Steinhauser <jens.steinhauser@gmail.com>
Co-authored-by: Marius <marius@transloadit.com>