Squashed commit of the following:
commit e8b5b3751a86d86cae10e0bcf89caa481e5c3de6
Author: Marius <marius@transloadit.com>
Date: Sun Jun 19 12:15:22 2022 +0200
Fix generated mocks
commit 736e2e7bb6
Merge: 9d7096f 1e69d9b
Author: Stefan Scheidewig <stefan.scheidewig@staffbase.com>
Date: Sat Jun 18 07:53:29 2022 +0200
Merge branch 'v2' into readcloser_in_getreader
commit 9d7096fcb3
Author: Stefan Scheidewig <stefan.scheidewig@staffbase.com>
Date: Tue May 24 14:16:01 2022 +0200
Return ReadCloser in getReader
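
A minimal sketch of this interface change, assuming an abbreviated form of the Upload interface in tusd's handler package (the real interface has more methods). Returning an io.ReadCloser lets callers release the underlying resource, such as an open file or an S3 response body:

```go
package handler

import (
	"context"
	"io"
)

// Upload is abbreviated here to the one method this change touches.
type Upload interface {
	// GetReader returns the upload's content. It now returns an
	// io.ReadCloser instead of a plain io.Reader, so the caller is
	// responsible for calling Close once it is done reading.
	GetReader(ctx context.Context) (io.ReadCloser, error)
}
```
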
* ci: Remove plugin hook handler
* Rework error type from interface to struct
* Avoid writing to http.ResponseWriter directly
* Allow hooks to modify response
* Add example for HTTP hooks using Python
* Implement new plugin system using HashiCorp/go-plugin (see the sketch after this list)
* Enable returning partial HTTPResponses
* Remove some (unnecessary) error handling
* Forward stdout and stderr from plugin to tusd
* docs: Update examples
* cli: Update filehooks to new system
* cli: Renovate gRPC hooks
* docs: Correct casing of gRPC
* misc: Documentation, better examples, and code structure
* Add first draft of parallel upload queue
* s3store: Use queue for parallel uploads
* Revert "Add first draft of parallel upload queue"
This reverts commit 86a329cef2.
* Revert "s3store: Use queue for parallel uploads"
This reverts commit 29b59a2c90.
* s3store: Cache results from listing parts and checking incomplete object
* s3store: Remove debugging output
* s3store: Make requests for fetching info concurrently
* s3store: Make parallel uploads work and tests pass
* s3store: Add semaphore package (see the sketch after this list)
* s3store: Add comments to semaphore package
* s3store: Encapsulate more logic into s3PartProducer
* s3store: Refactor WriteChunk
* s3store: Remove TODO
* s3store: Acquire lock before uploading
* cli: Add flag for setting concurrency limit
* s3store: One more comment
* Allow empty metadata values (see the parsing sketch after this list)
* Make tests less fragile by allowing loose call ordering
* Add s3ChunkProducer
* Integrate s3ChunkProducer to support chunk buffering
* Remove completed chunk files inline to reduce disk space usage
* Add tests for chunk producer
* docs: Use value from Host header to forward to tusd
* Use int64 for MaxBufferedParts field
* Default to 20 buffered parts
* Rename s3ChunkProducer -> s3PartProducer
* Document s3PartProducer struct (see the producer sketch after this list)
* Clarify misleading comment
* Revert "Remove completed chunk files inline to reduce disk space usage"
This reverts commit b72a4d43d6.
* Remove redundant seek
This is already being done in s3PartProducer.
* Clean up any remaining files in the channel when we return
* Make putPart* functions responsible for cleaning up temp files
* handler: Add tests for empty metadata pairs
* Factor out cleanUpTempFile func
* Add test to ensure that temporary files get cleaned up
Co-authored-by: Jens Steinhauser <jens.steinhauser@gmail.com>
Co-authored-by: Marius <marius@transloadit.com>
* Add MetadataObjectPrefix field to S3Store
* Add metadataKeyWithPrefix helper function (see the sketch after this list)
* Use metadataKeyWithPrefix for .info and .part operations
* Add s3store tests for metadata object prefixes
* Clarify ObjectPrefix docs
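
For the go-plugin rework listed above, a hedged sketch of how a hook plugin can be wired up over HashiCorp go-plugin's net/rpc transport. The HookHandler interface, the "hooks" plugin name, and the handshake cookie values are illustrative assumptions, not tusd's actual definitions:

```go
package main

import (
	"net/rpc"

	"github.com/hashicorp/go-plugin"
)

// The handshake must match between tusd (the host) and every hook plugin.
// These cookie values are placeholders, not tusd's real ones.
var handshake = plugin.HandshakeConfig{
	ProtocolVersion:  1,
	MagicCookieKey:   "TUSD_PLUGIN",
	MagicCookieValue: "example",
}

// HookHandler is a hypothetical interface exposed across the plugin boundary.
type HookHandler interface {
	InvokeHook(name string) (string, error)
}

// HookPlugin adapts a HookHandler to go-plugin's net/rpc transport.
type HookPlugin struct{ Impl HookHandler }

func (p *HookPlugin) Server(*plugin.MuxBroker) (interface{}, error) {
	return &HookRPCServer{Impl: p.Impl}, nil
}

func (p *HookPlugin) Client(b *plugin.MuxBroker, c *rpc.Client) (interface{}, error) {
	return &HookRPCClient{client: c}, nil
}

// HookRPCServer runs inside the plugin process and calls the real handler.
type HookRPCServer struct{ Impl HookHandler }

func (s *HookRPCServer) InvokeHook(name string, resp *string) error {
	out, err := s.Impl.InvokeHook(name)
	*resp = out
	return err
}

// HookRPCClient runs inside tusd and forwards calls over RPC.
type HookRPCClient struct{ client *rpc.Client }

func (c *HookRPCClient) InvokeHook(name string) (string, error) {
	var resp string
	err := c.client.Call("Plugin.InvokeHook", name, &resp)
	return resp, err
}

// exampleHandler is what a plugin author would implement.
type exampleHandler struct{}

func (exampleHandler) InvokeHook(name string) (string, error) {
	return "handled " + name, nil
}

func main() {
	// Serve blocks, exposing exampleHandler to the host process (tusd).
	plugin.Serve(&plugin.ServeConfig{
		HandshakeConfig: handshake,
		Plugins: map[string]plugin.Plugin{
			"hooks": &HookPlugin{Impl: exampleHandler{}},
		},
	})
}
```
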
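The semaphore package used to cap concurrent part uploads can be as small as a buffered channel. A minimal sketch of that pattern (package layout and names may differ from the actual tusd tree):

```go
// Package semaphore implements a counting semaphore on top of a buffered
// channel. s3store uses it to limit how many parts are uploaded at once;
// the limit is wired to the CLI's concurrency flag.
package semaphore

type Semaphore chan struct{}

// New returns a semaphore that admits up to concurrency concurrent holders.
func New(concurrency int) Semaphore {
	return make(Semaphore, concurrency)
}

// Acquire blocks until a slot becomes available.
func (s Semaphore) Acquire() {
	s <- struct{}{}
}

// Release frees a slot acquired earlier.
func (s Semaphore) Release() {
	<-s
}
```
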
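"Allow empty metadata values" concerns the tus Upload-Metadata header, in which a key may appear without a base64-encoded value. A hedged parsing sketch; parseMetadataHeader is an illustrative name rather than tusd's actual function:

```go
package handler

import (
	"encoding/base64"
	"strings"
)

// parseMetadataHeader parses an Upload-Metadata header such as
// "filename ZXhhbXBsZS50eHQ=,empty". A key without a value is accepted
// and stored as an empty string instead of being rejected.
func parseMetadataHeader(header string) map[string]string {
	meta := make(map[string]string)
	for _, pair := range strings.Split(header, ",") {
		parts := strings.Fields(pair)
		if len(parts) == 0 {
			continue
		}
		key := parts[0]
		if len(parts) == 1 {
			meta[key] = "" // empty value is allowed
			continue
		}
		if value, err := base64.StdEncoding.DecodeString(parts[1]); err == nil {
			meta[key] = string(value)
		}
	}
	return meta
}
```
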
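The s3PartProducer commits describe a goroutine that slices the upload body into temporary part files, buffers a bounded number of them on a channel (MaxBufferedParts, defaulting to 20), and leaves the putPart* consumers responsible for deleting each file through a shared cleanUpTempFile helper. A sketch under those assumptions; field and type names are illustrative:

```go
package s3store

import (
	"io"
	"os"
)

// fileChunk is one buffered part waiting to be uploaded.
type fileChunk struct {
	file *os.File
	size int64
}

// partProducer reads the upload body into temp files of at most partSize
// bytes and hands them to the uploading goroutines via the files channel.
type partProducer struct {
	in    io.Reader
	files chan<- fileChunk
	done  <-chan struct{}
	err   error
}

func (p *partProducer) produce(partSize int64) {
	defer close(p.files)
	for {
		f, err := os.CreateTemp("", "tusd-s3-part-")
		if err != nil {
			p.err = err
			return
		}
		n, copyErr := io.CopyN(f, p.in, partSize)
		if n == 0 {
			// Nothing left to read; drop the empty temp file.
			cleanUpTempFile(f)
			if copyErr != nil && copyErr != io.EOF {
				p.err = copyErr
			}
			return
		}
		if _, err := f.Seek(0, io.SeekStart); err != nil {
			cleanUpTempFile(f)
			p.err = err
			return
		}
		select {
		case p.files <- fileChunk{file: f, size: n}:
		case <-p.done:
			// The consumer stopped early: clean up and bail out.
			cleanUpTempFile(f)
			return
		}
		if copyErr == io.EOF {
			return
		}
		if copyErr != nil {
			p.err = copyErr
			return
		}
	}
}

// cleanUpTempFile closes and removes a buffered part once it is no longer
// needed, keeping disk usage bounded while an upload is in flight.
func cleanUpTempFile(f *os.File) {
	f.Close()
	os.Remove(f.Name())
}
```
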
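And for the MetadataObjectPrefix addition: .info and .part objects are keyed through a small helper that falls back to ObjectPrefix when no dedicated metadata prefix is configured, so existing buckets keep their layout. A sketch of that helper:

```go
package s3store

// S3Store is reduced here to the two fields the helper consults.
type S3Store struct {
	ObjectPrefix         string
	MetadataObjectPrefix string
}

// metadataKeyWithPrefix builds the S3 key for auxiliary objects such as the
// .info and .part files. With MetadataObjectPrefix unset it falls back to
// ObjectPrefix, so upgrading does not relocate any existing objects.
func (store S3Store) metadataKeyWithPrefix(key string) string {
	prefix := store.MetadataObjectPrefix
	if prefix == "" {
		prefix = store.ObjectPrefix
	}
	return prefix + key
}
```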