* feat: separate test job into separate workflow
* feat: add dependabot for github actions, docker and go
* feat: new release workflow
* feat: add docker build step
* chore: remove unused steps in the Dockerfile
Since the build is multi-stage, we don't need to remove tools like `git`, because only the binary is copied into the final image
* refactor: Dockerfile now caches better
Layers that are less likely to change are added earlier
* build: use golang `1.17.2` as builder stage
* build: use alpine `3.14.2` as runtime stage
* chore: remove `gcc` from runtime stage
* feat: add heroku step
* chore: remove `main.yaml` workflow
* fix: remove `latest` flavor, the action handles it
See https://github.com/tus/tusd/pull/480
Squashed commit of the following:
commit 7439fd84a6103afdedaf94701a65ce4376789380
Author: Marius <marius@transloadit.com>
Date: Mon Oct 18 00:27:12 2021 +0200
Docs and test
commit 16d9dc67e8c8eefc328b1ce12d7e7ca01a49f9f6
Merge: bae0ffb bea5183
Author: Marius <marius@transloadit.com>
Date: Mon Oct 18 00:23:13 2021 +0200
Merge branch 'head_header_check' of https://github.com/s3rius/tusd into s3rius-head_header_check
commit bea5183ec3
Author: Pavel Kirilin <win10@list.ru>
Date: Thu May 20 19:53:36 2021 +0400
Fixed "Tus-Resumable" header check for HEAD request.
Signed-off-by: Pavel Kirilin <win10@list.ru>
* Only determine the object type based on the name after the last separator
* Modify test to account for directory prefixes that contain underscores
* Update documentation to reflect support for underscores in the GCS object prefix
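A minimal sketch of the idea, assuming a hypothetical `classifyObject` helper (the name and suffix rules are illustrative, not the actual gcsstore code): only the portion of the object name after the last `/` is inspected, so underscores in the directory prefix no longer affect classification.

```go
package main

import (
	"fmt"
	"strings"
)

// classifyObject is an illustrative helper, not tusd's actual gcsstore code:
// it looks only at the final path component when deciding what kind of
// object a name refers to, so a prefix such as "my_dir/uploads" with an
// underscore is not mistaken for a partial-upload suffix like "<id>_1".
func classifyObject(name string) string {
	base := name
	if i := strings.LastIndex(name, "/"); i != -1 {
		base = name[i+1:]
	}

	switch {
	case strings.HasSuffix(base, ".info"):
		return "info file"
	case strings.Contains(base, "_"):
		return "partial upload object"
	default:
		return "upload object"
	}
}

func main() {
	fmt.Println(classifyObject("my_dir/abcdef.info")) // info file
	fmt.Println(classifyObject("my_dir/abcdef_1"))    // partial upload object
	fmt.Println(classifyObject("my_dir/abcdef"))      // upload object
}
```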
The currently used Go version, 1.12, is too old, so Heroku no longer compiles tusd: https://github.com/tus/tusd/runs/3886723478
We use Go 1.16 and not 1.17 because we always support the two latest major releases.
* Add azure-storage-blob-go dependency
* Implement Azure BlobStorage store
* Add AzureStore Mock test
* Refactor Blob interfaces to use uppercase fields
* Refactor and remove the Create function
When getting the offset and the service returns the status code BlobNotFound, we can treat the offset as 0 and start from the beginning (see the sketch after this list)
* Update the mock
* Refactor error checking of GetOffset to actually check the service code
* Begin testing azurestore
* Write more tests
* New feature: allow setting the access type on new containers and the blob access tier
* Write more docs
* Upgrade azure-storage-blob-go to v0.13.0
* Remove AzError, not needed
* Update link to container access type information
* Remove ?toc from link in comments
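A minimal sketch of the offset handling described above, using the `azblob` package from azure-storage-blob-go; the function name and surrounding wiring are assumptions, not the store's actual implementation:

```go
package main

import (
	"errors"
	"fmt"

	"github.com/Azure/azure-storage-blob-go/azblob"
)

// offsetFromError is an illustrative helper: when fetching the blob's
// properties to determine the upload offset fails with the BlobNotFound
// service code, nothing has been written yet, so the offset is 0 and the
// upload starts from the beginning. Any other error is propagated.
func offsetFromError(err error) (int64, error) {
	var stgErr azblob.StorageError
	if errors.As(err, &stgErr) && stgErr.ServiceCode() == azblob.ServiceCodeBlobNotFound {
		return 0, nil
	}
	return 0, err
}

func main() {
	// With a nil error the fallback path is not taken; this only shows the
	// helper in isolation.
	offset, err := offsetFromError(nil)
	fmt.Println(offset, err)
}
```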
* Remove trailing spaces from workflow
* Run tests with go1.15 and 1.16
* Don't fail fast
This lets all other jobs complete and makes it easier to see whether a failure is a one-off or occurs across different OSes and Go versions
* Remove darwin 386 from `build_all.sh` script
Support for it was removed in Go 1.15: https://github.com/golang/go/issues/37610
* Update go version in `Dockerfile`
* Compile for Apple Silicon (darwin arm64)
Only Go 1.16 and newer support it
* Add TLS support to tusd
* Adds `-tls-certificate`, `-tls-key`, and `-tls-mode` flags
* Alter printed URL to reflect protocol in use
* For non-TLS, do an early exit if http.Serve() returns
* Configure TLS for the following modes (see the sketch after this list):
  * TLSv1.3-only
  * (default) TLSv1.3+TLSv1.2, with recommended ciphersuites from Mozilla + matching RSA key transport modes
  * TLSv1.2 only, with only 256-bit AES ciphersuites
* All modes disable HTTP/2, given that it’s not supported in non-TLS mode
* Update documentation
* Remove RSA-based key transport ciphersuites as they don’t support forward secrecy
* Improve the TLS/HTTPS example in the usage documentation
Signed-off-by: Joey Coleman <joey.coleman@kirasystems.com>
* Update docs further to a) record that the key file must be unencrypted, and b) clean up the RSA-based ciphersuite comments.
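A minimal sketch of how the three modes could map onto Go's `crypto/tls`; the mode identifiers, helper name, and certificate paths are assumptions, not tusd's exact flags or code:

```go
package main

import (
	"crypto/tls"
	"log"
	"net/http"
)

// tlsConfigForMode returns an illustrative tls.Config for the three modes
// described above.
func tlsConfigForMode(mode string) *tls.Config {
	switch mode {
	case "tls13":
		// TLSv1.3 only.
		return &tls.Config{MinVersion: tls.VersionTLS13}
	case "tls12-strong":
		// TLSv1.2 only, restricted to 256-bit AES ciphersuites.
		return &tls.Config{
			MinVersion: tls.VersionTLS12,
			MaxVersion: tls.VersionTLS12,
			CipherSuites: []uint16{
				tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
				tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
			},
		}
	default:
		// Default: TLSv1.3+TLSv1.2 with forward-secret ciphersuites.
		return &tls.Config{
			MinVersion: tls.VersionTLS12,
			CipherSuites: []uint16{
				tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
				tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
				tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
				tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
			},
		}
	}
}

func main() {
	server := &http.Server{
		Addr:      ":8443",
		TLSConfig: tlsConfigForMode("tls13"),
		// A non-nil, empty TLSNextProto map disables HTTP/2, matching the
		// behaviour in non-TLS mode.
		TLSNextProto: map[string]func(*http.Server, *tls.Conn, http.Handler){},
	}
	// Certificate and key paths are placeholders for the -tls-certificate
	// and -tls-key flags.
	log.Fatal(server.ListenAndServeTLS("cert.pem", "key.pem"))
}
```

Note that Go ignores `CipherSuites` for TLS 1.3 connections, so the restriction only affects TLS 1.2 handshakes.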
* Enable S3 transfer acceleration in SDK
* Format better
* Set up for flag changes
* Place feature behind new flag
* Fix a docs issue
Co-authored-by: Cloud User <centos@ip-10-0-0-184.us-west-2.compute.internal>
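A minimal sketch of enabling transfer acceleration on the aws-sdk-go client; the helper name, region, and flag wiring are assumptions, only the `S3UseAccelerate` option itself comes from the SDK:

```go
package main

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

// newS3Client is an illustrative helper: when the new flag is set, the SDK is
// told to use the S3 Transfer Acceleration endpoints for bucket operations.
func newS3Client(enableAcceleration bool) *s3.S3 {
	sess := session.Must(session.NewSession(&aws.Config{
		Region:          aws.String("us-east-1"), // placeholder region
		S3UseAccelerate: aws.Bool(enableAcceleration),
	}))
	return s3.New(sess)
}

func main() {
	// Real code would hand this client to the S3 store.
	client := newS3Client(true)
	_ = client
}
```

Transfer Acceleration must also be enabled on the bucket itself for the accelerated endpoint to accept requests.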
* Allow empty metadata values
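A minimal sketch of parsing an `Upload-Metadata` header while accepting keys without values, which is what this change allows; the function is illustrative and not necessarily tusd's actual parsing code:

```go
package main

import (
	"encoding/base64"
	"fmt"
	"strings"
)

// parseMetadata is an illustrative sketch: each comma-separated pair is a key
// optionally followed by a space and a base64-encoded value, and a key with
// no value is kept with an empty string instead of being dropped.
func parseMetadata(header string) map[string]string {
	meta := map[string]string{}
	for _, pair := range strings.Split(header, ",") {
		parts := strings.Fields(pair)
		if len(parts) == 0 {
			continue
		}
		value := ""
		if len(parts) > 1 {
			decoded, err := base64.StdEncoding.DecodeString(parts[1])
			if err != nil {
				continue // skip pairs with invalid base64
			}
			value = string(decoded)
		}
		meta[parts[0]] = value
	}
	return meta
}

func main() {
	// "filename" has a value, "is_confidential" is an empty-value key.
	fmt.Println(parseMetadata("filename ZXhhbXBsZS50eHQ=, is_confidential"))
}
```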
* Make tests less fragile by allowing loose call ordering
* Add s3ChunkProducer
* Integrate s3ChunkProducer to support chunk buffering (see the sketch at the end of this list)
* Remove completed chunk files inline to reduce disk space usage
* Add tests for chunk producer
* docs: Use value from Host header to forward to tusd
* Use int64 for MaxBufferedParts field
* Default to 20 buffered parts
* Rename s3ChunkProducer -> s3PartProducer
* Document s3PartProducer struct
* Clarify misleading comment
* Revert "Remove completed chunk files inline to reduce disk space usage"
This reverts commit b72a4d43d6.
* Remove redundant seek
This is already being done in s3PartProducer.
* Clean up any remaining files in the channel when we return
* Make putPart* functions responsible for cleaning up temp files
* handler: Add tests for empty metadata pairs
* Factor out cleanUpTempFile func
* Add test to ensure that temporary files get cleaned up
Co-authored-by: Jens Steinhauser <jens.steinhauser@gmail.com>
Co-authored-by: Marius <marius@transloadit.com>
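A minimal sketch of the part-producer idea referenced above (type names, helpers, and details are assumptions, not the actual `s3PartProducer`): the source is split into part-sized temporary files that are handed to the uploader through a buffered channel, and anything still queued is removed when returning early.

```go
package main

import (
	"fmt"
	"io"
	"os"
	"strings"
)

// partProducer is an illustrative sketch, not the actual s3PartProducer: it
// splits the incoming upload into part-sized temporary files and hands them
// to the uploading goroutine through a buffered channel.
type partProducer struct {
	src      io.Reader
	partSize int64
	parts    chan *os.File
}

// produce reads parts until the source is exhausted. The channel's buffer
// size limits how many finished parts may wait on disk at once.
func (p partProducer) produce() error {
	defer close(p.parts)
	for {
		file, err := os.CreateTemp("", "s3-part-")
		if err != nil {
			return err
		}
		n, err := io.CopyN(file, p.src, p.partSize)
		if err != nil && err != io.EOF {
			file.Close()
			os.Remove(file.Name())
			return err
		}
		if n == 0 {
			// Nothing left to read; discard the empty temp file.
			file.Close()
			os.Remove(file.Name())
			return nil
		}
		// Rewind so the consumer can read the part from the start, making a
		// second seek on the consumer side redundant.
		if _, serr := file.Seek(0, io.SeekStart); serr != nil {
			file.Close()
			os.Remove(file.Name())
			return serr
		}
		p.parts <- file
		if err == io.EOF {
			return nil
		}
	}
}

// drain removes any parts still waiting in the channel, mirroring the
// "clean up any remaining files in the channel when we return" step above.
func drain(parts chan *os.File) {
	for file := range parts {
		file.Close()
		os.Remove(file.Name())
	}
}

func main() {
	parts := make(chan *os.File, 20) // 20 mirrors the MaxBufferedParts default
	producer := partProducer{
		src:      strings.NewReader("0123456789abcdef"),
		partSize: 5,
		parts:    parts,
	}
	go func() { _ = producer.produce() }()

	// Upload the first part, then simulate an early return and clean up.
	if file, ok := <-parts; ok {
		fmt.Println("uploading part from", file.Name())
		file.Close()
		os.Remove(file.Name())
	}
	drain(parts)
}
```

Bounding the channel buffer is what keeps the number of completed-but-not-yet-uploaded part files on disk in check.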