* Use GetObject instead of HeadObject to locate incomplete part
This avoids confusion around the errors that are returned by HeadObject, especially when the request has limited permissions for the bucket.
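A minimal sketch of the lookup, assuming an aws-sdk-go v1 client; the function name and the error handling are illustrative rather than the exact tusd code:

```go
import (
	"io"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/service/s3"
)

// getIncompletePart fetches the ".part" object for an upload, if any.
func getIncompletePart(svc *s3.S3, bucket, uploadID string) (io.ReadCloser, error) {
	obj, err := svc.GetObject(&s3.GetObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(uploadID + ".part"),
	})
	if err != nil {
		// Unlike HeadObject, GetObject reports a missing object with an
		// explicit NoSuchKey code instead of a bare 403/404 status.
		if aerr, ok := err.(awserr.Error); ok && aerr.Code() == "NoSuchKey" {
			return nil, nil // no incomplete part exists
		}
		return nil, err
	}
	return obj.Body, nil
}
```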
* Remove unused HeadObject function
* Add DeleteObject to S3API interface
* Use DeleteObject to remove .part objects
* Update tests
Previously, some data would be discarded if the user paused the upload.
Pausing causes the connection to be interrupted, which makes Go's
net/http return an io.ErrUnexpectedEOF.
Before https://github.com/tus/tusd/pull/219 from @acj this went
unnoticed since the data was discarded anyway. However, after that
PR, every byte can now be saved to S3 even if we don't have enough
data for a multipart upload part.
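A minimal sketch of tolerating that interruption so the already-received bytes are kept; the helper is illustrative, not the exact tusd implementation:

```go
import "io"

// copyChunk copies up to limit bytes from the request body into dst. A paused
// upload cuts the connection, so net/http's body reader yields
// io.ErrUnexpectedEOF; treat that as a normal end of data so the bytes read
// so far can still be stored as an incomplete ".part" object.
func copyChunk(dst io.Writer, src io.Reader, limit int64) (int64, error) {
	n, err := io.Copy(dst, io.LimitReader(src, limit))
	if err == io.ErrUnexpectedEOF {
		err = nil // keep the n bytes we already have instead of discarding them
	}
	return n, err
}
```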
* Handle "NotFound" error code from HeadObject
This accommodates third-party implementations of the S3 interface, such as Minio, which may return a different error string.
* Check NotFound error string in test
This also fixes an incorrect return value.
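A minimal sketch of detecting a missing object by accepting either error code, assuming aws-sdk-go v1; the helper name is illustrative:

```go
import "github.com/aws/aws-sdk-go/aws/awserr"

// isAwsError reports whether err is an AWS error with the given code.
func isAwsError(err error, code string) bool {
	if aerr, ok := err.(awserr.Error); ok {
		return aerr.Code() == code
	}
	return false
}

// A missing object is then detected by accepting either code:
//
//	if isAwsError(err, "NoSuchKey") || isAwsError(err, "NotFound") {
//		// object does not exist
//	}
```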
* Add HeadObject function to S3API
* Regenerate S3API mock
* Include incomplete part size in the offset
* Add CRUD functions for managing incomplete parts
* Account for incomplete parts in S3Store's Terminate
* Account for incomplete parts in S3Store's WriteChunk
* Factor out writeInfo function
* Declare support for deferred length in S3Store
* Add test for S3Store's DeclareLength
* Adapt S3Store tests to new implementation
* Add PutObjectInputMatcher test helper
* Add test for prepending incomplete parts
* Add GetInfo test for incomplete parts
* Update S3Store docs
* Consistently handle NoSuchKey errors from S3
* Handle both 403 and 404 responses from HeadObject
If the IAM role doesn't have permission to list the contents of the bucket, then HEAD requests will return 403 for nonexistent objects.
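A minimal sketch of treating both status codes as "not found", assuming aws-sdk-go v1; the wrapper is illustrative:

```go
import "github.com/aws/aws-sdk-go/aws/awserr"

// headObjectNotFound reports whether a HeadObject error means the object is
// missing. Without s3:ListBucket permission, S3 answers 403 instead of 404
// for nonexistent keys, so both are treated the same.
func headObjectNotFound(err error) bool {
	if reqErr, ok := err.(awserr.RequestFailure); ok {
		return reqErr.StatusCode() == 403 || reqErr.StatusCode() == 404
	}
	return false
}
```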
* Add ObjectPrefix field to S3Store
* Integrate S3ObjectPrefix with Flags
* Account for S3ObjectPrefix flag in CreateComposer
* Account for ObjectPrefix in S3Store operations
* Add test for configuring an S3Store with ObjectPrefix
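A minimal sketch of how a configured prefix could be applied to every key the store touches; the keyWithPrefix helper is illustrative and only the ObjectPrefix field itself comes from the changes above:

```go
import "github.com/aws/aws-sdk-go/aws"

// S3Store with only the fields relevant to this sketch.
type S3Store struct {
	Bucket       string
	ObjectPrefix string
}

// keyWithPrefix prepends the configured ObjectPrefix to a key, so that all
// objects belonging to an upload (the object itself, ".info" and ".part")
// end up under e.g. "my-prefix/<upload-id>".
func (store S3Store) keyWithPrefix(key string) *string {
	return aws.String(store.ObjectPrefix + key)
}
```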
See https://github.com/tus/tusd/issues/149 and
https://github.com/tus/tusd/pull/150 for more details.
Squashed commit of the following:
commit 78312ab26ea7ee664038e5b5d362bd534bfe0e37
Author: Marius <maerious@gmail.com>
Date: Fri Sep 1 19:49:48 2017 +0200
Correct error assertions for exceeding max part size
commit 9350712c0a46651e6a7a91d8819307ba4b08ec7e
Author: Marius <maerious@gmail.com>
Date: Fri Sep 1 19:44:28 2017 +0200
Make CalcOptimalPartSize unexported
commit 593f3b2d37d16c51f229572c1d6b39fc2a234079
Author: Marius <maerious@gmail.com>
Date: Fri Sep 1 19:38:46 2017 +0200
Add more output for debugging tests
commit b7193bfe67b535c9b9dd441610b41af11fe4538f
Author: Marius <maerious@gmail.com>
Date: Fri Sep 1 19:35:48 2017 +0200
Extract size assertions into own function
commit 7521de23194652519fbbf3d61a41ef0b44b005fa
Author: Marius <maerious@gmail.com>
Date: Fri Sep 1 19:26:48 2017 +0200
Move tests for CalcPartSize into own file
commit 6c483de7710cc119c870271ccad629c98c15c9a3
Author: Marius <maerious@gmail.com>
Date: Fri Sep 1 19:13:02 2017 +0200
Use same assertions in AllUploadSizes test
commit 7b0290a07e7def09ea8ed982e7817a2ea7cd468a
Author: Marius <maerious@gmail.com>
Date: Fri Sep 1 18:30:02 2017 +0200
Split negative test case from TestCalcOptimalPartSize into own test
commit 79c0a20d7bc71b494bc0824ad2aa8879b0c2900b
Merge: 5240f9b 997961f
Author: Marius <maerious@gmail.com>
Date: Fri Sep 1 17:32:31 2017 +0200
Merge branch 'f-s3-part-size' of https://github.com/flaneurtv/tusd into flaneurtv-f-s3-part-size
commit 997961ff5c
Author: Markus Kienast <mark@rickkiste.at>
Date: Fri Sep 1 00:59:38 2017 +0200
TestNewUploadLargerMaxObjectSize
commit 0831bd79f8
Author: Markus Kienast <mark@rickkiste.at>
Date: Thu Aug 31 23:08:03 2017 +0200
fmt.Sprintf removed, range from 0 - MaxObjectSize+1
commit 1be7081524
Author: Markus Kienast <mark@rickkiste.at>
Date: Tue Aug 29 10:23:50 2017 +0200
turn off debug mode
commit be9a9bec10
Author: Markus Kienast <mark@rickkiste.at>
Date: Tue Aug 29 10:12:20 2017 +0200
moved MaxObjectSize check to NewUpload, refined tests
* moved MaxObjectSize check to NewUpload
* removed MaxObjectSize check from CalcOptimalPartSize
* switched to assert in tests
* added TestAllPartSizes, excluded in short mode
TODO: TestNewUploadLargerMaxObjectSize needs to fail if MaxObjectSize > size
commit 7c22847a45
Author: Markus Kienast <mark@rickkiste.at>
Date: Sat Aug 26 12:55:07 2017 +0200
adding debug code to TestCalcOptimalPartSize
commit 5240f9b549000fac34be79ddfbe6e82404387f6b
Merge: 63c011e 5b116e7
Author: Marius <maerious@gmail.com>
Date: Sat Aug 26 12:50:51 2017 +0200
Merge branch 'f-s3-part-size' of https://github.com/flaneurtv/tusd into flaneurtv-f-s3-part-size
commit 63c011ef768db42e99004df921c2b9e5c4776fd2
Author: Marius <maerious@gmail.com>
Date: Sat Aug 26 12:50:45 2017 +0200
Format s3store_test
commit 5b116e7087
Author: Markus Kienast <mark@rickkiste.at>
Date: Sat Aug 26 12:24:22 2017 +0200
restructuring tests to accommodate optimalPartSize of 0
commit 93134a5696
Author: Markus Kienast <mark@rickkiste.at>
Date: Sat Aug 26 12:03:18 2017 +0200
moving MaxObjectSize check to top
commit 68e6bb8c41
Author: Markus Kienast <mark@rickkiste.at>
Date: Sat Aug 26 02:31:27 2017 +0200
enhance readability, comments and errors
commit 8831a98c34
Author: Markus Kienast <mark@rickkiste.at>
Date: Thu Aug 24 02:27:57 2017 +0200
separated partsize calc and error handling
commit f059acc7cc
Author: Markus Kienast <mark@rickkiste.at>
Date: Thu Aug 24 01:29:26 2017 +0200
fixed edge cases; pre-cleanup
commit e2e3b9ffe4
Author: Markus Kienast <mark@rickkiste.at>
Date: Wed Aug 23 13:28:59 2017 +0200
added error, when size > MaxObjectSize; additional case in algorithm + tests; go fmt
commit 381d3326cb
Author: Markus Kienast <mark@rickkiste.at>
Date: Thu Aug 17 16:32:25 2017 +0200
calculating PartSize based on size of upload
simplified algorithm, respect MaxObjectSize, updated tests, go fmt
commit 1ad6187d6d
Author: koenvo <info@koenvossen.nl>
Date: Thu Aug 17 21:31:37 2017 +0200
Take IsTruncated field of S3 ListParts API response into account (#148)
* Take IsTruncated field of S3 ListParts API response into account
* Rename s3store.ListParts to ListAllParts
* Use proper formatting + make listAllParts private + test listAllParts through TestGetInfo
* Update TestFinishUpload to also test paged ListParts response
commit 5a268dbafb9318b888142931ea27a1af10b9a8e7
Author: Marius <maerious@gmail.com>
Date: Wed Jul 19 11:47:26 2017 +0200
Remove manual assignment of upload ID in S3Store
commit a37e149090ee7fd5f170d24ccc33b8af9ae18fae
Author: Marius <maerious@gmail.com>
Date: Wed Jul 19 11:42:00 2017 +0200
Format Go code
commit 6643a9be62
Author: Markus Kienast <mark@rickkiste.at>
Date: Sun Jul 16 17:08:24 2017 +0200
fixed ID value in .info; adjusted tests; fixed assert(expected, received) swap
AWS does not handle non-ASCII metadata values well since they are
transported in HTTP header values which, by specification, should
only contain ASCII characters. If you supply AWS with, for example,
UTF-8 encoded strings, it will reject the request due to mismatching
signatures. Our solution is to replace these characters with question
marks.
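A minimal sketch of that replacement; the helper name is illustrative:

```go
import "unicode"

// encodeMetadataValue replaces every non-ASCII rune in a metadata value with
// a question mark so the value can be sent safely as an HTTP header to S3.
func encodeMetadataValue(value string) string {
	ascii := make([]rune, 0, len(value))
	for _, r := range value {
		if r > unicode.MaxASCII {
			r = '?'
		}
		ascii = append(ascii, r)
	}
	return string(ascii)
}
```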