Compare commits

..

190 Commits

Author SHA1 Message Date
semantic-release-bot 66b9cd2022 chore(release): 0.1.0-develop.1 [skip ci]
# [0.1.0-develop.1](https://git.lumeweb.com/LumeWeb/portal/compare/v0.0.1...v0.1.0-develop.1) (2023-08-15)

### Bug Fixes

* abort if we don't have a password for the account, assume its pubkey only ([c20dec0](c20dec0204))
* add a check for a 500 error ([df08fc9](df08fc980a))
* add missing request connection close ([dff3ca4](dff3ca4589))
* add shutdown signal and flag for renterd ([fb65690](fb65690abd))
* **auth:** eager load the account relation to return it ([a23d165](a23d165caa))
* change jwtKey to ed25519.PrivateKey ([bf576df](bf576dfaee))
* close db on shutdown ([78ee15c](78ee15cf4b))
* Ctx must be public ([a0d747f](a0d747fdf4))
* ctx needs to be public in AuthService ([a3cfeba](a3cfebab30))
* **db:** need to set charset, parseTime and loc in connection for mysql ([5d15ca3](5d15ca330a))
* disable client warnings ([9b8cb38](9b8cb38496))
* dont try to stream if we have an error ([b21a425](b21a425e24))
* encode size as uint64 to the end of the cid ([5aca66d](5aca66d919))
* ensure all models auto increment the id field ([934f8e6](934f8e6236))
* ensure we store the pubkey in lowercase ([def1b50](def1b50cfc))
* handle duplicate tus uploads by hash ([f3172b0](f3172b0d31))
* hasher needs the size set to 32 ([294370d](294370d88d))
* if upload status code isn't 200, make it an err based on the body ([039a4a3](039a4a3354))
* if uploading returns a 500 and its a slab error, treat as a 404 ([6ddef03](6ddef03790))
* if we have an existing upload, just return it as if successful ([90170e5](90170e5b81))
* iris context.User needs to be embedded in our User struct for type checking to properly work ([1cfc222](1cfc2223a6))
* just use the any route ([e100429](e100429b60))
* load config before db ([58165e0](58165e01af))
* make an attempt to look for the token before adding to db ([f11b285](f11b285d4e))
* missing setting SetTusComposer ([80561f8](80561f89e9))
* newer gorm version causes db rebuilds every boot ([72255eb](72255eb3c5))
* only panic if the error is other than a missing config file ([6e0ec8a](6e0ec8aaf9))
* output error info ([cfa7ceb](cfa7ceb2f4))
* PostPubkeyChallenge should be lowercasing the pubkey for consistency ([d680f06](d680f0660f))
* PostPubkeyChallenge should be using ChallengeRequest ([36745bb](36745bb55b))
* PostPubkeyChallenge should not be checking email, but pubkey ([db3ba1f](db3ba1f014))
* PostPubkeyLogin should be lowercasing the pubkey and signature ([09d53ff](09d53ffa76))
* PostPubkeyLogin should not preload any model ([27e7ea7](27e7ea7d7a))
* properly handle missing size bytes ([c0df04d](c0df04d7d5))
* public_key should be pubkey ([09b9f19](09b9f195f4))
* register LoginSession model ([48164ec](48164ec320))
* register request validation ([c197b14](c197b1425b))
* remove PrivateKey, rename PublicKey in Key model ([00f2b96](00f2b962a0))
* rewrite gorm query logic for tus uploads ([f8aaeff](f8aaeff6de))
* rewrite sql logic ([ce1b5e3](ce1b5e31d5))
* rewrite streaming logic and centralize in a helper function ([bb26cfc](bb26cfca5b))
* save upload info after every chunk ([038d2c4](038d2c440b))
* temp workaround on race condition ([e2db880](e2db880038))
* **tus:** switch to normal clone package, not generic ([faaec64](faaec649ea))
* update default flag values ([241db4d](241db4deb6))
* update model relationships ([628f1b4](628f1b4aca))
* **upload:** add account to upload record ([e018a4b](e018a4b743))
* uploading of main file ([7aea462](7aea462ab7))
* upstream renterd updates ([5ad91ad](5ad91ad263))
* use AccountID not Account ([f5e4377](f5e437777a))
* use bufio reader ([90e4ce6](90e4ce6408))
* use challengeObj ([9b82fa7](9b82fa7828))
* use database.path over database.name ([25c7d6d](25c7d6d4fb))
* use getWorkerObjectUrl ([4ff1334](4ff1334d8a))
* Use gorm save, and return nil if successful ([26042b6](26042b62ac))
* we can't use AddHandler inside BeginRequest ([f941ee4](f941ee46d4))
* wrap Register api in an atomic transaction to avoid dead locks ([e09e51b](e09e51bb52))
* wrong algo ([86380c7](86380c7b3a))

### Features

* add a status endpoint and move cid validation to a utility method ([38b7615](38b76155af))
* add a Status method for uploads ([1f195cf](1f195cf328))
* add auth status endpoint ([1dd4fa2](1dd4fa22cd))
* add bao package and rust bao wasm library ([4c649bf](4c649bfcb9))
* add cid package ([706f7a0](706f7a05b9))
* add ComputeFile bao RPC method ([687f26c](687f26cc77))
* add debug mode logging support ([99d7b83](99d7b8347a))
* add download endpoint ([79fd550](79fd550c54))
* add EncodeString function ([488f873](488f8737c0))
* add files service with upload endpoint ([b16beeb](b16beebabb))
* add files/upload/limit endpoint ([b77bebe](b77bebe3b1))
* add getCurrentUserId helper function ([29d6db2](29d6db2009))
* add global cors ([1f5a3d1](1f5a3d19e4))
* add jwt package ([ea99108](ea99108327))
* add more validation, and put account creation, with optional pubkey in a transaction ([699e424](699e4244e0))
* add new user service object that implements iris context User interface ([a14dad4](a14dad43ed))
* add newrelic support ([06b3ab8](06b3ab87f7))
* add pin model ([aaa2c17](aaa2c17212))
* add pin service method ([8692a02](8692a0225e))
* add PostPinBy controller endpoint for pinning a file ([be03a6c](be03a6c686))
* add pprof support ([ee17409](ee17409e12))
* add proof download ([3b1e860](3b1e860256))
* add StringHash ([118c679](118c679f76))
* add swagger support ([49c3844](49c3844406))
* add upload model ([f73a04b](f73a04bb2e))
* add Valid, and Decode methods, and create CID struct ([4e6c29f](4e6c29f1fd))
* add validation to account register ([7257b5d](7257b5d597))
* generate and/or load an ed25519 private key for jwt token generation ([85a0295](85a02952df))
* initial dnslink support ([cd2f63e](cd2f63eb72))
* pin file after basic upload ([892f093](892f093d93))
* pin file after tus upload ([5579ab8](5579ab85a3))
* tus support ([3005be6](3005be6fec))
* wip version ([9a4c3d5](9a4c3d5d13))
2023-08-15 06:18:56 +00:00
Derrick Hammer 9879662d5b
ci: add semantic-release pkgs 2023-08-15 02:16:22 -04:00
Derrick Hammer cd2f63eb72
feat: initial dnslink support 2023-08-15 02:11:55 -04:00
Derrick Hammer 3e80bb43fa
reactor: revert
Revert "feat: add pprof support"

This reverts commit ee17409e12.

Revert "fix: just use the any route"

This reverts commit e100429b60.
2023-08-14 23:17:25 -04:00
Derrick Hammer e100429b60
fix: just use the any route 2023-08-09 03:28:42 -04:00
Derrick Hammer ee17409e12
feat: add pprof support 2023-08-09 03:03:12 -04:00
Derrick Hammer 18529f2cd1
refactor: Revert "feat: add newrelic support"
This reverts commit 06b3ab87f7.
2023-08-09 02:36:24 -04:00
Derrick Hammer 06b3ab87f7
feat: add newrelic support 2023-08-05 17:19:03 -04:00
Derrick Hammer 18e102cc8a
refactor: always ensure the db connection closes by using a defer 2023-08-05 17:17:26 -04:00
Derrick Hammer f11b285d4e
fix: make an attempt to look for the token before adding to db 2023-08-04 12:54:45 -04:00
Derrick Hammer a7ac5a5b72
refactor: change generateToken to set audience based on a type to separate auth and challenge tokens 2023-08-04 12:54:13 -04:00
Derrick Hammer e2db880038
fix: temp workaround on race condition 2023-08-04 12:53:13 -04:00
Derrick Hammer e09e51bb52
fix: wrap Register api in an atomic transaction to avoid dead locks 2023-08-04 11:51:18 -04:00
Derrick Hammer dff3ca4589
fix: add missing request connection close 2023-08-04 11:46:25 -04:00
Derrick Hammer 8d3f490c01
Merge remote-tracking branch 'origin/develop' into develop 2023-08-03 08:49:20 -04:00
Derrick Hammer 78ee15cf4b
fix: close db on shutdown 2023-08-03 08:48:49 -04:00
Derrick Hammer 1cfc2223a6
fix: iris context.User needs to be embedded in our User struct for type checking to properly work 2023-06-29 07:05:46 -04:00
Derrick Hammer a23d165caa
fix(auth): eager load the account relation to return it 2023-06-29 07:04:24 -04:00
Derrick Hammer 934f8e6236
fix: ensure all models auto increment the id field 2023-06-29 06:19:50 -04:00
Derrick Hammer 504dcefb35
ci: allow both "deps" and "dep" to be a patch 2023-06-29 06:01:38 -04:00
Derrick Hammer 76d3043dda
deps: update 2023-06-29 06:01:05 -04:00
Derrick Hammer faaec649ea
fix(tus): switch to normal clone package, not generic 2023-06-29 06:00:45 -04:00
Derrick Hammer ceb729f11d
refactor(tus): add auth requirement on TUS and add support for tracking and storing the uploader throughout the upload lifecycle 2023-06-29 05:48:56 -04:00
Derrick Hammer 0bc862e35d
dep: used forked tusd 2023-06-29 05:46:51 -04:00
Derrick Hammer 53f29c99bc
dep: update package deps 2023-06-29 05:46:37 -04:00
Derrick Hammer e018a4b743
fix(upload): add account to upload record 2023-06-29 05:42:59 -04:00
Derrick Hammer 637b656d36
refactor(auth): move getCurrentUserId to auth package and make public 2023-06-29 05:41:26 -04:00
Derrick Hammer 5d15ca330a
fix(db): need to set charset, parseTime and loc in connection for mysql 2023-06-29 02:54:31 -04:00
Derrick Hammer 993b9e8208
ci: add .releaserc.json 2023-06-29 00:38:11 -04:00
Derrick Hammer 66f2545781
ci: add dummy index.html 2023-06-28 01:56:03 -04:00
Derrick Hammer 2062562f6b
ci: ensure app dir exists 2023-06-28 01:53:22 -04:00
Derrick Hammer b122626e97
ci: fix swag command path 2023-06-28 01:49:19 -04:00
Derrick Hammer 976394b29d
ci: setup swagger build 2023-06-28 01:38:18 -04:00
Derrick Hammer 914313a585
ci: setup and add semantic-release 2023-06-28 01:33:42 -04:00
Derrick Hammer 1f5a3d19e4
feat: add global cors 2023-06-28 01:31:55 -04:00
Derrick Hammer 1dd4fa22cd
feat: add auth status endpoint 2023-06-15 01:26:36 -04:00
Derrick Hammer 30ad92fb8d
refactor: rename to FileStatusResponse 2023-06-15 00:25:38 -04:00
Derrick Hammer ce1b5e31d5
fix: rewrite sql logic 2023-06-11 03:57:56 -04:00
Derrick Hammer bb26cfca5b
fix: rewrite streaming logic and centralize in a helper function 2023-06-11 03:19:07 -04:00
Derrick Hammer 4ff1334d8a
fix: use getWorkerObjectUrl 2023-06-11 03:17:32 -04:00
Derrick Hammer c197b1425b
fix: register request validation 2023-06-11 02:04:36 -04:00
Derrick Hammer c0df04d7d5
fix: properly handle missing size bytes 2023-06-11 01:38:19 -04:00
Derrick Hammer 385a51e504
refactor: fix not_found status code 2023-06-10 02:50:20 -04:00
Derrick Hammer b104af5e4c
refactor: change to use hash metadata key 2023-06-10 01:59:56 -04:00
Derrick Hammer b77bebe3b1
feat: add files/upload/limit endpoint 2023-06-10 01:58:45 -04:00
Derrick Hammer 86380c7b3a
fix: wrong algo 2023-06-10 01:15:17 -04:00
Derrick Hammer 9b82fa7828
fix: use challengeObj 2023-06-10 01:02:31 -04:00
Derrick Hammer bf576dfaee
fix: change jwtKey to ed25519.PrivateKey 2023-06-10 00:54:57 -04:00
Derrick Hammer 3b1e860256
feat: add proof download 2023-06-09 15:52:58 -04:00
Derrick Hammer 160a9f7ebb
refactor: use getWorkerProofUrl 2023-06-09 15:48:41 -04:00
Derrick Hammer 85a02952df
feat: generate and/or load an ed25519 private key for jwt token generation 2023-06-09 15:36:45 -04:00
Derrick Hammer da0efcdd0c
refactor: store config path options in ConfigFilePaths 2023-06-09 15:35:33 -04:00
Derrick Hammer 0d0a46e5e1
refactor: use errors.Is and gorm.ErrRecordNotFound 2023-06-09 07:57:06 -04:00
Derrick Hammer be03a6c686
feat: add PostPinBy controller endpoint for pinning a file 2023-06-09 07:39:43 -04:00
Derrick Hammer 29d6db2009
feat: add getCurrentUserId helper function 2023-06-09 07:38:59 -04:00
Derrick Hammer 40309311bd
refactor: Set the current user in the request for VerifyJwt middleware 2023-06-09 07:38:21 -04:00
Derrick Hammer a9d153a22f
refactor: modify VerifyLoginToken to return a pointer to the account model 2023-06-09 07:37:45 -04:00
Derrick Hammer a14dad43ed
feat: add new user service object that implements iris context User interface 2023-06-09 07:36:44 -04:00
Derrick Hammer 892f093d93
feat: pin file after basic upload 2023-06-09 07:06:33 -04:00
Derrick Hammer 5579ab85a3
feat: pin file after tus upload 2023-06-09 07:04:52 -04:00
Derrick Hammer 8692a0225e
feat: add pin service method 2023-06-09 07:04:06 -04:00
Derrick Hammer 9e52cd671b
refactor: standardize errors to global error objects 2023-06-09 06:24:42 -04:00
Derrick Hammer d1d4f6b679
refactor: try to decode the token claim for pre-verification 2023-06-09 04:29:18 -04:00
Derrick Hammer 16f2ac3604
refactor: verify the token is a valid format, then check the db, then validate, and if it fails, delete from the db 2023-06-09 04:26:50 -04:00
Derrick Hammer f941ee46d4
fix: we can't use AddHandler inside BeginRequest 2023-06-09 04:16:58 -04:00
Derrick Hammer e98e2d0c89
refactor: add jwt auth middleware to files controller 2023-06-09 04:06:03 -04:00
Derrick Hammer 34be432af7
refactor: use controller base class 2023-06-09 04:05:19 -04:00
Derrick Hammer 73e1c5a363
refactor: move all primary logic to service packages and standardize error objects 2023-06-09 04:03:29 -04:00
Derrick Hammer d18be0acc8
refactor: move error helpers to controller.go 2023-06-07 13:17:49 -04:00
Derrick Hammer 98fd2a097e
refactor: move more response structs to response package 2023-06-07 13:17:11 -04:00
Derrick Hammer cfa7ceb2f4
fix: output error info 2023-06-07 13:12:37 -04:00
Derrick Hammer 2f7c31d53c
refactor: completely restructure validation. split request and respond structs to their own package 2023-06-07 13:04:38 -04:00
Derrick Hammer bfbf13a57d
refactor: use tryParseRequest 2023-06-07 08:50:29 -04:00
Derrick Hammer 9d843bffdb
refactor: use tryParseRequest 2023-06-07 08:49:07 -04:00
Derrick Hammer f3e43f522f
refactor: add validation for all auth request structs 2023-06-06 23:28:33 -04:00
Derrick Hammer dd8e5704c8
refactor: rename checkPubkey to CheckPubkeyValidator 2023-06-06 23:27:43 -04:00
Derrick Hammer 9bacd95c9d
refactor: move to ozzo-validation 2023-06-06 23:16:34 -04:00
Derrick Hammer 27e7ea7d7a
fix: PostPubkeyLogin should not preload any model 2023-06-06 22:28:58 -04:00
Derrick Hammer 09d53ffa76
fix: PostPubkeyLogin should be lowercasing the pubkey and signature 2023-06-06 22:28:40 -04:00
Derrick Hammer d680f0660f
fix: PostPubkeyChallenge should be lowercasing the pubkey for consistency 2023-06-06 22:28:17 -04:00
Derrick Hammer 36745bb55b
fix: PostPubkeyChallenge should be using ChallengeRequest 2023-06-06 22:27:34 -04:00
Derrick Hammer db3ba1f014
fix: PostPubkeyChallenge should not be checking email, but pubkey 2023-06-06 22:27:07 -04:00
Derrick Hammer c20dec0204
fix: abort if we don't have a password for the account, assume its pubkey only 2023-06-06 22:05:49 -04:00
Derrick Hammer def1b50cfc
fix: ensure we store the pubkey in lowercase 2023-06-06 22:04:59 -04:00
Derrick Hammer f3172b0d31
fix: handle duplicate tus uploads by hash 2023-06-06 17:25:29 -04:00
Derrick Hammer f8aaeff6de
fix: rewrite gorm query logic for tus uploads 2023-06-06 17:01:54 -04:00
Derrick Hammer 99d7b8347a
feat: add debug mode logging support 2023-06-06 16:37:22 -04:00
Derrick Hammer 670bc9d64c
refactor: enable automatic env parsing 2023-06-06 16:35:58 -04:00
Derrick Hammer 4831b8b68f
refactor: need to add renterd-api-password config arg 2023-06-06 16:35:20 -04:00
Derrick Hammer 38b76155af
feat: add a status endpoint and move cid validation to a utility method 2023-06-06 16:34:05 -04:00
Derrick Hammer 1f195cf328
feat: add a Status method for uploads 2023-06-06 16:33:14 -04:00
Derrick Hammer d0e59c8729
refactor: no longer embed renterd 2023-06-06 16:32:27 -04:00
Derrick Hammer 72255eb3c5
fix: newer gorm version causes db rebuilds every boot 2023-06-02 04:48:46 -04:00
Derrick Hammer 8331136f7f
chore: update renterd 2023-05-31 19:28:24 -04:00
Derrick Hammer 325ab7044f
refactor: sync cli options and env code with upstream 2023-05-31 00:17:02 -04:00
Derrick Hammer d1742265b6
chore: update renterd 2023-05-31 00:15:36 -04:00
Derrick Hammer 09cd274d29
chore: update renterd 2023-05-29 13:06:32 -04:00
Derrick Hammer 26042b62ac
fix: Use gorm save, and return nil if successful 2023-05-23 20:16:26 -04:00
Derrick Hammer 038d2c440b
fix: save upload info after every chunk 2023-05-23 20:15:49 -04:00
Derrick Hammer 96ac75bf3f
refactor: add logging 2023-05-23 20:15:24 -04:00
Derrick Hammer 56d61895f5
refactor: pass id to FileInfo and use info in fileUpload 2023-05-23 20:15:08 -04:00
Derrick Hammer 89ef950432
refactor: use provided file hash 2023-05-23 20:14:21 -04:00
Derrick Hammer 396b3f60a8
refactor: move terminateUpload db logic to store 2023-05-23 20:12:48 -04:00
Derrick Hammer e7d1bd0f09
refactor: add getStore helper 2023-05-23 20:10:51 -04:00
Derrick Hammer e8c232dfdd
refactor: change shared to use interfaces to avoid an import cycle 2023-05-23 20:10:17 -04:00
Derrick Hammer 39936b3b14
refactor: create a new tus store that uses the db for meta instead of the filesystem 2023-05-22 19:07:06 -04:00
Derrick Hammer 7845f95776
refactor: move logger to its own package 2023-05-22 19:05:53 -04:00
Derrick Hammer 6d5b9d880b
refactor: deduplicate building api urls 2023-05-22 17:14:32 -04:00
Derrick Hammer 4b712a3a80
refactor: see if proof exists and only if both the proof and file are not 404, do we abort as already existing 2023-05-22 16:36:19 -04:00
Derrick Hammer 7fe05862b1
chore: update renterd 2023-05-22 11:02:59 -04:00
Derrick Hammer 90170e5b81
fix: if we have an existing upload, just return it as if successful 2023-05-22 11:02:47 -04:00
Derrick Hammer ed6220fc7d
refactor: optionally compare passed hash with computed one and reject if they don't match 2023-05-22 11:00:24 -04:00
Derrick Hammer 09f9a5bdfd
refactor: update id fields 2023-05-22 10:59:16 -04:00
Derrick Hammer 75e5838b01
refactor: move tus record delete logic to terminateUpload and delete by tus upload id 2023-05-22 10:26:48 -04:00
Derrick Hammer 6ddef03790
fix: if uploading returns a 500 and its a slab error, treat as a 404 2023-05-19 09:05:40 -04:00
Derrick Hammer bef2ed7431
refactor: add logging 2023-05-19 09:04:47 -04:00
Derrick Hammer 748cac542e
refactor: add zap logger 2023-05-17 13:35:22 -04:00
Derrick Hammer 0a90ff6439
refactor: add terminateUpload method 2023-05-17 13:34:27 -04:00
Derrick Hammer 80561f89e9
fix: missing setting SetTusComposer 2023-05-17 13:34:05 -04:00
Derrick Hammer df08fc980a
fix: add a check for a 500 error 2023-05-17 13:33:22 -04:00
Derrick Hammer 76b6fb34fe
chore: update renterd 2023-05-17 10:55:06 -04:00
Derrick Hammer 033522222f
chore: remove unused deps 2023-05-17 10:53:42 -04:00
Derrick Hammer ee33da755c
refactor: use BaoEncodedSize 2023-05-17 10:12:22 -04:00
Derrick Hammer aa702ffd02
refactor: move to new golang bao implementation 2023-05-17 09:52:25 -04:00
Derrick Hammer 2f514c02be
refactor: move shared global state to a shared package 2023-05-16 18:46:08 -04:00
Derrick Hammer 503cb55c55
refactor: tus needs to move to its own package 2023-05-16 18:45:32 -04:00
Derrick Hammer 55d8dda6e8
refactor: have the Download method check for a tus upload thats still in progress and use it if a upload item does not exist 2023-05-16 18:42:03 -04:00
Derrick Hammer 4548de5c60
refactor: change storing path to just ID and fetch from upload via the tus store 2023-05-16 17:11:38 -04:00
Derrick Hammer 673f7c6dfd
refactor: have Upload take both a io.ReaderSeeker and os.File and update usages based of if we are streaming a small file or handling a big one via filename 2023-05-15 15:47:46 -04:00
Derrick Hammer 687f26cc77
feat: add ComputeFile bao RPC method 2023-05-15 15:45:05 -04:00
Derrick Hammer 35878a2427
chore: update deps 2023-05-15 12:36:08 -04:00
Derrick Hammer 3005be6fec
feat: tus support 2023-05-15 12:36:00 -04:00
Derrick Hammer a8d2ad3393
refactor: move to a go-plugin based GRPC approach for bao 2023-05-15 12:34:55 -04:00
Derrick Hammer 435445dda5
refactor: change where to use a struct 2023-05-11 15:25:31 -04:00
Derrick Hammer 294370d88d
fix: hasher needs the size set to 32 2023-05-11 15:24:49 -04:00
Derrick Hammer b44b12f85e
refactor: change download controller method to use a path argument and not a query 2023-05-10 15:09:18 -04:00
Derrick Hammer 90e4ce6408
fix: use bufio reader 2023-05-10 14:50:36 -04:00
Derrick Hammer b48db1d8c4
refactor: add Download function to files service 2023-05-10 14:41:12 -04:00
Derrick Hammer 73bc836cbc
refactor: change files controller to use new files service api 2023-05-10 14:40:29 -04:00
Derrick Hammer 118c679f76
feat: add StringHash 2023-05-10 14:36:45 -04:00
Derrick Hammer a93add8f70
refactor: create new files package with Upload 2023-05-10 14:28:32 -04:00
Derrick Hammer 2dae0c8687
refactor: rename services to controllers 2023-05-10 14:23:22 -04:00
Derrick Hammer 488f8737c0
feat: add EncodeString function 2023-05-10 14:17:50 -04:00
Derrick Hammer 8f3af2084c
refactor: rename services to controllers 2023-05-10 07:07:56 -04:00
Derrick Hammer 6ceefc11cf
refactor: make encode fixed method to take a [32]byte, and change Encode to take a byte array that just copies and calls EncodeFixed 2023-05-09 12:48:48 -04:00
Derrick Hammer b21a425e24
fix: dont try to stream if we have an error 2023-05-08 10:16:47 -04:00
Derrick Hammer 9b17557d14
chore: update deps 2023-05-08 10:11:19 -04:00
Derrick Hammer 79fd550c54
feat: add download endpoint 2023-05-08 10:10:57 -04:00
Derrick Hammer fb65690abd
fix: add shutdown signal and flag for renterd 2023-05-08 10:09:33 -04:00
Derrick Hammer 4e6c29f1fd
feat: add Valid, and Decode methods, and create CID struct 2023-05-08 10:09:00 -04:00
Derrick Hammer 2dc9d4dcf6
refactor: rename encode method 2023-05-06 04:38:09 -04:00
Derrick Hammer 5ba0111d08
core: update deps 2023-05-06 03:56:48 -04:00
Derrick Hammer 9b8cb38496
fix: disable client warnings 2023-05-06 03:56:38 -04:00
Derrick Hammer 7aea462ab7
fix: uploading of main file 2023-05-04 09:11:31 -04:00
Derrick Hammer 5aca66d919
fix: encode size as uint64 to the end of the cid 2023-05-04 08:16:44 -04:00
Derrick Hammer 479df7eb39
chore: update deps 2023-05-04 08:01:46 -04:00
Derrick Hammer 039a4a3354
fix: if upload status code isn't 200, make it an err based on the body 2023-05-04 07:58:06 -04:00
Derrick Hammer fbc9133df5
chore: update deps 2023-05-04 04:21:51 -04:00
Derrick Hammer 37033bf45c
refactor: register files service 2023-05-04 04:21:39 -04:00
Derrick Hammer ca3d3588a4
refactor: call renterd earlier but wait until its ready 2023-05-04 04:20:39 -04:00
Derrick Hammer b16beebabb
feat: add files service with upload endpoint 2023-05-04 04:18:38 -04:00
Derrick Hammer 4c649bfcb9
feat: add bao package and rust bao wasm library 2023-05-04 04:18:15 -04:00
Derrick Hammer 706f7a05b9
feat: add cid package 2023-05-04 04:14:47 -04:00
Derrick Hammer 13d1adb717
refactor: add upload and pin models to migration 2023-05-04 04:14:31 -04:00
Derrick Hammer ea99108327
feat: add jwt package 2023-05-04 04:13:53 -04:00
Derrick Hammer aaa2c17212
feat: add pin model 2023-05-04 04:13:27 -04:00
Derrick Hammer f73a04bb2e
feat: add upload model 2023-05-04 04:13:19 -04:00
Derrick Hammer 782ac58ed2
refactor: update LoginSession 2023-05-04 04:13:00 -04:00
Derrick Hammer 7bb7edb7d9
refactor: make seed/password public to be called, and create a chan to be checked with Ready to see if the renter is ready 2023-05-04 04:12:26 -04:00
Derrick Hammer 5ad91ad263
fix: upstream renterd updates 2023-05-04 04:11:11 -04:00
Derrick Hammer a3cfebab30
fix: ctx needs to be public in AuthService 2023-04-30 04:49:19 -04:00
Derrick Hammer 699e4244e0
feat: add more validation, and put account creation, with optional pubkey in a transaction 2023-04-30 04:48:56 -04:00
Derrick Hammer 00f2b962a0
fix: remove PrivateKey, rename PublicKey in Key model 2023-04-30 04:47:32 -04:00
Derrick Hammer 48164ec320
fix: register LoginSession model 2023-04-30 04:47:07 -04:00
Derrick Hammer 7257b5d597
feat: add validation to account register 2023-04-30 04:09:42 -04:00
Derrick Hammer 53264dfb24
refactor: change var name to not conflict 2023-04-30 03:30:19 -04:00
Derrick Hammer b023105b21
refactor: setup api versioning 2023-04-30 03:30:03 -04:00
Derrick Hammer a0d747fdf4
fix: Ctx must be public 2023-04-30 03:29:38 -04:00
Derrick Hammer 44d0a90ca0
style: comment formatting 2023-04-30 02:46:47 -04:00
Derrick Hammer c42d2b5dfe
core: fix code comment 2023-04-30 02:46:11 -04:00
Derrick Hammer 593410c137
chore: update deps 2023-04-30 02:18:55 -04:00
Derrick Hammer 49c3844406
feat: add swagger support 2023-04-30 02:18:42 -04:00
Derrick Hammer f5e437777a
fix: use AccountID not Account 2023-04-30 02:17:33 -04:00
Derrick Hammer 09b9f195f4
fix: public_key should be pubkey 2023-04-30 02:16:53 -04:00
Derrick Hammer 628f1b4aca
fix: update model relationships 2023-04-30 02:16:32 -04:00
Derrick Hammer 25c7d6d4fb
fix: use database.path over database.name 2023-04-30 02:15:57 -04:00
Derrick Hammer 6e0ec8aaf9
fix: only panic if the error is other than a missing config file 2023-04-30 02:14:44 -04:00
Derrick Hammer 241db4deb6
fix: update default flag values 2023-04-30 02:14:14 -04:00
Derrick Hammer 58165e01af
fix: load config before db 2023-04-30 02:10:52 -04:00
Derrick Hammer 9a4c3d5d13
feat: wip version 2023-04-29 13:40:43 -04:00
49 changed files with 12147 additions and 1 deletion

51
.github/workflows/ci.yml vendored Normal file

@ -0,0 +1,51 @@
name: Build/Publish
on:
workflow_dispatch:
inputs:
debug_enabled:
description: Debug
type: boolean
default: false
push:
branches:
- master
- develop
- develop-*
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Use Node.js
uses: actions/setup-node@v3
with:
node-version: 18.x
cache: 'npm'
- run: npm ci
- name: Setup Swagger
run: |
go install github.com/swaggo/swag/cmd/swag@latest;
$HOME/go/bin/swag init -g main.go;
$HOME/go/bin/swag fmt;
- name: Build
run: |
mkdir app;
touch app/index.html
go build .;
- name: Install SSH key
uses: shimataro/ssh-key-action@v2
with:
key: ${{ secrets.GITEA_SSH_KEY }}
known_hosts: ${{ secrets.GITEA_KNOWN_HOST }}
- name: Publish
run: npm run semantic-release
env:
NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
- name: Setup tmate session
uses: mxschmitt/action-tmate@v3
if: ${{ github.event_name == 'workflow_dispatch' && inputs.debug_enabled && failure() }}
with:
limit-access-to-actor: true

76
.releaserc.json Normal file

@ -0,0 +1,76 @@
{
"plugins": [
[
"@semantic-release/commit-analyzer",
{
"releaseRules": [
{
"breaking": true,
"release": "major"
},
{
"revert": true,
"release": "patch"
},
{
"type": "feat",
"release": "minor"
},
{
"type": "fix",
"release": "patch"
},
{
"type": "perf",
"release": "patch"
},
{
"type": "dep",
"release": "patch"
},
{
"type": "deps",
"release": "patch"
},
{
"type": "refactor",
"release": "patch"
}
]
}
],
"@semantic-release/release-notes-generator",
[
"@semantic-release/changelog",
{
"changelogFile": "CHANGELOG.md"
}
],
[
"@semantic-release/npm",
{
"npmPublish": false
}
],
[
"@semantic-release/git",
{
"assets": [
"package.json",
"CHANGELOG.md"
]
}
]
],
"branches": [
"master",
{
"name": "develop",
"prerelease": true
},
{
"name": "develop-*",
"prerelease": true
}
]
}

100
CHANGELOG.md Normal file

@ -0,0 +1,100 @@
# [0.1.0-develop.1](https://git.lumeweb.com/LumeWeb/portal/compare/v0.0.1...v0.1.0-develop.1) (2023-08-15)
### Bug Fixes
* abort if we don't have a password for the account, assume its pubkey only ([c20dec0](https://git.lumeweb.com/LumeWeb/portal/commit/c20dec020437d91cf2728852b8bed5c4a0c481e9))
* add a check for a 500 error ([df08fc9](https://git.lumeweb.com/LumeWeb/portal/commit/df08fc980ac3f710a67bd692b8126eb978699d5b))
* add missing request connection close ([dff3ca4](https://git.lumeweb.com/LumeWeb/portal/commit/dff3ca45895095b82ba2e76b2e61487e28151b7d))
* add shutdown signal and flag for renterd ([fb65690](https://git.lumeweb.com/LumeWeb/portal/commit/fb65690abd5c190dce30d3cfe0d079b27040a309))
* **auth:** eager load the account relation to return it ([a23d165](https://git.lumeweb.com/LumeWeb/portal/commit/a23d165caa3ba4832c9d37a0b833b9b58df60732))
* change jwtKey to ed25519.PrivateKey ([bf576df](https://git.lumeweb.com/LumeWeb/portal/commit/bf576dfaeef51078d7bdae885550fc235d49c1eb))
* close db on shutdown ([78ee15c](https://git.lumeweb.com/LumeWeb/portal/commit/78ee15cf4b5d3a55209a9c7559700a2c5b227f87))
* Ctx must be public ([a0d747f](https://git.lumeweb.com/LumeWeb/portal/commit/a0d747fdf4e6ee3fa6a3b4dca180e4f14af30ed9))
* ctx needs to be public in AuthService ([a3cfeba](https://git.lumeweb.com/LumeWeb/portal/commit/a3cfebab307a87bc895d7b1c1f0e6632a708562c))
* **db:** need to set charset, parseTime and loc in connection for mysql ([5d15ca3](https://git.lumeweb.com/LumeWeb/portal/commit/5d15ca330abd26576ef9865c110975aeb27c3ab3))
* disable client warnings ([9b8cb38](https://git.lumeweb.com/LumeWeb/portal/commit/9b8cb38496541b0ab50d28eef63658f9723c5802))
* dont try to stream if we have an error ([b21a425](https://git.lumeweb.com/LumeWeb/portal/commit/b21a425e24f5543802e7267369f37967d4805697))
* encode size as uint64 to the end of the cid ([5aca66d](https://git.lumeweb.com/LumeWeb/portal/commit/5aca66d91981d8fae88194df6b03c239dbd179a8))
* ensure all models auto increment the id field ([934f8e6](https://git.lumeweb.com/LumeWeb/portal/commit/934f8e6236ef1eef8db1d06a1d7a7fded8afe694))
* ensure we store the pubkey in lowercase ([def1b50](https://git.lumeweb.com/LumeWeb/portal/commit/def1b50cfcba8c68f3b95209790418638374fad9))
* handle duplicate tus uploads by hash ([f3172b0](https://git.lumeweb.com/LumeWeb/portal/commit/f3172b0d31f844b95a0e64b3a5d821f71b0fbe07))
* hasher needs the size set to 32 ([294370d](https://git.lumeweb.com/LumeWeb/portal/commit/294370d88dd159ae173a6a955a417a1547de60ed))
* if upload status code isn't 200, make it an err based on the body ([039a4a3](https://git.lumeweb.com/LumeWeb/portal/commit/039a4a33547a59b4f3ec86199664b5bb94d258a6))
* if uploading returns a 500 and its a slab error, treat as a 404 ([6ddef03](https://git.lumeweb.com/LumeWeb/portal/commit/6ddef03790971e346fa0a7d33a462f39348bc6cc))
* if we have an existing upload, just return it as if successful ([90170e5](https://git.lumeweb.com/LumeWeb/portal/commit/90170e5b81831f3d768291fd37c7c13e32d522fe))
* iris context.User needs to be embedded in our User struct for type checking to properly work ([1cfc222](https://git.lumeweb.com/LumeWeb/portal/commit/1cfc2223a6df614f26fd0337ced68d92e774589f))
* just use the any route ([e100429](https://git.lumeweb.com/LumeWeb/portal/commit/e100429b60e783f6c7c3ddecab7bb9b4dd599726))
* load config before db ([58165e0](https://git.lumeweb.com/LumeWeb/portal/commit/58165e01af9f2b183d654d3d8809cbd1eda0a9bb))
* make an attempt to look for the token before adding to db ([f11b285](https://git.lumeweb.com/LumeWeb/portal/commit/f11b285d4e255c1c4c95f6ac15aa904d7a5730e4))
* missing setting SetTusComposer ([80561f8](https://git.lumeweb.com/LumeWeb/portal/commit/80561f89e92dfa86887ada8361e0046ee6288234))
* newer gorm version causes db rebuilds every boot ([72255eb](https://git.lumeweb.com/LumeWeb/portal/commit/72255eb3c50892aa5f2cfdc4cb1daa5883f0affc))
* only panic if the error is other than a missing config file ([6e0ec8a](https://git.lumeweb.com/LumeWeb/portal/commit/6e0ec8aaf90e86bcb7cb6c8c53f6569e6885e0aa))
* output error info ([cfa7ceb](https://git.lumeweb.com/LumeWeb/portal/commit/cfa7ceb2f422a6e594a424315c8eaeffc6572926))
* PostPubkeyChallenge should be lowercasing the pubkey for consistency ([d680f06](https://git.lumeweb.com/LumeWeb/portal/commit/d680f0660f910e323356a1169ee13ef2e647a015))
* PostPubkeyChallenge should be using ChallengeRequest ([36745bb](https://git.lumeweb.com/LumeWeb/portal/commit/36745bb55b1d7cd464b085e410333089504591c1))
* PostPubkeyChallenge should not be checking email, but pubkey ([db3ba1f](https://git.lumeweb.com/LumeWeb/portal/commit/db3ba1f0148b6abc34b4606f9b8103963a3c6850))
* PostPubkeyLogin should be lowercasing the pubkey and signature ([09d53ff](https://git.lumeweb.com/LumeWeb/portal/commit/09d53ffa7645b64aed4170e698b8eb62d2c3590e))
* PostPubkeyLogin should not preload any model ([27e7ea7](https://git.lumeweb.com/LumeWeb/portal/commit/27e7ea7d7a0bbf6c147ff625591acf6376c6c62d))
* properly handle missing size bytes ([c0df04d](https://git.lumeweb.com/LumeWeb/portal/commit/c0df04d7d5309e32348ceecc68eecd64c5e5cba4))
* public_key should be pubkey ([09b9f19](https://git.lumeweb.com/LumeWeb/portal/commit/09b9f195f47ea9ae47069a517a77609c74ea3ca5))
* register LoginSession model ([48164ec](https://git.lumeweb.com/LumeWeb/portal/commit/48164ec320c693937ead352246ec1e94bede3684))
* register request validation ([c197b14](https://git.lumeweb.com/LumeWeb/portal/commit/c197b1425bbd689e8f662846de0478aff8d38f35))
* remove PrivateKey, rename PublicKey in Key model ([00f2b96](https://git.lumeweb.com/LumeWeb/portal/commit/00f2b962a0da956f971dc94d75726c1bab693232))
* rewrite gorm query logic for tus uploads ([f8aaeff](https://git.lumeweb.com/LumeWeb/portal/commit/f8aaeff6de2dc5e5321840460d55d79ad1b5ab1a))
* rewrite sql logic ([ce1b5e3](https://git.lumeweb.com/LumeWeb/portal/commit/ce1b5e31d5d6a69dc91d88a6fd2f1317e07dc1ea))
* rewrite streaming logic and centralize in a helper function ([bb26cfc](https://git.lumeweb.com/LumeWeb/portal/commit/bb26cfca5b4017bbbbf5aeee9bd3577c724f83ca))
* save upload info after every chunk ([038d2c4](https://git.lumeweb.com/LumeWeb/portal/commit/038d2c440b24b7c0f1ea72e0bfeda369f766c691))
* temp workaround on race condition ([e2db880](https://git.lumeweb.com/LumeWeb/portal/commit/e2db880038f51e0e16ce270fe29fce7785cce878))
* **tus:** switch to normal clone package, not generic ([faaec64](https://git.lumeweb.com/LumeWeb/portal/commit/faaec649ead00567ced56edfa9db11eb34655178))
* update default flag values ([241db4d](https://git.lumeweb.com/LumeWeb/portal/commit/241db4deb6808d950d55efa38e11d60469cc6778))
* update model relationships ([628f1b4](https://git.lumeweb.com/LumeWeb/portal/commit/628f1b4acaac1d2bf373b7008f2e0c070fd64ae5))
* **upload:** add account to upload record ([e018a4b](https://git.lumeweb.com/LumeWeb/portal/commit/e018a4b7430bc375ff3b72537e71295cdf67ef93))
* uploading of main file ([7aea462](https://git.lumeweb.com/LumeWeb/portal/commit/7aea462ab752e999030837d13733508369524cf3))
* upstream renterd updates ([5ad91ad](https://git.lumeweb.com/LumeWeb/portal/commit/5ad91ad263f01830623958141a7e7c8523bee85f))
* use AccountID not Account ([f5e4377](https://git.lumeweb.com/LumeWeb/portal/commit/f5e437777a52e2a9bbf55903cea17ec073fbb406))
* use bufio reader ([90e4ce6](https://git.lumeweb.com/LumeWeb/portal/commit/90e4ce6408391dc270ca4405a7c5282c2d4766b2))
* use challengeObj ([9b82fa7](https://git.lumeweb.com/LumeWeb/portal/commit/9b82fa7828946803289add03fc84be1dc4f86d8b))
* use database.path over database.name ([25c7d6d](https://git.lumeweb.com/LumeWeb/portal/commit/25c7d6d4fb48b69239eba131232a78e90a576e2f))
* use getWorkerObjectUrl ([4ff1334](https://git.lumeweb.com/LumeWeb/portal/commit/4ff1334d8afd9379db687fc6b764f5b0f1bcc08c))
* Use gorm save, and return nil if successful ([26042b6](https://git.lumeweb.com/LumeWeb/portal/commit/26042b62acd7f7346f1a99a0ac37b3f2f99e3f75))
* we can't use AddHandler inside BeginRequest ([f941ee4](https://git.lumeweb.com/LumeWeb/portal/commit/f941ee46d469a3f0a6302b188f566029fdec4e70))
* wrap Register api in an atomic transaction to avoid dead locks ([e09e51b](https://git.lumeweb.com/LumeWeb/portal/commit/e09e51bb52d513abcbbf53352a5d8ff68eb5364a))
* wrong algo ([86380c7](https://git.lumeweb.com/LumeWeb/portal/commit/86380c7b3a97e785b99af456305c01d18f776ddf))
### Features
* add a status endpoint and move cid validation to a utility method ([38b7615](https://git.lumeweb.com/LumeWeb/portal/commit/38b76155af954dc3602a5035cb7b53a7f625fbfd))
* add a Status method for uploads ([1f195cf](https://git.lumeweb.com/LumeWeb/portal/commit/1f195cf328ee176be9283ab0cc40e65bb6c40948))
* add auth status endpoint ([1dd4fa2](https://git.lumeweb.com/LumeWeb/portal/commit/1dd4fa22cdfc749c5474f94108bca0aec34aea81))
* add bao package and rust bao wasm library ([4c649bf](https://git.lumeweb.com/LumeWeb/portal/commit/4c649bfcb92e8632e45cf10b27fa062ff1680c32))
* add cid package ([706f7a0](https://git.lumeweb.com/LumeWeb/portal/commit/706f7a05b9a4ed464f693941235aa7e9ca14145a))
* add ComputeFile bao RPC method ([687f26c](https://git.lumeweb.com/LumeWeb/portal/commit/687f26cc779f4f50166108d6e78fe1456cfa128d))
* add debug mode logging support ([99d7b83](https://git.lumeweb.com/LumeWeb/portal/commit/99d7b8347af25fe65a1f1aecc9960424a101c279))
* add download endpoint ([79fd550](https://git.lumeweb.com/LumeWeb/portal/commit/79fd550c54bf74e84d012805f60c036c19fbbef2))
* add EncodeString function ([488f873](https://git.lumeweb.com/LumeWeb/portal/commit/488f8737c09b7757c5649b3d8a3568e3c1d5fe45))
* add files service with upload endpoint ([b16beeb](https://git.lumeweb.com/LumeWeb/portal/commit/b16beebabb254488897edde870e9588b7be5293e))
* add files/upload/limit endpoint ([b77bebe](https://git.lumeweb.com/LumeWeb/portal/commit/b77bebe3b1a03cecdd7e80f575452d5ce91ccfac))
* add getCurrentUserId helper function ([29d6db2](https://git.lumeweb.com/LumeWeb/portal/commit/29d6db20096e61efa9a792ef837ef93ca14107ae))
* add global cors ([1f5a3d1](https://git.lumeweb.com/LumeWeb/portal/commit/1f5a3d19e44f1db2f8587623e868fa48b23d1a74))
* add jwt package ([ea99108](https://git.lumeweb.com/LumeWeb/portal/commit/ea991083276a576003eb3633bd1bde98e13dfe84))
* add more validation, and put account creation, with optional pubkey in a transaction ([699e424](https://git.lumeweb.com/LumeWeb/portal/commit/699e4244e0d877d8d9df9d3d4894351785fe7f4d))
* add new user service object that implements iris context User interface ([a14dad4](https://git.lumeweb.com/LumeWeb/portal/commit/a14dad43ed3140f73d817ef2438aacbc0939de69))
* add newrelic support ([06b3ab8](https://git.lumeweb.com/LumeWeb/portal/commit/06b3ab87f7e1b982d3fb42a3e06897a2fd1387ed))
* add pin model ([aaa2c17](https://git.lumeweb.com/LumeWeb/portal/commit/aaa2c17212bd5e646036252a0e1f8d8bdb68f5a7))
* add pin service method ([8692a02](https://git.lumeweb.com/LumeWeb/portal/commit/8692a0225ebb71502811cba063e32dd11cdd10c9))
* add PostPinBy controller endpoint for pinning a file ([be03a6c](https://git.lumeweb.com/LumeWeb/portal/commit/be03a6c6867f305529af90e6206a0597bb84f015))
* add pprof support ([ee17409](https://git.lumeweb.com/LumeWeb/portal/commit/ee17409e1252e9cbae0b17ccbb1949c9a81dff82))
* add proof download ([3b1e860](https://git.lumeweb.com/LumeWeb/portal/commit/3b1e860256297d3515f0fcd58dd28292c316d79f))
* add StringHash ([118c679](https://git.lumeweb.com/LumeWeb/portal/commit/118c679f769bec2971e4e4b00ec41841a02b8a1c))
* add swagger support ([49c3844](https://git.lumeweb.com/LumeWeb/portal/commit/49c38444066c89d7258fd85d114d9d74babb8d55))
* add upload model ([f73a04b](https://git.lumeweb.com/LumeWeb/portal/commit/f73a04bb2e48b78e22b531a9121fe4baa011deaf))
* add Valid, and Decode methods, and create CID struct ([4e6c29f](https://git.lumeweb.com/LumeWeb/portal/commit/4e6c29f1fd7c33ce442fe741e08b32c8e3e9f393))
* add validation to account register ([7257b5d](https://git.lumeweb.com/LumeWeb/portal/commit/7257b5d597a28069c87437cabd71f51c187eb80c))
* generate and/or load an ed25519 private key for jwt token generation ([85a0295](https://git.lumeweb.com/LumeWeb/portal/commit/85a02952dffb1873c557f30483606d678e46749d))
* initial dnslink support ([cd2f63e](https://git.lumeweb.com/LumeWeb/portal/commit/cd2f63eb72c2bfc404d8d1b5a6fdb53f61a31d1b))
* pin file after basic upload ([892f093](https://git.lumeweb.com/LumeWeb/portal/commit/892f093d93348459d113041104d773fdd5124a8d))
* pin file after tus upload ([5579ab8](https://git.lumeweb.com/LumeWeb/portal/commit/5579ab85a374be457163d06caf1ac6e260082cca))
* tus support ([3005be6](https://git.lumeweb.com/LumeWeb/portal/commit/3005be6fec8136214c1e9480c788f62564a2c5f9))
* wip version ([9a4c3d5](https://git.lumeweb.com/LumeWeb/portal/commit/9a4c3d5d13a3e76fe91eb5d78a6f2f0f8e238f80))


@ -1,6 +1,6 @@
MIT License
-Copyright (c) <year> <copyright holders>
+Copyright (c) 2023 Hammer Technologies LLC
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

31
bao/bao.go Normal file

@ -0,0 +1,31 @@
package bao
import (
"bufio"
_ "embed"
"io"
"lukechampine.com/blake3"
)
func ComputeTree(reader io.Reader, size int64) ([]byte, [32]byte, error) {
bufSize := blake3.BaoEncodedSize(int(size), true)
buf := bufferAt{buf: make([]byte, bufSize)}
hash, err := blake3.BaoEncode(&buf, bufio.NewReader(reader), size, true)
if err != nil {
return nil, [32]byte{}, err
}
return buf.buf, hash, nil
}
type bufferAt struct {
buf []byte
}
func (b *bufferAt) WriteAt(p []byte, off int64) (int, error) {
if copy(b.buf[off:], p) != len(p) {
panic("bad buffer size")
}
return len(p), nil
}
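
A minimal usage sketch for ComputeTree, assuming the package is importable as git.lumeweb.com/LumeWeb/portal/bao (the import path is inferred from the repository layout, not shown in this diff):

package main

import (
    "bytes"
    "encoding/hex"
    "fmt"
    "log"

    "git.lumeweb.com/LumeWeb/portal/bao" // assumed import path
)

func main() {
    data := []byte("hello, portal")

    // ComputeTree returns the bao encoding (kept as the proof) and the blake3 root hash.
    proof, hash, err := bao.ComputeTree(bytes.NewReader(data), int64(len(data)))
    if err != nil {
        log.Fatal(err)
    }

    fmt.Println("proof bytes:", len(proof))
    fmt.Println("root hash:  ", hex.EncodeToString(hash[:]))
}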

100
cid/cid.go Normal file

@ -0,0 +1,100 @@
package cid
import (
"bytes"
"encoding/binary"
"encoding/hex"
"errors"
"github.com/multiformats/go-multibase"
)
var MAGIC_BYTES = []byte{0x26, 0x1f}
var (
ErrMissingEmptySize = errors.New("Missing or empty size")
ErrInvalidCIDMagic = errors.New("CID magic bytes missing or invalid")
)
type CID struct {
Hash [32]byte
Size uint64
}
func (c CID) StringHash() string {
return hex.EncodeToString(c.Hash[:])
}
func Encode(hash []byte, size uint64) (string, error) {
var hashBytes [32]byte
copy(hashBytes[:], hash)
return EncodeFixed(hashBytes, size)
}
func EncodeFixed(hash [32]byte, size uint64) (string, error) {
sizeBytes := make([]byte, 8)
binary.LittleEndian.PutUint64(sizeBytes, size)
prefixedHash := append(MAGIC_BYTES, hash[:]...)
prefixedHash = append(prefixedHash, sizeBytes...)
return multibase.Encode(multibase.Base58BTC, prefixedHash)
}
func EncodeString(hash string, size uint64) (string, error) {
hashBytes, err := hex.DecodeString(hash)
if err != nil {
return "", err
}
return Encode(hashBytes, size)
}
func Valid(cid string) (bool, error) {
_, err := maybeDecode(cid)
if err != nil {
return false, err
}
return true, nil
}
func Decode(cid string) (*CID, error) {
data, err := maybeDecode(cid)
if err != nil {
return &CID{}, err
}
data = data[len(MAGIC_BYTES):]
var hash [32]byte
copy(hash[:], data[:])
size := binary.LittleEndian.Uint64(data[32:])
return &CID{Hash: hash, Size: size}, nil
}
func maybeDecode(cid string) ([]byte, error) {
_, data, err := multibase.Decode(cid)
if err != nil {
return nil, err
}
if bytes.Compare(data[0:len(MAGIC_BYTES)], MAGIC_BYTES) != 0 {
return nil, ErrInvalidCIDMagic
}
sizeBytes := data[len(MAGIC_BYTES)+32:]
if len(sizeBytes) == 0 {
return nil, ErrMissingEmptySize
}
size := binary.LittleEndian.Uint64(sizeBytes)
if size == 0 {
return nil, ErrMissingEmptySize
}
return data, nil
}
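
A round-trip sketch for the CID helpers above, using the git.lumeweb.com/LumeWeb/portal/cid import path that appears later in controller/files.go:

package main

import (
    "fmt"
    "log"
    "strings"

    "git.lumeweb.com/LumeWeb/portal/cid"
)

func main() {
    // A 32-byte hash in hex (64 characters) plus the file size in bytes.
    hashHex := strings.Repeat("ab", 32)

    encoded, err := cid.EncodeString(hashHex, 1024)
    if err != nil {
        log.Fatal(err)
    }

    decoded, err := cid.Decode(encoded)
    if err != nil {
        log.Fatal(err)
    }

    fmt.Println(decoded.StringHash() == hashHex) // true
    fmt.Println(decoded.Size)                    // 1024
}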

59
config/config.go Normal file

@ -0,0 +1,59 @@
package config
import (
"errors"
"fmt"
"github.com/spf13/pflag"
"github.com/spf13/viper"
"log"
)
var (
ConfigFilePaths = []string{
"/etc/lumeweb/portal/",
"$HOME/.lumeweb/portal/",
".",
}
)
func Init() {
viper.SetConfigName("config")
viper.SetConfigType("json")
for _, path := range ConfigFilePaths {
viper.AddConfigPath(path)
}
viper.SetEnvPrefix("LUME_WEB_PORTAL")
viper.AutomaticEnv()
pflag.String("database.type", "sqlite", "Database type")
pflag.String("database.host", "localhost", "Database host")
pflag.Int("database.port", 3306, "Database port")
pflag.String("database.user", "root", "Database user")
pflag.String("database.password", "", "Database password")
pflag.String("database.name", "lumeweb_portal", "Database name")
pflag.String("database.path", "./db.sqlite", "Database path for SQLite")
pflag.String("renterd-api-password", ".", "admin password for renterd")
pflag.Bool("debug", false, "enable debug mode")
pflag.Parse()
err := viper.BindPFlags(pflag.CommandLine)
if err != nil {
log.Fatalf("Fatal error arguments: %s \n", err)
return
}
err = viper.ReadInConfig()
if err != nil {
if errors.As(err, &viper.ConfigFileNotFoundError{}) {
// Config file not found, this is not an error.
fmt.Println("Config file not found, using default settings.")
} else {
// Other error, panic.
panic(fmt.Errorf("Fatal error config file: %s \n", err))
}
}
}
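
A minimal sketch of how another package might read these settings after Init runs, assuming the package is importable as git.lumeweb.com/LumeWeb/portal/config (path inferred, not shown in this diff); db/db.go below follows the same viper pattern:

package main

import (
    "fmt"

    "github.com/spf13/viper"

    "git.lumeweb.com/LumeWeb/portal/config" // assumed import path
)

func main() {
    config.Init()

    // After Init, settings come from CLI flags, LUME_WEB_PORTAL-prefixed
    // environment variables, or the JSON config file.
    fmt.Println("database type:", viper.GetString("database.type"))
    fmt.Println("debug:", viper.GetBool("debug"))
}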

35
controller/account.go Normal file

@ -0,0 +1,35 @@
package controller
import (
"git.lumeweb.com/LumeWeb/portal/controller/request"
"git.lumeweb.com/LumeWeb/portal/service/account"
"github.com/kataras/iris/v12"
)
type AccountController struct {
Controller
}
func (a *AccountController) PostRegister() {
ri, success := tryParseRequest(request.RegisterRequest{}, a.Ctx)
if !success {
return
}
r, _ := ri.(*request.RegisterRequest)
err := account.Register(r.Email, r.Password, r.Pubkey)
if err != nil {
if err == account.ErrQueryingAcct || err == account.ErrFailedCreateAccount {
a.Ctx.StopWithError(iris.StatusInternalServerError, err)
} else {
a.Ctx.StopWithError(iris.StatusBadRequest, err)
}
return
}
// Return a success response to the client.
a.Ctx.StatusCode(iris.StatusCreated)
}

112
controller/auth.go Normal file

@ -0,0 +1,112 @@
package controller
import (
"git.lumeweb.com/LumeWeb/portal/controller/request"
"git.lumeweb.com/LumeWeb/portal/controller/response"
"git.lumeweb.com/LumeWeb/portal/middleware"
"git.lumeweb.com/LumeWeb/portal/service/auth"
"github.com/kataras/iris/v12"
)
type AuthController struct {
Controller
}
// PostLogin handles the POST /api/auth/login request to authenticate a user and return a JWT token.
func (a *AuthController) PostLogin() {
ri, success := tryParseRequest(request.LoginRequest{}, a.Ctx)
if !success {
return
}
r, _ := ri.(*request.LoginRequest)
token, err := auth.LoginWithPassword(r.Email, r.Password)
if err != nil {
if err == auth.ErrFailedGenerateToken {
a.Ctx.StopWithError(iris.StatusInternalServerError, err)
} else {
a.Ctx.StopWithError(iris.StatusUnauthorized, err)
}
return
}
a.respondJSON(&response.LoginResponse{Token: token})
}
// PostChallenge handles the POST /api/auth/pubkey/challenge request to generate a challenge for a user's public key.
func (a *AuthController) PostPubkeyChallenge() {
ri, success := tryParseRequest(request.PubkeyChallengeRequest{}, a.Ctx)
if !success {
return
}
r, _ := (ri).(*request.PubkeyChallengeRequest)
challenge, err := auth.GeneratePubkeyChallenge(r.Pubkey)
if err != nil {
if err == auth.ErrFailedGenerateKeyChallenge {
a.Ctx.StopWithError(iris.StatusInternalServerError, err)
} else {
a.Ctx.StopWithError(iris.StatusUnauthorized, err)
}
return
}
a.respondJSON(&response.ChallengeResponse{Challenge: challenge})
}
// PostKeyLogin handles the POST /api/auth/pubkey/login request to authenticate a user using a public key challenge and return a JWT token.
func (a *AuthController) PostPubkeyLogin() {
ri, success := tryParseRequest(request.PubkeyLoginRequest{}, a.Ctx)
if !success {
return
}
r, _ := ri.(*request.PubkeyLoginRequest)
token, err := auth.LoginWithPubkey(r.Pubkey, r.Challenge, r.Signature)
if err != nil {
if err == auth.ErrFailedGenerateKeyChallenge || err == auth.ErrFailedGenerateToken || err == auth.ErrFailedSaveToken {
a.Ctx.StopWithError(iris.StatusInternalServerError, err)
} else {
a.Ctx.StopWithError(iris.StatusUnauthorized, err)
}
return
}
a.respondJSON(&response.LoginResponse{Token: token})
}
// PostLogout handles the POST /api/auth/logout request to invalidate a JWT token.
func (a *AuthController) PostLogout() {
ri, success := tryParseRequest(request.LogoutRequest{}, a.Ctx)
if !success {
return
}
r, _ := ri.(*request.LogoutRequest)
err := auth.Logout(r.Token)
if err != nil {
a.Ctx.StopWithError(iris.StatusBadRequest, err)
return
}
// Return a success response to the client.
a.Ctx.StatusCode(iris.StatusNoContent)
}
func (a *AuthController) GetStatus() {
middleware.VerifyJwt(a.Ctx)
if a.Ctx.IsStopped() {
return
}
a.respondJSON(&response.AuthStatusResponse{Status: true})
}

72
controller/controller.go Normal file

@ -0,0 +1,72 @@
package controller
import (
"git.lumeweb.com/LumeWeb/portal/controller/validators"
"git.lumeweb.com/LumeWeb/portal/logger"
"github.com/kataras/iris/v12"
"go.uber.org/zap"
)
func tryParseRequest(r interface{}, ctx iris.Context) (interface{}, bool) {
v, ok := r.(validators.Validatable)
if !ok {
return r, true
}
var d map[string]interface{}
// Read the logout request from the client.
if err := ctx.ReadJSON(&d); err != nil {
logger.Get().Debug("failed to parse request", zap.Error(err))
ctx.StopWithError(iris.StatusBadRequest, err)
return nil, false
}
data, err := v.Import(d)
if err != nil {
logger.Get().Debug("failed to parse request", zap.Error(err))
ctx.StopWithError(iris.StatusBadRequest, err)
return nil, false
}
if err := data.Validate(); err != nil {
logger.Get().Debug("failed to parse request", zap.Error(err))
ctx.StopWithError(iris.StatusBadRequest, err)
return nil, false
}
return data, true
}
func sendErrorCustom(ctx iris.Context, err error, customError error, irisError int) bool {
if err != nil {
if customError != nil {
err = customError
}
ctx.StopWithError(irisError, err)
return true
}
return false
}
func InternalError(ctx iris.Context, err error) bool {
return sendErrorCustom(ctx, err, nil, iris.StatusInternalServerError)
}
func internalErrorCustom(ctx iris.Context, err error, customError error) bool {
return sendErrorCustom(ctx, err, customError, iris.StatusInternalServerError)
}
func SendError(ctx iris.Context, err error, irisError int) bool {
return sendErrorCustom(ctx, err, nil, irisError)
}
type Controller struct {
Ctx iris.Context
}
func (c Controller) respondJSON(data interface{}) {
err := c.Ctx.JSON(data)
if err != nil {
logger.Get().Error("failed to generate response", zap.Error(err))
}
}

213
controller/files.go Normal file

@ -0,0 +1,213 @@
package controller
import (
"errors"
"git.lumeweb.com/LumeWeb/portal/cid"
"git.lumeweb.com/LumeWeb/portal/controller/response"
"git.lumeweb.com/LumeWeb/portal/logger"
"git.lumeweb.com/LumeWeb/portal/middleware"
"git.lumeweb.com/LumeWeb/portal/service/auth"
"git.lumeweb.com/LumeWeb/portal/service/files"
"github.com/kataras/iris/v12"
"go.uber.org/zap"
"io"
)
var ErrStreamDone = errors.New("done")
type FilesController struct {
Controller
}
func (f *FilesController) BeginRequest(ctx iris.Context) {
middleware.VerifyJwt(ctx)
}
func (f *FilesController) EndRequest(ctx iris.Context) {
}
func (f *FilesController) PostUpload() {
ctx := f.Ctx
file, meta, err := f.Ctx.FormFile("file")
if internalErrorCustom(ctx, err, errors.New("invalid file data")) {
logger.Get().Debug("invalid file data", zap.Error(err))
return
}
upload, err := files.Upload(file, meta.Size, nil, auth.GetCurrentUserId(ctx))
if InternalError(ctx, err) {
logger.Get().Debug("failed uploading file", zap.Error(err))
return
}
err = files.Pin(upload.Hash, upload.AccountID)
if InternalError(ctx, err) {
logger.Get().Debug("failed pinning file", zap.Error(err))
return
}
cidString, err := cid.EncodeString(upload.Hash, uint64(meta.Size))
if InternalError(ctx, err) {
logger.Get().Debug("failed creating cid", zap.Error(err))
return
}
err = ctx.JSON(&response.UploadResponse{Cid: cidString})
if err != nil {
logger.Get().Error("failed to create response", zap.Error(err))
}
}
func (f *FilesController) GetDownloadBy(cidString string) {
ctx := f.Ctx
hashHex, valid := ValidateCid(cidString, true, ctx)
if !valid {
return
}
download, err := files.Download(hashHex)
if InternalError(ctx, err) {
logger.Get().Debug("failed fetching file", zap.Error(err))
return
}
err = PassThroughStream(download, ctx)
if err != ErrStreamDone && InternalError(ctx, err) {
logger.Get().Debug("failed streaming file", zap.Error(err))
}
}
func (f *FilesController) GetProofBy(cidString string) {
ctx := f.Ctx
hashHex, valid := ValidateCid(cidString, true, ctx)
if !valid {
return
}
proof, err := files.DownloadProof(hashHex)
if InternalError(ctx, err) {
logger.Get().Debug("failed fetching file proof", zap.Error(err))
return
}
err = PassThroughStream(proof, ctx)
if InternalError(ctx, err) {
logger.Get().Debug("failed streaming file proof", zap.Error(err))
}
}
func (f *FilesController) GetStatusBy(cidString string) {
ctx := f.Ctx
hashHex, valid := ValidateCid(cidString, false, ctx)
if !valid {
return
}
status := files.Status(hashHex)
var statusCode string
switch status {
case files.STATUS_UPLOADED:
statusCode = "uploaded"
break
case files.STATUS_UPLOADING:
statusCode = "uploading"
break
case files.STATUS_NOT_FOUND:
statusCode = "not_found"
break
}
f.respondJSON(&response.FileStatusResponse{Status: statusCode})
}
func (f *FilesController) PostPinBy(cidString string) {
ctx := f.Ctx
hashHex, valid := ValidateCid(cidString, true, ctx)
if !valid {
return
}
err := files.Pin(hashHex, auth.GetCurrentUserId(ctx))
if InternalError(ctx, err) {
logger.Get().Error(err.Error())
return
}
f.Ctx.StatusCode(iris.StatusCreated)
}
func (f *FilesController) GetUploadLimit() {
f.respondJSON(&response.UploadLimit{Limit: f.Ctx.Application().ConfigurationReadOnly().GetPostMaxMemory()})
}
func ValidateCid(cidString string, validateStatus bool, ctx iris.Context) (string, bool) {
_, err := cid.Valid(cidString)
if SendError(ctx, err, iris.StatusBadRequest) {
logger.Get().Debug("invalid cid", zap.Error(err))
return "", false
}
cidObject, _ := cid.Decode(cidString)
hashHex := cidObject.StringHash()
if validateStatus {
status := files.Status(hashHex)
if status == files.STATUS_NOT_FOUND {
err := errors.New("cid not found")
SendError(ctx, errors.New("cid not found"), iris.StatusNotFound)
logger.Get().Debug("cid not found", zap.Error(err))
return "", false
}
}
return hashHex, true
}
func PassThroughStream(stream io.Reader, ctx iris.Context) error {
closed := false
err := ctx.StreamWriter(func(w io.Writer) error {
if closed {
return ErrStreamDone
}
count, err := io.CopyN(w, stream, 1024)
if count == 0 || err == io.EOF {
err = stream.(io.Closer).Close()
if err != nil {
logger.Get().Error("failed closing stream", zap.Error(err))
return err
}
closed = true
return nil
}
if err != nil {
return err
}
return nil
})
if err == ErrStreamDone {
err = nil
}
return err
}


@ -0,0 +1,23 @@
package request
import (
"git.lumeweb.com/LumeWeb/portal/controller/validators"
validation "github.com/go-ozzo/ozzo-validation/v4"
"github.com/go-ozzo/ozzo-validation/v4/is"
)
type LoginRequest struct {
validatable validators.ValidatableImpl
Email string `json:"email"`
Password string `json:"password"`
}
func (r LoginRequest) Validate() error {
return validation.ValidateStruct(&r,
validation.Field(&r.Email, is.EmailFormat, validation.Required),
validation.Field(&r.Password, validation.Required),
)
}
func (r LoginRequest) Import(d map[string]interface{}) (validators.Validatable, error) {
return r.validatable.Import(d, r)
}


@ -0,0 +1,19 @@
package request
import (
"git.lumeweb.com/LumeWeb/portal/controller/validators"
validation "github.com/go-ozzo/ozzo-validation/v4"
)
type LogoutRequest struct {
validatable validators.ValidatableImpl
Token string `json:"token"`
}
func (r LogoutRequest) Validate() error {
return validation.ValidateStruct(&r, validation.Field(&r.Token, validation.Required))
}
func (r LogoutRequest) Import(d map[string]interface{}) (validators.Validatable, error) {
return r.validatable.Import(d, r)
}


@ -0,0 +1,21 @@
package request
import (
"git.lumeweb.com/LumeWeb/portal/controller/validators"
validation "github.com/go-ozzo/ozzo-validation/v4"
)
type PubkeyChallengeRequest struct {
validatable validators.ValidatableImpl
Pubkey string `json:"pubkey"`
}
func (r PubkeyChallengeRequest) Validate() error {
return validation.ValidateStruct(&r,
validation.Field(&r.Pubkey, validation.Required, validation.By(validators.CheckPubkeyValidator)),
)
}
func (r PubkeyChallengeRequest) Import(d map[string]interface{}) (validators.Validatable, error) {
return r.validatable.Import(d, r)
}


@ -0,0 +1,25 @@
package request
import (
"git.lumeweb.com/LumeWeb/portal/controller/validators"
validation "github.com/go-ozzo/ozzo-validation/v4"
)
type PubkeyLoginRequest struct {
validatable validators.ValidatableImpl
Pubkey string `json:"pubkey"`
Challenge string `json:"challenge"`
Signature string `json:"signature"`
}
func (r PubkeyLoginRequest) Validate() error {
return validation.ValidateStruct(&r,
validation.Field(&r.Pubkey, validation.Required, validation.By(validators.CheckPubkeyValidator)),
validation.Field(&r.Challenge, validation.Required),
validation.Field(&r.Signature, validation.Required, validation.Length(128, 128)),
)
}
func (r PubkeyLoginRequest) Import(d map[string]interface{}) (validators.Validatable, error) {
return r.validatable.Import(d, r)
}

View File

@ -0,0 +1,25 @@
package request
import (
"git.lumeweb.com/LumeWeb/portal/controller/validators"
validation "github.com/go-ozzo/ozzo-validation/v4"
"github.com/go-ozzo/ozzo-validation/v4/is"
)
type RegisterRequest struct {
validatable validators.ValidatableImpl
Email string `json:"email"`
Password string `json:"password"`
Pubkey string `json:"pubkey"`
}
func (r RegisterRequest) Validate() error {
return validation.ValidateStruct(&r,
validation.Field(&r.Email, validation.Required, is.EmailFormat),
validation.Field(&r.Pubkey, validation.When(len(r.Password) == 0, validation.Required, validation.By(validators.CheckPubkeyValidator))),
validation.Field(&r.Password, validation.When(len(r.Pubkey) == 0, validation.Required)),
)
}
func (r RegisterRequest) Import(d map[string]interface{}) (validators.Validatable, error) {
return r.validatable.Import(d, r)
}

View File

@ -0,0 +1,5 @@
package response
type AuthStatusResponse struct {
Status bool `json:"status"`
}

View File

@ -0,0 +1,5 @@
package response
type ChallengeResponse struct {
Challenge string `json:"challenge"`
}

View File

@ -0,0 +1,5 @@
package response
type FileStatusResponse struct {
Status string `json:"status"`
}

View File

@ -0,0 +1,5 @@
package response
type LoginResponse struct {
Token string `json:"token"`
}

View File

@ -0,0 +1,5 @@
package response
type UploadResponse struct {
Cid string `json:"cid"`
}

View File

@ -0,0 +1,5 @@
package response
type UploadLimit struct {
Limit int64 `json:"limit"`
}

View File

@ -0,0 +1,43 @@
package validators
import (
"crypto/ed25519"
"encoding/hex"
"fmt"
validation "github.com/go-ozzo/ozzo-validation/v4"
"github.com/imdario/mergo"
"reflect"
)
func CheckPubkeyValidator(value interface{}) error {
p, _ := value.(string)
pubkeyBytes, err := hex.DecodeString(p)
if err != nil {
return err
}
if len(pubkeyBytes) != ed25519.PublicKeySize {
return fmt.Errorf("pubkey must be %d bytes in hexadecimal format", ed25519.PublicKeySize)
}
return nil
}
type Validatable interface {
validation.Validatable
Import(d map[string]interface{}) (Validatable, error)
}
type ValidatableImpl struct {
}
func (v ValidatableImpl) Import(d map[string]interface{}, destType Validatable) (Validatable, error) {
instance := reflect.New(reflect.TypeOf(destType)).Interface().(Validatable)
// Perform the import logic
if err := mergo.Map(instance, d, mergo.WithOverride); err != nil {
return nil, err
}
return instance, nil
}
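
The request types above all funnel through ValidatableImpl.Import, which allocates a fresh value of the concrete request type and fills it from an already-decoded JSON body via mergo before validation runs. A minimal sketch of that flow using LoginRequest; the helper name is hypothetical and not part of this diff.

package request

// bindLoginRequest is an illustrative helper, not part of the portal code.
func bindLoginRequest(body map[string]interface{}) (*LoginRequest, error) {
    v, err := LoginRequest{}.Import(body)
    if err != nil {
        return nil, err
    }
    // ValidatableImpl.Import returns a freshly allocated *LoginRequest behind
    // the Validatable interface.
    req := v.(*LoginRequest)
    if err := req.Validate(); err != nil {
        return nil, err
    }
    return req, nil
}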

71
db/db.go Normal file
View File

@ -0,0 +1,71 @@
package db
import (
"fmt"
"git.lumeweb.com/LumeWeb/portal/model"
"github.com/spf13/viper"
"gorm.io/driver/mysql"
"gorm.io/driver/sqlite"
"gorm.io/gorm"
)
// Declare a global variable to hold the database connection.
var db *gorm.DB
// Init initializes the database connection based on the app's configuration settings.
func Init() {
// If the database connection has already been initialized, panic.
if db != nil {
panic("DB already initialized")
}
// Retrieve database connection settings from the app's configuration using the viper library.
dbType := viper.GetString("database.type")
dbHost := viper.GetString("database.host")
dbPort := viper.GetInt("database.port")
dbSocket := viper.GetString("database.socket")
dbUser := viper.GetString("database.user")
dbPassword := viper.GetString("database.password")
dbName := viper.GetString("database.name")
dbPath := viper.GetString("database.path")
var err error
var dsn string
switch dbType {
// Connect to a MySQL database.
case "mysql":
if dbSocket != "" {
dsn = fmt.Sprintf("%s:%s@unix(%s)/%s?charset=utf8mb4&parseTime=True&loc=Local", dbUser, dbPassword, dbSocket, dbName)
} else {
dsn = fmt.Sprintf("%s:%s@tcp(%s:%d)/%s?charset=utf8mb4&parseTime=True&loc=Local", dbUser, dbPassword, dbHost, dbPort, dbName)
}
db, err = gorm.Open(mysql.Open(dsn), &gorm.Config{})
// Connect to a SQLite database.
case "sqlite":
db, err = gorm.Open(sqlite.Open(dbPath), &gorm.Config{})
// If the database type is unsupported, panic.
default:
panic(fmt.Errorf("Unsupported database type: %s \n", dbType))
}
// If there was an error connecting to the database, panic.
if err != nil {
panic(fmt.Errorf("Failed to connect to database: %s \n", err))
}
// Automatically migrate the database schema based on the model definitions.
err = db.Migrator().AutoMigrate(&model.Account{}, &model.Key{}, &model.KeyChallenge{}, &model.LoginSession{}, &model.Upload{}, &model.Pin{}, &model.Tus{}, &model.Dnslink{})
if err != nil {
panic(fmt.Errorf("Failed to migrate database: %s \n", err))
}
}
// Get returns the database connection instance.
func Get() *gorm.DB {
return db
}
func Close() error {
instance, _ := db.DB()
return instance.Close()
}
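
db.Init reads every connection setting from viper. The snippet below is an assumed example, not part of this diff, of the keys it expects, using the sqlite driver for brevity; a MySQL setup would set database.host/database.port (or database.socket), database.user, database.password and database.name instead.

package example

import (
    "git.lumeweb.com/LumeWeb/portal/db"
    "github.com/spf13/viper"
)

// exampleDatabaseConfig shows the configuration keys consumed by db.Init.
// The values are illustrative only.
func exampleDatabaseConfig() {
    viper.Set("database.type", "sqlite")      // "mysql" or "sqlite"
    viper.Set("database.path", "./portal.db") // used only by the sqlite driver
    db.Init()
    defer func() { _ = db.Close() }()
}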

225
dnslink/dnslink.go Normal file
View File

@ -0,0 +1,225 @@
package dnslink
import (
"errors"
"git.lumeweb.com/LumeWeb/portal/cid"
"git.lumeweb.com/LumeWeb/portal/controller"
"git.lumeweb.com/LumeWeb/portal/db"
"git.lumeweb.com/LumeWeb/portal/logger"
"git.lumeweb.com/LumeWeb/portal/model"
"git.lumeweb.com/LumeWeb/portal/service/files"
dnslink "github.com/dnslink-std/go"
"github.com/kataras/iris/v12"
"github.com/kataras/iris/v12/context"
"github.com/vmihailenco/msgpack/v5"
"go.uber.org/zap"
"io"
"path/filepath"
"strings"
)
var (
ErrFailedReadAppManifest = errors.New("failed to read app manifest")
ErrInvalidAppManifest = errors.New("invalid app manifest")
)
type CID string
type ExtraMetadata map[string]interface{}
type WebAppMetadata struct {
Schema string `msgpack:"$schema,omitempty"`
Type string `msgpack:"type"`
Name string `msgpack:"name,omitempty"`
TryFiles []string `msgpack:"tryFiles,omitempty"`
ErrorPages map[string]string `msgpack:"errorPages,omitempty"`
Paths map[string]PathContent `msgpack:"paths"`
ExtraMetadata ExtraMetadata `msgpack:"extraMetadata,omitempty"`
}
type PathContent struct {
CID CID `msgpack:"cid"`
ContentType string `msgpack:"contentType,omitempty"`
}
func Handler(ctx *context.Context) {
record := model.Dnslink{}
domain := ctx.Request().Host
if err := db.Get().Where(&model.Dnslink{Domain: domain}).First(&record).Error; err != nil {
ctx.StopWithStatus(iris.StatusNotFound)
return
}
ret, err := dnslink.Resolve(domain)
if err != nil {
switch e := err.(type) {
default:
ctx.StopWithStatus(iris.StatusInternalServerError)
return
case dnslink.DNSRCodeError:
if e.DNSRCode == 3 {
ctx.StopWithStatus(iris.StatusNotFound)
return
}
}
}
if ret.Links["sia"] == nil || len(ret.Links["sia"]) == 0 {
ctx.StopWithStatus(iris.StatusNotFound)
return
}
appManifest := ret.Links["sia"][0]
decodedCid, valid := controller.ValidateCid(appManifest.Identifier, true, ctx)
if !valid {
return
}
manifest := fetchManifest(ctx, decodedCid)
if manifest == nil {
return
}
path := ctx.Path()
if strings.HasSuffix(path, "/") || filepath.Ext(path) == "" {
var directoryIndex *PathContent
for _, indexFile := range manifest.TryFiles {
path, exists := manifest.Paths[indexFile]
if !exists {
continue
}
_, err := cid.Valid(string(manifest.Paths[indexFile].CID))
if err != nil {
continue
}
cidObject, _ := cid.Decode(string(path.CID))
hashHex := cidObject.StringHash()
status := files.Status(hashHex)
if status == files.STATUS_NOT_FOUND {
continue
}
if status == files.STATUS_UPLOADED {
directoryIndex = &path
break
}
}
if directoryIndex == nil {
ctx.StopWithStatus(iris.StatusNotFound)
return
}
file, err := fetchFile(directoryIndex)
if maybeHandleFileError(err, ctx) {
return
}
ctx.Header("Content-Type", directoryIndex.ContentType)
streamFile(file, ctx)
return
}
requestedPath, exists := manifest.Paths[path]
if !exists {
ctx.StopWithStatus(iris.StatusNotFound)
return
}
file, err := fetchFile(&requestedPath)
if maybeHandleFileError(err, ctx) {
return
}
ctx.Header("Content-Type", requestedPath.ContentType)
streamFile(file, ctx)
}
func maybeHandleFileError(err error, ctx *context.Context) bool {
if err != nil {
if err == files.ErrInvalidFile {
controller.SendError(ctx, err, iris.StatusNotFound)
return true
}
controller.SendError(ctx, err, iris.StatusInternalServerError)
}
return err != nil
}
func streamFile(stream io.Reader, ctx *context.Context) {
err := controller.PassThroughStream(stream, ctx)
if err != controller.ErrStreamDone && controller.InternalError(ctx, err) {
logger.Get().Debug("failed streaming file", zap.Error(err))
}
}
func fetchFile(path *PathContent) (io.Reader, error) {
_, err := cid.Valid(string(path.CID))
if err != nil {
return nil, err
}
cidObject, _ := cid.Decode(string(path.CID))
hashHex := cidObject.StringHash()
status := files.Status(hashHex)
if status == files.STATUS_NOT_FOUND {
return nil, errors.New("cid not found")
}
if status == files.STATUS_UPLOADED {
stream, err := files.Download(hashHex)
if err != nil {
return nil, err
}
return stream, nil
}
return nil, errors.New("cid not found")
}
func fetchManifest(ctx iris.Context, hash string) *WebAppMetadata {
stream, err := files.Download(hash)
if err != nil {
if errors.Is(err, files.ErrInvalidFile) {
controller.SendError(ctx, err, iris.StatusNotFound)
return nil
}
controller.SendError(ctx, err, iris.StatusInternalServerError)
return nil
}
var metadata WebAppMetadata
data, err := io.ReadAll(stream)
if err != nil {
logger.Get().Debug(ErrFailedReadAppManifest.Error(), zap.Error(err))
controller.SendError(ctx, ErrFailedReadAppManifest, iris.StatusInternalServerError)
return nil
}
err = msgpack.Unmarshal(data, &metadata)
if err != nil {
logger.Get().Debug(ErrFailedReadAppManifest.Error(), zap.Error(err))
controller.SendError(ctx, ErrFailedReadAppManifest, iris.StatusInternalServerError)
return nil
}
if metadata.Type != "web_app" {
logger.Get().Debug(ErrInvalidAppManifest.Error())
controller.SendError(ctx, ErrInvalidAppManifest, iris.StatusInternalServerError)
return nil
}
return &metadata
}
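
fetchManifest expects the resolved DNSLink CID to point at a msgpack-encoded manifest whose type field is "web_app". The sketch below is illustrative only (the CID value is a placeholder) and builds such a manifest with the same msgpack library the handler uses to decode it.

package dnslink

import (
    "github.com/vmihailenco/msgpack/v5"
)

// exampleManifest encodes a minimal web app manifest; values are made up.
func exampleManifest() ([]byte, error) {
    manifest := WebAppMetadata{
        Type:     "web_app",
        Name:     "example-app",
        TryFiles: []string{"index.html"},
        Paths: map[string]PathContent{
            "index.html": {CID: "<cid of index.html>", ContentType: "text/html"},
        },
    }
    return msgpack.Marshal(manifest)
}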

128
go.mod Normal file
View File

@ -0,0 +1,128 @@
module git.lumeweb.com/LumeWeb/portal
go 1.18
require (
github.com/dnslink-std/go v0.6.0
github.com/go-ozzo/ozzo-validation/v4 v4.3.0
github.com/go-resty/resty/v2 v2.7.0
github.com/golang-queue/queue v0.1.3
github.com/huandu/go-clone v1.6.0
github.com/imdario/mergo v0.3.16
github.com/iris-contrib/swagger v0.0.0-20230531125653-f4ee631290a7
github.com/kataras/iris/v12 v12.2.0
github.com/kataras/jwt v0.1.8
github.com/multiformats/go-multibase v0.2.0
github.com/spf13/pflag v1.0.5
github.com/spf13/viper v1.16.0
github.com/swaggo/swag v1.16.1
github.com/tus/tusd v1.11.0
github.com/vmihailenco/msgpack/v5 v5.3.5
go.uber.org/zap v1.24.0
golang.org/x/crypto v0.10.0
golang.org/x/exp v0.0.0-20230626212559-97b1e661b5df
gorm.io/driver/mysql v1.5.1
gorm.io/driver/sqlite v1.5.2
gorm.io/gorm v1.25.2
lukechampine.com/blake3 v1.2.1
)
require (
github.com/BurntSushi/toml v1.3.2 // indirect
github.com/CloudyKit/fastprinter v0.0.0-20200109182630-33d98a066a53 // indirect
github.com/CloudyKit/jet/v6 v6.2.0 // indirect
github.com/Joker/jade v1.1.3 // indirect
github.com/KyleBanks/depth v1.2.1 // indirect
github.com/Shopify/goreferrer v0.0.0-20220729165902-8cddb4f5de06 // indirect
github.com/andybalholm/brotli v1.0.5 // indirect
github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2 // indirect
github.com/aymerick/douceur v0.2.0 // indirect
github.com/blang/semver/v4 v4.0.0 // indirect
github.com/bmizerany/pat v0.0.0-20210406213842-e4b6760bdd6f // indirect
github.com/eknkc/amber v0.0.0-20171010120322-cdade1c07385 // indirect
github.com/fatih/structs v1.1.0 // indirect
github.com/flosch/pongo2/v4 v4.0.2 // indirect
github.com/fsnotify/fsnotify v1.6.0 // indirect
github.com/go-openapi/jsonpointer v0.19.6 // indirect
github.com/go-openapi/jsonreference v0.20.2 // indirect
github.com/go-openapi/spec v0.20.9 // indirect
github.com/go-openapi/swag v0.22.4 // indirect
github.com/go-sql-driver/mysql v1.7.1 // indirect
github.com/gobwas/httphead v0.1.0 // indirect
github.com/gobwas/pool v0.2.1 // indirect
github.com/gobwas/ws v1.2.1 // indirect
github.com/goccy/go-json v0.10.2 // indirect
github.com/golang/snappy v0.0.4 // indirect
github.com/google/uuid v1.3.0 // indirect
github.com/gorilla/css v1.0.0 // indirect
github.com/gorilla/websocket v1.5.0 // indirect
github.com/hashicorp/hcl v1.0.0 // indirect
github.com/iris-contrib/go.uuid v2.0.0+incompatible // indirect
github.com/iris-contrib/schema v0.0.6 // indirect
github.com/jinzhu/inflection v1.0.0 // indirect
github.com/jinzhu/now v1.1.5 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/kataras/blocks v0.0.7 // indirect
github.com/kataras/golog v0.1.9 // indirect
github.com/kataras/neffos v0.0.21 // indirect
github.com/kataras/pio v0.0.12 // indirect
github.com/kataras/sitemap v0.0.6 // indirect
github.com/kataras/tunnel v0.0.4 // indirect
github.com/klauspost/compress v1.16.6 // indirect
github.com/klauspost/cpuid/v2 v2.2.5 // indirect
github.com/magiconair/properties v1.8.7 // indirect
github.com/mailgun/raymond/v2 v2.0.48 // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/mattn/go-sqlite3 v1.14.17 // indirect
github.com/mediocregopher/radix/v3 v3.8.1 // indirect
github.com/microcosm-cc/bluemonday v1.0.24 // indirect
github.com/miekg/dns v1.1.43 // indirect
github.com/mitchellh/mapstructure v1.5.0 // indirect
github.com/mr-tron/base58 v1.2.0 // indirect
github.com/multiformats/go-base32 v0.1.0 // indirect
github.com/multiformats/go-base36 v0.2.0 // indirect
github.com/nats-io/nats.go v1.27.1 // indirect
github.com/nats-io/nkeys v0.4.4 // indirect
github.com/nats-io/nuid v1.0.1 // indirect
github.com/pelletier/go-toml/v2 v2.0.8 // indirect
github.com/rogpeppe/go-internal v1.10.0 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/schollz/closestmatch v2.1.0+incompatible // indirect
github.com/sergi/go-diff v1.1.0 // indirect
github.com/sirupsen/logrus v1.9.3 // indirect
github.com/spf13/afero v1.9.5 // indirect
github.com/spf13/cast v1.5.1 // indirect
github.com/spf13/jwalterweatherman v1.1.0 // indirect
github.com/subosito/gotenv v1.4.2 // indirect
github.com/tdewolff/minify/v2 v2.12.7 // indirect
github.com/tdewolff/parse/v2 v2.6.6 // indirect
github.com/valyala/bytebufferpool v1.0.0 // indirect
github.com/vmihailenco/tagparser/v2 v2.0.0 // indirect
github.com/yosssi/ace v0.0.5 // indirect
go.uber.org/atomic v1.11.0 // indirect
go.uber.org/multierr v1.11.0 // indirect
golang.org/x/net v0.11.0 // indirect
golang.org/x/sys v0.9.0 // indirect
golang.org/x/text v0.10.0 // indirect
golang.org/x/time v0.3.0 // indirect
golang.org/x/tools v0.10.0 // indirect
golang.org/x/xerrors v0.0.0-20220907171357-04be3eba64a2 // indirect
google.golang.org/protobuf v1.31.0 // indirect
gopkg.in/ini.v1 v1.67.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
)
replace go.uber.org/multierr => go.uber.org/multierr v1.9.0
replace (
github.com/tus/tusd => git.lumeweb.com/LumeWeb/tusd v1.11.1-0.20230629085530-7b20ce6a9ae5
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp => go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.39.0
go.opentelemetry.io/otel => go.opentelemetry.io/otel v1.14.0
go.opentelemetry.io/otel/exporters/otlp/internal/retry => go.opentelemetry.io/otel/exporters/otlp/internal/retry v1.12.0
go.opentelemetry.io/otel/exporters/otlp/otlptrace => go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.12.0
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp => go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.12.0
go.opentelemetry.io/otel/metric => go.opentelemetry.io/otel/metric v0.37.0
go.opentelemetry.io/otel/sdk => go.opentelemetry.io/otel/sdk v1.12.0
go.opentelemetry.io/otel/trace => go.opentelemetry.io/otel/trace v1.14.0
go.opentelemetry.io/proto/otlp => go.opentelemetry.io/proto/otlp v0.19.0
)

1949
go.sum Normal file

File diff suppressed because it is too large

30
logger/logger.go Normal file
View File

@ -0,0 +1,30 @@
package logger
import (
"github.com/spf13/viper"
"go.uber.org/zap"
"log"
)
var logger *zap.Logger
func Init() {
var newLogger *zap.Logger
var err error
if viper.GetBool("debug") {
newLogger, err = zap.NewDevelopment()
} else {
newLogger, err = zap.NewProduction()
}
if err != nil {
log.Fatal(err)
}
logger = newLogger
}
func Get() *zap.Logger {
return logger
}

132
main.go Normal file
View File

@ -0,0 +1,132 @@
package main
import (
"context"
"embed"
"git.lumeweb.com/LumeWeb/portal/config"
"git.lumeweb.com/LumeWeb/portal/controller"
"git.lumeweb.com/LumeWeb/portal/db"
"git.lumeweb.com/LumeWeb/portal/dnslink"
_ "git.lumeweb.com/LumeWeb/portal/docs"
"git.lumeweb.com/LumeWeb/portal/logger"
"git.lumeweb.com/LumeWeb/portal/middleware"
"git.lumeweb.com/LumeWeb/portal/service/auth"
"git.lumeweb.com/LumeWeb/portal/service/files"
"git.lumeweb.com/LumeWeb/portal/shared"
"git.lumeweb.com/LumeWeb/portal/tus"
"github.com/iris-contrib/swagger"
"github.com/iris-contrib/swagger/swaggerFiles"
"github.com/kataras/iris/v12"
irisContext "github.com/kataras/iris/v12/context"
"github.com/kataras/iris/v12/middleware/cors"
"github.com/kataras/iris/v12/mvc"
"go.uber.org/zap"
"log"
"net/http"
)
// Embed a directory of static files for serving from the app's root path
//
//go:embed app/*
var embedFrontend embed.FS
// @title Lume Web Portal
// @version 1.0
// @description A decentralized data storage portal for the open web
// @contact.name Lume Web Project
// @contact.url https://lumeweb.com
// @contact.email contact@lumeweb.com
// @license.name MIT
// @license.url https://opensource.org/license/mit/
// @externalDocs.description OpenAPI
// @externalDocs.url https://swagger.io/resources/open-api/
func main() {
// Initialize the configuration settings
config.Init()
// Initialize the database connection
db.Init()
defer func() {
err := db.Close()
if err != nil {
logger.Get().Error("Failed to close db connection", zap.Error(err))
}
}()
logger.Init()
files.Init()
auth.Init()
// Create a new Iris app instance
app := iris.New()
// Enable Gzip compression for responses
app.Use(iris.Compression)
app.UseRouter(cors.New().Handler())
// Serve static files from the embedded directory at the app's root path
_ = embedFrontend
// app.HandleDir("/", embedFrontend)
api := app.Party("/api")
v1 := api.Party("/v1")
tusHandler := tus.Init()
// Register the AccountController with the MVC framework and attach it to the "/api/account" path
mvc.Configure(v1.Party("/account"), func(app *mvc.Application) {
app.Handle(new(controller.AccountController))
})
mvc.Configure(v1.Party("/auth"), func(app *mvc.Application) {
app.Handle(new(controller.AuthController))
})
mvc.Configure(v1.Party("/files"), func(app *mvc.Application) {
tusRoute := app.Router.Party(tus.TUS_API_PATH)
tusRoute.Use(middleware.VerifyJwt)
fromStd := func(handler http.Handler) func(ctx *irisContext.Context) {
return func(ctx *irisContext.Context) {
newCtx := context.WithValue(ctx.Request().Context(), shared.TusRequestContextKey, ctx)
handler.ServeHTTP(ctx.ResponseWriter(), ctx.Request().WithContext(newCtx))
}
}
tusRoute.Any("/{fileparam:path}", fromStd(http.StripPrefix(v1.GetRelPath()+tus.TUS_API_PATH+"/", tusHandler)))
tusRoute.Post("/{p:path}", fromStd(http.StripPrefix(tusRoute.GetRelPath()+tus.TUS_API_PATH, tusHandler)))
app.Handle(new(controller.FilesController))
})
swaggerConfig := swagger.Config{
// The url pointing to API definition.
URL: "http://localhost:8080/swagger/doc.json",
DeepLinking: true,
DocExpansion: "list",
DomID: "#swagger-ui",
// The UI prefix URL (see route).
Prefix: "/swagger",
}
swaggerUI := swagger.Handler(swaggerFiles.Handler, swaggerConfig)
app.Get("/swagger", swaggerUI)
// And the wildcard one for index.html, *.js, *.css and e.t.c.
app.Get("/swagger/{any:path}", swaggerUI)
app.Party("*").Any("*", dnslink.Handler)
// Start the Iris app and listen for incoming requests on port 8080
err := app.Listen(":8080", func(app *iris.Application) {
routes := app.GetRoutes()
for _, route := range routes {
log.Println(route)
}
})
if err != nil {
logger.Get().Error("Failed starting webserver", zap.Error(err))
}
}

28
middleware/jwt.go Normal file
View File

@ -0,0 +1,28 @@
package middleware
import (
"git.lumeweb.com/LumeWeb/portal/service/account"
"git.lumeweb.com/LumeWeb/portal/service/auth"
"github.com/kataras/iris/v12"
)
func VerifyJwt(ctx iris.Context) {
token := auth.GetRequestAuthCode(ctx)
if len(token) == 0 {
ctx.StopWithError(iris.StatusUnauthorized, auth.ErrInvalidToken)
return
}
acct, err := auth.VerifyLoginToken(token)
if err != nil {
ctx.StopWithError(iris.StatusUnauthorized, auth.ErrInvalidToken)
return
}
err = ctx.SetUser(account.NewUser(acct))
if err != nil {
ctx.StopWithError(iris.StatusInternalServerError, err)
}
}

17
model/account.go Normal file
View File

@ -0,0 +1,17 @@
package model
import (
"gorm.io/gorm"
"time"
)
type Account struct {
gorm.Model
ID uint `gorm:"primaryKey;autoIncrement"`
Email string `gorm:"uniqueIndex"`
Password *string
CreatedAt time.Time
UpdatedAt time.Time
LoginTokens []LoginSession
Keys []Key
}

11
model/dnslink.go Normal file
View File

@ -0,0 +1,11 @@
package model
import (
"gorm.io/gorm"
)
type Dnslink struct {
gorm.Model
ID uint `gorm:"primaryKey;autoIncrement"`
Domain string `gorm:"uniqueIndex"`
}

16
model/key.go Normal file
View File

@ -0,0 +1,16 @@
package model
import (
"gorm.io/gorm"
"time"
)
type Key struct {
gorm.Model
ID uint `gorm:"primaryKey;autoIncrement"`
AccountID uint
Account Account
Pubkey string
CreatedAt time.Time
UpdatedAt time.Time
}

15
model/key_challenge.go Normal file
View File

@ -0,0 +1,15 @@
package model
import (
"gorm.io/gorm"
"time"
)
type KeyChallenge struct {
gorm.Model
ID uint `gorm:"primaryKey;autoIncrement"`
AccountID uint
Account Account
Challenge string `gorm:"not null"`
Expiration time.Time
}

22
model/login_session.go Normal file
View File

@ -0,0 +1,22 @@
package model
import (
"gorm.io/gorm"
"time"
)
type LoginSession struct {
gorm.Model
ID uint `gorm:"primaryKey;autoIncrement"`
AccountID uint
Account Account
Token string `gorm:"uniqueIndex"`
Expiration time.Time
CreatedAt time.Time
UpdatedAt time.Time
}
func (s *LoginSession) BeforeCreate(tx *gorm.DB) (err error) {
s.Expiration = time.Now().Add(time.Hour * 24)
return
}

12
model/pin.go Normal file
View File

@ -0,0 +1,12 @@
package model
import "gorm.io/gorm"
type Pin struct {
gorm.Model
ID uint `gorm:"primaryKey;autoIncrement"`
AccountID uint `gorm:"uniqueIndex:idx_account_upload"`
UploadID uint `gorm:"uniqueIndex:idx_account_upload"`
Account Account
Upload Upload
}

15
model/tus.go Normal file
View File

@ -0,0 +1,15 @@
package model
import (
"gorm.io/gorm"
)
type Tus struct {
gorm.Model
ID uint `gorm:"primaryKey;autoIncrement"`
UploadID string
Hash string
Info string
AccountID uint
Account Account
}

13
model/upload.go Normal file
View File

@ -0,0 +1,13 @@
package model
import (
"gorm.io/gorm"
)
type Upload struct {
gorm.Model
ID uint `gorm:"primaryKey;autoIncrement"`
AccountID uint `gorm:"index"`
Account Account
Hash string `gorm:"uniqueIndex"`
}

6988
npm-shrinkwrap.json generated Normal file

File diff suppressed because it is too large

18
package.json Normal file
View File

@ -0,0 +1,18 @@
{
"name": "@lumeweb/portal",
"version": "0.1.0-develop.1",
"repository": {
"type": "git",
"url": "gitea@git.lumeweb.com:LumeWeb/portal.git"
},
"devDependencies": {
"@semantic-release/changelog": "^6.0.3",
"@semantic-release/git": "^10.0.1",
"@semantic-release/npm": "^10.0.4",
"@semantic-release/release-notes-generator": "^11.0.4",
"semantic-release": "^21.0.5"
},
"scripts": {
"semantic-release": "semantic-release"
}
}

View File

@ -0,0 +1,76 @@
package account
import (
"errors"
"git.lumeweb.com/LumeWeb/portal/db"
"git.lumeweb.com/LumeWeb/portal/logger"
"git.lumeweb.com/LumeWeb/portal/model"
"go.uber.org/zap"
"gorm.io/gorm"
)
var (
ErrEmailExists = errors.New("Account with email already exists")
ErrPubkeyExists = errors.New("Account with pubkey already exists")
ErrQueryingAcct = errors.New("Error querying accounts")
ErrFailedHashPassword = errors.New("Failed to hash password")
ErrFailedCreateAccount = errors.New("Failed to create account")
)
func Register(email string, password string, pubkey string) error {
err := db.Get().Transaction(func(tx *gorm.DB) error {
existingAccount := model.Account{}
err := tx.Where("email = ?", email).First(&existingAccount).Error
if err == nil {
return ErrEmailExists
} else if !errors.Is(err, gorm.ErrRecordNotFound) {
return err
}
if len(pubkey) > 0 {
var count int64
err := tx.Model(&model.Key{}).Where("pubkey = ?", pubkey).Count(&count).Error
if err != nil && !errors.Is(err, gorm.ErrRecordNotFound) {
return err
}
if count > 0 {
// An account with the same pubkey already exists.
// Return an error response to the client.
return ErrPubkeyExists
}
}
// Create a new Account model with the provided email and hashed password.
account := model.Account{
Email: email,
}
// Hash the password before saving it to the database.
if len(password) > 0 {
hashedPassword, err := hashPassword(password)
if err != nil {
return err
}
account.Password = &hashedPassword
}
if err := tx.Create(&account).Error; err != nil {
return err
}
if len(pubkey) > 0 {
if err := tx.Create(&model.Key{Account: account, Pubkey: pubkey}).Error; err != nil {
return err
}
}
return nil
})
if err != nil {
logger.Get().Error(ErrFailedCreateAccount.Error(), zap.Error(err))
return err
}
return nil
}
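
Register treats the password and pubkey as alternatives: an account may be created with either or both, mirroring the RegisterRequest validation rules. A usage sketch that is not part of this diff; the values are made up.

package account

import "strings"

// exampleRegister demonstrates both registration modes with fake data.
func exampleRegister() error {
    // Email and password account.
    if err := Register("user@example.com", "hunter2", ""); err != nil {
        return err
    }
    // Pubkey-only account: 64 hex characters, i.e. a 32-byte ed25519 key.
    pubkey := strings.Repeat("ab", 32)
    return Register("keyuser@example.com", "", pubkey)
}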

20
service/account/user.go Normal file
View File

@ -0,0 +1,20 @@
package account
import (
"git.lumeweb.com/LumeWeb/portal/model"
"github.com/kataras/iris/v12/context"
"strconv"
)
type User struct {
context.User
account *model.Account
}
func (u User) GetID() (string, error) {
return strconv.Itoa(int(u.account.ID)), nil
}
func NewUser(account *model.Account) *User {
return &User{account: account}
}

19
service/account/util.go Normal file
View File

@ -0,0 +1,19 @@
package account
import (
"git.lumeweb.com/LumeWeb/portal/logger"
"go.uber.org/zap"
"golang.org/x/crypto/bcrypt"
)
func hashPassword(password string) (string, error) {
// Generate a new bcrypt hash from the provided password.
hashedPassword, err := bcrypt.GenerateFromPassword([]byte(password), bcrypt.DefaultCost)
if err != nil {
logger.Get().Error(ErrFailedHashPassword.Error(), zap.Error(err))
return "", ErrFailedHashPassword
}
// Convert the hashed password to a string and return it.
return string(hashedPassword), nil
}

264
service/auth/auth.go Normal file
View File

@ -0,0 +1,264 @@
package auth
import (
"crypto/ed25519"
"crypto/x509"
"encoding/hex"
"encoding/pem"
"errors"
"git.lumeweb.com/LumeWeb/portal/config"
"git.lumeweb.com/LumeWeb/portal/db"
"git.lumeweb.com/LumeWeb/portal/logger"
"git.lumeweb.com/LumeWeb/portal/model"
"github.com/kataras/jwt"
"github.com/spf13/viper"
"go.uber.org/zap"
"os"
"path"
"path/filepath"
"strings"
"time"
)
var jwtKey = ed25519.PrivateKey{}
var blocklist *jwt.Blocklist
var (
ErrInvalidEmailPassword = errors.New("Invalid email or password")
ErrPubkeyOnly = errors.New("Only pubkey login is supported")
ErrFailedGenerateToken = errors.New("Failed to generate token")
ErrFailedGenerateKeyChallenge = errors.New("Failed to generate key challenge")
ErrFailedSignJwt = errors.New("Failed to sign jwt")
ErrFailedSaveToken = errors.New("Failed to save token")
ErrFailedDeleteKeyChallenge = errors.New("Failed to delete key challenge")
ErrFailedInvalidateToken = errors.New("Failed to invalidate token")
ErrInvalidKeyChallenge = errors.New("Invalid key challenge")
ErrInvalidPubkey = errors.New("Invalid pubkey")
ErrInvalidSignature = errors.New("Invalid signature")
ErrInvalidToken = errors.New("Invalid token")
)
func Init() {
blocklist = jwt.NewBlocklist(0)
configFile := viper.ConfigFileUsed()
var jwtPemPath string
jwtPemName := "jwt.pem"
if configFile == "" {
jwtPemPath = path.Join(config.ConfigFilePaths[0], jwtPemName)
} else {
jwtPemPath = path.Join(filepath.Dir(configFile), jwtPemName)
}
if _, err := os.Stat(jwtPemPath); err != nil {
_, private, err := ed25519.GenerateKey(nil)
if err != nil {
logger.Get().Fatal("Failed to compute JWT private key", zap.Error(err))
}
privateBytes, err := x509.MarshalPKCS8PrivateKey(private)
if err != nil {
logger.Get().Fatal("Failed to marshal JWT private key", zap.Error(err))
}
var pemPrivateBlock = &pem.Block{
Type: "PRIVATE KEY",
Bytes: privateBytes,
}
pemPrivateFile, err := os.Create(jwtPemPath)
if err != nil {
logger.Get().Fatal("Failed to create empty file for JWT private PEM", zap.Error(err))
}
err = pem.Encode(pemPrivateFile, pemPrivateBlock)
if err != nil {
logger.Get().Fatal("Failed to write JWT private PEM", zap.Error(err))
}
_ = pemPrivateFile.Close()
jwtKey = private
} else {
data, err := os.ReadFile(jwtPemPath)
if err != nil {
logger.Get().Fatal("Failed to read JWT private PEM", zap.Error(err))
}
pemBlock, _ := pem.Decode(data)
if pemBlock == nil {
logger.Get().Fatal("Failed to decode JWT private PEM")
}
privateBytes, err := x509.ParsePKCS8PrivateKey(pemBlock.Bytes)
if err != nil {
logger.Get().Fatal("Failed to unmarshal JWT private PEM", zap.Error(err))
}
jwtKey = privateBytes.(ed25519.PrivateKey)
}
}
func LoginWithPassword(email string, password string) (string, error) {
// Retrieve the account for the given email.
account := model.Account{}
if err := db.Get().Model(&account).Where("email = ?", email).First(&account).Error; err != nil {
logger.Get().Debug(ErrInvalidEmailPassword.Error(), zap.String("email", email))
return "", ErrInvalidEmailPassword
}
if account.Password == nil || len(*account.Password) == 0 {
logger.Get().Debug(ErrPubkeyOnly.Error(), zap.String("email", email))
return "", ErrPubkeyOnly
}
// Verify the provided password against the hashed password stored in the database.
if err := verifyPassword(*account.Password, password); err != nil {
logger.Get().Debug(ErrInvalidEmailPassword.Error(), zap.String("email", email))
return "", ErrInvalidEmailPassword
}
// Generate a JWT token for the authenticated user.
token, err := generateAndSaveLoginToken(account.ID, 24*time.Hour)
if err != nil {
return "", err
}
return token, nil
}
func LoginWithPubkey(pubkey string, challenge string, signature string) (string, error) {
pubkey = strings.ToLower(pubkey)
signature = strings.ToLower(signature)
// Retrieve the key challenge for the given challenge.
challengeObj := model.KeyChallenge{}
if err := db.Get().Model(challengeObj).Where("challenge = ?", challenge).First(&challengeObj).Error; err != nil {
logger.Get().Debug(ErrInvalidKeyChallenge.Error(), zap.Error(err), zap.String("challenge", challenge))
return "", ErrInvalidKeyChallenge
}
verifiedToken, err := jwt.Verify(jwt.EdDSA, jwtKey, []byte(challenge), blocklist)
if err != nil {
logger.Get().Debug(ErrInvalidKeyChallenge.Error(), zap.Error(err), zap.String("challenge", challenge))
return "", ErrInvalidKeyChallenge
}
rawPubKey, err := hex.DecodeString(pubkey)
if err != nil {
logger.Get().Debug(ErrInvalidPubkey.Error(), zap.Error(err), zap.String("pubkey", pubkey))
return "", ErrInvalidPubkey
}
rawSignature, err := hex.DecodeString(signature)
if err != nil {
logger.Get().Debug(ErrInvalidSignature.Error(), zap.Error(err), zap.String("signature", signature))
return "", ErrInvalidSignature
}
publicKeyDecoded := ed25519.PublicKey(rawPubKey)
// Verify the challenge signature.
if !ed25519.Verify(publicKeyDecoded, []byte(challenge), rawSignature) {
logger.Get().Debug(ErrInvalidKeyChallenge.Error(), zap.Error(err), zap.String("challenge", challenge))
return "", ErrInvalidKeyChallenge
}
// Generate a JWT token for the authenticated user.
token, err := generateAndSaveLoginToken(challengeObj.AccountID, 24*time.Hour)
if err != nil {
return "", err
}
err = blocklist.InvalidateToken(verifiedToken.Token, verifiedToken.StandardClaims)
if err != nil {
logger.Get().Error(ErrFailedInvalidateToken.Error(), zap.Error(err), zap.String("pubkey", pubkey), zap.ByteString("token", verifiedToken.Token), zap.String("challenge", challenge))
return "", ErrFailedInvalidateToken
}
if err := db.Get().Delete(&challengeObj).Error; err != nil {
logger.Get().Debug(ErrFailedDeleteKeyChallenge.Error(), zap.Error(err))
return "", ErrFailedDeleteKeyChallenge
}
return token, nil
}
func GeneratePubkeyChallenge(pubkey string) (string, error) {
pubkey = strings.ToLower(pubkey)
// Retrieve the account for the given email.
account := model.Key{}
if err := db.Get().Where("pubkey = ?", pubkey).First(&account).Error; err != nil {
logger.Get().Debug("failed to query pubkey", zap.Error(err))
return "", errors.New("invalid pubkey")
}
// Generate a random challenge string.
challenge, err := generateAndSaveChallengeToken(account.AccountID, time.Minute)
if err != nil {
logger.Get().Error(ErrFailedGenerateKeyChallenge.Error())
return "", ErrFailedGenerateKeyChallenge
}
return challenge, nil
}
func Logout(token string) error {
// Verify the provided token.
claims, err := jwt.Verify(jwt.EdDSA, jwtKey, []byte(token), blocklist)
if err != nil {
logger.Get().Debug(ErrInvalidToken.Error(), zap.Error(err))
return ErrInvalidToken
}
err = blocklist.InvalidateToken(claims.Token, claims.StandardClaims)
if err != nil {
logger.Get().Error(ErrFailedInvalidateToken.Error(), zap.Error(err), zap.String("token", token))
return ErrFailedInvalidateToken
}
// Retrieve the key challenge for the given challenge.
session := model.LoginSession{}
if err := db.Get().Model(session).Where("token = ?", token).First(&session).Error; err != nil {
logger.Get().Debug(ErrFailedInvalidateToken.Error(), zap.Error(err), zap.String("token", token))
return ErrFailedInvalidateToken
}
db.Get().Delete(&session)
return nil
}
func VerifyLoginToken(token string) (*model.Account, error) {
uvt, err := jwt.Decode([]byte(token))
if err != nil {
return nil, ErrInvalidToken
}
var claim jwt.Claims
err = uvt.Claims(&claim)
if err != nil {
return nil, ErrInvalidToken
}
session := model.LoginSession{}
if err := db.Get().Model(session).Preload("Account").Where("token = ?", token).First(&session).Error; err != nil {
logger.Get().Debug(ErrInvalidToken.Error(), zap.Error(err), zap.String("token", token))
return nil, ErrInvalidToken
}
_, err = jwt.Verify(jwt.EdDSA, jwtKey, []byte(token), blocklist)
if err != nil {
db.Get().Delete(&session)
return nil, err
}
return &session.Account, nil
}
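
The pubkey flow is challenge/response: GeneratePubkeyChallenge issues a short-lived JWT as the challenge, the client signs that exact string with its ed25519 private key, and LoginWithPubkey verifies the signature before invalidating the challenge. The client-side sketch below is an illustration, not part of this diff; it also shows why PubkeyLoginRequest requires a 128-character signature, the hex encoding of a 64-byte ed25519 signature.

package auth

import (
    "crypto/ed25519"
    "encoding/hex"
)

// signChallengeExample produces the hex pubkey and signature a client would
// submit to the pubkey login endpoint. Illustrative only.
func signChallengeExample(priv ed25519.PrivateKey, challenge string) (pubkey string, signature string) {
    sig := ed25519.Sign(priv, []byte(challenge))
    pubkey = hex.EncodeToString(priv.Public().(ed25519.PublicKey))
    signature = hex.EncodeToString(sig)
    return pubkey, signature
}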

137
service/auth/util.go Normal file
View File

@ -0,0 +1,137 @@
package auth
import (
"errors"
"git.lumeweb.com/LumeWeb/portal/db"
"git.lumeweb.com/LumeWeb/portal/logger"
"git.lumeweb.com/LumeWeb/portal/model"
"github.com/kataras/iris/v12"
"github.com/kataras/jwt"
"go.uber.org/zap"
"golang.org/x/crypto/bcrypt"
"strconv"
"strings"
"time"
)
// verifyPassword compares the provided plaintext password with a hashed password and returns an error if they don't match.
func verifyPassword(hashedPassword, password string) error {
err := bcrypt.CompareHashAndPassword([]byte(hashedPassword), []byte(password))
if err != nil {
return errors.New("invalid email or password")
}
return nil
}
// generateToken generates a JWT token for the given account ID.
func generateToken(maxAge time.Duration, ttype string) (string, error) {
// Define the JWT claims.
claim := jwt.Claims{
Expiry: time.Now().Add(maxAge).Unix(), // Token expiry follows the requested max age.
IssuedAt: time.Now().Unix(),
Audience: []string{ttype},
}
token, err := jwt.Sign(jwt.EdDSA, jwtKey, claim, jwt.MaxAge(maxAge))
if err != nil {
logger.Get().Error(ErrFailedSignJwt.Error(), zap.Error(err))
return "", err
}
return string(token), nil
}
func generateAndSaveLoginToken(accountID uint, maxAge time.Duration) (string, error) {
// Generate a JWT token for the authenticated user.
token, err := generateToken(maxAge, "auth")
if err != nil {
logger.Get().Error(ErrFailedGenerateToken.Error())
return "", ErrFailedGenerateToken
}
verifiedToken, _ := jwt.Verify(jwt.EdDSA, jwtKey, []byte(token), blocklist)
var claim *jwt.Claims
_ = verifiedToken.Claims(&claim)
// Save the token to the database.
session := model.LoginSession{
Account: model.Account{ID: accountID},
Token: token,
Expiration: claim.ExpiresAt(),
}
existingSession := model.LoginSession{}
err = db.Get().Where("token = ?", token).First(&existingSession).Error
if err == nil {
return token, nil
}
if err := db.Get().Create(&session).Error; err != nil {
if strings.Contains(err.Error(), "Duplicate entry") {
return token, nil
}
logger.Get().Error(ErrFailedSaveToken.Error(), zap.Error(err), zap.Uint("account_id", accountID), zap.Duration("max_age", maxAge))
return "", ErrFailedSaveToken
}
return token, nil
}
func generateAndSaveChallengeToken(accountID uint, maxAge time.Duration) (string, error) {
// Generate a JWT token for the authenticated user.
token, err := generateToken(maxAge, "challenge")
if err != nil {
logger.Get().Error(ErrFailedGenerateToken.Error(), zap.Error(err))
return "", ErrFailedGenerateToken
}
verifiedToken, _ := jwt.Verify(jwt.EdDSA, jwtKey, []byte(token), blocklist)
var claim *jwt.Claims
_ = verifiedToken.Claims(&claim)
// Save the token to the database.
keyChallenge := model.KeyChallenge{
AccountID: accountID,
Challenge: token,
Expiration: claim.ExpiresAt(),
}
if err := db.Get().Create(&keyChallenge).Error; err != nil {
logger.Get().Error(ErrFailedSaveToken.Error(), zap.Error(err))
return "", ErrFailedSaveToken
}
return token, nil
}
func GetRequestAuthCode(ctx iris.Context) string {
authHeader := ctx.GetHeader("Authorization")
if authHeader == "" {
return ""
}
// pure check: authorization header format must be Bearer {token}
authHeaderParts := strings.Split(authHeader, " ")
if len(authHeaderParts) != 2 || strings.ToLower(authHeaderParts[0]) != "bearer" {
return ""
}
return authHeaderParts[1]
}
func GetCurrentUserId(ctx iris.Context) uint {
usr := ctx.User()
if usr == nil {
return 0
}
sid, _ := usr.GetID()
userID, _ := strconv.Atoi(sid)
return uint(userID)
}
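
GetRequestAuthCode accepts only the standard Bearer scheme. A one-line illustration, not part of this diff, of the header a client is expected to send:

package auth

import "net/http"

// exampleAuthHeader is a hypothetical helper showing the expected header form.
func exampleAuthHeader(req *http.Request, token string) {
    req.Header.Set("Authorization", "Bearer "+token)
}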

316
service/files/files.go Normal file
View File

@ -0,0 +1,316 @@
package files
import (
"bytes"
"context"
"encoding/hex"
"errors"
"fmt"
"git.lumeweb.com/LumeWeb/portal/bao"
"git.lumeweb.com/LumeWeb/portal/db"
"git.lumeweb.com/LumeWeb/portal/logger"
"git.lumeweb.com/LumeWeb/portal/model"
"git.lumeweb.com/LumeWeb/portal/shared"
"git.lumeweb.com/LumeWeb/portal/tusstore"
"github.com/go-resty/resty/v2"
"github.com/spf13/viper"
_ "github.com/tus/tusd/pkg/handler"
"go.uber.org/zap"
"gorm.io/gorm"
"io"
"strings"
)
const (
STATUS_UPLOADED = iota
STATUS_UPLOADING = iota
STATUS_NOT_FOUND = iota
)
var (
ErrAlreadyExists = errors.New("Upload already exists")
ErrFailedFetchObject = errors.New("Failed fetching object")
ErrFailedFetchObjectProof = errors.New("Failed fetching object proof")
ErrFailedFetchTusObject = errors.New("Failed fetching tus object")
ErrFailedHashFile = errors.New("Failed to hash file")
ErrFailedQueryTusUpload = errors.New("Failed to query tus uploads")
ErrFailedQueryUpload = errors.New("Failed to query uploads")
ErrFailedQueryPins = errors.New("Failed to query pins")
ErrFailedSaveUpload = errors.New("Failed saving upload to db")
ErrFailedSavePin = errors.New("Failed saving pin to db")
ErrFailedUpload = errors.New("Failed uploading object")
ErrFailedUploadProof = errors.New("Failed uploading object proof")
ErrFileExistsOutOfSync = errors.New("File already exists in network, but missing in database")
ErrFileHashMismatch = errors.New("File hash does not match provided file hash")
ErrInvalidFile = errors.New("Invalid file")
)
var client *resty.Client
func Init() {
client = resty.New()
client.SetBaseURL("http://localhost:9980/api")
client.SetBasicAuth("", viper.GetString("renterd-api-password"))
client.SetDisableWarn(true)
client.SetCloseConnection(true)
}
func Upload(r io.ReadSeeker, size int64, hash []byte, accountID uint) (model.Upload, error) {
var upload model.Upload
tree, hashBytes, err := bao.ComputeTree(r, size)
if err != nil {
logger.Get().Error(ErrFailedHashFile.Error(), zap.Error(err))
return upload, ErrFailedHashFile
}
if hash != nil {
if !bytes.Equal(hashBytes[:], hash) {
logger.Get().Error(ErrFileHashMismatch.Error())
return upload, ErrFileHashMismatch
}
}
hashHex := hex.EncodeToString(hashBytes[:])
_, err = r.Seek(0, io.SeekStart)
if err != nil {
return upload, err
}
result := db.Get().Where(&model.Upload{Hash: hashHex}).First(&upload)
if !errors.Is(result.Error, gorm.ErrRecordNotFound) {
if result.Error != nil {
logger.Get().Error(ErrFailedQueryUpload.Error(), zap.Error(result.Error))
return upload, ErrFailedQueryUpload
}
logger.Get().Info(ErrAlreadyExists.Error())
return upload, nil
}
objectExistsResult, err := client.R().Get(getBusObjectUrl(hashHex))
if err != nil {
logger.Get().Error(ErrFailedQueryUpload.Error(), zap.Error(err))
return upload, ErrFailedQueryUpload
}
objectStatusCode := objectExistsResult.StatusCode()
if objectStatusCode == 500 {
bodyErr := objectExistsResult.String()
if !strings.Contains(bodyErr, "no slabs found") {
logger.Get().Error(ErrFailedFetchObject.Error(), zap.String("error", objectExistsResult.String()))
return upload, ErrFailedFetchObject
}
objectStatusCode = 404
}
proofExistsResult, err := client.R().Get(getBusProofUrl(hashHex))
if err != nil {
logger.Get().Error(ErrFailedFetchObjectProof.Error(), zap.Error(err))
return upload, ErrFailedFetchObjectProof
}
proofStatusCode := proofExistsResult.StatusCode()
if proofStatusCode == 500 {
bodyErr := proofExistsResult.String()
if !strings.Contains(bodyErr, "no slabs found") {
logger.Get().Error(ErrFailedFetchObjectProof.Error(), zap.String("error", proofExistsResult.String()))
return upload, ErrFailedFetchObjectProof
}
proofStatusCode = 404
}
if objectStatusCode != 404 && proofStatusCode != 404 {
logger.Get().Error(ErrFileExistsOutOfSync.Error(), zap.String("hash", hashHex))
return upload, ErrFileExistsOutOfSync
}
ret, err := client.R().SetBody(r).Put(getWorkerObjectUrl(hashHex))
if err != nil {
logger.Get().Error(ErrFailedUpload.Error(), zap.Error(err))
return upload, ErrFailedUpload
}
if ret.StatusCode() != 200 {
logger.Get().Error(ErrFailedUpload.Error(), zap.String("error", ret.String()))
return upload, ErrFailedUpload
}
ret, err = client.R().SetBody(tree).Put(getWorkerProofUrl(hashHex))
if err != nil {
logger.Get().Error(ErrFailedUploadProof.Error(), zap.Error(err))
return upload, ErrFailedUploadProof
}
if ret.StatusCode() != 200 {
logger.Get().Error(ErrFailedUploadProof.Error(), zap.String("error", ret.String()))
return upload, ErrFailedUploadProof
}
upload = model.Upload{
Hash: hashHex,
AccountID: accountID,
}
if err = db.Get().Create(&upload).Error; err != nil {
logger.Get().Error(ErrFailedSaveUpload.Error(), zap.Error(err))
return upload, ErrFailedSaveUpload
}
return upload, nil
}
func Download(hash string) (io.Reader, error) {
uploadItem := db.Get().Table("uploads").Where(&model.Upload{Hash: hash}).Row()
tusItem := db.Get().Table("tus").Where(&model.Tus{Hash: hash}).Row()
if uploadItem.Err() == nil {
fetch, err := client.R().SetDoNotParseResponse(true).Get(getWorkerObjectUrl(hash))
if err != nil {
logger.Get().Error(ErrFailedFetchObject.Error(), zap.Error(err))
return nil, ErrFailedFetchObject
}
return fetch.RawBody(), nil
} else if tusItem.Err() == nil {
var tusData model.Tus
err := tusItem.Scan(&tusData)
if err != nil {
logger.Get().Error(ErrFailedQueryUpload.Error(), zap.Error(err))
return nil, ErrFailedQueryUpload
}
upload, err := getStore().GetUpload(context.Background(), tusData.UploadID)
if err != nil {
logger.Get().Error(ErrFailedQueryTusUpload.Error(), zap.Error(err))
return nil, ErrFailedQueryTusUpload
}
reader, err := upload.GetReader(context.Background())
if err != nil {
logger.Get().Error(ErrFailedFetchTusObject.Error(), zap.Error(err))
return nil, ErrFailedFetchTusObject
}
return reader, nil
} else {
logger.Get().Error(ErrInvalidFile.Error(), zap.String("hash", hash))
return nil, ErrInvalidFile
}
}
func DownloadProof(hash string) (io.Reader, error) {
uploadItem := db.Get().Model(&model.Upload{}).Where(&model.Upload{Hash: hash}).Row()
if uploadItem.Err() != nil {
logger.Get().Debug(ErrInvalidFile.Error(), zap.String("hash", hash))
return nil, ErrInvalidFile
}
fetch, err := client.R().SetDoNotParseResponse(true).Get(getWorkerProofUrl(hash))
if err != nil {
logger.Get().Error(ErrFailedFetchObject.Error(), zap.Error(err))
return nil, ErrFailedFetchObject
}
return fetch.RawBody(), nil
}
func Status(hash string) int {
var count int64
uploadItem := db.Get().Table("uploads").Where(&model.Upload{Hash: hash}).Count(&count)
if uploadItem.Error != nil && !errors.Is(uploadItem.Error, gorm.ErrRecordNotFound) {
logger.Get().Error(ErrFailedQueryUpload.Error(), zap.Error(uploadItem.Error))
}
if count > 0 {
return STATUS_UPLOADED
}
tusItem := db.Get().Table("tus").Where(&model.Tus{Hash: hash}).Count(&count)
if tusItem.Error != nil && !errors.Is(tusItem.Error, gorm.ErrRecordNotFound) {
logger.Get().Error(ErrFailedQueryUpload.Error(), zap.Error(tusItem.Error))
}
if count > 0 {
return STATUS_UPLOADING
}
return STATUS_NOT_FOUND
}
func objectUrlBuilder(hash string, bus bool, proof bool) string {
path := []string{}
if bus {
path = append(path, "bus")
} else {
path = append(path, "worker")
}
path = append(path, "objects")
name := "%s"
if proof {
name = name + ".obao"
}
path = append(path, name)
return fmt.Sprintf(strings.Join(path, "/"), hash)
}
func getBusObjectUrl(hash string) string {
return objectUrlBuilder(hash, true, false)
}
func getWorkerObjectUrl(hash string) string {
return objectUrlBuilder(hash, false, false)
}
func getWorkerProofUrl(hash string) string {
return objectUrlBuilder(hash, false, true)
}
func getBusProofUrl(hash string) string {
return objectUrlBuilder(hash, true, true)
}
func getStore() *tusstore.DbFileStore {
ret := shared.GetTusStore()
return (*ret).(*tusstore.DbFileStore)
}
func Pin(hash string, accountID uint) error {
var upload model.Upload
if result := db.Get().Model(&upload).Where("hash = ?", hash).First(&upload); result.Error != nil {
if !errors.Is(result.Error, gorm.ErrRecordNotFound) {
logger.Get().Error(ErrFailedQueryUpload.Error(), zap.Error(result.Error))
}
return ErrFailedQueryUpload
}
var pin model.Pin
result := db.Get().Model(&pin).Where(&model.Pin{Upload: upload, AccountID: accountID}).First(&pin)
if result.Error != nil && !errors.Is(result.Error, gorm.ErrRecordNotFound) {
logger.Get().Error(ErrFailedQueryPins.Error(), zap.Error(result.Error))
return ErrFailedQueryPins
}
if result.Error == nil {
return nil
}
pin.AccountID = accountID
pin.Upload = upload
result = db.Get().Save(&pin)
if result.Error != nil {
logger.Get().Error(ErrFailedSavePin.Error(), zap.Error(result.Error))
return ErrFailedSavePin
}
return nil
}
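
objectUrlBuilder produces the renterd paths used throughout this file, relative to the base URL configured in Init. The expected outputs, shown as an illustrative snippet that is not part of this diff:

package files

import "fmt"

// exampleObjectUrls prints the four URL shapes used against renterd.
func exampleObjectUrls() {
    hash := "deadbeef"
    fmt.Println(getBusObjectUrl(hash))    // bus/objects/deadbeef
    fmt.Println(getWorkerObjectUrl(hash)) // worker/objects/deadbeef
    fmt.Println(getBusProofUrl(hash))     // bus/objects/deadbeef.obao
    fmt.Println(getWorkerProofUrl(hash))  // worker/objects/deadbeef.obao
}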

51
shared/shared.go Normal file
View File

@ -0,0 +1,51 @@
package shared
import (
tusd "github.com/tus/tusd/pkg/handler"
_ "go.uber.org/zap"
)
type TusFunc func(upload *tusd.Upload) error
var tusQueue *interface{}
var tusStore *interface{}
var tusComposer *interface{}
var tusWorker TusFunc
type tusRequestContextKey int
const (
TusRequestContextKey tusRequestContextKey = iota
)
func SetTusQueue(q interface{}) {
tusQueue = &q
}
func GetTusQueue() *interface{} {
return tusQueue
}
func SetTusStore(s interface{}) {
tusStore = &s
}
func GetTusStore() *interface{} {
return tusStore
}
func SetTusComposer(c interface{}) {
tusComposer = &c
}
func GetTusComposer() *interface{} {
return tusComposer
}
func SetTusWorker(w TusFunc) {
tusWorker = w
}
func GetTusWorker() TusFunc {
return tusWorker
}

222
tus/tus.go Normal file
View File

@ -0,0 +1,222 @@
package tus
import (
"context"
"encoding/hex"
"encoding/json"
"errors"
"git.lumeweb.com/LumeWeb/portal/cid"
"git.lumeweb.com/LumeWeb/portal/db"
"git.lumeweb.com/LumeWeb/portal/logger"
"git.lumeweb.com/LumeWeb/portal/model"
"git.lumeweb.com/LumeWeb/portal/service/files"
"git.lumeweb.com/LumeWeb/portal/shared"
"git.lumeweb.com/LumeWeb/portal/tusstore"
"github.com/golang-queue/queue"
tusd "github.com/tus/tusd/pkg/handler"
"github.com/tus/tusd/pkg/memorylocker"
"go.uber.org/zap"
"golang.org/x/exp/slices"
"gorm.io/gorm"
"io"
"strconv"
)
const TUS_API_PATH = "/files/tus"
const HASH_META_HEADER = "hash"
func Init() *tusd.Handler {
store := &tusstore.DbFileStore{
Path: "/tmp",
}
shared.SetTusStore(store)
composer := tusd.NewStoreComposer()
composer.UseCore(store)
composer.UseConcater(store)
composer.UseLocker(memorylocker.New())
composer.UseTerminater(store)
shared.SetTusComposer(composer)
handler, err := tusd.NewHandler(tusd.Config{
BasePath: "/api/v1" + TUS_API_PATH,
StoreComposer: composer,
PreUploadCreateCallback: func(hook tusd.HookEvent) error {
hash := hook.Upload.MetaData[HASH_META_HEADER]
if len(hash) == 0 {
msg := "missing hash metadata"
logger.Get().Debug(msg)
return errors.New(msg)
}
var upload model.Upload
result := db.Get().Where(&model.Upload{Hash: hash}).First(&upload)
if (result.Error != nil && !errors.Is(result.Error, gorm.ErrRecordNotFound)) || result.RowsAffected > 0 {
hashBytes, err := hex.DecodeString(hash)
if err != nil {
logger.Get().Debug("invalid hash", zap.Error(err))
return err
}
cidString, err := cid.Encode(hashBytes, uint64(hook.Upload.Size))
if err != nil {
logger.Get().Debug("failed to create cid", zap.Error(err))
return err
}
resp, err := json.Marshal(UploadResponse{Cid: cidString})
if err != nil {
logger.Get().Error("failed to create response", zap.Error(err))
return err
}
return tusd.NewHTTPError(errors.New(string(resp)), 304)
}
return nil
},
})
if err != nil {
panic(err)
}
pool := queue.NewPool(5)
shared.SetTusQueue(pool)
shared.SetTusWorker(tusWorker)
go tusStartup()
return handler
}
func tusStartup() {
tusQueue := getQueue()
store := getStore()
rows, err := db.Get().Model(&model.Tus{}).Rows()
if err != nil {
logger.Get().Error("failed to query tus uploads", zap.Error(err))
}
defer rows.Close()
processedHashes := make([]string, 0)
for rows.Next() {
var tusUpload model.Tus
err := db.Get().ScanRows(rows, &tusUpload)
if err != nil {
logger.Get().Error("failed to scan tus records", zap.Error(err))
return
}
upload, err := store.GetUpload(nil, tusUpload.UploadID)
if err != nil {
logger.Get().Error("failed to query tus upload", zap.Error(err))
db.Get().Delete(&tusUpload)
continue
}
if slices.Contains(processedHashes, tusUpload.Hash) {
err := terminateUpload(upload)
if err != nil {
logger.Get().Error("failed to terminate tus upload", zap.Error(err))
}
continue
}
if err := tusQueue.QueueTask(func(ctx context.Context) error {
return tusWorker(&upload)
}); err != nil {
logger.Get().Error("failed to queue tus upload", zap.Error(err))
} else {
processedHashes = append(processedHashes, tusUpload.Hash)
}
}
}
func tusWorker(upload *tusd.Upload) error {
info, err := (*upload).GetInfo(context.Background())
if err != nil {
logger.Get().Error("failed to query tus upload metadata", zap.Error(err))
return err
}
file, err := (*upload).GetReader(context.Background())
if err != nil {
logger.Get().Error("failed reading upload", zap.Error(err))
return err
}
hashHex := info.MetaData[HASH_META_HEADER]
hashBytes, err := hex.DecodeString(hashHex)
if err != nil {
logger.Get().Error("failed decoding hash", zap.Error(err))
tErr := terminateUpload(*upload)
if tErr != nil {
return tErr
}
return err
}
uploader, _ := strconv.Atoi(info.Storage["uploader"])
newUpload, err := files.Upload(file.(io.ReadSeeker), info.Size, hashBytes, uint(uploader))
tErr := terminateUpload(*upload)
if tErr != nil {
return tErr
}
if err != nil {
return err
}
err = files.Pin(newUpload.Hash, newUpload.AccountID)
if err != nil {
return err
}
return nil
}
func terminateUpload(upload tusd.Upload) error {
err := getComposer().Terminater.AsTerminatableUpload(upload).Terminate(context.Background())
if err != nil {
logger.Get().Error("failed deleting tus upload", zap.Error(err))
return err
}
return nil
}
type UploadResponse struct {
Cid string `json:"cid"`
}
func getQueue() *queue.Queue {
ret := shared.GetTusQueue()
return (*ret).(*queue.Queue)
}
func getStore() *tusstore.DbFileStore {
ret := shared.GetTusStore()
return (*ret).(*tusstore.DbFileStore)
}
func getComposer() *tusd.StoreComposer {
ret := shared.GetTusComposer()
return (*ret).(*tusd.StoreComposer)
}
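
PreUploadCreateCallback requires a "hash" metadata entry on every tus upload and uses it to deduplicate against existing uploads. Assuming, as the bao and blake3 dependencies suggest, that this is the hex-encoded blake3 hash of the file contents, a client-side sketch (not part of this diff) would compute it like this:

package tus

import (
    "encoding/hex"

    "lukechampine.com/blake3"
)

// exampleHashMetadata computes the assumed "hash" metadata value for a tus
// upload; the blake3 choice is an assumption based on the bao dependency.
func exampleHashMetadata(fileContents []byte) map[string]string {
    sum := blake3.Sum256(fileContents)
    return map[string]string{
        HASH_META_HEADER: hex.EncodeToString(sum[:]),
    }
}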

316
tusstore/store.go Normal file
View File

@ -0,0 +1,316 @@
package tusstore
import (
"context"
"crypto/rand"
"encoding/hex"
"encoding/json"
"fmt"
"git.lumeweb.com/LumeWeb/portal/db"
"git.lumeweb.com/LumeWeb/portal/logger"
"git.lumeweb.com/LumeWeb/portal/model"
"git.lumeweb.com/LumeWeb/portal/service/auth"
"git.lumeweb.com/LumeWeb/portal/shared"
"github.com/golang-queue/queue"
clone "github.com/huandu/go-clone"
"github.com/kataras/iris/v12"
"github.com/tus/tusd/pkg/handler"
"go.uber.org/zap"
"io"
"os"
"path/filepath"
"strconv"
)
var defaultFilePerm = os.FileMode(0664)
type DbFileStore struct {
// Relative or absolute path to store files in. DbFileStore does not check
// whether the path exists, use os.MkdirAll in this case on your own.
Path string
}
func (store DbFileStore) UseIn(composer *handler.StoreComposer) {
composer.UseCore(store)
composer.UseTerminater(store)
composer.UseConcater(store)
composer.UseLengthDeferrer(store)
}
func (store DbFileStore) NewUpload(ctx context.Context, info handler.FileInfo) (handler.Upload, error) {
if info.ID == "" {
info.ID = uid()
}
binPath := store.binPath(info.ID)
info.Storage = map[string]string{
"Type": "dbstore",
"Path": binPath,
}
// Create binary file with no content
file, err := os.OpenFile(binPath, os.O_CREATE|os.O_WRONLY, defaultFilePerm)
if err != nil {
if os.IsNotExist(err) {
err = fmt.Errorf("upload directory does not exist: %s", store.Path)
}
return nil, err
}
err = file.Close()
if err != nil {
return nil, err
}
irisContext := ctx.Value(shared.TusRequestContextKey).(iris.Context)
upload := &fileUpload{
info: info,
binPath: binPath,
hash: info.MetaData["hash"],
uploader: auth.GetCurrentUserId(irisContext),
}
// writeInfo creates the file by itself if necessary
err = upload.writeInfo()
if err != nil {
return nil, err
}
return upload, nil
}
func (store DbFileStore) GetUpload(ctx context.Context, id string) (handler.Upload, error) {
info := handler.FileInfo{
ID: id,
}
fUpload := &fileUpload{info: info}
record, is404, err := fUpload.getInfo()
if err != nil {
if is404 {
// Interpret os.ErrNotExist as 404 Not Found
err = handler.ErrNotFound
}
return nil, err
}
if err := json.Unmarshal([]byte(record.Info), &info); err != nil {
logger.Get().Error("fail to parse upload meta", zap.Error(err))
return nil, err
}
fUpload.info = info
fUpload.hash = record.Hash
binPath := store.binPath(id)
stat, err := os.Stat(binPath)
if err != nil {
if os.IsNotExist(err) {
// Interpret os.ErrNotExist as 404 Not Found
err = handler.ErrNotFound
}
return nil, err
}
info.Offset = stat.Size()
fUpload.binPath = binPath
return fUpload, nil
}
func (store DbFileStore) AsTerminatableUpload(upload handler.Upload) handler.TerminatableUpload {
return upload.(*fileUpload)
}
func (store DbFileStore) AsLengthDeclarableUpload(upload handler.Upload) handler.LengthDeclarableUpload {
return upload.(*fileUpload)
}
func (store DbFileStore) AsConcatableUpload(upload handler.Upload) handler.ConcatableUpload {
return upload.(*fileUpload)
}
// binPath returns the path to the file storing the binary data.
func (store DbFileStore) binPath(id string) string {
return filepath.Join(store.Path, id)
}
type fileUpload struct {
// info stores the current information about the upload
info handler.FileInfo
// binPath is the path to the binary file (which has no extension)
binPath string
hash string
uploader uint
}
func (upload *fileUpload) GetInfo(ctx context.Context) (handler.FileInfo, error) {
info := clone.Clone(upload.info).(handler.FileInfo)
info.Storage["uploader"] = strconv.Itoa(int(upload.uploader))
return info, nil
}
func (upload *fileUpload) WriteChunk(ctx context.Context, offset int64, src io.Reader) (int64, error) {
file, err := os.OpenFile(upload.binPath, os.O_WRONLY|os.O_APPEND, defaultFilePerm)
if err != nil {
return 0, err
}
defer file.Close()
n, err := io.Copy(file, src)
upload.info.Offset += n
if err != nil {
return n, err
}
if err := upload.writeInfo(); err != nil {
return n, err
}
return n, nil
}
func (upload *fileUpload) GetReader(ctx context.Context) (io.Reader, error) {
return os.Open(upload.binPath)
}
func (upload *fileUpload) Terminate(ctx context.Context) error {
tusUpload := &model.Tus{
UploadID: upload.info.ID,
}
ret := db.Get().Where(tusUpload).Delete(&model.Tus{})
if ret.Error != nil {
logger.Get().Error("failed to delete tus entry", zap.Error(ret.Error))
}
if err := os.Remove(upload.binPath); err != nil {
return err
}
return nil
}
func (upload *fileUpload) ConcatUploads(ctx context.Context, uploads []handler.Upload) (err error) {
file, err := os.OpenFile(upload.binPath, os.O_WRONLY|os.O_APPEND, defaultFilePerm)
if err != nil {
return err
}
defer file.Close()
for _, partialUpload := range uploads {
fileUpload := partialUpload.(*fileUpload)
src, err := os.Open(fileUpload.binPath)
if err != nil {
return err
}
if _, err := io.Copy(file, src); err != nil {
return err
}
}
return
}
func (upload *fileUpload) DeclareLength(ctx context.Context, length int64) error {
upload.info.Size = length
upload.info.SizeIsDeferred = false
return upload.writeInfo()
}
// writeInfo updates the entire information. Everything will be overwritten.
func (upload *fileUpload) writeInfo() error {
data, err := json.Marshal(upload.info)
if err != nil {
return err
}
tusRecord, is404, err := upload.getInfo()
if err != nil && !is404 {
return err
}
if tusRecord != nil {
tusRecord.Info = string(data)
if ret := db.Get().Save(&tusRecord); ret.Error != nil {
logger.Get().Error("failed to update tus entry", zap.Error(ret.Error))
return ret.Error
}
return nil
}
tusRecord = &model.Tus{UploadID: upload.info.ID, Hash: upload.hash, Info: string(data), AccountID: upload.uploader}
if ret := db.Get().Create(&tusRecord); ret.Error != nil {
logger.Get().Error("failed to create tus entry", zap.Error(ret.Error))
return ret.Error
}
return nil
}
func (upload *fileUpload) getInfo() (*model.Tus, bool, error) {
var tusRecord model.Tus
result := db.Get().Where(&model.Tus{UploadID: upload.info.ID}).First(&tusRecord)
if result.Error != nil && result.Error.Error() != "record not found" {
logger.Get().Error("failed to query tus entry", zap.Error(result.Error))
return nil, false, result.Error
}
if result.Error != nil {
return nil, true, result.Error
}
return &tusRecord, false, nil
}
func (upload *fileUpload) FinishUpload(ctx context.Context) error {
if err := getQueue().QueueTask(func(ctx context.Context) error {
upload, err := getStore().GetUpload(nil, upload.info.ID)
if err != nil {
logger.Get().Error("failed to query tus upload", zap.Error(err))
return err
}
return shared.GetTusWorker()(&upload)
}); err != nil {
logger.Get().Error("failed to queue tus upload", zap.Error(err))
return err
}
return nil
}
func uid() string {
id := make([]byte, 16)
_, err := io.ReadFull(rand.Reader, id)
if err != nil {
// This is probably an appropriate way to handle errors from our source
// for random bits.
panic(err)
}
return hex.EncodeToString(id)
}
func getQueue() *queue.Queue {
ret := shared.GetTusQueue()
return (*ret).(*queue.Queue)
}
func getStore() *DbFileStore {
ret := shared.GetTusStore()
return (*ret).(*DbFileStore)
}