Cleanup Kratos, Oathkeeper, CockroachDB.
parent 63244f8d9a
commit 991cfc3000
@@ -86,16 +86,6 @@ __pycache__
/.idea/
/venv*

# CockroachDB certificates
docker/cockroach/certs/*.crt
docker/cockroach/certs/*.key
docker/kratos/cr_certs/*.crt
docker/kratos/cr_certs/*.key

# Oathkeeper JWKS signing token
docker/kratos/oathkeeper/id_token.jwks.json
docker/kratos/config/kratos.yml

# Setup-script log files
setup-scripts/serverload.log
setup-scripts/serverload.json

README.md
@@ -89,95 +89,6 @@ Add more nodes when they are ready:
rs.add("second.node.net:27017")
```

### Kratos & Oathkeeper Setup

[Kratos](https://www.ory.sh/kratos) is our user management system of choice and
[Oathkeeper](https://www.ory.sh/oathkeeper) is the identity and access proxy.

Most of the needed config is already under `docker/kratos`. The only two things that need to be changed are the config
for Kratos, which might contain your email server password, and the JWKS Oathkeeper uses to sign its JWT tokens.

Make sure to create your own `docker/kratos/config/kratos.yml` by copying the `kratos.yml.sample` in the same directory.
Also make sure to never add that file to source control, because it will most probably contain your email password in
plain text!

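For example (a minimal sketch, assuming you run it from the portal's root directory):

```bash
# Create your local Kratos config from the sample; the real file is already git-ignored
cp docker/kratos/config/kratos.yml.sample docker/kratos/config/kratos.yml
```
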
To override the JWKS, you will need to directly edit
`docker/kratos/oathkeeper/id_token.jwks.json` and replace it with your generated key set. If you don't know how to
generate a key set, you can use this code:

```go
package main

import (
    "encoding/json"
    "log"
    "os"

    "github.com/ory/hydra/jwk"
)

func main() {
    // Generate a 2048-bit RSA key pair intended for signing ("sig") JWTs.
    gen := jwk.RS256Generator{
        KeyLength: 2048,
    }
    jwks, err := gen.Generate("", "sig")
    if err != nil {
        log.Fatal(err)
    }
    // Print the key set as indented JSON.
    jsonbuf, err := json.MarshalIndent(jwks, "", " ")
    if err != nil {
        log.Fatalf("failed to generate JSON: %s", err)
    }
    os.Stdout.Write(jsonbuf)
}
```

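Assuming you save the snippet above as `main.go` (a hypothetical file name) inside a module that has
`github.com/ory/hydra` as a dependency, you can write the key set straight into the Oathkeeper file:

```bash
# Generate the key set and overwrite the default JWKS (assumed file layout)
go run main.go > docker/kratos/oathkeeper/id_token.jwks.json
```
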
While you can directly put the output of this program into the file mentioned above, you can also remove the public
key from the set and change the `kid` of the private key so that it no longer includes the `private:` prefix.

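If you would rather script that filtering, here is a sketch using `jq` (an assumption - it is not part of this setup),
operating on the key set already written to `docker/kratos/oathkeeper/id_token.jwks.json`:

```bash
# Drop the public key and strip the "private:" prefix from the remaining kid
jq '.keys |= map(select(.kid | startswith("private:")) | .kid |= ltrimstr("private:"))' \
  docker/kratos/oathkeeper/id_token.jwks.json > /tmp/jwks.json \
  && mv /tmp/jwks.json docker/kratos/oathkeeper/id_token.jwks.json
```
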
### CockroachDB Setup

Kratos uses CockroachDB to store its data. For that data to be shared across all nodes that comprise your portal
cluster setup, we need to set up a CockroachDB cluster, complete with secure communication.

#### Generate the certificates for secure communication

For a detailed walk-through, please check out [this guide](https://www.cockroachlabs.com/docs/v20.2/secure-a-cluster.html).

Steps:

1. Start a local cockroach docker instance:
   `docker run -d -v "<local dir>:/cockroach/cockroach-secure" --name=crdb cockroachdb/cockroach start --insecure`
1. Get a shell into that instance: `docker exec -it crdb /bin/bash`
1. Go to the directory which we mapped to a local dir: `cd /cockroach/cockroach-secure`
1. Create the subdirectories in which to create certificates and keys: `mkdir certs my-safe-directory`
1. Create the CA (Certificate Authority) certificate and key
   pair: `cockroach cert create-ca --certs-dir=certs --ca-key=my-safe-directory/ca.key`
1. Create a client certificate and key pair for the root
   user: `cockroach cert create-client root --certs-dir=certs --ca-key=my-safe-directory/ca.key`
1. Create the certificate and key pair for your
   nodes: `cockroach cert create-node cockroach mynode.siasky.net --certs-dir=certs --ca-key=my-safe-directory/ca.key`.
   Don't forget the `cockroach` node name - it's needed by our docker-compose setup. If you want to create certificates
   for more nodes, just delete the `node.*` files (after you've finished the next steps for this node!) and re-run the
   above command with the new node name.
1. Put the contents of the `certs` folder into `docker/cockroach/certs/` under your portal's root dir and store the
   content of `my-safe-directory` somewhere safe.
1. Put _another copy_ of those certificates under `docker/kratos/cr_certs` and change permissions of the `*.key` files,
   so they can be read by anyone (644) - see the snippet after this list.

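A minimal sketch of that last step (assuming it is run from the portal's root directory):

```bash
# Give Kratos its own copy of the certificates and make the keys world-readable
cp docker/cockroach/certs/* docker/kratos/cr_certs/
chmod 644 docker/kratos/cr_certs/*.key
```
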
#### Configure your CockroachDB node

Open port 26257 on all nodes that will take part in the cluster. Ideally, you would only open the port for the other
nodes in the cluster.

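How you open the port depends on your setup; as a sketch with `ufw` (an assumed firewall, not something this repo
configures), you could allow only a fellow cluster node, e.g. one of the example addresses from the list below:

```bash
# Allow CockroachDB traffic only from one specific cluster node (repeat per node)
sudo ufw allow from 147.135.37.21 to any port 26257 proto tcp
```
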
There is some configuration that needs to be added to your `.env` file, namely:

1. CR_IP - the public IP of your node
1. CR_CLUSTER_NODES - a list of IPs and ports which make up your cluster, e.g.
   `95.216.13.185:26257,147.135.37.21:26257,144.76.136.122:26257`. This is the full list of nodes that make up your
   cluster, so make sure it is accurate - see the example after this list.

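Putting it together, the relevant `.env` entries could look like this (example values from above - substitute your own):

```bash
# CockroachDB cluster configuration (example values)
CR_IP=95.216.13.185
CR_CLUSTER_NODES=95.216.13.185:26257,147.135.37.21:26257,144.76.136.122:26257
```
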
## Contributing

### Testing Your Code

@@ -0,0 +1 @@
- Remove ORY Kratos, ORY Oathkeeper, CockroachDB.

@@ -77,21 +77,3 @@ services:
      - 3000
    depends_on:
      - mongo

  cockroach:
    image: cockroachdb/cockroach:v20.2.3
    container_name: cockroach
    restart: unless-stopped
    logging: *default-logging
    env_file:
      - .env
    command: start --advertise-addr=${CR_IP} --join=${CR_CLUSTER_NODES} --certs-dir=/certs --listen-addr=0.0.0.0:26257 --http-addr=0.0.0.0:8080
    volumes:
      - ./docker/data/cockroach/sqlite:/cockroach/cockroach-data
      - ./docker/cockroach/certs:/certs
    ports:
      - "4080:8080"
      - "26257:26257"
    networks:
      shared:
        ipv4_address: 10.10.10.84

@@ -53,23 +53,3 @@ else
  fi
  docker exec mongo rm -rf /data/db/backups/$DT
fi

### COCKROACH DB ###
echo "Creating a backup of CockroachDB:"
# Check if a backup already exists:
totalFoundObjects=$(aws s3 ls $S3_BACKUP_PATH/$DT --recursive --summarize | grep "cockroach" | wc -l)
if [ "$totalFoundObjects" -ge "1" ]; then
  echo "Backup already exists for today. Skipping."
else
  # Create a cockroachdb backup:
  docker exec cockroach \
    cockroach sql \
    --host cockroach:26257 \
    --certs-dir=/certs \
    --execute="BACKUP TO '$S3_BACKUP_PATH/$DT/cockroach/?AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID&AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY';"
  if [[ $? > 0 ]]; then
    echo "Creating a CockroachDB backup failed. Skipping."
  else
    echo "Successfully backed up CockroachDB."
  fi
fi

@@ -28,59 +28,6 @@ if [[ $S3_BACKUP_PATH == "" ]]; then
  exit 1
fi

### COCKROACH DB ###
echo "Restoring CockroachDB."
# Check if the backup exists:
totalFoundObjects=$(aws s3 ls $S3_BACKUP_PATH/$BACKUP --recursive --summarize | grep "cockroach" | wc -l)
if [ "$totalFoundObjects" -eq "0" ]; then
  echo "This backup doesn't exist!"
  exit 1
fi
# Restore the backup:
docker exec cockroach \
  cockroach sql \
  --host cockroach:26257 \
  --certs-dir=/certs \
  --execute="ALTER DATABASE defaultdb RENAME TO defaultdb_backup;"
if [[ $? > 0 ]]; then
  echo "Failed to rename existing CockroachDB database. Exiting."
  exit $?
fi
docker exec cockroach \
  cockroach sql \
  --host cockroach:26257 \
  --certs-dir=/certs \
  --execute="RESTORE DATABASE defaultdb FROM '$S3_BACKUP_PATH/$BACKUP/cockroach?AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID&AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY';"
if [[ $? == 0 ]]; then
  # Restoration succeeded, drop the backup.
  docker exec cockroach \
    cockroach sql \
    --host cockroach:26257 \
    --certs-dir=/certs \
    --execute="DROP DATABASE defaultdb_backup;"
  echo "CockroachDB restoration succeeded."
else
  # Restoration failed, drop the new DB and put back the old one.
  echo "CockroachDB restoration failed, rolling back."
  docker exec cockroach \
    cockroach sql \
    --host cockroach:26257 \
    --certs-dir=/certs \
    --execute="DROP DATABASE defaultdb;"
  docker exec cockroach \
    cockroach sql \
    --host cockroach:26257 \
    --certs-dir=/certs \
    --execute="ALTER DATABASE defaultdb_backup RENAME TO defaultdb;"
  if [[ $? > 0 ]]; then
    echo "ERROR: Rollback failed! Inspect manually!"
    exit $?
  else
    echo "Rollback successful. Restoration cancelled. Exiting."
    exit 0
  fi
fi

### MONGO DB ###
# Check if the backup exists:
totalFoundObjects=$(aws s3 ls $S3_BACKUP_PATH/$BACKUP --recursive --summarize | grep "mongo.tgz" | wc -l)

@@ -19,8 +19,6 @@ You may want to fork this repository and replace ssh keys in
- [handshake](https://handshake.org) ([github](https://github.com/handshake-org/hsd)): full handshake node
- [handshake-api](https://github.com/SkynetLabs/skynet-webportal/tree/master/packages/handshake-api): simple API talking to the handshake node - [read more](https://github.com/SkynetLabs/skynet-webportal/blob/master/packages/handshake-api/README.md)
- [website](https://github.com/SkynetLabs/skynet-webportal/tree/master/packages/website): portal frontend application - [read more](https://github.com/SkynetLabs/skynet-webportal/blob/master/packages/website/README.md)
- [kratos](https://www.ory.sh/kratos/): user account management system
- [oathkeeper](https://www.ory.sh/oathkeeper/): identity and access proxy
- discord integration
  - [funds-checker](funds-checker.py): script that checks wallet balance and sends status messages to discord periodically
  - [health-checker](health-checker.py): script that monitors health-check service for server health issues and reports them to discord periodically

@@ -107,7 +105,6 @@ At this point we have almost everything running, we just need to set up your wal
   with path to the location in the bucket where we want to store the daily backups.

1. `docker-compose up -d` to restart the services so they pick up new env variables
1. add your custom Kratos configuration to `/home/user/skynet-webportal/docker/kratos/config/kratos.yml` (in particular, the credentials for your mail server should be here, rather than in your source control). For a starting point you can take `docker/kratos/config/kratos.yml.sample`.

## Subdomains

@@ -43,8 +43,6 @@ docker-compose --version # sanity check
# * COOKIE_DOMAIN - (optional) if using `accounts` this is the domain to which your cookies will be issued
# * COOKIE_HASH_KEY - (optional) if using `accounts` hashing secret, at least 32 bytes
# * COOKIE_ENC_KEY - (optional) if using `accounts` encryption key, at least 32 bytes
# * CR_IP - (optional) if using `accounts` the public IP/domain of your server, e.g. `helsinki.siasky.net`
# * CR_CLUSTER_NODES - (optional) if using `accounts` the list of servers (with ports) which make up your CockroachDB cluster, e.g. `helsinki.siasky.net:26257,germany.siasky.net:26257,us-east.siasky.net:26257`
if ! [ -f /home/user/skynet-webportal/.env ]; then
  HSD_API_KEY=$(openssl rand -base64 32) # generate safe random key for handshake
  printf "PORTAL_DOMAIN=siasky.net\nSERVER_DOMAIN=\nSKYNET_PORTAL_API=https://siasky.net\nSKYNET_SERVER_API=https://eu-dc-1.siasky.net\nSKYNET_DASHBOARD_URL=https://account.example.com\nEMAIL_ADDRESS=email@example.com\nSIA_WALLET_PASSWORD=\nHSD_API_KEY=${HSD_API_KEY}\nCLOUDFLARE_AUTH_TOKEN=\nAWS_ACCESS_KEY_ID=\nAWS_SECRET_ACCESS_KEY=\nPORTAL_NAME=\nDISCORD_WEBHOOK_URL=\nDISCORD_MENTION_USER_ID=\nDISCORD_MENTION_ROLE_ID=\n" > /home/user/skynet-webportal/.env