Remove duplicated documentation and point to developer docs

Matthew Sevey 2021-10-12 09:42:30 -04:00
commit b001e953cc (parent 559a22723a)
2 changed files with 2 additions and 301 deletions

README.md

@@ -26,157 +26,8 @@ For the purposes of complying with our code license, you can use the following S
`fb6c9320bc7e01fbb9cd8d8c3caaa371386928793c736837832e634aaaa484650a3177d6714a`
### MongoDB Setup
Mongo needs a couple of extra steps in order to start a secure cluster.
- Open port 27017 on all nodes that will take part in the cluster. Ideally, you would only open the port for the other
nodes in the cluster.
- Manually add a `mgkey` file under `./docker/data/mongo` with the respective secret (
see [Mongo's keyfile access control](https://docs.mongodb.com/manual/tutorial/enforce-keyfile-access-control-in-existing-replica-set/)
for details).
- Manually run an initialisation `docker run` with extra environment variables that will initialise the admin user with
a password (example below).
- During the initialisation run mentioned above, we need to take two extra steps within the container (see the snippet
after this list):
    - Change the ownership of `mgkey` to `mongodb:mongodb`
    - Change its permissions to 400
- After these steps are done we can open a mongo shell on the primary node and run `rs.add()` in order to add the new
node to the cluster. If you don't know which node is primary, log onto any server, jump into the Mongo container
(`docker exec -it mongo mongo -u admin -p`), and then get the status of the replica set (`rs.status()`).
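A minimal sketch of those two extra steps, assuming the initialisation container is named `mg` (as in the example run
below) and the keyfile is mounted at `/data/mgkey`:
```
docker exec -it mg chown mongodb:mongodb /data/mgkey
docker exec -it mg chmod 400 /data/mgkey
```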
Example initialisation docker run command:
```
docker run \
--rm \
--name mg \
-p 27017:27017 \
-e MONGO_INITDB_ROOT_USERNAME=<admin username> \
-e MONGO_INITDB_ROOT_PASSWORD=<admin password> \
-v /home/user/skynet-webportal/docker/data/mongo/db:/data/db \
-v /home/user/skynet-webportal/docker/data/mongo/mgkey:/data/mgkey \
mongo --keyFile=/data/mgkey --replSet=skynet
```
Regular docker run command:
```
docker run \
--rm \
--name mg \
-p 27017:27017 \
-v /home/user/skynet-webportal/docker/data/mongo/db:/data/db \
-v /home/user/skynet-webportal/docker/data/mongo/mgkey:/data/mgkey \
mongo --keyFile=/data/mgkey --replSet=skynet
```
Cluster initialisation mongo command:
```
rs.initiate(
{
_id : "skynet",
members: [
{ _id : 0, host : "mongo:27017" }
]
}
)
```
Add more nodes when they are ready:
```
rs.add("second.node.net:27017")
```
### Kratos & Oathkeeper Setup
[Kratos](https://www.ory.sh/kratos) is our user management system of choice and
[Oathkeeper](https://www.ory.sh/oathkeeper) is the identity and access proxy.
Most of the needed config is already under `docker/kratos`. The only two things that need to be changed are the config
for Kratos, which might contain your email server password, and the JWKS Oathkeeper uses to sign its JWT tokens.
Make sure to create your own `docker/kratos/config/kratos.yml` by copying the `kratos.yml.sample` in the same directory.
Also make sure to never add that file to source control, because it will most probably contain your email password in
plain text!
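For example, from your portal's root dir:
```
cp docker/kratos/config/kratos.yml.sample docker/kratos/config/kratos.yml
```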
To override the JWKS you will need to directly edit
`docker/kratos/oathkeeper/id_token.jwks.json` and replace it with your generated key set. If you don't know how to
generate a key set you can use this code:
```go
package main

import (
	"encoding/json"
	"log"
	"os"

	"github.com/ory/hydra/jwk"
)

func main() {
	// Generate a 2048-bit RSA signing key pair as a JSON Web Key Set.
	gen := jwk.RS256Generator{
		KeyLength: 2048,
	}
	jwks, err := gen.Generate("", "sig")
	if err != nil {
		log.Fatal(err)
	}
	jsonbuf, err := json.MarshalIndent(jwks, "", " ")
	if err != nil {
		log.Fatalf("failed to generate JSON: %s", err)
	}
	// Write the key set to stdout; redirect it into id_token.jwks.json.
	os.Stdout.Write(jsonbuf)
}
```
While you can put the output of this programme directly into the file mentioned above, you can also remove the public
key from the set and change the `kid` of the private key to not include the prefix `private:`. One way to do that is
sketched below.
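For instance, assuming you have `jq` installed (this exact filter is our illustration, not part of the official setup):
```
jq '.keys |= [ .[] | select(.kid | startswith("private:")) | .kid |= ltrimstr("private:") ]' \
    docker/kratos/oathkeeper/id_token.jwks.json
```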
### CockroachDB Setup
Kratos uses CockroachDB to store its data. For that data to be shared across all nodes that comprise your portal cluster
setup, we need to set up a CockroachDB cluster, complete with secure communication.
#### Generate the certificates for secure communication
For a detailed walk-through, please check out [this guide](https://www.cockroachlabs.com/docs/v20.2/secure-a-cluster.html).
Steps:
1. Start a local cockroach docker instance:
`docker run -d -v "<local dir>:/cockroach/cockroach-secure" --name=crdb cockroachdb/cockroach start --insecure`
1. Get a shell into that instance: `docker exec -it crdb /bin/bash`
1. Go to the directory which we mapped to a local dir: `cd /cockroach/cockroach-secure`
1. Create the subdirectories in which to create certificates and keys: `mkdir certs my-safe-directory`
1. Create the CA (Certificate Authority) certificate and key
pair: `cockroach cert create-ca --certs-dir=certs --ca-key=my-safe-directory/ca.key`
1. Create a client certificate and key pair for the root
user: `cockroach cert create-client root --certs-dir=certs --ca-key=my-safe-directory/ca.key`
1. Create the certificate and key pair for your
nodes: `cockroach cert create-node cockroach mynode.siasky.net --certs-dir=certs --ca-key=my-safe-directory/ca.key`.
Don't forget the `cockroach` node name - it's needed by our docker-compose setup. If you want to create certificates
for more nodes, just delete the `node.*` files (after you've finished the next steps for this node!) and re-run the
above command with the new node name.
1. Put the contents of the `certs` folder under `docker/cockroach/certs/*` under your portal's root dir and store the
content of `my-safe-directory` somewhere safe.
1. Put _another copy_ of those certificates under `docker/kratos/cr_certs` and change the permissions of the `*.key`
files so they can be read by anyone (644) - see the sketch after this list.
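A sketch of that last step, run from your portal's root dir (paths as above):
```
cp docker/cockroach/certs/* docker/kratos/cr_certs/
chmod 644 docker/kratos/cr_certs/*.key
```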
#### Configure your CockroachDB node
Open port 26257 on all nodes that will take part in the cluster. Ideally, you would only open the port for the other
nodes in the cluster.
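For example, with `ufw` (an assumption - use whichever firewall you run), allowing only one of the other cluster nodes:
```
sudo ufw allow from 147.135.37.21 to any port 26257 proto tcp
```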
There is some configuration that needs to be added to your `.env` file (an example fragment follows this list), namely:
1. CR_IP - the public IP of your node
1. CR_CLUSTER_NODES - a comma-separated list of IPs and ports which make up your cluster, e.g.
`95.216.13.185:26257,147.135.37.21:26257,144.76.136.122:26257`. Make sure this list is accurate on every node in the
cluster.
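For example (using the sample addresses above):
```
CR_IP=95.216.13.185
CR_CLUSTER_NODES=95.216.13.185:26257,147.135.37.21:26257,144.76.136.122:26257
```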
## Running a Portal
For those interested in running a Webportal, head over to our developer docs [here](https://docs.siasky.net/developer-guides/operating-a-skynet-webportal) to learn more.
## Contributing
@@ -190,6 +41,3 @@ Verify the Cypress test suite by doing the following:
1. In one terminal screen run `GATSBY_API_URL=https://siasky.net website serve`
1. In a second terminal screen run `yarn cypress run`
## Setting up complete skynet server
A setup guide with installation scripts can be found in [setup-scripts/README.md](./setup-scripts/README.md).

setup-scripts/README.md

@@ -1,147 +0,0 @@
# Skynet Portal Setup Scripts
This directory contains a setup guide and scripts that will install and
configure some basic requirements for running a Skynet Portal. The assumption is
that we are working with a Debian Buster Minimal system or similar.
## Initial Setup
You may want to fork this repository and replace ssh keys in
`setup-scripts/support/authorized_keys` and optionally edit the `setup-scripts/support/tmux.conf` and `setup-scripts/support/bashrc` configurations to fit your needs.
### Step 0: stack overview
- dockerized services inside `docker-compose.yml`
- [sia](https://sia.tech) ([docker hub](https://hub.docker.com/r/nebulouslabs/sia)): storage provider, heart of the portal setup
- [caddy](https://caddyserver.com) ([docker hub](https://hub.docker.com/r/caddy/caddy)): reverse proxy (similar to nginx) that handles ssl out of the box and acts as a transparent entry point
- [openresty](https://openresty.org) ([docker hub](https://hub.docker.com/r/openresty/openresty)): nginx custom build, acts as a cached proxy to siad and exposes all api endpoints
- [health-check](https://github.com/SkynetLabs/skynet-webportal/tree/master/packages/health-check): simple service that runs periodically and collects health data about the server (status and response times) - [read more](https://github.com/SkynetLabs/skynet-webportal/blob/master/packages/health-check/README.md)
- [handshake](https://handshake.org) ([github](https://github.com/handshake-org/hsd)): full handshake node
- [handshake-api](https://github.com/SkynetLabs/skynet-webportal/tree/master/packages/handshake-api): simple API talking to the handshake node - [read more](https://github.com/SkynetLabs/skynet-webportal/blob/master/packages/handshake-api/README.md)
- [website](https://github.com/SkynetLabs/skynet-webportal/tree/master/packages/website): portal frontend application - [read more](https://github.com/SkynetLabs/skynet-webportal/blob/master/packages/website/README.md)
- [kratos](https://www.ory.sh/kratos/): user account management system
- [oathkeeper](https://www.ory.sh/oathkeeper/): identity and access proxy
- discord integration
- [funds-checker](funds-checker.py): script that checks wallet balance and sends status messages to discord periodically
- [health-checker](health-checker.py): script that monitors health-check service for server health issues and reports them to discord periodically
- [log-checker](log-checker.py): script that scans siad logs for critical errors and reports them to discord periodically
- [blocklist-skylink](../scripts/blocklist-skylink.sh): script that can be run locally from a machine that has access to all your skynet portal servers; it blocklists the provided skylink and prunes the nginx cache to ensure it's no longer available (that is a bit much, but it's the best we can do right now without the paid nginx version) - if you want to use it, make sure to adjust the server addresses
### Step 1: setting up server user
1. SSH into a freshly installed Debian machine as a user with sudo access (can be root)
1. `apt-get update && apt-get install sudo libnss3-tools -y` to make sure `sudo` is available
1. `adduser user` to create user called `user` (creates `/home/user` directory)
1. `usermod -aG sudo user` to add this new user to sudo group
1. `sudo groupadd docker` to create a group for docker (it might already exist)
1. `sudo usermod -aG docker user` to add your user to that group
1. Quit the ssh session with `exit` command
You can now ssh into your machine as the user `user`.
### Step 2: setting up environment
1. On your local machine: `ssh-copy-id user@ip-addr` to copy over your ssh key to server
1. On your local machine: `ssh user@ip-addr` to log in to server as user `user`
1. You are now logged in as `user`
**The following steps will be executed on the remote host, logged in as `user`:**
1. `sudo apt-get install git -y` to install git
1. `git clone https://github.com/SkynetLabs/skynet-webportal`
1. `cd skynet-webportal`
1. run the setup scripts in the exact order listed below, providing the sudo password when asked (if one of them fails, you can retry just that one before proceeding further)
1. `/home/user/skynet-webportal/setup-scripts/setup-server.sh`
1. `/home/user/skynet-webportal/setup-scripts/setup-docker-services.sh`
1. `/home/user/skynet-webportal/setup-scripts/setup-health-check-scripts.sh` (optional)
### Step 3: configuring siad
At this point we have almost everything running; we just need to set up your wallet and allowance:
1. Create a new wallet (remember to save the seed)
> `docker exec -it sia siac wallet init`
1. Unlock the wallet (use the seed as password)
> `docker exec -it sia siac wallet unlock`
1. Generate a new wallet address (save it for later to transfer the funds)
> `docker exec -it sia siac wallet address`
1. Set up allowance (the interactive prompts below can also be supplied as flags; see the sketch after this list)
> `docker exec -it sia siac renter setallowance`
1. 10 KS (keep 25 KS in your wallet)
1. default period
1. default number of hosts
1. 4 week renewal time
1. 500 GB expected storage
1. 500 GB expected upload
1. 5 TB expected download
1. default redundancy
1. Set a maximum storage price
> `docker exec -it sia siac renter setallowance --max-storage-price 100SC`
1. Instruct siad to start making 10 contracts per block with many hosts to potentially view the whole network's files
> `docker exec -it sia siac renter setallowance --payment-contract-initial-funding 10SC`
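A sketch of some of the non-default allowance values above as a single non-interactive command, under the assumption
that your siac version supports these flag names and unit formats (verify with
`docker exec -it sia siac renter setallowance --help`):
```
docker exec -it sia siac renter setallowance \
    --amount 10KS \
    --expected-storage 500GB \
    --expected-upload 500GB \
    --expected-download 5TB
```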
### Step 4: configuring docker services
1. edit `/home/user/skynet-webportal/.env` and configure the following environment variables (an example fragment follows this list)
- `PORTAL_DOMAIN` (required) is a skynet portal domain (ex. siasky.net)
- `SERVER_DOMAIN` (optional) is an optional direct server domain (ex. eu-ger-1.siasky.net) - leave blank unless it is different from PORTAL_DOMAIN
- `EMAIL_ADDRESS` is your email address used for communication regarding SSL certification (required if you're using http-01 challenge)
- `SIA_WALLET_PASSWORD` is your wallet password (or seed if you did not set a password)
- `HSD_API_KEY` is a random security key for the handshake integration that gets generated automatically
- `CLOUDFLARE_AUTH_TOKEN` (optional) if using Cloudflare as a DNS load balancer (you need to change it in the Caddyfile too)
- `AWS_ACCESS_KEY_ID` (optional) if using Route53 as a DNS load balancer
- `AWS_SECRET_ACCESS_KEY` (optional) if using Route53 as a DNS load balancer
- `PORTAL_NAME` is a string representing the name of your portal, e.g. `siasky.xyz` or `my skynet portal`
- `DISCORD_WEBHOOK_URL` (required if using Discord notifications) discord webhook url (generate from discord app)
- `DISCORD_MENTION_USER_ID` (optional) add `/cc @user` mention to important messages from webhook (has to be id not user name)
- `DISCORD_MENTION_ROLE_ID` (optional) add `/cc @role` mention to important messages from webhook (has to be id not role name)
- `SKYNET_DB_USER` (optional) if using `accounts` this is the MongoDB username
- `SKYNET_DB_PASS` (optional) if using `accounts` this is the MongoDB password
- `SKYNET_DB_HOST` (optional) if using `accounts` this is the MongoDB address or container name
- `SKYNET_DB_PORT` (optional) if using `accounts` this is the MongoDB port
- `COOKIE_DOMAIN` (optional) if using `accounts` this is the domain to which your cookies will be issued
- `COOKIE_HASH_KEY` (optional) if using `accounts` hashing secret, at least 32 bytes
- `COOKIE_ENC_KEY` (optional) if using `accounts` encryption key, at least 32 bytes
- `S3_BACKUP_PATH` (optional) if using `accounts` and backing up the databases to S3. This path should be an S3 bucket
with a path to the location in the bucket where we want to store the daily backups.
1. `docker-compose up -d` to restart the services so they pick up new env variables
1. add your custom Kratos configuration to `/home/user/skynet-webportal/docker/kratos/config/kratos.yml` (in particular, the credentials for your mail server should be here, rather than in your source control). For a starting point you can take `docker/kratos/config/kratos.yml.sample`.
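An illustrative `.env` fragment using the example values from the list above (replace every value with your own):
```
PORTAL_DOMAIN=siasky.net
SERVER_DOMAIN=eu-ger-1.siasky.net
EMAIL_ADDRESS=user@example.com
SIA_WALLET_PASSWORD=<your wallet password or seed>
PORTAL_NAME=my skynet portal
```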
## Subdomains
It might prove useful for certain skapps to be accessible through a custom subdomain. So instead of being accessed through `https://portal.com/[skylink]`, it would be accessible through `https://[skylink_base32].portal.com`. We call this "subdomain access" and it is made possible by encoding Skylinks using a base32 encoding. We have to use a base32 encoding scheme because subdomains have to be all lower case and the base64 encoded Skylink is case sensitive and thus might contain uppercase characters.
You can convert Skylinks using this [converter skapp](https://convert-skylink.hns.siasky.net). To see how the encoding and decoding works, please follow the link to the repo in the application itself.
There is also an option to access handshake domain through the subdomain using `https://[domain_name].hns.portal.com`.
To configure this on your portal, you have to make sure to configure the following:
## Useful Commands
- Starting the whole stack
> `docker-compose up -d`
- Stopping the whole stack
> `docker-compose down`
- Accessing siac
> `docker exec -it sia siac`
- Portal maintenance
- Pulling portal out for maintenance
> `scripts/portal-down.sh`
- Putting portal back into place after maintenance
> `scripts/portal-up.sh`
- Upgrading portal containers (takes care of pulling it and putting it back)
> `scripts/portal-upgrade.sh`
- Restarting caddy gracefully after making changes to Caddyfile (no downtime)
> `docker exec caddy caddy reload --config /etc/caddy/Caddyfile`
- Restarting nginx gracefully after making changes to nginx configs (no downtime)
> `docker exec nginx openresty -s reload`
- Checking siad service logs (since last hour)
> `docker logs --since 1h $(docker ps -q --filter "name=^sia$")`
- Checking caddy logs (for example in case ssl certificate fails)
> `docker logs caddy -f`
- Checking nginx logs (nginx handles all communication to siad instances)
> `tail -n 50 docker/data/nginx/logs/access.log` to show the last 50 lines of the access log
> `tail -n 50 docker/data/nginx/logs/error.log` to show the last 50 lines of the error log