# Skynet Portal

## Web application

Use `yarn workspace webapp start` to start the development server.

Use `yarn workspace webapp build` to compile the application to the `/public` directory.
You can use the build parameters below to customize your web application.

- development example: `GATSBY_API_URL=https://siasky.dev yarn workspace webapp start`
- production example: `GATSBY_API_URL=https://siasky.net yarn workspace webapp build`

List of available parameters:

- `GATSBY_API_URL`: override the api url (defaults to location origin)
## License

Skynet uses a custom license. The Skynet License is a source code license that allows you to use, modify, and distribute the software, but you must preserve the payment mechanism in the software.

For the purposes of complying with our code license, you can use the following Siacoin address:

`fb6c9320bc7e01fbb9cd8d8c3caaa371386928793c736837832e634aaaa484650a3177d6714a`
## MongoDB Setup

Mongo needs a couple of extra steps in order to start a secure cluster.

- Open port 27017 on all nodes that will take part in the cluster. Ideally, you would only open the port for the other nodes in the cluster.
- Manually add a `mgkey` file under `./docker/data/mongo` with the respective secret (see Mongo's keyfile access control for details).
- Manually run an initialisation `docker run` with extra environment variables that will initialise the admin user with a password (example below).
- During the initialisation run mentioned above, we need to take two extra steps within the container:
  - Change the ownership of `mgkey` to `mongodb:mongodb`
  - Change its permissions to 400
- After these steps are done, we can open a mongo shell on the master node and run `rs.add()` in order to add the new node to the cluster.
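The keyfile steps above can be sketched as follows. The local path matches the `./docker/data/mongo` location from the list; the ownership and permission changes are shown as comments because they run inside the initialisation container, not on the host:

```shell
# Generate a random keyfile for intra-cluster authentication
# (path assumed from the steps above).
mkdir -p ./docker/data/mongo
openssl rand -base64 756 > ./docker/data/mongo/mgkey

# Inside the initialisation container (e.g. `docker exec -it mg /bin/bash`):
#   chown mongodb:mongodb /data/mgkey
#   chmod 400 /data/mgkey
```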
Example initialisation `docker run` command:

```
docker run \
  --rm \
  --name mg \
  -p 27017:27017 \
  -e MONGO_INITDB_ROOT_USERNAME=<admin username> \
  -e MONGO_INITDB_ROOT_PASSWORD=<admin password> \
  -v /home/user/skynet-webportal/docker/data/mongo/db:/data/db \
  -v /home/user/skynet-webportal/docker/data/mongo/mgkey:/data/mgkey \
  mongo --keyFile=/data/mgkey --replSet=skynet
```
Regular `docker run` command:

```
docker run \
  --rm \
  --name mg \
  -p 27017:27017 \
  -v /home/user/skynet-webportal/docker/data/mongo/db:/data/db \
  -v /home/user/skynet-webportal/docker/data/mongo/mgkey:/data/mgkey \
  mongo --keyFile=/data/mgkey --replSet=skynet
```
Cluster initialisation mongo command:

```
rs.initiate(
  {
    _id: "skynet",
    members: [
      { _id: 0, host: "mongo:27017" }
    ]
  }
)
```

Add more nodes when they are ready:

```
rs.add("second.node.net:27017")
```
## Kratos & Oathkeeper Setup

Kratos is our user management system of choice and Oathkeeper is the identity and access proxy.

Most of the needed config is already under `docker/kratos`. The only two things that need to be changed are the config for Kratos, which might contain your email server password, and the JWKS Oathkeeper uses to sign its JWT tokens.

Make sure to create your own `docker/kratos/config/kratos.yml` by copying the `kratos.yml.sample` in the same directory. Also make sure to never add that file to source control, because it will most probably contain your email password in plain text!

To override the JWKS, you will need to directly edit `docker/kratos/oathkeeper/id_token.jwks.json` and replace it with your generated key set. If you don't know how to generate a key set, you can use this code:
```go
package main

import (
	"encoding/json"
	"log"
	"os"

	"github.com/ory/hydra/jwk"
)

func main() {
	gen := jwk.RS256Generator{
		KeyLength: 2048,
	}
	jwks, err := gen.Generate("", "sig")
	if err != nil {
		log.Fatal(err)
	}
	jsonbuf, err := json.MarshalIndent(jwks, "", "  ")
	if err != nil {
		log.Fatalf("failed to generate JSON: %s", err)
	}
	os.Stdout.Write(jsonbuf)
}
```
While you can directly put the output of this programme into the file mentioned above, you can also remove the public key from the set and change the `kid` of the private key to not include the `private:` prefix.
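As a rough sketch of that `kid` edit, the prefix can be stripped with GNU `sed`. This is demonstrated on an inline sample file with a hypothetical key id rather than the real `docker/kratos/oathkeeper/id_token.jwks.json`; removing the public key from the set remains a manual step:

```shell
# Sample JWKS with a "private:"-prefixed kid (hypothetical key id).
printf '%s\n' '{"keys":[{"kid":"private:3f2a","kty":"RSA","use":"sig"}]}' > /tmp/jwks-sample.json

# Drop the "private:" prefix from the kid in place.
sed -i 's/"kid":"private:/"kid":"/' /tmp/jwks-sample.json

cat /tmp/jwks-sample.json
```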
## CockroachDB Setup

Kratos uses CockroachDB to store its data. For that data to be shared across all nodes that comprise your portal cluster setup, we need to set up a CockroachDB cluster, complete with secure communication.

### Generate the certificates for secure communication

For a detailed walk-through, please check this guide out.
Steps:

- Start a local cockroach docker instance: `docker run -d -v "<local dir>:/cockroach/cockroach-secure" --name=crdb cockroachdb/cockroach start --insecure`
- Get a shell into that instance: `docker exec -it crdb /bin/bash`
- Go to the directory which we mapped to a local dir: `cd /cockroach/cockroach-secure`
- Create the subdirectories in which to create certificates and keys: `mkdir certs my-safe-directory`
- Create the CA (Certificate Authority) certificate and key pair: `cockroach cert create-ca --certs-dir=certs --ca-key=my-safe-directory/ca.key`
- Create a client certificate and key pair for the root user: `cockroach cert create-client root --certs-dir=certs --ca-key=my-safe-directory/ca.key`
- Create the certificate and key pair for your nodes: `cockroach cert create-node cockroach mynode.siasky.net --certs-dir=certs --ca-key=my-safe-directory/ca.key`. Don't forget the `cockroach` node name - it's needed by our docker-compose setup. If you want to create certificates for more nodes, just delete the `node.*` files (after you've finished the next steps for this node!) and re-run the above command with the new node name.
- Put the contents of the `certs` folder under `docker/cockroach/certs/*` under your portal's root dir and store the content of `my-safe-directory` somewhere safe.
- Put another copy of those certificates under `docker/kratos/cr_certs` and change permissions of the `*.key` files, so they can be read by anyone (644).
### Configure your CockroachDB node

There is some configuration that needs to be added to your `.env` file, namely:

- `CR_NODE` - the name of your node
- `CR_IP` - the public IP of your node
- `CR_CLUSTER_NODES` - a list of IPs and ports which make up your cluster, e.g. `95.216.13.185:26257,147.135.37.21:26257,144.76.136.122:26257`. This will be the list of nodes that will make up your cluster, so make sure those are accurate.
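For illustration, a matching `.env` fragment might look like this; the node name is hypothetical and the IPs are the example cluster list from above:

```shell
CR_NODE=mynode
CR_IP=95.216.13.185
CR_CLUSTER_NODES=95.216.13.185:26257,147.135.37.21:26257,144.76.136.122:26257
```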
## Contributing

### Testing Your Code

Before pushing your code, you should verify that it will pass our online test suite.

**Cypress Tests:** verify the Cypress test suite by doing the following:

- In one terminal screen run `GATSBY_API_URL=https://siasky.net yarn workspace webapp start`
- In a second terminal screen run `yarn workspace webapp cypress run`

## Setting up a complete Skynet server

A setup guide with installation scripts can be found in `setup-scripts/README.md`.