Move infra builder to own repo

Kevin van Zonneveld 2015-08-19 16:34:49 +02:00
parent 942ec77216
commit 186e192cb0
15 changed files with 0 additions and 889 deletions

.gitignore

@@ -1,9 +1,3 @@
tusd/data
cover.out
data/
# Infra
env.sh
terraform/
.DS_Store
envs/production/infra-tusd.pem


@@ -1,88 +0,0 @@
# Deploying Infra
## Intro
This repository can create a tusd server from scratch, following this flow:
```yaml
- prepare: Install prerequisites
- init : Refreshes current infra state and saves to terraform.tfstate
- plan : Shows infra changes and saves in an executable plan
- backup : Backs up server state
- launch : Launches virtual machines at a provider (if needed) using Terraform's ./infra.tf
- install: Runs Ansible to install software packages & configuration templates
- upload : Upload the application
- setup : Runs the ./playbook/setup.sh remotely, installing app dependencies and starting it
- show : Displays active platform
```
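Each of these steps doubles as an argument to `control.sh`. A minimal usage sketch, based on the header comments in `control.sh`:

```bash
# Load an environment first; control.sh refuses to run without DEPLOY_ENV
source envs/production/config.sh

# Run a single step only (the optional second argument "done" stops after one step)
./control.sh plan done

# Run a step and every step after it: upload -> setup -> show
./control.sh upload

# Set TSD_DRY_RUN=1 to bail out before any plan is actually applied
TSD_DRY_RUN=1 ./control.sh launch
```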
## Important files
- [envs/production/infra.tf](envs/production/infra.tf) responsible for creating server/ram/cpu/dns
- [playbook/playbook.yml](playbook/playbook.yml) responsible for installing APT packages
- [control.sh](control.sh) executes each step of the flow in a logical order. Relies on Terraform and Ansible.
- [Makefile](Makefile) provides convenience shortcuts such as `make deploy`. [Bash autocomplete](http://blog.jeffterrace.com/2012/09/bash-completion-for-mac-os-x.html) makes this sweet.
- [env.example.sh](env.example.sh) should be copied to `env.sh` and contains the secret keys to the infra provider (Amazon, Google, DigitalOcean, etc.). These keys are necessary to change infra, but not packages & config, as the SSH keys are included in this repo.

Not included in this repo:

- `envs/production/infra-tusd.pem`
- `env.sh`

These contain the keys to create new infra and SSH into the created servers.
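A minimal first-time setup sketch, assuming the defaults from `env.example.sh` (the actual key values are intentionally not in this repo):

```bash
# Copy the template; env.sh stays out of Git
cp env.example.sh env.sh

# Fill in the provider credentials referenced by infra.tf:
#   TSD_AWS_ACCESS_KEY, TSD_AWS_SECRET_KEY, TSD_AWS_ZONE_ID
"${EDITOR:-vi}" env.sh

# Load the secrets plus the selected environment (defaults to production)
source env.sh
```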
## Demo

In this two-minute demo:

- first, a server is provisioned
- the machine type is changed from `c3.large` (2 cores) to `c3.xlarge` (4 cores)
- `make deploy-infra` is run
- it detects the change, replaces the server, and provisions the new one

![terrible](https://cloud.githubusercontent.com/assets/26752/9314635/64b6be5c-452a-11e5-8d00-74e0b023077e.gif)

As you can see, this is a very powerful way to set up many more servers, or to deal with calamities. Since everything is in Git, changes can be reviewed, reverted, etc. `make deploy-infra`, done.
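Roughly what the demo does, step by step; `make deploy-infra` is the target shown above, and the instance type lives in `envs/production/infra.tf`:

```bash
# 1. Edit envs/production/infra.tf and change
#      instance_type = "c3.large"
#    to
#      instance_type = "c3.xlarge"

# 2. Terraform detects the change, replaces the server, and Ansible provisions it again
make deploy-infra
```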
## Prerequisites

### Terraform

> Terraform can set up machines & other resources at nearly all cloud providers.

Installed automatically by `control.sh prepare` if missing.

### terraform-inventory

> Passes the hosts that Terraform created to Ansible.
> Parses the state file and converts it to an Ansible inventory.
**On OSX**

```bash
brew install https://raw.github.com/adammck/terraform-inventory/master/homebrew/terraform-inventory.rb
```

**On Linux**

Either compile the Go build, or use https://github.com/Homebrew/linuxbrew and `brew install` as well.
### Ansible

> A pragmatic, standardized way of provisioning servers with software & configuration.

**On OSX**
```bash
sudo -HE easy_install pip
sudo -HE pip install --upgrade pip
sudo -HE CFLAGS=-Qunused-arguments CPPFLAGS=-Qunused-arguments pip install --upgrade ansible
```
**On Linux**
```bash
sudo -HE easy_install pip
sudo -HE pip install --upgrade pip
sudo -HE pip install --upgrade ansible
```
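A quick sanity check that the prerequisites ended up on your `PATH` (not part of the original flow, just a convenience):

```bash
ansible --version
which terraform-inventory
./terraform/terraform version   # control.sh prepare installs Terraform into ./terraform/
```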


@@ -1,350 +0,0 @@
#!/usr/bin/env bash
# infra-tusd. Copyright (c) 2015, Transloadit Ltd.
#
# This file:
#
# - Runs on a workstation
# - Looks at environment for cloud provider credentials, keys and their locations
# - Takes a 1st argument, the step:
# - prepare: Install prerequisites
# - init : Refreshes current infra state and saves to terraform.tfstate
# - plan : Shows infra changes and saves in an executable plan
# - backup : Backs up server state
# - launch : Launches virtual machines at a provider (if needed) using Terraform's ./infra.tf
# - install: Runs Ansible to install software packages & configuration templates
# - upload : Upload the application
# - setup : Runs the ./playbook/setup.sh remotely, installing app dependencies and starting it
# - show : Displays active platform
# - Takes an optional 2nd argument: "done". If it's set, only 1 step will execute
# - Will cycle through all subsequent steps. So if you choose 'upload', 'setup' will
# automatically be executed
# - Looks at TSD_DRY_RUN environment var. Set it to 1 to just show what will happen
#
# Run as:
#
# ./control.sh upload
#
# Authors:
#
# - Kevin van Zonneveld <kevin@infra-tusd>
set -o pipefail
set -o errexit
set -o nounset
# set -o xtrace
if [ -z "${DEPLOY_ENV}" ]; then
echo "Environment ${DEPLOY_ENV} not recognized. "
echo "Please first source envs/development/config.sh or source envs/production/config.sh"
exit 1
fi
# Set magic variables for current FILE & DIR
__dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
__file="${__dir}/$(basename "${BASH_SOURCE[0]}")"
__base="$(basename ${__file} .sh)"
__rootdir="${__dir}"
__terraformDir="${__rootdir}/terraform"
__envdir="${__rootdir}/envs/${DEPLOY_ENV}"
__playbookdir="${__rootdir}/playbook"
__terraformExe="${__terraformDir}/terraform"
__planfile="${__envdir}/terraform.plan"
__statefile="${__envdir}/terraform.tfstate"
__playbookfile="${__playbookdir}/main.yml"
__terraformVersion="0.6.3"
### Functions
####################################################################################
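# syncUp <remote-path> <local-path...>: rsync local paths up to the server that
# Terraform created (host resolved via `terraform output public_address`)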
function syncUp() {
[ -z "${host:-}" ] && host="$(${__terraformExe} output public_address)"
chmod 600 ${TSD_SSH_KEYPUB_FILE}
chmod 600 ${TSD_SSH_KEY_FILE}
rsync \
--archive \
--delete \
--exclude=.git* \
--exclude=.DS_Store \
--exclude=node_modules \
--exclude=terraform.* \
--itemize-changes \
--checksum \
--no-times \
--no-group \
--no-motd \
--no-owner \
--rsh="ssh \
-i \"${TSD_SSH_KEY_FILE}\" \
-l ${TSD_SSH_USER} \
-o CheckHostIP=no \
-o UserKnownHostsFile=/dev/null \
-o StrictHostKeyChecking=no" \
${@:2} \
${host}:${1}
}
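# syncDown <remote-path> <local-path>: rsync a path from the server down to the
# workstation, excluding logs, secrets, and build artifacts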
function syncDown() {
[ -z "${host:-}" ] && host="$(${__terraformExe} output public_address)"
chmod 600 ${TSD_SSH_KEYPUB_FILE}
chmod 600 ${TSD_SSH_KEY_FILE}
rsync \
--archive \
--delete \
--exclude=.git* \
--exclude=.java* \
--exclude=.* \
--exclude=*.log \
--exclude=*.log.* \
--exclude=*.txt \
--exclude=org.jenkinsci.plugins.ghprb.GhprbTrigger.xml \
--exclude=*.bak \
--exclude=*.hpi \
--exclude=node_modules \
--exclude=.DS_Store \
--exclude=plugins \
--exclude=builds \
--exclude=lastStable \
--exclude=lastSuccessful \
--exclude=*secret* \
--exclude=*identity* \
--exclude=nextBuildNumber \
--exclude=userContent \
--exclude=nodes \
--exclude=updates \
--exclude=terraform.* \
--itemize-changes \
--checksum \
--no-times \
--no-group \
--no-motd \
--no-owner \
--no-perms \
--rsh="ssh \
-i \"${TSD_SSH_KEY_FILE}\" \
-l ${TSD_SSH_USER} \
-o CheckHostIP=no \
-o UserKnownHostsFile=/dev/null \
-o StrictHostKeyChecking=no" \
${host}:${1} \
${2}
}
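# remote [command...]: run the given command on the server over SSH as ${TSD_SSH_USER}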
function remote() {
[ -z "${host:-}" ] && host="$(${__terraformExe} output public_address)"
chmod 600 ${TSD_SSH_KEYPUB_FILE}
chmod 600 ${TSD_SSH_KEY_FILE}
ssh ${host} \
-i "${TSD_SSH_KEY_FILE}" \
-l ${TSD_SSH_USER} \
-o UserKnownHostsFile=/dev/null \
-o CheckHostIP=no \
-o StrictHostKeyChecking=no "${@:-}"
}
# Waits on first host, then does the rest in parallel
# This is so that the leader can be set up, and then all the followers can join
function inParallel () {
cnt=0
for host in $(${__terraformExe} output public_addresses); do
let "cnt = cnt + 1"
if [ "${cnt}" = 1 ]; then
# wait on the leader
${@}
else
${@} &
fi
done
fail=0
for job in $(jobs -p); do
# echo ${job}
wait ${job} || let "fail = fail + 1"
done
if [ "${fail}" -ne 0 ]; then
exit 1
fi
}
### Vars
####################################################################################
dryRun="${TSD_DRY_RUN:-0}"
step="${1:-prepare}"
afterone="${2:-}"
enabled=0
### Runtime
####################################################################################
pushd "${__envdir}" > /dev/null
if [ "${step}" = "remote" ]; then
remote ${@:2}
exit ${?}
fi
if [ "${step}" = "facts" ]; then
ANSIBLE_HOST_KEY_CHECKING=False \
TF_STATE="${__statefile}" \
ansible all \
--user="${TSD_SSH_USER}" \
--private-key="${TSD_SSH_KEY_FILE}" \
--inventory-file="$(which terraform-inventory)" \
--module-name=setup \
--args='filter=ansible_*'
exit ${?}
fi
if [ "${step}" = "backup" ]; then
syncDown "/var/lib/jenkins/" "${__dir}/playbook/templates/"
exit ${?}
fi
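# "restore" stops Redis on the server, uploads the local ./data/redis-dump.rdb,
# fixes ownership, and starts Redis again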
if [ "${step}" = "restore" ]; then
remote "sudo /etc/init.d/redis-server stop || true"
remote "sudo addgroup ubuntu redis || true"
remote "sudo chmod -R g+wr /var/lib/redis"
syncUp "/var/lib/redis/dump.rdb" "./data/redis-dump.rdb"
remote "sudo chown -R redis.redis /var/lib/redis"
remote "sudo /etc/init.d/redis-server start"
exit ${?}
fi
processed=""
for action in "prepare" "init" "plan" "backup" "launch" "install" "upload" "setup" "show"; do
[ "${action}" = "${step}" ] && enabled=1
[ "${enabled}" -eq 0 ] && continue
if [ -n "${processed}" ] && [ "${afterone}" = "done" ]; then
break
fi
echo "--> ${TSD_HOSTNAME} - ${action}"
if [ "${action}" = "prepare" ]; then
os="linux"
if [[ "${OSTYPE}" == "darwin"* ]]; then
os="darwin"
fi
# Install brew/wget on OSX
if [ "${os}" = "darwin" ]; then
[ -z "$(which brew 2>/dev/null)" ] && ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
[ -z "$(which wget 2>/dev/null)" ] && brew install wget
fi
# Install Terraform
arch="amd64"
zipFile="terraform_${__terraformVersion}_${os}_${arch}.zip"
url="https://dl.bintray.com/mitchellh/terraform/${zipFile}"
dir="${__terraformDir}"
mkdir -p "${dir}"
pushd "${dir}" > /dev/null
if [ "$(echo $("${__terraformExe}" version))" != "Terraform v${__terraformVersion}" ]; then
rm -f "${zipFile}" || true
wget "${url}"
unzip -o "${zipFile}"
rm -f "${zipFile}"
fi
"${__terraformExe}" version |grep "Terraform v${__terraformVersion}"
popd > /dev/null
# Install SSH Keys
if [ ! -f "${TSD_SSH_KEY_FILE}" ]; then
echo -e "\n\n" | ssh-keygen -t rsa -C "${TSD_SSH_EMAIL}" -f "${TSD_SSH_KEY_FILE}"
echo "You may need to add ${TSD_SSH_KEYPUB_FILE} to the Digital Ocean"
export TSD_SSH_KEYPUB=$(echo "$(cat "${TSD_SSH_KEYPUB_FILE}")") || true
# Digital Ocean requires this:
export TSD_SSH_KEYPUB_FINGERPRINT="$(ssh-keygen -lf ${TSD_SSH_KEYPUB_FILE} | awk '{print $2}')"
fi
if [ ! -f "${TSD_SSH_KEYPUB_FILE}" ]; then
chmod 600 "${TSD_SSH_KEY_FILE}" || true
ssh-keygen -yf "${TSD_SSH_KEY_FILE}" > "${TSD_SSH_KEYPUB_FILE}"
chmod 600 "${TSD_SSH_KEYPUB_FILE}" || true
fi
processed="${processed} ${action}" && continue
fi
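# Every TSD_* credential and key below is handed to Terraform as a -var flag;
# infra.tf declares matching input variables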
terraformArgs=""
terraformArgs="${terraformArgs} -var TSD_AWS_SECRET_KEY=${TSD_AWS_SECRET_KEY}"
terraformArgs="${terraformArgs} -var TSD_AWS_ACCESS_KEY=${TSD_AWS_ACCESS_KEY}"
terraformArgs="${terraformArgs} -var TSD_AWS_ZONE_ID=${TSD_AWS_ZONE_ID}"
terraformArgs="${terraformArgs} -var TSD_DOMAIN=${TSD_DOMAIN}"
terraformArgs="${terraformArgs} -var TSD_SSH_KEYPUB=\"${TSD_SSH_KEYPUB}\""
terraformArgs="${terraformArgs} -var TSD_SSH_USER=${TSD_SSH_USER}"
terraformArgs="${terraformArgs} -var TSD_SSH_KEY_FILE=${TSD_SSH_KEY_FILE}"
terraformArgs="${terraformArgs} -var TSD_SSH_KEY_NAME=${TSD_SSH_KEY_NAME}"
if [ "${action}" = "init" ]; then
# if [ ! -f ${__statefile} ]; then
# echo "Nothing to refresh yet."
# else
bash -c "${__terraformExe} refresh ${terraformArgs}" || true
# fi
fi
if [ "${action}" = "plan" ]; then
rm -f ${__planfile}
bash -c "${__terraformExe} plan -refresh=false ${terraformArgs} -out ${__planfile}"
processed="${processed} ${action}" && continue
fi
if [ "${action}" = "backup" ]; then
# Save state before possibly destroying machine
processed="${processed} ${action}" && continue
fi
if [ "${action}" = "launch" ]; then
if [ -f ${__planfile} ]; then
echo "--> Press CTRL+C now if you are unsure! Executing plan in ${TSD_VERIFY_TIMEOUT}s..."
[ "${dryRun}" -eq 1 ] && echo "--> Dry run break" && exit 1
sleep ${TSD_VERIFY_TIMEOUT}
# exit 1
${__terraformExe} apply ${__planfile}
git add "${__statefile}" || true
git add "${__statefile}.backup" || true
git commit -m "Save infra state" || true
else
echo "Skipping, no changes. "
fi
processed="${processed} ${action}" && continue
fi
if [ "${action}" = "install" ]; then
ANSIBLE_HOST_KEY_CHECKING=False \
TF_STATE="${__statefile}" \
ansible-playbook \
--user="${TSD_SSH_USER}" \
--private-key="${TSD_SSH_KEY_FILE}" \
--inventory-file="$(which terraform-inventory)" \
--sudo \
"${__playbookfile}"
# inParallel "remote" "bash -c \"source ~/playbook/env/config.sh && sudo -E bash ~/playbook/install.sh\""
processed="${processed} ${action}" && continue
fi
if [ "${action}" = "upload" ]; then
# Upload/Download app here
processed="${processed} ${action}" && continue
fi
if [ "${action}" = "setup" ]; then
# Restart services
processed="${processed} ${action}" && continue
fi
if [ "${action}" = "show" ]; then
echo "http://${TSD_DOMAIN}:${TSD_APP_PORT}"
# for host in $(${__terraformExe} output public_addresses); do
# echo " - http://${host}:${TSD_APP_PORT}"
# done
processed="${processed} ${action}" && continue
fi
done
popd > /dev/null
echo "--> ${TSD_HOSTNAME} - completed:${processed} : )"


@@ -1,17 +0,0 @@
# Rename this file to env.sh; it will be kept out of Git,
# so it is suitable for adding secret keys and such.
# Set magic variables for current FILE & DIR
__dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
__file="${__dir}/$(basename "${BASH_SOURCE[0]}")"
__base="$(basename ${__file} .sh)"
__rootdir="${__dir}"
export DEPLOY_ENV="${DEPLOY_ENV:-production}"
source "${__rootdir}/envs/${DEPLOY_ENV}/config.sh"
# Secret keys here:
# export TSD_AWS_ACCESS_KEY="xyz"
# export TSD_AWS_SECRET_KEY="xyz123"
# export TSD_AWS_ZONE_ID="Z123"


@@ -1,47 +0,0 @@
#!/usr/bin/env bash
# Environment tree:
#
# _default.sh
# ├── development.sh
# │   └── test.sh
# └── production.sh
#    └── staging.sh
#
# This provides DRY flexibility, but in practice I recommend using mainly
# development.sh and production.sh, and duplicating keys between them
# so you can easily compare side by side.
# Then just use _default.sh, test.sh, staging.sh for tweaks, to keep things
# clear.
#
# These variables are mandatory and have special meaning
#
# - NODE_APP_PREFIX="MYAPP" # filter and nest vars starting with MYAPP right into your app
# - NODE_ENV="production" # the environment your program thinks it's running
# - DEPLOY_ENV="staging" # the machine you are actually running on
# - DEBUG=*.* # Used to control debug levels per module
#
# After getting that out of the way, feel free to start hacking away, prefixing all
# vars with MYAPP, a.k.a. an actual short abbreviation of your app name.
export APP_PREFIX="TSD"
export NODE_APP_PREFIX="${APP_PREFIX}"
export TSD_DOMAIN="master.tus.io"
export TSD_APP_DIR="/srv/current"
export TSD_APP_NAME="infra-tusd"
export TSD_APP_PORT="8080"
export TSD_HOSTNAME="$(uname -n)"
export TSD_SERVICE_USER="www-data"
export TSD_SERVICE_GROUP="www-data"
export TSD_SSH_KEY_NAME="infra-tusd"
export TSD_SSH_USER="ubuntu"
export TSD_SSH_EMAIL="hello@infra-tusd"
export TSD_SSH_KEY_FILE="${__envdir}/infra-tusd.pem"
export TSD_SSH_KEYPUB_FILE="${__envdir}/infra-tusd.pub"
export TSD_SSH_KEYPUB=$(echo "$(cat "${TSD_SSH_KEYPUB_FILE}" 2>/dev/null)") || true
export TSD_SSH_KEYPUB_FINGERPRINT="$(ssh-keygen -lf ${TSD_SSH_KEYPUB_FILE} | awk '{print $2}')"
export TSD_VERIFY_TIMEOUT=5


@@ -1,7 +0,0 @@
#!/usr/bin/env bash
__envdir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source ${__envdir}/../_default.sh 2>/dev/null || source ${__envdir}/_default.sh
export DEPLOY_ENV="development"
export NODE_ENV="development"
export DEBUG=tsd:*,express:application


@@ -1,7 +0,0 @@
#!/usr/bin/env bash
__envdir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source ${__envdir}/../_default.sh 2>/dev/null || source ${__envdir}/_default.sh
export DEPLOY_ENV="production"
export NODE_ENV="production"
export DEBUG=""


@@ -1,95 +0,0 @@
variable "TSD_AWS_ACCESS_KEY" {}
variable "TSD_AWS_SECRET_KEY" {}
variable "TSD_AWS_ZONE_ID" {}
variable "TSD_DOMAIN" {}
variable "TSD_SSH_USER" {}
variable "TSD_SSH_KEY_FILE" {}
variable "TSD_SSH_KEY_NAME" {}
variable "ip_kevin" {
default = "62.163.187.106/32"
}
variable "ip_marius" {
default = "84.146.5.70/32"
}
variable "ip_tim" {
default = "24.134.75.132/32"
}
variable "ip_all" {
default = "0.0.0.0/0"
}
provider "aws" {
access_key = "${var.TSD_AWS_ACCESS_KEY}"
secret_key = "${var.TSD_AWS_SECRET_KEY}"
region = "us-east-1"
}
variable "ami" {
// http://cloud-images.ubuntu.com/locator/ec2/
default = {
us-east-1 = "ami-9bce7af0" // us-east-1 trusty 14.04 LTS amd64 ebs-ssd 20150814 ami-9bce7af0
}
}
variable "region" {
default = "us-east-1"
description = "The region of AWS, for AMI lookups."
}
resource "aws_instance" "infra-tusd-server" {
ami = "${lookup(var.ami, var.region)}"
instance_type = "c3.large"
key_name = "${var.TSD_SSH_KEY_NAME}"
security_groups = [
"fw-infra-tusd-main"
]
connection {
user = "ubuntu"
key_file = "${var.TSD_SSH_KEY_FILE}"
}
}
resource "aws_route53_record" "www" {
zone_id = "${var.TSD_AWS_ZONE_ID}"
name = "${var.TSD_DOMAIN}"
type = "CNAME"
ttl = "300"
records = [ "${aws_instance.infra-tusd-server.public_dns}" ]
}
resource "aws_security_group" "fw-infra-tusd-main" {
name = "fw-infra-tusd-main"
description = "Infra tusd"
// SSH
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = [
"${var.ip_kevin}",
"${var.ip_marius}",
"${var.ip_tim}"
]
}
// Web
ingress {
from_port = 8080
to_port = 8080
protocol = "tcp"
cidr_blocks = [
"${var.ip_all}"
]
}
}
output "public_address" {
value = "${aws_instance.infra-tusd-server.0.public_dns}"
}
output "public_addresses" {
value = "${join(\"\n\", aws_instance.infra-tusd-server.*.public_dns)}"
}


@@ -1,102 +0,0 @@
{
"version": 1,
"serial": 3,
"modules": [
{
"path": [
"root"
],
"outputs": {
"public_address": "ec2-54-221-68-232.compute-1.amazonaws.com",
"public_addresses": "ec2-54-221-68-232.compute-1.amazonaws.com"
},
"resources": {
"aws_instance.infra-tusd-server": {
"type": "aws_instance",
"primary": {
"id": "i-cfbac61d",
"attributes": {
"ami": "ami-9bce7af0",
"availability_zone": "us-east-1a",
"ebs_block_device.#": "0",
"ebs_optimized": "false",
"ephemeral_block_device.#": "0",
"id": "i-cfbac61d",
"instance_type": "c3.large",
"key_name": "infra-tusd",
"monitoring": "false",
"private_dns": "ip-10-79-180-222.ec2.internal",
"private_ip": "10.79.180.222",
"public_dns": "ec2-54-221-68-232.compute-1.amazonaws.com",
"public_ip": "54.221.68.232",
"root_block_device.#": "1",
"root_block_device.0.delete_on_termination": "true",
"root_block_device.0.iops": "24",
"root_block_device.0.volume_size": "8",
"root_block_device.0.volume_type": "gp2",
"security_groups.#": "1",
"security_groups.1246499019": "fw-infra-tusd-main",
"source_dest_check": "true",
"tags.#": "0",
"tenancy": "default",
"vpc_security_group_ids.#": "0"
},
"meta": {
"schema_version": "1"
}
}
},
"aws_route53_record.www": {
"type": "aws_route53_record",
"depends_on": [
"aws_instance.infra-tusd-server"
],
"primary": {
"id": "Z3IT8X6U91XY1P_master.tus.io_CNAME",
"attributes": {
"fqdn": "master.tus.io",
"id": "Z3IT8X6U91XY1P_master.tus.io_CNAME",
"name": "master.tus.io",
"records.#": "1",
"records.1559747290": "ec2-54-221-68-232.compute-1.amazonaws.com",
"ttl": "300",
"type": "CNAME",
"zone_id": "Z3IT8X6U91XY1P"
}
}
},
"aws_security_group.fw-infra-tusd-main": {
"type": "aws_security_group",
"primary": {
"id": "sg-2ff78c42",
"attributes": {
"description": "Infra tusd",
"egress.#": "0",
"id": "sg-2ff78c42",
"ingress.#": "2",
"ingress.2968898949.cidr_blocks.#": "3",
"ingress.2968898949.cidr_blocks.0": "24.134.75.132/32",
"ingress.2968898949.cidr_blocks.1": "62.163.187.106/32",
"ingress.2968898949.cidr_blocks.2": "84.146.5.70/32",
"ingress.2968898949.from_port": "22",
"ingress.2968898949.protocol": "tcp",
"ingress.2968898949.security_groups.#": "0",
"ingress.2968898949.self": "false",
"ingress.2968898949.to_port": "22",
"ingress.516175195.cidr_blocks.#": "1",
"ingress.516175195.cidr_blocks.0": "0.0.0.0/0",
"ingress.516175195.from_port": "8080",
"ingress.516175195.protocol": "tcp",
"ingress.516175195.security_groups.#": "0",
"ingress.516175195.self": "false",
"ingress.516175195.to_port": "8080",
"name": "fw-infra-tusd-main",
"owner_id": "402421253186",
"tags.#": "0"
}
}
}
}
}
]
}


@@ -1,102 +0,0 @@
{
"version": 1,
"serial": 3,
"modules": [
{
"path": [
"root"
],
"outputs": {
"public_address": "ec2-54-221-68-232.compute-1.amazonaws.com",
"public_addresses": "ec2-54-221-68-232.compute-1.amazonaws.com"
},
"resources": {
"aws_instance.infra-tusd-server": {
"type": "aws_instance",
"primary": {
"id": "i-cfbac61d",
"attributes": {
"ami": "ami-9bce7af0",
"availability_zone": "us-east-1a",
"ebs_block_device.#": "0",
"ebs_optimized": "false",
"ephemeral_block_device.#": "0",
"id": "i-cfbac61d",
"instance_type": "c3.large",
"key_name": "infra-tusd",
"monitoring": "false",
"private_dns": "ip-10-79-180-222.ec2.internal",
"private_ip": "10.79.180.222",
"public_dns": "ec2-54-221-68-232.compute-1.amazonaws.com",
"public_ip": "54.221.68.232",
"root_block_device.#": "1",
"root_block_device.0.delete_on_termination": "true",
"root_block_device.0.iops": "24",
"root_block_device.0.volume_size": "8",
"root_block_device.0.volume_type": "gp2",
"security_groups.#": "1",
"security_groups.1246499019": "fw-infra-tusd-main",
"source_dest_check": "true",
"tags.#": "0",
"tenancy": "default",
"vpc_security_group_ids.#": "0"
},
"meta": {
"schema_version": "1"
}
}
},
"aws_route53_record.www": {
"type": "aws_route53_record",
"depends_on": [
"aws_instance.infra-tusd-server"
],
"primary": {
"id": "Z3IT8X6U91XY1P_master.tus.io_CNAME",
"attributes": {
"fqdn": "master.tus.io",
"id": "Z3IT8X6U91XY1P_master.tus.io_CNAME",
"name": "master.tus.io",
"records.#": "1",
"records.1559747290": "ec2-54-221-68-232.compute-1.amazonaws.com",
"ttl": "300",
"type": "CNAME",
"zone_id": "Z3IT8X6U91XY1P"
}
}
},
"aws_security_group.fw-infra-tusd-main": {
"type": "aws_security_group",
"primary": {
"id": "sg-2ff78c42",
"attributes": {
"description": "Infra tusd",
"egress.#": "0",
"id": "sg-2ff78c42",
"ingress.#": "2",
"ingress.2968898949.cidr_blocks.#": "3",
"ingress.2968898949.cidr_blocks.0": "24.134.75.132/32",
"ingress.2968898949.cidr_blocks.1": "62.163.187.106/32",
"ingress.2968898949.cidr_blocks.2": "84.146.5.70/32",
"ingress.2968898949.from_port": "22",
"ingress.2968898949.protocol": "tcp",
"ingress.2968898949.security_groups.#": "0",
"ingress.2968898949.self": "false",
"ingress.2968898949.to_port": "22",
"ingress.516175195.cidr_blocks.#": "1",
"ingress.516175195.cidr_blocks.0": "0.0.0.0/0",
"ingress.516175195.from_port": "8080",
"ingress.516175195.protocol": "tcp",
"ingress.516175195.security_groups.#": "0",
"ingress.516175195.self": "false",
"ingress.516175195.to_port": "8080",
"name": "fw-infra-tusd-main",
"owner_id": "402421253186",
"tags.#": "0"
}
}
}
}
}
]
}


@@ -1,7 +0,0 @@
#!/usr/bin/env bash
__envdir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source ${__envdir}/../_default.sh 2>/dev/null || source ${__envdir}/_default.sh
export NODE_ENV="production"
export DEPLOY_ENV="staging"
export DEBUG=""


@@ -1,7 +0,0 @@
#!/usr/bin/env bash
__envdir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source ${__envdir}/../_default.sh 2>/dev/null || source ${__envdir}/_default.sh
export DEPLOY_ENV="test"
export NODE_ENV="test"
export DEBUG="tsd:*"


@@ -1,47 +0,0 @@
- name: Install infra-tusd-server
  hosts: infra-tusd-server
  tasks:
    - name: Common | Add US APT Mirrors
      action: template src=templates/sources.list dest=/etc/apt/sources.list
      register: apt_sources
    - name: Common | Update APT
      apt: upgrade=dist cache_valid_time=3600 update_cache=yes dpkg_options='force-confold,force-confdef'
      when: apt_sources|changed
    - name: Common | Install Packages
      apt: pkg={{ item }} state=present
      with_items:
        - apg
        - build-essential
        - curl
        - git-core
        - htop
        - iotop
        - libpcre3
        - logtail
        - mlocate
        - mtr
        - mysql-client
        - psmisc
        - telnet
        - vim
        - wget
    - name: Common | Add convenience shortcut wtf
      action: lineinfile dest=/home/ubuntu/.bashrc line="alias wtf='sudo tail -f /var/log/*{log,err} /var/log/{dmesg,messages,*{,/*}{log,err}}'"
      register: bashrc
    # - name: Tus | Download tusd
    #   action: foo bar
    #   register: foo bar
    #
    # - name: Tus | Install upstart/systemd file
    #   action: foo bar
    #   register: foo bar
    #
    # - name: Tus | Start service
    #   action: foo bar
    #   register: foo bar


@@ -1,7 +0,0 @@
deb http://us.archive.ubuntu.com/ubuntu/ {{ ansible_distribution_release }} main restricted universe multiverse
deb http://us.archive.ubuntu.com/ubuntu/ {{ ansible_distribution_release }}-updates main restricted universe multiverse
deb http://security.ubuntu.com/ubuntu {{ ansible_distribution_release }}-security main restricted universe multiverse
#deb-src http://us.archive.ubuntu.com/ubuntu/ {{ ansible_distribution_release }} main restricted universe multiverse
#deb-src http://us.archive.ubuntu.com/ubuntu/ {{ ansible_distribution_release }}-updates main restricted universe multiverse
#deb-src http://security.ubuntu.com/ubuntu {{ ansible_distribution_release }}-security main restricted universe multiverse