Infra construction /cc @Acconut
All this needs now is a wget for the tus binary, and running it.
parent d36573d956
commit 9e5bf0fbe7
@@ -1,3 +1,9 @@
tusd/data
cover.out
data/

# Infra
env.sh
terraform/
.DS_Store
envs/production/infra-tusd.pem
@@ -0,0 +1,88 @@
# Deploying Infra

## Intro

This repository can create a tusd server from scratch, following this flow:

```yaml
- prepare: Install prerequisites
- init   : Refreshes current infra state and saves to terraform.tfstate
- plan   : Shows infra changes and saves in an executable plan
- backup : Backs up server state
- launch : Launches virtual machines at a provider (if needed) using Terraform's ./infra.tf
- install: Runs Ansible to install software packages & configuration templates
- upload : Uploads the application
- setup  : Runs the ./playbook/setup.sh remotely, installing app dependencies and starting it
- show   : Displays active platform
```
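
For a concrete feel of how the flow is driven: `control.sh` (added in this commit) takes the step name as its first argument and then walks through all subsequent steps; pass `done` as a second argument to stop after that single step. A minimal usage sketch:

```bash
# Run the whole flow, starting at the first step
source env.sh && ./control.sh prepare

# Run only the plan step, then stop
source env.sh && ./control.sh plan done
```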

## Important files

- [envs/production/infra.tf](envs/production/infra.tf) responsible for creating server/ram/cpu/dns
- [playbook/playbook.yml](playbook/playbook.yml) responsible for installing APT packages
- [control.sh](control.sh) executes each step of the flow in a logical order. Relies on Terraform and Ansible.
- [Makefile](Makefile) provides convenience shortcuts such as `make deploy` (see the shortcuts below). [Bash autocomplete](http://blog.jeffterrace.com/2012/09/bash-completion-for-mac-os-x.html) makes this sweet.
- [env.example.sh](env.example.sh) should be copied to `env.sh` and contains the secret keys to the infra provider (Amazon, Google, DigitalOcean, etc.). These keys are necessary to change infra, but not packages & config, as the SSH keys are included in this repo. A first-time setup sketch follows below.
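
For day-to-day use, the Makefile targets map onto `control.sh` like this (taken from the Makefile added in this commit):

```bash
make deploy-infra   # first-time / infra changes: source env.sh && ./control.sh prepare
make deploy         # regular use:                source env.sh && ./control.sh install
make backup         # source env.sh && ./control.sh backup
make console        # source env.sh && ./control.sh remote
```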
|
||||||
|
|
||||||
|
|
||||||
|
Not included with this repo:
|
||||||
|
|
||||||
|
- `envs/production/infra-tusd.pem`
|
||||||
|
- `env.sh`
|
||||||
|
|
||||||
|
As these contain the keys to create new infra and ssh into the created servers.
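
A minimal first-time setup sketch (the `TSD_*` variable names come from `env.example.sh`; the values are placeholders you fill in yourself):

```bash
# Copy the template; env.sh is gitignored, so secrets stay out of Git
cp env.example.sh env.sh

# Edit env.sh and set at least:
#   export TSD_AWS_ACCESS_KEY="..."
#   export TSD_AWS_SECRET_KEY="..."
#   export TSD_AWS_ZONE_ID="..."

# env.sh sources envs/production/config.sh by default (DEPLOY_ENV=production)
source env.sh && ./control.sh prepare   # or simply: make deploy-infra
```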

## Demo

In this 2-minute demo:

- first a server is provisioned
- the machine-type is changed from `c3.large` (2 cores) to `c3.xlarge` (4 cores)
- `make deploy-infra` is run
- it detects a change, replaces the server, and provisions it

![terrible](https://cloud.githubusercontent.com/assets/26752/9314635/64b6be5c-452a-11e5-8d00-74e0b023077e.gif)

As you can see, this is a very powerful way to set up many more servers, or to deal with calamities. Since everything is in Git, changes can be reviewed, reverted, etc. `make deploy-infra`, done.

## Prerequisites

### Terraform

> Terraform can set up machines & other resources at nearly all cloud providers

Installed automatically by `control.sh prepare` if missing.

### terraform-inventory

> Passes the hosts that Terraform created to Ansible.
> Parses the state file and converts it to an Ansible inventory.

**On OSX**

    brew install https://raw.github.com/adammck/terraform-inventory/master/homebrew/terraform-inventory.rb

**On Linux**

Either compile it yourself with Go, or use https://github.com/Homebrew/linuxbrew and run the same `brew install`.
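
To sanity-check this glue, you can point terraform-inventory at the state file that `control.sh init` writes; assuming its standard `--list` flag, it prints the inventory Ansible will use:

```bash
# Print the Ansible inventory derived from the current Terraform state
terraform-inventory --list envs/production/terraform.tfstate
```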

### Ansible

> A pragmatic, standardized way of provisioning servers with software & configuration.

**On OSX**

```bash
sudo -HE easy_install pip
sudo -HE pip install --upgrade pip
sudo -HE CFLAGS=-Qunused-arguments CPPFLAGS=-Qunused-arguments pip install --upgrade ansible
```

**On Linux**

```bash
sudo -HE easy_install pip
sudo -HE pip install --upgrade pip
sudo -HE pip install --upgrade ansible
```
@@ -0,0 +1,49 @@
.PHONY: deploy-infra
deploy-infra:
	# Sets up all local & remote dependencies. Useful for first-time uses
	# and to apply infra / software changes.
	@git checkout master
	@test -z "$$(git status --porcelain)" || (echo "Please first commit/clean your Git working directory" && false)
	@git pull
	source env.sh && ./control.sh prepare

.PHONY: deploy-infra-unsafe
deploy-infra-unsafe:
	# Sets up all local & remote dependencies. Useful for first-time uses
	# and to apply infra / software changes.
	# Does not check git index
	@git checkout master
	@git pull
	source env.sh && ./control.sh prepare

.PHONY: deploy
deploy:
	# For regular use. Just uploads the code and restarts the services
	@git checkout master
	@test -z "$$(git status --porcelain)" || (echo "Please first commit/clean your Git working directory" && false)
	@git pull
	source env.sh && ./control.sh install

.PHONY: deploy-unsafe
deploy-unsafe:
	# Does not check git index
	@git checkout master
	@git pull
	source env.sh && ./control.sh install

.PHONY: backup
backup:
	source env.sh && ./control.sh backup

.PHONY: restore
restore:
	source env.sh && ./control.sh restore

.PHONY: facts
facts:
	source env.sh && ./control.sh facts

.PHONY: console
console:
	source env.sh && ./control.sh remote
@@ -0,0 +1,350 @@
#!/usr/bin/env bash
# infra-tusd. Copyright (c) 2015, Transloadit Ltd.
#
# This file:
#
#  - Runs on a workstation
#  - Looks at environment for cloud provider credentials, keys and their locations
#  - Takes a 1st argument, the step:
#    - prepare: Install prerequisites
#    - init   : Refreshes current infra state and saves to terraform.tfstate
#    - plan   : Shows infra changes and saves in an executable plan
#    - backup : Backs up server state
#    - launch : Launches virtual machines at a provider (if needed) using Terraform's ./infra.tf
#    - install: Runs Ansible to install software packages & configuration templates
#    - upload : Uploads the application
#    - setup  : Runs the ./playbook/setup.sh remotely, installing app dependencies and starting it
#    - show   : Displays active platform
#  - Takes an optional 2nd argument: "done". If it's set, only 1 step will execute
#  - Will cycle through all subsequent steps. So if you choose 'upload', 'setup' will
#    automatically be executed
#  - Looks at the TSD_DRY_RUN environment var. Set it to 1 to just show what will happen
#
# Run as:
#
#  ./control.sh upload
#
# Authors:
#
#  - Kevin van Zonneveld <kevin@infra-tusd>

set -o pipefail
set -o errexit
set -o nounset
# set -o xtrace

if [ -z "${DEPLOY_ENV:-}" ]; then
  echo "Environment '${DEPLOY_ENV:-}' not recognized. "
  echo "Please first source envs/development/config.sh or source envs/production/config.sh"
  exit 1
fi

# Set magic variables for current FILE & DIR
__dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
__file="${__dir}/$(basename "${BASH_SOURCE[0]}")"
__base="$(basename ${__file} .sh)"

__rootdir="${__dir}"
__terraformDir="${__rootdir}/terraform"
__envdir="${__rootdir}/envs/${DEPLOY_ENV}"
__playbookdir="${__rootdir}/playbook"
__terraformExe="${__terraformDir}/terraform"

__planfile="${__envdir}/terraform.plan"
__statefile="${__envdir}/terraform.tfstate"
__playbookfile="${__playbookdir}/main.yml"

__terraformVersion="0.6.3"


### Functions
####################################################################################

function syncUp() {
  [ -z "${host:-}" ] && host="$(${__terraformExe} output public_address)"
  chmod 600 ${TSD_SSH_KEYPUB_FILE}
  chmod 600 ${TSD_SSH_KEY_FILE}
  rsync \
   --archive \
   --delete \
   --exclude=.git* \
   --exclude=.DS_Store \
   --exclude=node_modules \
   --exclude=terraform.* \
   --itemize-changes \
   --checksum \
   --no-times \
   --no-group \
   --no-motd \
   --no-owner \
   --rsh="ssh \
     -i \"${TSD_SSH_KEY_FILE}\" \
     -l ${TSD_SSH_USER} \
     -o CheckHostIP=no \
     -o UserKnownHostsFile=/dev/null \
     -o StrictHostKeyChecking=no" \
  ${@:2} \
  ${host}:${1}
}

function syncDown() {
  [ -z "${host:-}" ] && host="$(${__terraformExe} output public_address)"
  chmod 600 ${TSD_SSH_KEYPUB_FILE}
  chmod 600 ${TSD_SSH_KEY_FILE}
  rsync \
   --archive \
   --delete \
   --exclude=.git* \
   --exclude=.java* \
   --exclude=.* \
   --exclude=*.log \
   --exclude=*.log.* \
   --exclude=*.txt \
   --exclude=org.jenkinsci.plugins.ghprb.GhprbTrigger.xml \
   --exclude=*.bak \
   --exclude=*.hpi \
   --exclude=node_modules \
   --exclude=.DS_Store \
   --exclude=plugins \
   --exclude=builds \
   --exclude=lastStable \
   --exclude=lastSuccessful \
   --exclude=*secret* \
   --exclude=*identity* \
   --exclude=nextBuildNumber \
   --exclude=userContent \
   --exclude=nodes \
   --exclude=updates \
   --exclude=terraform.* \
   --itemize-changes \
   --checksum \
   --no-times \
   --no-group \
   --no-motd \
   --no-owner \
   --no-perms \
   --rsh="ssh \
     -i \"${TSD_SSH_KEY_FILE}\" \
     -l ${TSD_SSH_USER} \
     -o CheckHostIP=no \
     -o UserKnownHostsFile=/dev/null \
     -o StrictHostKeyChecking=no" \
  ${host}:${1} \
  ${2}
}

function remote() {
  [ -z "${host:-}" ] && host="$(${__terraformExe} output public_address)"
  chmod 600 ${TSD_SSH_KEYPUB_FILE}
  chmod 600 ${TSD_SSH_KEY_FILE}
  ssh ${host} \
   -i "${TSD_SSH_KEY_FILE}" \
   -l ${TSD_SSH_USER} \
   -o UserKnownHostsFile=/dev/null \
   -o CheckHostIP=no \
   -o StrictHostKeyChecking=no "${@:-}"
}

# Waits on first host, then does the rest in parallel
# This is so that the leader can be setup, and then all the followers can join
function inParallel () {
  cnt=0
  for host in $(${__terraformExe} output public_addresses); do
    let "cnt = cnt + 1"
    if [ "${cnt}" = 1 ]; then
      # wait on the leader
      ${@}
    else
      ${@} &
    fi
  done

  fail=0
  for job in $(jobs -p); do
    # echo ${job}
    wait ${job} || let "fail = fail + 1"
  done
  if [ "${fail}" -ne 0 ]; then
    exit 1
  fi
}


### Vars
####################################################################################

dryRun="${TSD_DRY_RUN:-0}"
step="${1:-prepare}"
afterone="${2:-}"
enabled=0


### Runtime
####################################################################################

pushd "${__envdir}" > /dev/null

if [ "${step}" = "remote" ]; then
  remote ${@:2}
  exit ${?}
fi
if [ "${step}" = "facts" ]; then
  ANSIBLE_HOST_KEY_CHECKING=False \
  TF_STATE="${__statefile}" \
  ansible all \
    --user="${TSD_SSH_USER}" \
    --private-key="${TSD_SSH_KEY_FILE}" \
    --inventory-file="$(which terraform-inventory)" \
    --module-name=setup \
    --args='filter=ansible_*'

  exit ${?}
fi
if [ "${step}" = "backup" ]; then
  syncDown "/var/lib/jenkins/" "${__dir}/playbook/templates/"
  exit ${?}
fi
if [ "${step}" = "restore" ]; then
  remote "sudo /etc/init.d/redis-server stop || true"
  remote "sudo addgroup ubuntu redis || true"
  remote "sudo chmod -R g+wr /var/lib/redis"
  syncUp "/var/lib/redis/dump.rdb" "./data/redis-dump.rdb"
  remote "sudo chown -R redis.redis /var/lib/redis"
  remote "sudo /etc/init.d/redis-server start"
  exit ${?}
fi

processed=""
for action in "prepare" "init" "plan" "backup" "launch" "install" "upload" "setup" "show"; do
  [ "${action}" = "${step}" ] && enabled=1
  [ "${enabled}" -eq 0 ] && continue
  if [ -n "${processed}" ] && [ "${afterone}" = "done" ]; then
    break
  fi

  echo "--> ${TSD_HOSTNAME} - ${action}"

  if [ "${action}" = "prepare" ]; then
    os="linux"
    if [[ "${OSTYPE}" == "darwin"* ]]; then
      os="darwin"
    fi

    # Install brew/wget on OSX
    if [ "${os}" = "darwin" ]; then
      [ -z "$(which brew 2>/dev/null)" ] && ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
      [ -z "$(which wget 2>/dev/null)" ] && brew install wget
    fi

    # Install Terraform
    arch="amd64"
    zipFile="terraform_${__terraformVersion}_${os}_${arch}.zip"
    url="https://dl.bintray.com/mitchellh/terraform/${zipFile}"
    dir="${__terraformDir}"
    mkdir -p "${dir}"
    pushd "${dir}" > /dev/null
      if [ "$(echo $("${__terraformExe}" version))" != "Terraform v${__terraformVersion}" ]; then
        rm -f "${zipFile}" || true
        wget "${url}"
        unzip -o "${zipFile}"
        rm -f "${zipFile}"
      fi
      "${__terraformExe}" version |grep "Terraform v${__terraformVersion}"
    popd > /dev/null

    # Install SSH Keys
    if [ ! -f "${TSD_SSH_KEY_FILE}" ]; then
      echo -e "\n\n" | ssh-keygen -t rsa -C "${TSD_SSH_EMAIL}" -f "${TSD_SSH_KEY_FILE}"
      echo "You may need to add ${TSD_SSH_KEYPUB_FILE} to the Digital Ocean account"
      export TSD_SSH_KEYPUB=$(echo "$(cat "${TSD_SSH_KEYPUB_FILE}")") || true
      # Digital Ocean requires this:
      export TSD_SSH_KEYPUB_FINGERPRINT="$(ssh-keygen -lf ${TSD_SSH_KEYPUB_FILE} | awk '{print $2}')"
    fi
    if [ ! -f "${TSD_SSH_KEYPUB_FILE}" ]; then
      chmod 600 "${TSD_SSH_KEY_FILE}" || true
      ssh-keygen -yf "${TSD_SSH_KEY_FILE}" > "${TSD_SSH_KEYPUB_FILE}"
      chmod 600 "${TSD_SSH_KEYPUB_FILE}" || true
    fi

    processed="${processed} ${action}" && continue
  fi

  terraformArgs=""
  terraformArgs="${terraformArgs} -var TSD_AWS_SECRET_KEY=${TSD_AWS_SECRET_KEY}"
  terraformArgs="${terraformArgs} -var TSD_AWS_ACCESS_KEY=${TSD_AWS_ACCESS_KEY}"
  terraformArgs="${terraformArgs} -var TSD_AWS_ZONE_ID=${TSD_AWS_ZONE_ID}"
  terraformArgs="${terraformArgs} -var TSD_DOMAIN=${TSD_DOMAIN}"
  terraformArgs="${terraformArgs} -var TSD_SSH_KEYPUB=\"${TSD_SSH_KEYPUB}\""
  terraformArgs="${terraformArgs} -var TSD_SSH_USER=${TSD_SSH_USER}"
  terraformArgs="${terraformArgs} -var TSD_SSH_KEY_FILE=${TSD_SSH_KEY_FILE}"
  terraformArgs="${terraformArgs} -var TSD_SSH_KEY_NAME=${TSD_SSH_KEY_NAME}"

  if [ "${action}" = "init" ]; then
    # if [ ! -f ${__statefile} ]; then
    #   echo "Nothing to refresh yet."
    # else
      bash -c "${__terraformExe} refresh ${terraformArgs}" || true
    # fi
  fi

  if [ "${action}" = "plan" ]; then
    rm -f ${__planfile}
    bash -c "${__terraformExe} plan -refresh=false ${terraformArgs} -out ${__planfile}"
    processed="${processed} ${action}" && continue
  fi

  if [ "${action}" = "backup" ]; then
    # Save state before possibly destroying machine
    processed="${processed} ${action}" && continue
  fi

  if [ "${action}" = "launch" ]; then
    if [ -f ${__planfile} ]; then
      echo "--> Press CTRL+C now if you are unsure! Executing plan in ${TSD_VERIFY_TIMEOUT}s..."
      [ "${dryRun}" -eq 1 ] && echo "--> Dry run break" && exit 1
      sleep ${TSD_VERIFY_TIMEOUT}
      # exit 1
      ${__terraformExe} apply ${__planfile}
      git add "${__statefile}" || true
      git add "${__statefile}.backup" || true
      git commit -m "Save infra state" || true
    else
      echo "Skipping, no changes. "
    fi
    processed="${processed} ${action}" && continue
  fi

  if [ "${action}" = "install" ]; then
    ANSIBLE_HOST_KEY_CHECKING=False \
    TF_STATE="${__statefile}" \
    ansible-playbook \
      --user="${TSD_SSH_USER}" \
      --private-key="${TSD_SSH_KEY_FILE}" \
      --inventory-file="$(which terraform-inventory)" \
      --sudo \
      "${__playbookfile}"

    # inParallel "remote" "bash -c \"source ~/playbook/env/config.sh && sudo -E bash ~/playbook/install.sh\""
    processed="${processed} ${action}" && continue
  fi

  if [ "${action}" = "upload" ]; then
    # Upload/Download app here
    processed="${processed} ${action}" && continue
  fi

  if [ "${action}" = "setup" ]; then
    # Restart services
    processed="${processed} ${action}" && continue
  fi

  if [ "${action}" = "show" ]; then
    echo "http://${TSD_DOMAIN}:${TSD_APP_PORT}"
    # for host in $(${__terraformExe} output public_addresses); do
    #   echo " - http://${host}:${TSD_APP_PORT}"
    # done
    processed="${processed} ${action}" && continue
  fi
done
popd > /dev/null

echo "--> ${TSD_HOSTNAME} - completed:${processed} : )"
@@ -0,0 +1,17 @@
# Rename this file to env.sh; it will be kept out of Git,
# so it is suitable for adding secret keys and such.

# Set magic variables for current FILE & DIR
__dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
__file="${__dir}/$(basename "${BASH_SOURCE[0]}")"
__base="$(basename ${__file} .sh)"
__rootdir="${__dir}"

export DEPLOY_ENV="${DEPLOY_ENV:-production}"

source "${__rootdir}/envs/${DEPLOY_ENV}/config.sh"

# Secret keys here:
# export TSD_AWS_ACCESS_KEY="xyz"
# export TSD_AWS_SECRET_KEY="xyz123"
# export TSD_AWS_ZONE_ID="Z123"
@@ -0,0 +1,47 @@
#!/usr/bin/env bash
# Environment tree:
#
#   _default.sh
#   ├── development.sh
#   │   └── test.sh
#   └── production.sh
#       └── staging.sh
#
# This provides DRY flexibility, but in practice I recommend using mainly
# development.sh and production.sh, and duplicating keys between them
# so you can easily compare side by side.
# Then just use _default.sh, test.sh, staging.sh for tweaks, to keep things
# clear.
#
# These variables are mandatory and have special meaning
#
#  - NODE_APP_PREFIX="MYAPP"  # filter and nest vars starting with MYAPP right into your app
#  - NODE_ENV="production"    # the environment your program thinks it's running in
#  - DEPLOY_ENV="staging"     # the machine you are actually running on
#  - DEBUG=*.*                # used to control debug levels per module
#
# After getting that out of the way, feel free to start hacking away, prefixing all
# vars with MYAPP, a.k.a. an actual short abbreviation of your app name.

export APP_PREFIX="TSD"
export NODE_APP_PREFIX="${APP_PREFIX}"

export TSD_DOMAIN="master.tus.io"

export TSD_APP_DIR="/srv/current"
export TSD_APP_NAME="infra-tusd"
export TSD_APP_PORT="8080"
export TSD_HOSTNAME="$(uname -n)"

export TSD_SERVICE_USER="www-data"
export TSD_SERVICE_GROUP="www-data"

export TSD_SSH_KEY_NAME="infra-tusd"
export TSD_SSH_USER="ubuntu"
export TSD_SSH_EMAIL="hello@infra-tusd"
export TSD_SSH_KEY_FILE="${__envdir}/infra-tusd.pem"
export TSD_SSH_KEYPUB_FILE="${__envdir}/infra-tusd.pub"
export TSD_SSH_KEYPUB=$(echo "$(cat "${TSD_SSH_KEYPUB_FILE}" 2>/dev/null)") || true
export TSD_SSH_KEYPUB_FINGERPRINT="$(ssh-keygen -lf ${TSD_SSH_KEYPUB_FILE} | awk '{print $2}')"

export TSD_VERIFY_TIMEOUT=5
@@ -0,0 +1,7 @@
#!/usr/bin/env bash
__envdir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source ${__envdir}/../_default.sh 2>/dev/null || source ${__envdir}/_default.sh

export DEPLOY_ENV="development"
export NODE_ENV="development"
export DEBUG=tsd:*,express:application
@@ -0,0 +1,7 @@
#!/usr/bin/env bash
__envdir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source ${__envdir}/../_default.sh 2>/dev/null || source ${__envdir}/_default.sh

export DEPLOY_ENV="production"
export NODE_ENV="production"
export DEBUG=""
@@ -0,0 +1,95 @@
variable "TSD_AWS_ACCESS_KEY" {}
variable "TSD_AWS_SECRET_KEY" {}
variable "TSD_AWS_ZONE_ID" {}
variable "TSD_DOMAIN" {}
variable "TSD_SSH_USER" {}
variable "TSD_SSH_KEY_FILE" {}
variable "TSD_SSH_KEY_NAME" {}

variable "ip_kevin" {
  default = "62.163.187.106/32"
}
variable "ip_marius" {
  default = "84.146.5.70/32"
}
variable "ip_tim" {
  default = "24.134.75.132/32"
}
variable "ip_all" {
  default = "0.0.0.0/0"
}

provider "aws" {
  access_key = "${var.TSD_AWS_ACCESS_KEY}"
  secret_key = "${var.TSD_AWS_SECRET_KEY}"
  region     = "us-east-1"
}

variable "ami" {
  // http://cloud-images.ubuntu.com/locator/ec2/
  default = {
    us-east-1 = "ami-9bce7af0" // us-east-1 trusty 14.04 LTS amd64 ebs-ssd 20150814 ami-9bce7af0
  }
}

variable "region" {
  default     = "us-east-1"
  description = "The region of AWS, for AMI lookups."
}

resource "aws_instance" "infra-tusd-server" {
  ami           = "${lookup(var.ami, var.region)}"
  instance_type = "c3.large"
  key_name      = "${var.TSD_SSH_KEY_NAME}"
  security_groups = [
    "fw-infra-tusd-main"
  ]

  connection {
    user     = "ubuntu"
    key_file = "${var.TSD_SSH_KEY_FILE}"
  }
}

resource "aws_route53_record" "www" {
  zone_id = "${var.TSD_AWS_ZONE_ID}"
  name    = "${var.TSD_DOMAIN}"
  type    = "CNAME"
  ttl     = "300"
  records = [ "${aws_instance.infra-tusd-server.public_dns}" ]
}

resource "aws_security_group" "fw-infra-tusd-main" {
  name        = "fw-infra-tusd-main"
  description = "Infra tusd"

  // SSH
  ingress {
    from_port = 22
    to_port   = 22
    protocol  = "tcp"
    cidr_blocks = [
      "${var.ip_kevin}",
      "${var.ip_marius}",
      "${var.ip_tim}"
    ]
  }

  // Web
  ingress {
    from_port = 8080
    to_port   = 8080
    protocol  = "tcp"
    cidr_blocks = [
      "${var.ip_all}"
    ]
  }
}

output "public_address" {
  value = "${aws_instance.infra-tusd-server.0.public_dns}"
}

output "public_addresses" {
  value = "${join(\"\n\", aws_instance.infra-tusd-server.*.public_dns)}"
}
@@ -0,0 +1,102 @@
{
    "version": 1,
    "serial": 3,
    "modules": [
        {
            "path": [
                "root"
            ],
            "outputs": {
                "public_address": "ec2-54-221-68-232.compute-1.amazonaws.com",
                "public_addresses": "ec2-54-221-68-232.compute-1.amazonaws.com"
            },
            "resources": {
                "aws_instance.infra-tusd-server": {
                    "type": "aws_instance",
                    "primary": {
                        "id": "i-cfbac61d",
                        "attributes": {
                            "ami": "ami-9bce7af0",
                            "availability_zone": "us-east-1a",
                            "ebs_block_device.#": "0",
                            "ebs_optimized": "false",
                            "ephemeral_block_device.#": "0",
                            "id": "i-cfbac61d",
                            "instance_type": "c3.large",
                            "key_name": "infra-tusd",
                            "monitoring": "false",
                            "private_dns": "ip-10-79-180-222.ec2.internal",
                            "private_ip": "10.79.180.222",
                            "public_dns": "ec2-54-221-68-232.compute-1.amazonaws.com",
                            "public_ip": "54.221.68.232",
                            "root_block_device.#": "1",
                            "root_block_device.0.delete_on_termination": "true",
                            "root_block_device.0.iops": "24",
                            "root_block_device.0.volume_size": "8",
                            "root_block_device.0.volume_type": "gp2",
                            "security_groups.#": "1",
                            "security_groups.1246499019": "fw-infra-tusd-main",
                            "source_dest_check": "true",
                            "tags.#": "0",
                            "tenancy": "default",
                            "vpc_security_group_ids.#": "0"
                        },
                        "meta": {
                            "schema_version": "1"
                        }
                    }
                },
                "aws_route53_record.www": {
                    "type": "aws_route53_record",
                    "depends_on": [
                        "aws_instance.infra-tusd-server"
                    ],
                    "primary": {
                        "id": "Z3IT8X6U91XY1P_master.tus.io_CNAME",
                        "attributes": {
                            "fqdn": "master.tus.io",
                            "id": "Z3IT8X6U91XY1P_master.tus.io_CNAME",
                            "name": "master.tus.io",
                            "records.#": "1",
                            "records.1559747290": "ec2-54-221-68-232.compute-1.amazonaws.com",
                            "ttl": "300",
                            "type": "CNAME",
                            "zone_id": "Z3IT8X6U91XY1P"
                        }
                    }
                },
                "aws_security_group.fw-infra-tusd-main": {
                    "type": "aws_security_group",
                    "primary": {
                        "id": "sg-2ff78c42",
                        "attributes": {
                            "description": "Infra tusd",
                            "egress.#": "0",
                            "id": "sg-2ff78c42",
                            "ingress.#": "2",
                            "ingress.2968898949.cidr_blocks.#": "3",
                            "ingress.2968898949.cidr_blocks.0": "24.134.75.132/32",
                            "ingress.2968898949.cidr_blocks.1": "62.163.187.106/32",
                            "ingress.2968898949.cidr_blocks.2": "84.146.5.70/32",
                            "ingress.2968898949.from_port": "22",
                            "ingress.2968898949.protocol": "tcp",
                            "ingress.2968898949.security_groups.#": "0",
                            "ingress.2968898949.self": "false",
                            "ingress.2968898949.to_port": "22",
                            "ingress.516175195.cidr_blocks.#": "1",
                            "ingress.516175195.cidr_blocks.0": "0.0.0.0/0",
                            "ingress.516175195.from_port": "8080",
                            "ingress.516175195.protocol": "tcp",
                            "ingress.516175195.security_groups.#": "0",
                            "ingress.516175195.self": "false",
                            "ingress.516175195.to_port": "8080",
                            "name": "fw-infra-tusd-main",
                            "owner_id": "402421253186",
                            "tags.#": "0"
                        }
                    }
                }
            }
        }
    ]
}
@@ -0,0 +1,102 @@
{
    "version": 1,
    "serial": 3,
    "modules": [
        {
            "path": [
                "root"
            ],
            "outputs": {
                "public_address": "ec2-54-221-68-232.compute-1.amazonaws.com",
                "public_addresses": "ec2-54-221-68-232.compute-1.amazonaws.com"
            },
            "resources": {
                "aws_instance.infra-tusd-server": {
                    "type": "aws_instance",
                    "primary": {
                        "id": "i-cfbac61d",
                        "attributes": {
                            "ami": "ami-9bce7af0",
                            "availability_zone": "us-east-1a",
                            "ebs_block_device.#": "0",
                            "ebs_optimized": "false",
                            "ephemeral_block_device.#": "0",
                            "id": "i-cfbac61d",
                            "instance_type": "c3.large",
                            "key_name": "infra-tusd",
                            "monitoring": "false",
                            "private_dns": "ip-10-79-180-222.ec2.internal",
                            "private_ip": "10.79.180.222",
                            "public_dns": "ec2-54-221-68-232.compute-1.amazonaws.com",
                            "public_ip": "54.221.68.232",
                            "root_block_device.#": "1",
                            "root_block_device.0.delete_on_termination": "true",
                            "root_block_device.0.iops": "24",
                            "root_block_device.0.volume_size": "8",
                            "root_block_device.0.volume_type": "gp2",
                            "security_groups.#": "1",
                            "security_groups.1246499019": "fw-infra-tusd-main",
                            "source_dest_check": "true",
                            "tags.#": "0",
                            "tenancy": "default",
                            "vpc_security_group_ids.#": "0"
                        },
                        "meta": {
                            "schema_version": "1"
                        }
                    }
                },
                "aws_route53_record.www": {
                    "type": "aws_route53_record",
                    "depends_on": [
                        "aws_instance.infra-tusd-server"
                    ],
                    "primary": {
                        "id": "Z3IT8X6U91XY1P_master.tus.io_CNAME",
                        "attributes": {
                            "fqdn": "master.tus.io",
                            "id": "Z3IT8X6U91XY1P_master.tus.io_CNAME",
                            "name": "master.tus.io",
                            "records.#": "1",
                            "records.1559747290": "ec2-54-221-68-232.compute-1.amazonaws.com",
                            "ttl": "300",
                            "type": "CNAME",
                            "zone_id": "Z3IT8X6U91XY1P"
                        }
                    }
                },
                "aws_security_group.fw-infra-tusd-main": {
                    "type": "aws_security_group",
                    "primary": {
                        "id": "sg-2ff78c42",
                        "attributes": {
                            "description": "Infra tusd",
                            "egress.#": "0",
                            "id": "sg-2ff78c42",
                            "ingress.#": "2",
                            "ingress.2968898949.cidr_blocks.#": "3",
                            "ingress.2968898949.cidr_blocks.0": "24.134.75.132/32",
                            "ingress.2968898949.cidr_blocks.1": "62.163.187.106/32",
                            "ingress.2968898949.cidr_blocks.2": "84.146.5.70/32",
                            "ingress.2968898949.from_port": "22",
                            "ingress.2968898949.protocol": "tcp",
                            "ingress.2968898949.security_groups.#": "0",
                            "ingress.2968898949.self": "false",
                            "ingress.2968898949.to_port": "22",
                            "ingress.516175195.cidr_blocks.#": "1",
                            "ingress.516175195.cidr_blocks.0": "0.0.0.0/0",
                            "ingress.516175195.from_port": "8080",
                            "ingress.516175195.protocol": "tcp",
                            "ingress.516175195.security_groups.#": "0",
                            "ingress.516175195.self": "false",
                            "ingress.516175195.to_port": "8080",
                            "name": "fw-infra-tusd-main",
                            "owner_id": "402421253186",
                            "tags.#": "0"
                        }
                    }
                }
            }
        }
    ]
}
@@ -0,0 +1,7 @@
#!/usr/bin/env bash
__envdir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source ${__envdir}/../_default.sh 2>/dev/null || source ${__envdir}/_default.sh

export NODE_ENV="production"
export DEPLOY_ENV="staging"
export DEBUG=""
@@ -0,0 +1,7 @@
#!/usr/bin/env bash
__envdir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source ${__envdir}/../_default.sh 2>/dev/null || source ${__envdir}/_default.sh

export DEPLOY_ENV="test"
export NODE_ENV="test"
export DEBUG="tsd:*"
@@ -0,0 +1,34 @@
- name: Install infra-tusd-server
  hosts: infra-tusd-server

  tasks:
    - name: Add US APT Mirrors
      action: template src=templates/sources.list dest=/etc/apt/sources.list
      register: apt_sources

    - name: Update APT
      apt: upgrade=dist cache_valid_time=3600 update_cache=yes dpkg_options='force-confold,force-confdef'
      when: apt_sources|changed

    - name: Install Packages
      apt: pkg={{ item }} state=present
      with_items:
        - apg
        - build-essential
        - curl
        - git-core
        - htop
        - iotop
        - libpcre3
        - logtail
        - mlocate
        - mtr
        - mysql-client
        - psmisc
        - telnet
        - vim
        - wget

    - name: Set bashrc
      action: template src=templates/bashrc dest=/home/ubuntu/.bashrc
      register: bashrc
@@ -0,0 +1,116 @@
# ~/.bashrc: executed by bash(1) for non-login shells.
# see /usr/share/doc/bash/examples/startup-files (in the package bash-doc)
# for examples

# If not running interactively, don't do anything
case $- in
    *i*) ;;
      *) return;;
esac

# don't put duplicate lines or lines starting with space in the history.
# See bash(1) for more options
HISTCONTROL=ignoreboth

# append to the history file, don't overwrite it
shopt -s histappend

# for setting history length see HISTSIZE and HISTFILESIZE in bash(1)
HISTSIZE=1000
HISTFILESIZE=2000

# check the window size after each command and, if necessary,
# update the values of LINES and COLUMNS.
shopt -s checkwinsize

# If set, the pattern "**" used in a pathname expansion context will
# match all files and zero or more directories and subdirectories.
#shopt -s globstar

# make less more friendly for non-text input files, see lesspipe(1)
[ -x /usr/bin/lesspipe ] && eval "$(SHELL=/bin/sh lesspipe)"

# set variable identifying the chroot you work in (used in the prompt below)
if [ -z "${debian_chroot:-}" ] && [ -r /etc/debian_chroot ]; then
    debian_chroot=$(cat /etc/debian_chroot)
fi

# set a fancy prompt (non-color, unless we know we "want" color)
case "$TERM" in
    xterm-color) color_prompt=yes;;
esac

# uncomment for a colored prompt, if the terminal has the capability; turned
# off by default to not distract the user: the focus in a terminal window
# should be on the output of commands, not on the prompt
#force_color_prompt=yes

if [ -n "$force_color_prompt" ]; then
    if [ -x /usr/bin/tput ] && tput setaf 1 >&/dev/null; then
        # We have color support; assume it's compliant with Ecma-48
        # (ISO/IEC-6429). (Lack of such support is extremely rare, and such
        # a case would tend to support setf rather than setaf.)
        color_prompt=yes
    else
        color_prompt=
    fi
fi

if [ "$color_prompt" = yes ]; then
    PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '
else
    PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
fi
unset color_prompt force_color_prompt

# If this is an xterm set the title to user@host:dir
case "$TERM" in
xterm*|rxvt*)
    PS1="\[\e]0;${debian_chroot:+($debian_chroot)}\u@\h: \w\a\]$PS1"
    ;;
*)
    ;;
esac

# enable color support of ls and also add handy aliases
if [ -x /usr/bin/dircolors ]; then
    test -r ~/.dircolors && eval "$(dircolors -b ~/.dircolors)" || eval "$(dircolors -b)"
    alias ls='ls --color=auto'
    #alias dir='dir --color=auto'
    #alias vdir='vdir --color=auto'

    alias grep='grep --color=auto'
    alias fgrep='fgrep --color=auto'
    alias egrep='egrep --color=auto'
fi

# some more ls aliases
alias ll='ls -alF'
alias la='ls -A'
alias l='ls -CF'

# Add an "alert" alias for long running commands. Use like so:
#   sleep 10; alert
alias alert='notify-send --urgency=low -i "$([ $? = 0 ] && echo terminal || echo error)" "$(history|tail -n1|sed -e '\''s/^\s*[0-9]\+\s*//;s/[;&|]\s*alert$//'\'')"'

# Alias definitions.
# You may want to put all your additions into a separate file like
# ~/.bash_aliases, instead of adding them here directly.
# See /usr/share/doc/bash-doc/examples in the bash-doc package.

if [ -f ~/.bash_aliases ]; then
    . ~/.bash_aliases
fi

# enable programmable completion features (you don't need to enable
# this, if it's already enabled in /etc/bash.bashrc and /etc/profile
# sources /etc/bash.bashrc).
if ! shopt -oq posix; then
  if [ -f /usr/share/bash-completion/bash_completion ]; then
    . /usr/share/bash-completion/bash_completion
  elif [ -f /etc/bash_completion ]; then
    . /etc/bash_completion
  fi
fi

alias wtf='sudo tail -f /var/log/*{log,err} /var/log/{dmesg,messages,*{,/*}{log,err}}'
@@ -0,0 +1,7 @@
deb http://us.archive.ubuntu.com/ubuntu/ {{ ansible_distribution_release }} main restricted universe multiverse
deb http://us.archive.ubuntu.com/ubuntu/ {{ ansible_distribution_release }}-updates main restricted universe multiverse
deb http://security.ubuntu.com/ubuntu {{ ansible_distribution_release }}-security main restricted universe multiverse

#deb-src http://us.archive.ubuntu.com/ubuntu/ {{ ansible_distribution_release }} main restricted universe multiverse
#deb-src http://us.archive.ubuntu.com/ubuntu/ {{ ansible_distribution_release }}-updates main restricted universe multiverse
#deb-src http://security.ubuntu.com/ubuntu {{ ansible_distribution_release }}-security main restricted universe multiverse