
Compare commits


97 Commits

Author SHA1 Message Date
Derek Smith 29f5b56223
Update project. 2024-02-15 10:22:48 -05:00
Derek Smith 1a0ef1914d
Update project. 2024-02-15 10:09:59 -05:00
Derek Smith aa30705933
Update project head. 2024-02-15 10:05:16 -05:00
Derek Smith f3137873f0
Update project head. 2024-02-11 20:31:35 -05:00
Derek Smith afcc6b5fec
Make LNPLAY server updates. 2024-02-11 20:31:27 -05:00
Derek Smith 01c86c6ed0
Switch to using loop device instead of partition. 2024-02-11 20:14:38 -05:00
Derek Smith cb93e58591
Update project head. 2024-02-09 11:33:58 -05:00
Derek Smith e34a765c69
DISABLE reset.sh 2024-02-09 11:33:44 -05:00
Derek Smith 091e34462e
Update remote.sh for incus. 2024-02-09 11:33:24 -05:00
Derek Smith 10801851c7
Check for prod. 2024-02-09 11:33:01 -05:00
Derek Smith ea7bd1d42c
Add some snapshot logic there. 2024-02-09 11:32:33 -05:00
Derek Smith c457118aec
Fix down script bug. 2024-02-09 11:31:35 -05:00
Derek Smith 5828fd1a38
Run incus init in a pseudo shell 2024-02-09 11:31:09 -05:00
Derek Smith 1973ee54f8
Wire up lnplayserver deployment correctly. 2024-02-09 11:30:42 -05:00
Derek Smith 17e9398588
Nitpicks. 2024-02-09 11:28:29 -05:00
Derek Smith 0edb94cdea
Remove target.sh 2024-02-09 11:26:10 -05:00
Derek Smith 355adb13ab
Revert to apt-get per Zabbly install instructions 2023-12-19 11:29:45 -05:00
Derek Smith 17d7e9c75a
Do not stub out LNPLAY_ENV_FILE if exists 2023-12-19 09:53:59 -05:00
Derek Smith 575deff1d2
Starting of zfs storage volume-based backups. 2023-12-19 09:53:29 -05:00
Derek Smith 6fef36096e
Nits 2023-12-19 09:53:01 -05:00
Derek Smith 0c6e00deb2
Switch to apt and force true on incus install. 2023-12-19 09:51:11 -05:00
Derek Smith 8b6db37380
Restart lnplayserver after image publish. 2023-12-17 16:52:05 -05:00
Derek Smith 5dd4de442d
Fix mac address defaults. 2023-12-17 16:51:21 -05:00
Derek Smith 7abf79e0a7
Spawn lnplay from image if exists. 2023-12-17 16:50:41 -05:00
Derek Smith 56916a7d33
Nitpicks 2023-12-17 16:50:24 -05:00
Derek Smith c4183a0b63
Install urlencode in ss-mgmt. 2023-12-17 16:49:54 -05:00
Derek Smith 4fde7a8a70
Update project. 2023-12-15 16:47:27 -05:00
Derek Smith 7007891e1d
Disable www duplicity backups; switching to storag 2023-12-15 16:47:13 -05:00
Derek Smith a33a0bc459
Update project. 2023-12-14 19:54:25 -05:00
Derek Smith 6f83bf5195
Fix execution bug. 2023-12-14 19:54:18 -05:00
Derek Smith aaab6f3515
Only allow volume sizing after creation. 2023-12-14 19:54:09 -05:00
Derek Smith 6f1f2df6dd
Update mainnet storage volume sizes. 2023-12-14 16:19:42 -05:00
Derek Smith f3ec53521d
Nitpicks and update project. 2023-12-14 12:52:16 -05:00
Derek Smith 042cdb5b7e
Fix storage volume size 2023-12-14 12:51:52 -05:00
Derek Smith 882b5c6bf0
Export BITCOIN_CHAIN 2023-12-14 12:51:18 -05:00
Derek Smith 60efc0a6fc
Remove lnplay storage volumes. 2023-12-14 12:50:49 -05:00
Derek Smith 50d65ebe7c
Fix Server SKIP flags. 2023-12-14 12:47:16 -05:00
Derek Smith 9d7d08b0ed
Update project. 2023-12-12 15:10:07 -05:00
Derek Smith 1eff32efed
Update incus snapshot sytax. 2023-12-12 15:09:19 -05:00
Derek Smith 3f30fa11e2
Fix storage volume names. 2023-12-12 15:09:07 -05:00
Derek Smith 7512cbfd90
Updates Undo 2023-11-30 16:34:44 -05:00
Derek Smith 3004a167c8
Update project 2023-11-29 21:34:29 -05:00
Derek Smith 57e75a88ec
More incus updates. 2023-11-29 21:33:45 -05:00
Derek Smith a2c29b189e
Incus updates for ss-mgmt 2023-11-29 14:04:24 -05:00
Derek Smith 44c2859a84
Update project. 2023-11-29 11:44:40 -05:00
Derek Smith 19997bcf89
Remove install_incus.sh from tabconf branch. 2023-09-23 12:33:31 -06:00
Derek Smith 27e0dc6ccf
rename to incusbr0 2023-09-23 10:26:51 -06:00
Derek Smith f75f8d1bf7
Update project head. 2023-09-22 17:55:38 -06:00
Derek Smith a273488646
Initial Switch to incus. 2023-09-22 17:46:07 -06:00
Derek Smith 559d5f11f4
Scope to default. 2023-09-15 19:52:15 -04:00
Derek Smith 3f15800635
Ensure default project. 2023-09-15 19:34:53 -04:00
Derek Smith a44832c9ba
Add controls over base image creation. 2023-09-15 19:28:48 -04:00
Derek Smith 7e76d5ca43
Update project. 2023-09-15 10:42:50 -04:00
Derek Smith 643fc70873
Update project 2023-09-15 10:28:44 -04:00
Derek Smith 65dfcf7aae
Update project. 2023-09-15 10:25:53 -04:00
Derek Smith a2050f4796
Use '-q' with lxc init commands. 2023-09-15 09:51:25 -04:00
Derek Smith e5bdd0d6a5
Ensure initial image copy goes to default project. 2023-09-15 09:46:32 -04:00
Derek Smith 972a2df31e
Update project 2023-09-15 09:38:24 -04:00
Derek Smith e78656d191
Enable debuggin on create_lxc_base.sh 2023-09-15 09:32:49 -04:00
Derek Smith ab95b46745
Enable debugging. 2023-09-15 09:30:01 -04:00
Derek Smith 636d4c873d
Update project. 2023-09-15 09:26:28 -04:00
Derek Smith 75086610ca
Update project/ remove warnings. 2023-09-14 22:35:59 -04:00
Derek Smith c02ed8e6f2
Update project head. 2023-09-14 22:27:34 -04:00
Derek Smith 9ffeb57981
Update project head. 2023-09-14 22:19:08 -04:00
Derek Smith 9182c7eb0a
Update project head. 2023-09-14 16:57:06 -04:00
Derek Smith 8fa432672a
Update project head. 2023-09-14 16:46:49 -04:00
Derek Smith f952b05de3
Update project head. 2023-09-14 12:58:06 -04:00
Derek Smith c02bb690e8
Update project head. 2023-09-14 12:04:25 -04:00
Derek Smith 855a91beb7
Update project head. 2023-09-14 09:49:05 -04:00
Derek Smith 8780f9e908
SKIP proj provis if is DEPLOYMENT_STRING is null 2023-09-14 09:48:55 -04:00
Derek Smith f8d6d2acfc
Add the project git submodule back in there. 2023-09-13 10:40:25 -04:00
Derek Smith 0c253e9a49
Remove git submodules file. 2023-09-13 10:17:57 -04:00
Derek Smith 8d28a68bd3
Remove the project git submodule. 2023-09-13 10:17:21 -04:00
Derek Smith ffd4b607c2
deleted project path in prep for submodule fix 2023-09-13 09:14:29 -04:00
Derek Smith eaa4f0abb1
changes from tabconf 2023-09-05 22:01:57 -04:00
Derek Smith 334b138770
Update Version. 2023-08-20 21:23:13 -04:00
Derek Smith 68f487a5dc
Update to latest project head. 2023-08-20 20:47:27 -04:00
Derek Smith 88291a2ee0
Update project root. 2023-08-13 12:18:06 -04:00
Derek Smith 132e32b806
Update clams-server defaults + advance project. 2023-08-13 11:58:30 -04:00
Derek Smith 7819cbe564
Add deployment/project as submodule. 2023-08-12 12:51:46 -04:00
Derek Smith 22ecde022f
Removed submodules 2023-08-12 12:45:23 -04:00
Derek Smith 9d7c9a51f6
Remove project .gitignore. 2023-08-12 12:27:55 -04:00
Derek Smith 08235de798
Nitpicks. 2023-08-12 12:25:23 -04:00
Derek Smith 5369016265
WWW updates. 2023-08-12 12:22:34 -04:00
Derek Smith 0ca1fdbea9
Add domain_list.sh and stop.sh 2023-08-12 12:21:15 -04:00
Derek Smith d88814e6e4
Clams server updates. 2023-08-12 12:19:44 -04:00
Derek Smith ddcc119030
BTCPayserver upates. 2023-08-12 12:16:44 -04:00
Derek Smith 919acbf172
Add --skip logic for VMs. 2023-08-12 12:14:00 -04:00
Derek Smith f41601ad04
Remove project git clone logic in lie of submodule 2023-08-12 12:11:50 -04:00
Derek Smith d49f3f58b6
Switch to the lxd stable channel. 2023-08-11 10:28:06 -04:00
Derek Smith 62c7fd0504
Add iptables FORWARDING due to dockerd conflict. 2023-08-11 10:26:23 -04:00
Derek Smith e602f8be90
Add --disk-or-partition flag, default loop 2023-08-11 10:24:02 -04:00
Derek Smith d8903eb5bc
Add clams-server in lieu of existing integration. 2023-08-11 09:46:08 -04:00
Derek Smith 0b38d6d635
Update to grab git commit in docker info. 2023-05-16 22:08:07 -04:00
Derek Smith beb1d7b5a4
Update project ref. 2023-05-16 22:04:47 -04:00
Derek Smith 5d15d5f690
Various updates. 2023-05-16 22:02:32 -04:00
Derek Smith 6c4d262c43
New Version. 2023-04-14 13:39:38 -04:00
33 changed files with 944 additions and 806 deletions

3
.gitmodules vendored Normal file

@ -0,0 +1,3 @@
[submodule "deployment/project"]
path = deployment/project
url = https://git.sovereign-stack.org/ss/project
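
Since deployment/project now comes in as a git submodule, a fresh checkout has to initialize it before any deployment script can find it. A minimal sketch (the parent repository URL is a placeholder):

git submodule update --init deployment/project          # inside an existing checkout
git clone --recurse-submodules <sovereign-stack-repo-url>   # or clone with submodules in one step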


@ -1,3 +1,3 @@
# Documentation
All documentation for this project can be found at [sovereign-stack.org](https://www.sovereign-stack.org).
All documentation for this project can be found at [sovereign-stack.org](https://www.sovereign-stack.org). To get started with this code, check out [this post](https://www.sovereign-stack.org/get/).


@ -1 +0,0 @@
project


@ -1,9 +1,9 @@
#!/bin/bash
# The base VM image.
export LXD_UBUNTU_BASE_VERSION="jammy"
export BASE_IMAGE_VM_NAME="ss-base-${LXD_UBUNTU_BASE_VERSION//./-}"
export BASE_LXC_IMAGE="ubuntu/$LXD_UBUNTU_BASE_VERSION/cloud"
export INCUS_UBUNTU_BASE_VERSION="jammy"
export BASE_IMAGE_VM_NAME="ss-base-${INCUS_UBUNTU_BASE_VERSION//./-}"
export BASE_INCUS_IMAGE="ubuntu/$INCUS_UBUNTU_BASE_VERSION/cloud"
WEEK_NUMBER=$(date +%U)
export UBUNTU_BASE_IMAGE_NAME="ss-ubuntu-${LXD_UBUNTU_BASE_VERSION//./-}"
export DOCKER_BASE_IMAGE_NAME="ss-docker-${LXD_UBUNTU_BASE_VERSION//./-}-$WEEK_NUMBER"
export UBUNTU_BASE_IMAGE_NAME="ss-ubuntu-${INCUS_UBUNTU_BASE_VERSION//./-}"
export DOCKER_BASE_IMAGE_NAME="ss-docker-${INCUS_UBUNTU_BASE_VERSION//./-}-$WEEK_NUMBER"
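
The ${VAR//./-} expansions above replace every '.' in the version string with '-', keeping the derived image names free of dots. A quick trace, using the 'jammy' default plus a hypothetical dotted version:

INCUS_UBUNTU_BASE_VERSION="jammy"
echo "ss-base-${INCUS_UBUNTU_BASE_VERSION//./-}"     # ss-base-jammy (nothing to replace)
INCUS_UBUNTU_BASE_VERSION="24.04"                    # hypothetical numeric version
echo "ss-ubuntu-${INCUS_UBUNTU_BASE_VERSION//./-}"   # ss-ubuntu-24-04
echo "ss-docker-${INCUS_UBUNTU_BASE_VERSION//./-}-$(date +%U)"   # e.g. ss-docker-24-04-07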

102
deployment/create_base.sh Executable file

@ -0,0 +1,102 @@
#!/bin/bash
set -exu
cd "$(dirname "$0")"
. ./base.sh
bash -c "./stub_profile.sh --incus-hostname=$BASE_IMAGE_VM_NAME"
if incus list -q --project default | grep -q "$BASE_IMAGE_VM_NAME" ; then
incus delete -f "$BASE_IMAGE_VM_NAME" --project default
fi
# let's download our base image.
if ! incus image list --format csv --columns l --project default | grep -q "$UBUNTU_BASE_IMAGE_NAME"; then
# copy the image down from canonical.
incus image copy "images:$BASE_INCUS_IMAGE" "$REMOTE_NAME": --alias "$UBUNTU_BASE_IMAGE_NAME" --public --vm --auto-update --target-project default
fi
# If the VM does exist, then we will delete it (so we can start fresh)
if incus list --format csv -q --project default | grep -q "$UBUNTU_BASE_IMAGE_NAME"; then
# if there's no snapshot, we dispense with the old image and try again.
if ! incus info "$BASE_IMAGE_VM_NAME" --project default | grep -q "$UBUNTU_BASE_IMAGE_NAME"; then
incus delete "$BASE_IMAGE_VM_NAME" --force --project default
ssh-keygen -f "$SSH_HOME/known_hosts" -R "$BASE_IMAGE_VM_NAME"
fi
else
if ! incus list --project default | grep -q "$BASE_IMAGE_VM_NAME"; then
# the base image is ubuntu:22.04.
script -q -c "incus init -q --profile=$BASE_IMAGE_VM_NAME $UBUNTU_BASE_IMAGE_NAME $BASE_IMAGE_VM_NAME --vm --project default" /dev/null
fi
if incus info "$BASE_IMAGE_VM_NAME" --project default | grep -q "Status: STOPPED"; then
# TODO move this sovereign-stack-base construction VM to separate dedicated IP
incus config set "$BASE_IMAGE_VM_NAME" --project default
incus start "$BASE_IMAGE_VM_NAME" --project default
sleep 15
fi
# for CHAIN in mainnet testnet; do
# for DATA in blocks chainstate; do
# incus storage volume attach ss-base "$CHAIN-$DATA" "$BASE_IMAGE_VM_NAME" "/home/ubuntu/bitcoin/$DATA"
# done
# done
if incus info "$BASE_IMAGE_VM_NAME" --project default | grep -q "Status: RUNNING"; then
while incus exec "$BASE_IMAGE_VM_NAME" --project default -- [ ! -f /var/lib/cloud/instance/boot-finished ]; do
sleep 1
done
# ensure the ssh service is listening at localhost
incus exec "$BASE_IMAGE_VM_NAME" --project default -- wait-for-it -t 100 127.0.0.1:22
# # If we have any chainstate or blocks in our SSME, let's push them to the
# # remote host as a zfs volume that way deployments can share a common history
# # of chainstate/blocks.
# for CHAIN in testnet mainnet; do
# for DATA in blocks chainstate; do
# # if the storage snapshot doesn't yet exist, create it.
# if ! incus storage volume list ss-base -q --format csv -c n | grep -q "$CHAIN-$DATA/snap0"; then
# DATA_PATH="/home/ubuntu/.ss/cache/bitcoin/$CHAIN/$DATA"
# if [ -d "$DATA_PATH" ]; then
# COMPLETE_FILE_PATH="$DATA_PATH/complete"
# if incus exec "$BASE_IMAGE_VM_NAME" -- [ ! -f "$COMPLETE_FILE_PATH" ]; then
# incus file push --recursive --project default "$DATA_PATH/" "$BASE_IMAGE_VM_NAME""$DATA_PATH/"
# incus exec "$BASE_IMAGE_VM_NAME" -- su ubuntu - bash -c "echo $(date) > $COMPLETE_FILE_PATH"
# incus exec "$BASE_IMAGE_VM_NAME" -- chown -R 999:999 "$DATA_PATH/$DATA"
# else
# echo "INFO: it appears as though $CHAIN/$DATA has already been initialized. Continuing."
# fi
# fi
# fi
# done
# done
# stop the VM and get a snapshot.
incus stop "$BASE_IMAGE_VM_NAME" --project default
fi
incus snapshot create "$BASE_IMAGE_VM_NAME" "$UBUNTU_BASE_IMAGE_NAME" --project default
fi
echo "INFO: Publishing '$BASE_IMAGE_VM_NAME' as image '$DOCKER_BASE_IMAGE_NAME'. Please wait."
incus publish -q --public "$BASE_IMAGE_VM_NAME/$UBUNTU_BASE_IMAGE_NAME" \
--project default --alias="$DOCKER_BASE_IMAGE_NAME" \
--compression none
echo "INFO: Success creating the base image. Deleting artifacts from the build process."
incus delete -f "$BASE_IMAGE_VM_NAME" --project default
# # now let's get a snapshot of each of the blocks/chainstate directories.
# for CHAIN in testnet mainnet; do
# for DATA in blocks chainstate; do
# if ! incus storage volume list ss-base -q --format csv -c n | grep -q "$CHAIN-$DATA/snap0"; then
# echo "INFO: Creating a snapshot 'ss-base/$CHAIN-$DATA/snap0'."
# incus storage volume snapshot ss-base --project default "$CHAIN-$DATA"
# fi
# done
# done
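
Nearly every step in this script follows the same idempotency idiom: list the resource in csv form, grep -q for the name, and act only when the check says to. A distilled sketch of the pattern (the alias is illustrative):

ALIAS="ss-base-jammy"
if ! incus image list --format csv --columns l --project default | grep -q "$ALIAS"; then
echo "INFO: image '$ALIAS' is missing; this is where it would be copied or created."
fi

Note that grep -q matches substrings, so a name that is a prefix of another (ss-base vs ss-base-jammy) also satisfies the check; anchoring the pattern with ^...$ would be stricter.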


@ -1,92 +0,0 @@
#!/bin/bash
set -eu
cd "$(dirname "$0")"
. ./base.sh
bash -c "./stub_lxc_profile.sh --lxd-hostname=$BASE_IMAGE_VM_NAME"
if lxc list -q --project default | grep -q "$BASE_IMAGE_VM_NAME" ; then
lxc delete -f "$BASE_IMAGE_VM_NAME" --project=default
fi
# let's download our base image.
if ! lxc image list --format csv --columns l | grep -q "$UBUNTU_BASE_IMAGE_NAME"; then
# copy the image down from canonical.
lxc image copy "images:$BASE_LXC_IMAGE" "$REMOTE_NAME": --alias "$UBUNTU_BASE_IMAGE_NAME" --public --vm --auto-update
fi
# If the lxc VM does exist, then we will delete it (so we can start fresh)
if lxc list --format csv -q | grep -q "$UBUNTU_BASE_IMAGE_NAME"; then
# if there's no snapshot, we dispense with the old image and try again.
if ! lxc info "$BASE_IMAGE_VM_NAME" | grep -q "$UBUNTU_BASE_IMAGE_NAME"; then
lxc delete "$BASE_IMAGE_VM_NAME" --force
ssh-keygen -f "$SSH_HOME/known_hosts" -R "$BASE_IMAGE_VM_NAME"
fi
else
# the base image is ubuntu:22.04.
lxc init --profile="$BASE_IMAGE_VM_NAME" "$UBUNTU_BASE_IMAGE_NAME" "$BASE_IMAGE_VM_NAME" --vm --project=default
# TODO move this sovereign-stack-base construction VM to separate dedicated IP
lxc config set "$BASE_IMAGE_VM_NAME" --project=default
# for CHAIN in mainnet testnet; do
# for DATA in blocks chainstate; do
# lxc storage volume attach ss-base "$CHAIN-$DATA" "$BASE_IMAGE_VM_NAME" "/home/ubuntu/bitcoin/$DATA"
# done
# done
lxc start "$BASE_IMAGE_VM_NAME" --project=default
sleep 15
while lxc exec "$BASE_IMAGE_VM_NAME" --project=default -- [ ! -f /var/lib/cloud/instance/boot-finished ]; do
sleep 1
done
# ensure the ssh service is listening at localhost
lxc exec "$BASE_IMAGE_VM_NAME" --project=default -- wait-for-it -t 100 127.0.0.1:22
# # If we have any chaninstate or blocks in our SSME, let's push them to the
# # remote host as a zfs volume that way deployments can share a common history
# # of chainstate/blocks.
# for CHAIN in testnet mainnet; do
# for DATA in blocks chainstate; do
# # if the storage snapshot doesn't yet exist, create it.
# if ! lxc storage volume list ss-base -q --format csv -c n | grep -q "$CHAIN-$DATA/snap0"; then
# DATA_PATH="/home/ubuntu/.ss/cache/bitcoin/$CHAIN/$DATA"
# if [ -d "$DATA_PATH" ]; then
# COMPLETE_FILE_PATH="$DATA_PATH/complete"
# if lxc exec "$BASE_IMAGE_VM_NAME" -- [ ! -f "$COMPLETE_FILE_PATH" ]; then
# lxc file push --recursive --project=default "$DATA_PATH/" "$BASE_IMAGE_VM_NAME""$DATA_PATH/"
# lxc exec "$BASE_IMAGE_VM_NAME" -- su ubuntu - bash -c "echo $(date) > $COMPLETE_FILE_PATH"
# lxc exec "$BASE_IMAGE_VM_NAME" -- chown -R 999:999 "$DATA_PATH/$DATA"
# else
# echo "INFO: it appears as though $CHAIN/$DATA has already been initialized. Continuing."
# fi
# fi
# fi
# done
# done
# stop the VM and get a snapshot.
lxc stop "$BASE_IMAGE_VM_NAME" --project=default
lxc snapshot "$BASE_IMAGE_VM_NAME" "$UBUNTU_BASE_IMAGE_NAME" --project=default
fi
echo "INFO: Publishing '$BASE_IMAGE_VM_NAME' as image '$DOCKER_BASE_IMAGE_NAME'. Please wait."
lxc publish --public "$BASE_IMAGE_VM_NAME/$UBUNTU_BASE_IMAGE_NAME" --project=default --alias="$DOCKER_BASE_IMAGE_NAME" --compression none
echo "INFO: Success creating the base image. Deleting artifacts from the build process."
lxc delete -f "$BASE_IMAGE_VM_NAME" --project=default
# # now let's get a snapshot of each of the blocks/chainstate directories.
# for CHAIN in testnet mainnet; do
# for DATA in blocks chainstate; do
# if ! lxc storage volume list ss-base -q --format csv -c n | grep -q "$CHAIN-$DATA/snap0"; then
# echo "INFO: Creating a snapshot 'ss-base/$CHAIN-$DATA/snap0'."
# lxc storage volume snapshot ss-base --project=default "$CHAIN-$DATA"
# fi
# done
# done


@ -1,6 +1,6 @@
#!/bin/bash
set -eu
set -exu
cd "$(dirname "$0")"
. ./base.sh
@ -21,91 +21,102 @@ EOF
fi
# if the machine doesn't exist, we create it.
if ! lxc list --format csv | grep -q "$LXD_VM_NAME"; then
if ! incus list --format csv | grep -q "$INCUS_VM_NAME"; then
# create a base image if needed and instantiate a VM.
if [ -z "$MAC_ADDRESS_TO_PROVISION" ]; then
echo "ERROR: You MUST define a MAC Address for all your machines by setting WWW_SERVER_MAC_ADDRESS, BTCPAYSERVER_MAC_ADDRESS in your site definition."
echo "ERROR: You MUST define a MAC Address for all your machines in your project definition."
echo "INFO: IMPORTANT! You MUST have DHCP Reservations for these MAC addresses. You also need records established the DNS."
exit 1
fi
# TODO ensure we are only GROWING the volume--never shrinking per zfs volume docs.
VM_ID=
BACKUP_DISK_SIZE_GB=
SSDATA_DISK_SIZE_GB=
DOCKER_DISK_SIZE_GB=
if [ "$VIRTUAL_MACHINE" = www ]; then
VM_ID="w"
if [ "$SKIP_WWW_SERVER" = true ]; then
exit 0
fi
BACKUP_DISK_SIZE_GB="$WWW_BACKUP_DISK_SIZE_GB"
SSDATA_DISK_SIZE_GB="$WWW_SSDATA_DISK_SIZE_GB"
DOCKER_DISK_SIZE_GB="$WWW_DOCKER_DISK_SIZE_GB"
fi
if [ "$VIRTUAL_MACHINE" = btcpayserver ]; then
VM_ID="b"
if [ "$SKIP_BTCPAY_SERVER" = true ]; then
exit 0
fi
BACKUP_DISK_SIZE_GB="$BTCPAYSERVER_BACKUP_DISK_SIZE_GB"
SSDATA_DISK_SIZE_GB="$BTCPAYSERVER_SSDATA_DISK_SIZE_GB"
DOCKER_DISK_SIZE_GB="$BTCPAYSERVER_DOCKER_DISK_SIZE_GB"
fi
DOCKER_VOLUME_NAME="$PRIMARY_DOMAIN_IDENTIFIER-$VM_ID""d"
if ! lxc storage volume list ss-base | grep -q "$DOCKER_VOLUME_NAME"; then
lxc storage volume create ss-base "$DOCKER_VOLUME_NAME" --type=block
SSDATA_VOLUME_NAME=
BACKUP_VOLUME_NAME=
if [ "$VIRTUAL_MACHINE" != lnplayserver ]; then
DOCKER_VOLUME_NAME="$VIRTUAL_MACHINE-docker"
if ! incus storage volume list ss-base | grep -q "$DOCKER_VOLUME_NAME"; then
incus storage volume create ss-base "$DOCKER_VOLUME_NAME" --type=block
incus storage volume set ss-base "$DOCKER_VOLUME_NAME" size="${DOCKER_DISK_SIZE_GB}GB"
fi
SSDATA_VOLUME_NAME="$VIRTUAL_MACHINE-ss-data"
if ! incus storage volume list ss-base | grep -q "$SSDATA_VOLUME_NAME"; then
incus storage volume create ss-base "$SSDATA_VOLUME_NAME" --type=filesystem
incus storage volume set ss-base "$SSDATA_VOLUME_NAME" size="${SSDATA_DISK_SIZE_GB}GB"
fi
BACKUP_VOLUME_NAME="$VIRTUAL_MACHINE-backup"
if ! incus storage volume list ss-base | grep -q "$BACKUP_VOLUME_NAME"; then
incus storage volume create ss-base "$BACKUP_VOLUME_NAME" --type=filesystem
incus storage volume set ss-base "$BACKUP_VOLUME_NAME" size="${BACKUP_DISK_SIZE_GB}GB"
fi
fi
# TODO ensure we are only GROWING the volume--never shrinking
lxc storage volume set ss-base "$DOCKER_VOLUME_NAME" size="${DOCKER_DISK_SIZE_GB}GB"
SSDATA_VOLUME_NAME="$PRIMARY_DOMAIN_IDENTIFIER-$VM_ID""s"
if ! lxc storage volume list ss-base | grep -q "$SSDATA_VOLUME_NAME"; then
lxc storage volume create ss-base "$SSDATA_VOLUME_NAME" --type=filesystem
bash -c "./stub_profile.sh --vm=$VIRTUAL_MACHINE --incus-hostname=$INCUS_VM_NAME --ss-volume-name=$SSDATA_VOLUME_NAME --backup-volume-name=$BACKUP_VOLUME_NAME"
INCUS_LNPLAYSERVER_IMAGE_NAME="lnplayserver-$DOMAIN_NAME"
if ! incus image list -q --format csv | grep -q "$INCUS_LNPLAYSERVER_IMAGE_NAME"; then
script -q -c "incus init -q $DOCKER_BASE_IMAGE_NAME $INCUS_VM_NAME --vm --profile=$INCUS_VM_NAME" /dev/null
elif [ "$VIRTUAL_MACHINE" = lnplayserver ]; then
script -q -c "incus init -q $INCUS_LNPLAYSERVER_IMAGE_NAME $INCUS_VM_NAME --vm --profile=$INCUS_VM_NAME" /dev/null
fi
# TODO ensure we are only GROWING the volume--never shrinking per zfs volume docs.
lxc storage volume set ss-base "$SSDATA_VOLUME_NAME" size="${SSDATA_DISK_SIZE_GB}GB"
BACKUP_VOLUME_NAME="$PRIMARY_DOMAIN_IDENTIFIER-$VM_ID""b"
if ! lxc storage volume list ss-base | grep -q "$BACKUP_VOLUME_NAME"; then
lxc storage volume create ss-base "$BACKUP_VOLUME_NAME" --type=filesystem
fi
lxc storage volume set ss-base "$BACKUP_VOLUME_NAME" size="${BACKUP_DISK_SIZE_GB}GB"
bash -c "./stub_lxc_profile.sh --vm=$VIRTUAL_MACHINE --lxd-hostname=$LXD_VM_NAME --ss-volume-name=$SSDATA_VOLUME_NAME --backup-volume-name=$BACKUP_VOLUME_NAME"
# now let's create a new VM to work with.
#lxc init --profile="$LXD_VM_NAME" "$BASE_IMAGE_VM_NAME" "$LXD_VM_NAME" --vm
lxc init "$DOCKER_BASE_IMAGE_NAME" "$LXD_VM_NAME" --vm --profile="$LXD_VM_NAME"
# let's PIN the HW address for now so we don't exhaust IP
# and so we can set DNS internally.
lxc config set "$LXD_VM_NAME" "volatile.enp5s0.hwaddr=$MAC_ADDRESS_TO_PROVISION"
incus config set "$INCUS_VM_NAME" "volatile.enp5s0.hwaddr=$MAC_ADDRESS_TO_PROVISION"
# attach the docker block device.
lxc storage volume attach ss-base "$DOCKER_VOLUME_NAME" "$LXD_VM_NAME"
if [ "$VIRTUAL_MACHINE" != lnplayserver ]; then
# attach the docker block device.
incus storage volume attach ss-base "$DOCKER_VOLUME_NAME" "$INCUS_VM_NAME"
fi
# if [ "$VIRTUAL_MACHINE" = btcpayserver ]; then
# # attach any volumes
# for CHAIN in testnet mainnet; do
# for DATA in blocks chainstate; do
# MOUNT_PATH="/$CHAIN-$DATA"
# lxc config device add "$LXD_VM_NAME" "$CHAIN-$DATA" disk pool=ss-base source="$CHAIN-$DATA" path="$MOUNT_PATH"
# incus config device add "$INCUS_VM_NAME" "$CHAIN-$DATA" disk pool=ss-base source="$CHAIN-$DATA" path="$MOUNT_PATH"
# done
# done
# fi
lxc start "$LXD_VM_NAME"
sleep 10
incus start "$INCUS_VM_NAME"
sleep 15
bash -c "./wait_for_lxc_ip.sh --lxd-name=$LXD_VM_NAME"
bash -c "./wait_for_ip.sh --incus-name=$INCUS_VM_NAME"
# scan the remote machine and install its identity in our SSH known_hosts file.
ssh-keyscan -H "$FQDN" >> "$SSH_HOME/known_hosts"
ssh "$FQDN" "sudo chown ubuntu:ubuntu $REMOTE_DATA_PATH"
ssh "$FQDN" "sudo chown -R ubuntu:ubuntu $REMOTE_BACKUP_PATH"
if [ "$VIRTUAL_MACHINE" != lnplayserver ]; then
ssh "$FQDN" "sudo chown ubuntu:ubuntu $REMOTE_DATA_PATH"
ssh "$FQDN" "sudo chown -R ubuntu:ubuntu $REMOTE_BACKUP_PATH"
fi
fi
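
The storage handling above boils down to a create-once pattern per volume: create the block or filesystem volume only if it is absent, size it once at creation (per the 'Only allow volume sizing after creation' commit), and attach it to the VM by name. Condensed, with illustrative names and size:

VOL="www-docker"
if ! incus storage volume list ss-base | grep -q "$VOL"; then
incus storage volume create ss-base "$VOL" --type=block
incus storage volume set ss-base "$VOL" size=100GB   # sized only at creation time
fi
incus storage volume attach ss-base "$VOL" www-example-com   # VM name illustrative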


@ -9,10 +9,9 @@ export SS_ROOT_PATH="$HOME/ss"
export REMOTES_PATH="$SS_ROOT_PATH/remotes"
export PROJECTS_PATH="$SS_ROOT_PATH/projects"
export SITES_PATH="$SS_ROOT_PATH/sites"
export LXD_CONFIG_PATH="$SS_ROOT_PATH/lxd"
export INCUS_CONFIG_PATH="$SS_ROOT_PATH/incus"
export SS_CACHE_PATH="$SS_ROOT_PATH/cache"
export REMOTE_HOME="/home/ubuntu"
export REMOTE_DATA_PATH="$REMOTE_HOME/ss-data"
export REMOTE_DATA_PATH_LETSENCRYPT="$REMOTE_DATA_PATH/letsencrypt"
@ -33,18 +32,19 @@ export BTCPAYSERVER_BACKUP_DISK_SIZE_GB=20
export BTCPAYSERVER_DOCKER_DISK_SIZE_GB=30
export WWW_HOSTNAME="www"
export BTCPAY_HOSTNAME="btcpayserver"
export BTCPAY_SERVER_HOSTNAME="btcpayserver"
export LNPLAY_SERVER_HOSTNAME="lnplayserver"
export BTCPAY_HOSTNAME_IN_CERT="btcpay"
export NEXTCLOUD_HOSTNAME="nextcloud"
export GITEA_HOSTNAME="git"
export NOSTR_HOSTNAME="relay"
export CLAMS_HOSTNAME="clams"
export REGISTRY_URL="https://index.docker.io/v1"
export BTCPAY_SERVER_CPU_COUNT="4"
export BTCPAY_SERVER_MEMORY_MB="4096"
export WWW_SERVER_CPU_COUNT="4"
export WWW_SERVER_MEMORY_MB="4096"
export LNPLAY_SERVER_CPU_COUNT="4"
export LNPLAY_SERVER_MEMORY_MB="4096"
export DOCKER_IMAGE_CACHE_FQDN="registry-1.docker.io"

17
deployment/domain_list.sh Executable file

@ -0,0 +1,17 @@
#!/bin/bash
set -exu
# the DOMAIN_LIST is a complete list of all our domains. We often iterate over this list.
DOMAIN_LIST="${PRIMARY_DOMAIN}"
if [ -n "$OTHER_SITES_LIST" ]; then
DOMAIN_LIST="${DOMAIN_LIST},${OTHER_SITES_LIST}"
fi
export DOMAIN_LIST="$DOMAIN_LIST"
export DOMAIN_COUNT=$(("$(echo "$DOMAIN_LIST" | tr -cd , | wc -c)"+1))
export OTHER_SITES_LIST="$OTHER_SITES_LIST"
export PRIMARY_WWW_FQDN="$WWW_HOSTNAME.$PRIMARY_DOMAIN"
export BTCPAY_SERVER_FQDN="$BTCPAY_SERVER_HOSTNAME.$PRIMARY_DOMAIN"
export LNPLAY_SERVER_FQDN="$LNPLAY_SERVER_HOSTNAME.$PRIMARY_DOMAIN"
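
The DOMAIN_COUNT arithmetic deletes everything except commas (tr -cd ,), counts what is left (wc -c), and adds one. Worked through for a primary domain plus two extra sites (all names illustrative):

PRIMARY_DOMAIN="example.com"
OTHER_SITES_LIST="blog.example.net,shop.example.org"
DOMAIN_LIST="${PRIMARY_DOMAIN},${OTHER_SITES_LIST}"
echo "$DOMAIN_LIST" | tr -cd , | wc -c                       # 2 commas remain
echo $(( $(echo "$DOMAIN_LIST" | tr -cd , | wc -c) + 1 ))    # 3 domains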


@ -2,21 +2,42 @@
# https://www.sovereign-stack.org/ss-down/
set -eu
set -exu
cd "$(dirname "$0")"
if lxc remote get-default -q | grep -q "local"; then
echo "ERROR: you are on the local lxc remote. Nothing to take down"
if incus remote get-default -q | grep -q "local"; then
echo "ERROR: you are on the local incus remote. Nothing to take down"
exit 1
fi
KEEP_DOCKER_VOLUME=true
KEEP_ZFS_STORAGE_VOLUMES=true
OTHER_SITES_LIST=
SKIP_BTCPAY_SERVER=false
SKIP_WWW_SERVER=false
SKIP_LNPLAY_SERVER=false
BACKUP_WWW_APPS=true
WWW_SERVER_MAC_ADDRESS=
BTCPAY_SERVER_MAC_ADDRESS=
LNPLAY_SERVER_MAC_ADDRESS=
# grab any modifications from the command line.
for i in "$@"; do
case $i in
--destroy)
KEEP_DOCKER_VOLUME=false
--purge)
KEEP_ZFS_STORAGE_VOLUMES=false
shift
;;
--skip-btcpayserver)
SKIP_BTCPAY_SERVER=true
shift
;;
--skip-wwwserver)
SKIP_WWW_SERVER=true
shift
;;
--skip-lnplayserver)
SKIP_LNPLAY_SERVER=true
shift
;;
*)
@ -39,51 +60,78 @@ export SITE_PATH="$SITES_PATH/$PRIMARY_DOMAIN"
source "$SITE_PATH/site.conf"
source ./project/domain_env.sh
SKIP=btcpayserver
for VIRTUAL_MACHINE in www btcpayserver; do
LXD_NAME="$VIRTUAL_MACHINE-${PRIMARY_DOMAIN//./-}"
source ./domain_list.sh
if lxc list | grep -q "$LXD_NAME"; then
bash -c "./up.sh --stop --skip-$SKIP"
lxc stop "$LXD_NAME"
SERVERS=
if [ "$SKIP_WWW_SERVER" = false ] && [ -n "$WWW_SERVER_MAC_ADDRESS" ]; then
SERVERS="www $SERVERS"
fi
lxc delete "$LXD_NAME"
if [ "$SKIP_BTCPAY_SERVER" = false ] && [ -n "$BTCPAY_SERVER_MAC_ADDRESS" ]; then
SERVERS="$SERVERS btcpayserver"
fi
if [ "$SKIP_LNPLAY_SERVER" = false ] && [ -n "$LNPLAY_SERVER_MAC_ADDRESS" ]; then
SERVERS="$SERVERS lnplayserver"
fi
for VIRTUAL_MACHINE in $SERVERS; do
INCUS_VM_NAME="$VIRTUAL_MACHINE-${PRIMARY_DOMAIN//./-}"
if incus list | grep -q "$INCUS_VM_NAME"; then
bash -c "./stop.sh --server=$VIRTUAL_MACHINE"
incus stop "$INCUS_VM_NAME"
incus delete "$INCUS_VM_NAME"
fi
# remove the ssh known endpoint else we get warnings.
ssh-keygen -f "$SSH_HOME/known_hosts" -R "$VIRTUAL_MACHINE.$PRIMARY_DOMAIN" | exit
if lxc profile list | grep -q "$LXD_NAME"; then
lxc profile delete "$LXD_NAME"
if incus profile list | grep -q "$INCUS_VM_NAME"; then
incus profile delete "$INCUS_VM_NAME"
fi
if [ "$KEEP_DOCKER_VOLUME" = false ]; then
# destroy the docker volume
VM_ID=w
if [ "$VIRTUAL_MACHINE" = btcpayserver ]; then
VM_ID="b"
fi
if [ "$KEEP_ZFS_STORAGE_VOLUMES" = false ]; then
# d for docker; b for backup; s for ss-data
for DATA in d b s; do
VOLUME_NAME="$PRIMARY_DOMAIN_IDENTIFIER-$VM_ID""$DATA"
if lxc storage volume list ss-base -q | grep -q "$VOLUME_NAME"; then
for DATA in docker backup ss-data; do
VOLUME_NAME="$VIRTUAL_MACHINE-$DATA"
if incus storage volume list ss-base -q | grep -q "$VOLUME_NAME"; then
RESPONSE=
read -r -p "Are you sure you want to delete the '$VOLUME_NAME' volume intended for '$LXD_NAME'?": RESPONSE
read -r -p "Are you sure you want to delete the '$VOLUME_NAME' volume intended for '$INCUS_VM_NAME'?": RESPONSE
if [ "$RESPONSE" = "y" ]; then
lxc storage volume delete ss-base "$VOLUME_NAME"
incus storage volume delete ss-base "$VOLUME_NAME"
fi
fi
done
fi
SKIP=www
done
if lxc network list -q | grep -q ss-ovn; then
lxc network delete ss-ovn
BACKUP_WWW_APPS=true
echo "BACKUP_WWW_APPS: $BACKUP_WWW_APPS"
echo "SERVERS: $SERVERS"
echo "BACKUP_WWW_APPS: $BACKUP_WWW_APPS"
# let's grab a snapshot of the
if [ "$BACKUP_WWW_APPS" = true ]; then
SNAPSHOT_ID=$(cat /dev/urandom | tr -dc 'a-zA-Z' | fold -w 6 | head -n 1)
incus storage volume snapshot create ss-base www-ss-data "$SNAPSHOT_ID"
BACKUP_LOCATION="$HOME/ss/backups"
mkdir -p "$BACKUP_LOCATION"
#incus storage volume export ss-base "www-ss-data" "$BACKUP_LOCATION/project-$(incus project list --format csv | grep "(current)" | awk '{print $1}')_www-ss-data_""$(date +%s)"".tar.gz"
#incus storage volume snapshot delete ss-base "www-ss-data" "$SNAPSHOT_ID"
fi
# TODO make a snapshot on all the zfs storage volumes.
if [[ "$SERVERS" == *"www"* && "$SERVERS" == *"btcpay"* ]]; then
if incus network list -q | grep -q ss-ovn; then
incus network delete ss-ovn
fi
fi
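
For reference, the commented-out export above would round-trip the snapshot to a tarball roughly like this (simplified: the original also embeds the active project name in the filename; the snapshot and export subcommands follow the incus storage volume CLI used elsewhere in this diff):

SNAPSHOT_ID=$(tr -dc 'a-zA-Z' < /dev/urandom | fold -w 6 | head -n 1)
incus storage volume snapshot create ss-base www-ss-data "$SNAPSHOT_ID"
BACKUP_LOCATION="$HOME/ss/backups"
mkdir -p "$BACKUP_LOCATION"
incus storage volume export ss-base www-ss-data "$BACKUP_LOCATION/www-ss-data_$(date +%s).tar.gz"
incus storage volume snapshot delete ss-base www-ss-data "$SNAPSHOT_ID"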


@ -4,13 +4,12 @@ You are in the Sovereign Stack Management Environment (SSME). From here, you can
ss-remote - Take a remote SSH endpoint under management of Sovereign Stack.
ss-reset - The opposite of ss-remote; de-provisions an existing remote.
ss-up - Instantiate a deployment to your active project according to your
various remote.conf, project.conf, and site.conf files.
various project.conf and site.conf files.
ss-down - Reverses ss-up. Takes the active project down. Non-destructive of user data,
unless you provide the --destroy flag.
unless you provide the --purge flag.
ss-update - This is just ss-down then ss-up.
ss-show - show the lxd resources associated with the current remote.
ss-show - show the incus resources associated with the current remote.
For more information about all these topics, consult the Sovereign Stack website starting with:
- https://www.sovereign-stack.org/tag/deployment-management/

1
deployment/project Submodule

@ -0,0 +1 @@
Subproject commit e8470d789a3811e2fe3f6818fd9a6fea859ba71c
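
That 'Subproject commit' line is the pin behind the many 'Update project head.' commits in the log above: advancing the submodule is just a checkout inside its path followed by a commit in the parent repo. A minimal sketch (the revision is a placeholder):

cd deployment/project
git fetch origin
git checkout <new-project-commit>    # placeholder revision
cd ../..
git add deployment/project
git commit -m "Update project head."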


@ -2,18 +2,15 @@
set -eu
PROJECT_NAME="$(lxc info | grep "project:" | awk '{print $2}')"
PROJECT_NAME="$(incus info | grep "project:" | awk '{print $2}')"
export PROJECT_NAME="$PROJECT_NAME"
if [ "$PROJECT_NAME" = default ]; then
echo "ERROR: You are on the default project. Use 'lxc project list' and 'lxc project switch <project>'."
echo "ERROR: You are on the default project. Use 'incus project list' and 'incus project switch <project>'."
exit 1
fi
BITCOIN_CHAIN=$(echo "$PROJECT_NAME" | cut -d'-' -f2)
export PROJECT_PATH="$PROJECTS_PATH/$PROJECT_NAME"
export BITCOIN_CHAIN="$BITCOIN_CHAIN"
PROJECT_DEFINITION_PATH="$PROJECT_PATH/project.conf"
@ -27,25 +24,42 @@ source "$PROJECT_DEFINITION_PATH"
export PRIMARY_SITE_DEFINITION_PATH="$SITES_PATH/$PRIMARY_DOMAIN/site.conf"
if [ ! -f "$PRIMARY_SITE_DEFINITION_PATH" ]; then
echo "ERROR: the site definition does not exist."
exit 1
fi
if [ -z "$PRIMARY_DOMAIN" ]; then
echo "ERROR: The PRIMARY_DOMAIN is not specified. Check your remote definition at '$PRIMARY_SITE_DEFINITION_PATH'."
exit 1
fi
source "$PRIMARY_SITE_DEFINITION_PATH"
SHASUM_OF_PRIMARY_DOMAIN="$(echo -n "$PRIMARY_DOMAIN" | sha256sum | awk '{print $1;}' )"
export PRIMARY_DOMAIN_IDENTIFIER="${SHASUM_OF_PRIMARY_DOMAIN: -6}"
# default values are already at regtest mode.
if [ "$BITCOIN_CHAIN" = testnet ]; then
WWW_SSDATA_DISK_SIZE_GB=30
WWW_BACKUP_DISK_SIZE_GB=30
WWW_DOCKER_DISK_SIZE_GB=50
BTCPAYSERVER_SSDATA_DISK_SIZE_GB=30
BTCPAYSERVER_BACKUP_DISK_SIZE_GB=30
BTCPAYSERVER_DOCKER_DISK_SIZE_GB=100
elif [ "$BITCOIN_CHAIN" = mainnet ]; then
WWW_SSDATA_DISK_SIZE_GB=40
WWW_BACKUP_DISK_SIZE_GB=40
WWW_DOCKER_DISK_SIZE_GB=50
BTCPAYSERVER_SSDATA_DISK_SIZE_GB=30
BTCPAYSERVER_BACKUP_DISK_SIZE_GB=30
BTCPAYSERVER_DOCKER_DISK_SIZE_GB=300
fi
export WWW_SSDATA_DISK_SIZE_GB="$WWW_SSDATA_DISK_SIZE_GB"
export WWW_BACKUP_DISK_SIZE_GB="$WWW_BACKUP_DISK_SIZE_GB"
export WWW_DOCKER_DISK_SIZE_GB="$WWW_DOCKER_DISK_SIZE_GB"
export BTCPAYSERVER_SSDATA_DISK_SIZE_GB="$BTCPAYSERVER_SSDATA_DISK_SIZE_GB"
export BTCPAYSERVER_BACKUP_DISK_SIZE_GB="$BTCPAYSERVER_BACKUP_DISK_SIZE_GB"
export BTCPAYSERVER_DOCKER_DISK_SIZE_GB="$BTCPAYSERVER_DOCKER_DISK_SIZE_GB"
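
Two derivations in this file are worth tracing: the bitcoin chain is the second dash-delimited field of the active incus project name, and PRIMARY_DOMAIN_IDENTIFIER is the last six characters of the domain's sha256 digest (the space in '${SHASUM_OF_PRIMARY_DOMAIN: -6}' keeps it a substring expansion rather than a default-value expansion). With illustrative values:

PROJECT_NAME="dev-regtest"
echo "$PROJECT_NAME" | cut -d'-' -f2    # regtest
PRIMARY_DOMAIN="example.com"
SHASUM="$(echo -n "$PRIMARY_DOMAIN" | sha256sum | awk '{print $1;}')"
echo "${SHASUM: -6}"                    # last six hex characters of the digest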


@ -1,11 +1,11 @@
#!/bin/bash
set -e
set -exu
cd "$(dirname "$0")"
# This script is meant to be executed on the management machine.
# it reaches out to an SSH endpoint and provisions that machine
# to use LXD.
# to use incus.
DATA_PLANE_MACVLAN_INTERFACE=
DISK_TO_USE=
@ -13,8 +13,8 @@ DISK_TO_USE=
# override the remote name.
REMOTE_NAME="${1:-}"
if [ -z "$REMOTE_NAME" ]; then
echo "ERROR: The remote name was not provided. Syntax is: 'ss-remote REMOTE_NAME SSH_HOST_FQDN'"
echo " for example: 'ss-remote dev host01.domain.tld"
echo "ERROR: The remote name was not provided. Syntax is: 'ss-remote <remote_name> <remote01.domain.tld>'"
echo " for example: 'ss-remote development clusterhost00.domain.tld"
exit 1
fi
@ -32,8 +32,6 @@ if [ ! -f "$REMOTE_DEFINITION" ]; then
cat >"$REMOTE_DEFINITION" <<EOL
# https://www.sovereign-stack.org/ss-remote
LXD_REMOTE_PASSWORD="$(gpg --gen-random --armor 1 14)"
DEPLOYMENT_STRING="(dev|regtest),(staging|testnet)"
# REGISTRY_URL=http://registry.domain.tld:5000
EOL
@ -47,7 +45,7 @@ fi
source "$REMOTE_DEFINITION"
if ! lxc remote list | grep -q "$REMOTE_NAME"; then
if ! incus remote list | grep -q "$REMOTE_NAME"; then
FQDN="${2:-}"
if [ -z "$FQDN" ]; then
@ -89,7 +87,7 @@ if ! lxc remote list | grep -q "$REMOTE_NAME"; then
ssh-copy-id -i "$HOME/.ssh/id_rsa.pub" "ubuntu@$FQDN"
if [ -z "$DISK_TO_USE" ]; then
if ! ssh "ubuntu@$FQDN" lxc storage list -q | grep -q ss-base; then
if ! ssh "ubuntu@$FQDN" incus storage list -q | grep -q ss-base; then
echo "INFO: It looks like the DISK_TO_USE has not been set. Enter it now."
echo ""
@ -101,7 +99,7 @@ if ! lxc remote list | grep -q "$REMOTE_NAME"; then
fi
else
echo "ERROR: the remote already exists! You need to go delete your lxd remote if you want to re-create your remote."
echo "ERROR: the remote already exists! You need to go delete your incus remote if you want to re-create your remote."
echo " It's may also be helpful to reset/rename your remote path."
exit 1
fi
@ -119,38 +117,33 @@ if [ "$DISK_TO_USE" != loop ]; then
fi
fi
# error out if the remote password is unset.
if [ -z "$LXD_REMOTE_PASSWORD" ]; then
echo "ERROR: LXD_REMOTE_PASSWORD must be set in your remote.conf file."
exit 1
fi
if ! command -v lxc >/dev/null 2>&1; then
if lxc profile list --format csv | grep -q "$BASE_IMAGE_VM_NAME"; then
lxc profile delete "$BASE_IMAGE_VM_NAME"
if ! command -v incus >/dev/null 2>&1; then
if incus profile list --format csv | grep -q "$BASE_IMAGE_VM_NAME"; then
incus profile delete "$BASE_IMAGE_VM_NAME"
sleep 1
fi
if lxc network list --format csv -q --project default | grep -q lxdbr0; then
lxc network delete lxdbr0 --project default
if incus network list --format csv -q --project default | grep -q incusbr0; then
incus network delete incusbr0 --project default
sleep 1
fi
if lxc network list --format csv -q project default | grep -q lxdbr1; then
lxc network delete lxdbr1 --project default
if incus network list --format csv -q --project default | grep -q incusbr1; then
incus network delete incusbr1 --project default
sleep 1
fi
fi
# install dependencies.
ssh -t "ubuntu@$FQDN" 'sudo apt update && sudo apt upgrade -y && sudo apt install htop dnsutils nano -y'
if ! ssh "ubuntu@$FQDN" snap list | grep -q lxd; then
ssh -t "ubuntu@$FQDN" 'sudo snap install lxd --channel=latest/candidate'
sleep 5
fi
ssh -t "ubuntu@$FQDN" 'sudo apt update && sudo apt upgrade -y && sudo apt install htop dnsutils nano zfsutils-linux -y'
REMOTE_SCRIPT_PATH="$REMOTE_HOME/install_incus.sh"
scp ../install_incus.sh "ubuntu@$FQDN:$REMOTE_SCRIPT_PATH"
ssh -t "ubuntu@$FQDN" "chmod +x $REMOTE_SCRIPT_PATH"
ssh -t "ubuntu@$FQDN" "sudo bash -c $REMOTE_SCRIPT_PATH"
ssh -t "ubuntu@$FQDN" "sudo adduser ubuntu incus-admin"
# install OVN for the project-specific bridge networks
ssh -t "ubuntu@$FQDN" "sudo apt-get install -y ovn-host ovn-central && sudo ovs-vsctl set open_vswitch . external_ids:ovn-remote=unix:/var/run/ovn/ovnsb_db.sock external_ids:ovn-encap-type=geneve external_ids:ovn-encap-ip=127.0.0.1"
@ -167,16 +160,15 @@ IP_OF_MGMT_MACHINE="$(ssh ubuntu@"$FQDN" env | grep SSH_CLIENT | cut -d " " -f 1
IP_OF_MGMT_MACHINE="${IP_OF_MGMT_MACHINE#*=}"
IP_OF_MGMT_MACHINE="$(echo "$IP_OF_MGMT_MACHINE" | cut -d: -f1)"
# run lxd init on the remote server.
cat <<EOF | ssh ubuntu@"$FQDN" lxd init --preseed
# run incus admin init on the remote server.
cat <<EOF | ssh ubuntu@"$FQDN" incus admin init --preseed
config:
core.https_address: ${MGMT_PLANE_IP}:8443
core.trust_password: ${LXD_REMOTE_PASSWORD}
core.dns_address: ${MGMT_PLANE_IP}
images.auto_update_interval: 15
networks:
- name: lxdbr0
- name: incusbr0
description: "ss-config,${DATA_PLANE_MACVLAN_INTERFACE:-error}"
type: bridge
config:
@ -185,7 +177,7 @@ networks:
ipv4.nat: true
ipv6.address: none
dns.mode: managed
- name: lxdbr1
- name: incusbr1
description: "Non-natting bridge needed for ovn networks."
type: bridge
config:
@ -203,55 +195,48 @@ profiles:
pool: ss-base
type: disk
name: default
cluster:
server_name: ${REMOTE_NAME}
enabled: true
member_config: []
cluster_address: ""
cluster_certificate: ""
server_address: ""
cluster_password: ""
cluster_certificate_path: ""
cluster_token: ""
EOF
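
Piping the heredoc into 'incus admin init --preseed' over ssh is equivalent to running it on the host against a saved YAML file; a quick sanity-check sequence after it completes (file name illustrative):

incus admin init --preseed < preseed.yaml    # same YAML as between the EOF markers
incus network list                           # incusbr0 and incusbr1 should exist
incus profile show default                   # root disk on pool ss-base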
# ensure the lxd service is available over the network, then add a lxc remote, then switch the active remote to it.
if wait-for-it -t 20 "$FQDN:8443"; then
# now create a remote on your local LXC client and switch to it.
# the software will now target the new remote.
lxc remote add "$REMOTE_NAME" "$FQDN" --password="$LXD_REMOTE_PASSWORD" --protocol=lxd --auth-type=tls --accept-certificate
lxc remote switch "$REMOTE_NAME"
ssh ubuntu@"$FQDN" incus project list -q >> /dev/null
echo "INFO: A new remote named '$REMOTE_NAME' has been created. Your LXC client has been switched to it."
# ensure the incus service is available over the network, then add an incus remote, then switch the active remote to it.
if wait-for-it -t 20 "$FQDN:8443"; then
# before we add the remote, we need a trust token from the incus server
INCUS_CERT_TRUST_TOKEN=$(ssh ubuntu@"$FQDN" incus config trust add ss-mgmt | tail -n 1)
# now create a remote on your local incus client and switch to it.
# the software will now target the new remote.
incus remote add "$REMOTE_NAME" "$FQDN" --auth-type=tls --accept-certificate --token="$INCUS_CERT_TRUST_TOKEN"
incus remote switch "$REMOTE_NAME"
echo "INFO: A new remote named '$REMOTE_NAME' has been created. Your incus client has been switched to it."
else
echo "ERROR: Could not detect the LXD endpoint. Something went wrong."
echo "ERROR: Could not detect the incus endpoint. Something went wrong."
exit 1
fi
# create the default storage pool if necessary
if ! lxc storage list --format csv | grep -q ss-base; then
if ! incus storage list --format csv | grep -q ss-base; then
if [ "$DISK_TO_USE" != loop ]; then
# we omit putting a size here so, so LXD will consume the entire disk if '/dev/sdb' or partition if '/dev/sdb1'.
# we omit putting a size here, so incus will consume the entire disk if '/dev/sdb' or the partition if '/dev/sdb1'.
# TODO do some sanity/resource checking on DISK_TO_USE. Implement full-disk encryption?
lxc storage create ss-base zfs source="$DISK_TO_USE"
incus storage create ss-base zfs source="$DISK_TO_USE"
else
# if a disk is the default 'loop', then we create a zfs storage pool
# on top of the existing filesystem using a loop device, per LXD docs
lxc storage create ss-base zfs
# on top of the existing filesystem using a loop device, per incus docs
incus storage create ss-base zfs
fi
# # create the testnet/mainnet blocks/chainstate subvolumes.
# for CHAIN in mainnet testnet; do
# for DATA in blocks chainstate; do
# if ! lxc storage volume list ss-base | grep -q "$CHAIN-$DATA"; then
# lxc storage volume create ss-base "$CHAIN-$DATA" --type=filesystem
# if ! incus storage volume list ss-base | grep -q "$CHAIN-$DATA"; then
# incus storage volume create ss-base "$CHAIN-$DATA" --type=filesystem
# fi
# done
# done
else
echo "WARNING! The host '$FQDN' appears to have Sovereign Stack worksloads already provisioned."
echo "INFO: Here are your current Deployments."
lxc project list -q
fi
echo "INFO: completed remote.sh."


@ -2,9 +2,19 @@
set -eu
CURRENT_REMOTE="$(lxc remote get-default)"
CURRENT_REMOTE="$(incus remote get-default)"
DEPLOYMENT_STRING=
if echo "$CURRENT_REMOTE" | grep -q "production"; then
SS_ROOT_PATH="$HOME/ss"
REMOTES_PATH="$SS_ROOT_PATH/remotes"
PROJECTS_PATH="$SS_ROOT_PATH/projects"
SITES_PATH="$SS_ROOT_PATH/sites"
INCUS_CONFIG_PATH="$SS_ROOT_PATH/incus"
SS_CACHE_PATH="$SS_ROOT_PATH/cache"
if echo "$CURRENT_REMOTE" | grep -q "prod"; then
echo "WARNING: You are running a migration procedure on a production system."
echo ""
@ -40,49 +50,3 @@ fi
source "$REMOTE_DEFINITION"
# ensure our projects are provisioned according to DEPLOYMENT_STRING
for PROJECT_CHAIN in ${DEPLOYMENT_STRING//,/ }; do
NO_PARENS="${PROJECT_CHAIN:1:${#PROJECT_CHAIN}-2}"
PROJECT_PREFIX=$(echo "$NO_PARENS" | cut -d'|' -f1)
BITCOIN_CHAIN=$(echo "$NO_PARENS" | cut -d'|' -f2)
PROJECT_NAME="$PROJECT_PREFIX-$BITCOIN_CHAIN"
# create the lxc project as specified by PROJECT_NAME
if ! lxc project list | grep -q "$PROJECT_NAME"; then
lxc project create "$PROJECT_NAME"
lxc project set "$PROJECT_NAME" features.networks=true features.images=false features.storage.volumes=true
lxc project switch "$PROJECT_NAME"
fi
# default values are already at regtest mode.
if [ "$BITCOIN_CHAIN" = testnet ]; then
WWW_SSDATA_DISK_SIZE_GB=30
WWW_BACKUP_DISK_SIZE_GB=30
WWW_DOCKER_DISK_SIZE_GB=50
BTCPAYSERVER_SSDATA_DISK_SIZE_GB=30
BTCPAYSERVER_BACKUP_DISK_SIZE_GB=30
BTCPAYSERVER_DOCKER_DISK_SIZE_GB=100
elif [ "$BITCOIN_CHAIN" = mainnet ]; then
WWW_SSDATA_DISK_SIZE_GB=40
WWW_BACKUP_DISK_SIZE_GB=40
WWW_DOCKER_DISK_SIZE_GB=1000
BTCPAYSERVER_SSDATA_DISK_SIZE_GB=30
BTCPAYSERVER_BACKUP_DISK_SIZE_GB=30
BTCPAYSERVER_DOCKER_DISK_SIZE_GB=500
fi
export WWW_SSDATA_DISK_SIZE_GB="$WWW_SSDATA_DISK_SIZE_GB"
export WWW_BACKUP_DISK_SIZE_GB="$WWW_BACKUP_DISK_SIZE_GB"
export WWW_DOCKER_DISK_SIZE_GB="$WWW_DOCKER_DISK_SIZE_GB"
export BTCPAYSERVER_SSDATA_DISK_SIZE_GB="$BTCPAYSERVER_SSDATA_DISK_SIZE_GB"
export BTCPAYSERVER_BACKUP_DISK_SIZE_GB="$BTCPAYSERVER_BACKUP_DISK_SIZE_GB"
export BTCPAYSERVER_DOCKER_DISK_SIZE_GB="$BTCPAYSERVER_DOCKER_DISK_SIZE_GB"
done
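
The removed block above parsed DEPLOYMENT_STRING with three expansions: ${VAR//,/ } turns commas into spaces for the loop, ${VAR:1:${#VAR}-2} strips the surrounding parentheses, and cut splits on '|'. Traced against the old default value:

DEPLOYMENT_STRING="(dev|regtest),(staging|testnet)"
for PROJECT_CHAIN in ${DEPLOYMENT_STRING//,/ }; do
NO_PARENS="${PROJECT_CHAIN:1:${#PROJECT_CHAIN}-2}"    # dev|regtest, then staging|testnet
PROJECT_PREFIX=$(echo "$NO_PARENS" | cut -d'|' -f1)   # dev, then staging
BITCOIN_CHAIN=$(echo "$NO_PARENS" | cut -d'|' -f2)    # regtest, then testnet
echo "$PROJECT_PREFIX-$BITCOIN_CHAIN"                 # dev-regtest, staging-testnet
done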


@ -1,17 +1,19 @@
#!/bin/bash
set -e
set -exu
cd "$(dirname "$0")"
echo "WARNING: THIS SCRIPT NEEDS WORK"
exit 1
PURGE_LXD=false
PURGE_INCUS=false
# grab any modifications from the command line.
for i in "$@"; do
case $i in
--purge)
PURGE_LXD=true
PURGE_INCUS=true
shift
;;
*)
@ -21,65 +23,65 @@ for i in "$@"; do
esac
done
source ../defaults.sh
./down.sh
# these only get initialized upon creation, so we MUST delete here so they get recreated.
if lxc profile list | grep -q "$BASE_IMAGE_VM_NAME"; then
lxc profile delete "$BASE_IMAGE_VM_NAME"
if incus profile list | grep -q "$BASE_IMAGE_VM_NAME"; then
incus profile delete "$BASE_IMAGE_VM_NAME"
fi
if lxc image list | grep -q "$BASE_IMAGE_VM_NAME"; then
lxc image rm "$BASE_IMAGE_VM_NAME"
if incus image list | grep -q "$BASE_IMAGE_VM_NAME"; then
incus image rm "$BASE_IMAGE_VM_NAME"
fi
if lxc image list | grep -q "$DOCKER_BASE_IMAGE_NAME"; then
lxc image rm "$DOCKER_BASE_IMAGE_NAME"
if incus image list | grep -q "$DOCKER_BASE_IMAGE_NAME"; then
incus image rm "$DOCKER_BASE_IMAGE_NAME"
fi
CURRENT_PROJECT="$(lxc info | grep "project:" | awk '{print $2}')"
if ! lxc info | grep -q "project: default"; then
lxc project switch default
lxc project delete "$CURRENT_PROJECT"
CURRENT_PROJECT="$(incus info | grep "project:" | awk '{print $2}')"
if ! incus info | grep -q "project: default"; then
incus project switch default
incus project delete "$CURRENT_PROJECT"
fi
if [ "$PURGE_LXD" = true ]; then
if [ "$PURGE_INCUS" = true ]; then
if lxc profile show default | grep -q "root:"; then
lxc profile device remove default root
if incus profile show default | grep -q "root:"; then
incus profile device remove default root
fi
if lxc profile show default| grep -q "eth0:"; then
lxc profile device remove default eth0
if incus profile show default| grep -q "eth0:"; then
incus profile device remove default eth0
fi
if lxc network list --format csv -q --project default | grep -q lxdbr0; then
lxc network delete lxdbr0 --project default
if incus network list --format csv -q --project default | grep -q incusbr0; then
incus network delete incusbr0 --project default
fi
if lxc network list --format csv -q --project default | grep -q lxdbr1; then
lxc network delete lxdbr1 --project default
if incus network list --format csv -q --project default | grep -q incusbr1; then
incus network delete incusbr1 --project default
fi
# # create the testnet/mainnet blocks/chainstate subvolumes.
# for CHAIN in mainnet testnet; do
# for DATA in blocks chainstate; do
# if lxc storage volume list ss-base | grep -q "$CHAIN-$DATA"; then
# lxc storage volume delete ss-base "$CHAIN-$DATA"
# if incus storage volume list ss-base | grep -q "$CHAIN-$DATA"; then
# incus storage volume delete ss-base "$CHAIN-$DATA"
# fi
# done
# done
if lxc storage list --format csv | grep -q ss-base; then
lxc storage delete ss-base
fi
echo "WARNING: ss-basae NOT DELETED. NEED TO TEST THIS SCRIPT"
# if incus storage list --format csv | grep -q ss-base; then
# incus storage delete ss-base
# fi
CURRENT_REMOTE="$(lxc remote get-default)"
if ! lxc remote get-default | grep -q "local"; then
lxc remote switch local
lxc remote remove "$CURRENT_REMOTE"
CURRENT_REMOTE="$(incus remote get-default)"
if ! incus remote get-default | grep -q "local"; then
incus remote switch local
incus remote remove "$CURRENT_REMOTE"
echo "INFO: The remote '$CURRENT_REMOTE' has been removed! You are now controlling your local instance."
fi


@ -1 +1,4 @@
#!/bin/bash
set -exu


@ -9,14 +9,14 @@ cd "$(dirname "$0")"
echo "Global Settings:"
lxc image list
lxc storage list
incus image list
incus storage list
echo
echo
PROJECT_NAME="$(lxc info | grep "project:" | awk '{print $2}')"
PROJECT_NAME="$(incus info | grep "project:" | awk '{print $2}')"
export PROJECT_NAME="$PROJECT_NAME"
export PROJECT_PATH="$PROJECTS_PATH/$PROJECT_NAME"
@ -26,17 +26,17 @@ echo "Active project: $PROJECT_NAME"
echo "----------------------------------------------------------"
echo " Networks:"
lxc network list
incus network list
echo
echo " Storage Volumes:"
lxc storage volume list ss-base
incus storage volume list ss-base
echo
echo " Profiles:"
lxc profile list
incus profile list
echo
echo " Instances (VMs):"
lxc list
incus list

66
deployment/stop.sh Executable file

@ -0,0 +1,66 @@
#!/bin/bash
# https://www.sovereign-stack.org/ss-down/
set -eu
cd "$(dirname "$0")"
if incus remote get-default -q | grep -q "local"; then
echo "ERROR: you are on the local incus remote. Nothing to take down"
exit 1
fi
SERVER_TO_STOP=
OTHER_SITES_LIST=
# grab any modifications from the command line.
for i in "$@"; do
case $i in
--server=*)
SERVER_TO_STOP="${i#*=}"
shift
;;
*)
echo "Unexpected option: $1"
exit 1
;;
esac
done
if [ -z "$SERVER_TO_STOP" ]; then
echo "ERROR: you MUST specify a server to stop with '--server=www' for example."
exit 1
fi
. ./deployment_defaults.sh
. ./remote_env.sh
. ./project_env.sh
# let's bring down services on the remote deployment if necessary.
export DOMAIN_NAME="$PRIMARY_DOMAIN"
export SITE_PATH="$SITES_PATH/$PRIMARY_DOMAIN"
source "$SITE_PATH/site.conf"
source ./project/domain_env.sh
source ./domain_list.sh
if [ "$SERVER_TO_STOP" = www ]; then
DOCKER_HOST="ssh://ubuntu@$PRIMARY_WWW_FQDN" ./project/www/stop_docker_stacks.sh
fi
if [ "$SERVER_TO_STOP" = btcpayserver ]; then
if wait-for-it -t 5 "$BTCPAY_SERVER_FQDN":22; then
ssh "$BTCPAY_SERVER_FQDN" "bash -c $BTCPAY_SERVER_APPPATH/btcpay-down.sh"
else
echo "ERROR: the remote BTCPAY Server is not available on ssh."
exit 1
fi
fi
if [ "$SERVER_TO_STOP" = lnplayserver ]; then
DOCKER_HOST="ssh://ubuntu@$LNPLAY_SERVER_FQDN" ./project/lnplay/down.sh
fi
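
The flag parsing here (and in up.sh and down.sh) leans on ${i#*=}, which removes the shortest prefix ending in '=' and leaves only the value:

i="--server=www"
echo "${i#*=}"    # www
i="--backup-archive-path=/tmp/btcpay.tar.gz"
echo "${i#*=}"    # /tmp/btcpay.tar.gz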


@ -1,18 +1,18 @@
#!/bin/bash
set -eu
set -exu
cd "$(dirname "$0")"
VIRTUAL_MACHINE=base
LXD_HOSTNAME=
INCUS_HOSTNAME=
SSDATA_VOLUME_NAME=
BACKUP_VOLUME_NAME=
# grab any modifications from the command line.
for i in "$@"; do
case $i in
--lxd-hostname=*)
LXD_HOSTNAME="${i#*=}"
--incus-hostname=*)
INCUS_HOSTNAME="${i#*=}"
shift
;;
--vm=*)
@ -36,11 +36,11 @@ done
# generate the custom cloud-init file. Cloud init installs and configures sshd
SSH_AUTHORIZED_KEY=$(<"$SSH_PUBKEY_PATH")
eval "$(ssh-agent -s)"
ssh-add "$SSH_HOME/id_rsa"
eval "$(ssh-agent -s)" > /dev/null
ssh-add "$SSH_HOME/id_rsa" > /dev/null
export SSH_AUTHORIZED_KEY="$SSH_AUTHORIZED_KEY"
export FILENAME="$LXD_HOSTNAME.yml"
export FILENAME="$INCUS_HOSTNAME.yml"
mkdir -p "$PROJECT_PATH/cloud-init"
YAML_PATH="$PROJECT_PATH/cloud-init/$FILENAME"
@ -73,9 +73,14 @@ if [ "$VIRTUAL_MACHINE" = btcpayserver ]; then
EOF
fi
elif [ "$VIRTUAL_MACHINE" = lnplayserver ]; then
cat >> "$YAML_PATH" <<EOF
limits.cpu: "${LNPLAY_SERVER_CPU_COUNT}"
limits.memory: "${LNPLAY_SERVER_MEMORY_MB}MB"
. ./target.sh
EOF
fi
# if VIRTUAL_MACHINE=base, then we are building the base image.
if [ "$VIRTUAL_MACHINE" = base ]; then
@ -135,9 +140,6 @@ EOF
{
"registry-mirrors": [
"${REGISTRY_URL}"
],
"labels": [
"PROJECT_COMMIT=${PROJECT_GIT_COMMIT}"
]
}
@ -153,7 +155,7 @@ if [ "$VIRTUAL_MACHINE" = base ]; then
runcmd:
- sudo mkdir -m 0755 -p /etc/apt/keyrings
- curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
- echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list
- echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu jammy stable" | sudo tee /etc/apt/sources.list.d/docker.list
- sudo apt-get update
- sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
- sudo DEBIAN_FRONTEND=noninteractive apt-get install -y openssh-server
@ -175,8 +177,13 @@ if [ "$VIRTUAL_MACHINE" != base ]; then
preserve_hostname: true
fqdn: ${FQDN}
resize_rootfs: false
EOF
fi
if [ "$VIRTUAL_MACHINE" = www ] || [ "$VIRTUAL_MACHINE" = btcpayserver ]; then
# all other machines that are not the base image
cat >> "$YAML_PATH" <<EOF
resize_rootfs: false
disk_setup:
/dev/sdb:
table_type: 'gpt'
@ -209,28 +216,62 @@ if [ "$VIRTUAL_MACHINE" != base ]; then
match:
macaddress: ${MAC_ADDRESS_TO_PROVISION}
set-name: enp5s0
EOF
fi
# TODO try to get DHCP working reliably.
if [ "$VIRTUAL_MACHINE" = btcpayserver ]; then
cat >> "$YAML_PATH" <<EOF
enp6s0:
dhcp4: true
addresses:
- 10.10.10.66/24
EOF
fi
if [ "$VIRTUAL_MACHINE" = www ]; then
cat >> "$YAML_PATH" <<EOF
enp6s0:
addresses:
- 10.10.10.65/24
EOF
fi
# All profiles get a root disk and cloud-init config.
cat >> "$YAML_PATH" <<EOF
description: Default LXD profile for ${FILENAME}
description: Default incus profile for ${FILENAME}
devices:
EOF
if [ "$VIRTUAL_MACHINE" = lnplayserver ]; then
# All profiles get a root disk and cloud-init config.
cat >> "$YAML_PATH" <<EOF
root:
path: /
pool: ss-base
type: disk
size: 20GiB
EOF
else
# All profiles get a root disk and cloud-init config.
cat >> "$YAML_PATH" <<EOF
root:
path: /
pool: ss-base
type: disk
EOF
fi
cat >> "$YAML_PATH" <<EOF
config:
source: cloud-init:config
type: disk
EOF
if [ "$VIRTUAL_MACHINE" != base ]; then
if [ "$VIRTUAL_MACHINE" = www ] || [ "$VIRTUAL_MACHINE" = btcpayserver ]; then
cat >> "$YAML_PATH" <<EOF
ss-data:
path: ${REMOTE_DATA_PATH}
@ -250,42 +291,48 @@ if [ "$VIRTUAL_MACHINE" = base ]; then
cat >> "$YAML_PATH" <<EOF
enp6s0:
name: enp6s0
network: lxdbr0
network: incusbr0
type: nic
name: ${FILENAME}
EOF
else
# If we are deploying a VM that attaches to the network underlay.
# all other vms attach to the network underlay
cat >> "$YAML_PATH" <<EOF
enp5s0:
nictype: macvlan
parent: ${DATA_PLANE_MACVLAN_INTERFACE}
type: nic
EOF
if [ "$VIRTUAL_MACHINE" = www ] || [ "$VIRTUAL_MACHINE" = btcpayserver ]; then
cat >> "$YAML_PATH" <<EOF
enp6s0:
name: enp6s0
network: ss-ovn
type: nic
EOF
fi
cat >> "$YAML_PATH" <<EOF
name: ${PRIMARY_DOMAIN}
EOF
fi
# let's create a profile for the BCM TYPE-1 VMs. This is per VM.
if [ "$VIRTUAL_MACHINE" = base ]; then
if ! lxc profile list --format csv --project default | grep -q "$LXD_HOSTNAME"; then
lxc profile create "$LXD_HOSTNAME" --project default
if ! incus profile list --format csv --project default | grep -q "$INCUS_HOSTNAME"; then
incus profile create "$INCUS_HOSTNAME" --project default
fi
# configure the profile with our generated cloud-init.yml file.
cat "$YAML_PATH" | lxc profile edit "$LXD_HOSTNAME" --project default
incus profile edit "$INCUS_HOSTNAME" --project default < "$YAML_PATH"
else
if ! lxc profile list --format csv | grep -q "$LXD_HOSTNAME"; then
lxc profile create "$LXD_HOSTNAME"
if ! incus profile list --format csv | grep -q "$INCUS_HOSTNAME"; then
incus profile create "$INCUS_HOSTNAME"
fi
# configure the profile with our generated cloud-init.yml file.
cat "$YAML_PATH" | lxc profile edit "$LXD_HOSTNAME"
incus profile edit "$INCUS_HOSTNAME" < "$YAML_PATH"
fi
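
stub_profile.sh assembles the profile by appending heredoc fragments to one YAML file, so each conditional contributes only its own keys before the file is piped into 'incus profile edit'. The skeleton of that pattern, reduced to a toy example:

YAML_PATH=/tmp/profile.yml    # illustrative path
: > "$YAML_PATH"
cat >> "$YAML_PATH" <<EOF
config:
  limits.cpu: "4"
EOF
WANT_MEMORY_LIMIT=true    # stands in for the VIRTUAL_MACHINE checks
if [ "$WANT_MEMORY_LIMIT" = true ]; then
cat >> "$YAML_PATH" <<EOF
  limits.memory: "4096MB"
EOF
fi
incus profile edit my-profile < "$YAML_PATH"    # profile name illustrative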


@ -1,3 +0,0 @@
#!/bin/bash
export TARGET_PROJECT_GIT_COMMIT=42a1604146bce97dd363f3f6e44afb7e19f1ce0f


@ -1,12 +1,10 @@
#!/bin/bash
set -eu
set -exu
cd "$(dirname "$0")"
. ./target.sh
# check to ensure dependencies are met.
for cmd in wait-for-it dig rsync sshfs lxc; do
for cmd in wait-for-it dig rsync sshfs incus; do
if ! command -v "$cmd" >/dev/null 2>&1; then
echo "This script requires \"${cmd}\" to be installed. Please run 'install.sh'."
exit 1
@ -14,7 +12,7 @@ for cmd in wait-for-it dig rsync sshfs lxc; do
done
# do a spot check; if we are on production warn.
if lxc remote get-default | grep -q "production"; then
if incus remote get-default | grep -q "production"; then
echo "WARNING: You are running command against a production system!"
echo ""
@ -38,22 +36,25 @@ OTHER_SITES_LIST=
PRIMARY_DOMAIN=
RUN_CERT_RENEWAL=true
SKIP_BASE_IMAGE_CREATION=false
SKIP_WWW=false
RESTORE_WWW=false
RESTORE_CERTS=false
BACKUP_CERTS=false
BACKUP_BTCPAY=false
BACKUP_CERTS=false
BACKUP_APPS=false
BACKUP_BTCPAY=false
BACKUP_BTCPAY_ARCHIVE_PATH=
BACKUP_CERTS=true
BACKUP_BTCPAY=true
SKIP_BTCPAY_SERVER=false
SKIP_WWW_SERVER=false
SKIP_LNPLAY_SERVER=false
BACKUP_BTCPAY_ARCHIVE_PATH=
RESTORE_BTCPAY=false
SKIP_BTCPAY=false
UPDATE_BTCPAY=false
REMOTE_NAME="$(lxc remote get-default)"
STOP_SERVICES=false
REMOTE_NAME="$(incus remote get-default)"
USER_SAYS_YES=false
RESTART_FRONT_END=true
WWW_SERVER_MAC_ADDRESS=
BTCPAY_SERVER_MAC_ADDRESS=
LNPLAY_SERVER_MAC_ADDRESS=
LNPLAY_ENV_PATH=
LNPLAY_VM_EXPIRATION_DATE=
LNPLAY_ORDER_ID=
# grab any modifications from the command line.
for i in "$@"; do
@ -62,30 +63,30 @@ for i in "$@"; do
RESTORE_CERTS=true
shift
;;
--restore-www)
--restore-wwwserver)
RESTORE_WWW=true
RESTORE_CERTS=true
shift
;;
--restore-btcpay)
RESTORE_BTCPAY=true
shift
;;
--backup-www)
BACKUP_CERTS=true
BACKUP_APPS=true
--skip-btcpayserver)
SKIP_BTCPAY_SERVER=true
shift
;;
--skip-wwwserver)
SKIP_WWW_SERVER=true
shift
;;
--skip-lnplayserver)
SKIP_LNPLAY_SERVER=true
shift
;;
--backup-btcpayserver)
BACKUP_BTCPAY=true
shift
;;
--stop)
STOP_SERVICES=true
RESTART_FRONT_END=false
shift
;;
--backup-archive-path=*)
BACKUP_BTCPAY_ARCHIVE_PATH="${i#*=}"
shift
@ -94,14 +95,6 @@ for i in "$@"; do
UPDATE_BTCPAY=true
shift
;;
--skip-www)
SKIP_WWW=true
shift
;;
--skip-btcpayserver)
SKIP_BTCPAY=true
shift
;;
--skip-base-image)
SKIP_BASE_IMAGE_CREATION=true
shift
@ -110,6 +103,18 @@ for i in "$@"; do
RUN_CERT_RENEWAL=false
shift
;;
--lnplay-env-path=*)
LNPLAY_ENV_PATH="${i#*=}"
shift
;;
--vm-expiration-date=*)
LNPLAY_VM_EXPIRATION_DATE="${i#*=}"
shift
;;
--order-id=*)
LNPLAY_ORDER_ID="${i#*=}"
shift
;;
-y)
USER_SAYS_YES=true
shift
@ -121,6 +126,7 @@ for i in "$@"; do
esac
done
if [ "$RESTORE_BTCPAY" = true ] && [ -z "$BACKUP_BTCPAY_ARCHIVE_PATH" ]; then
echo "ERROR: Use the '--backup-archive-path=/path/to/btcpay/archive.tar.gz' option when restoring btcpay server."
exit 1
@ -134,18 +140,15 @@ fi
. ./remote_env.sh
export REGISTRY_DOCKER_IMAGE="registry:2"
export RESTORE_WWW="$RESTORE_WWW"
export STOP_SERVICES="$STOP_SERVICES"
export BACKUP_CERTS="$BACKUP_CERTS"
export BACKUP_APPS="$BACKUP_APPS"
export RESTORE_BTCPAY="$RESTORE_BTCPAY"
export RESTORE_WWW="$RESTORE_WWW"
export BACKUP_BTCPAY="$BACKUP_BTCPAY"
export RUN_CERT_RENEWAL="$RUN_CERT_RENEWAL"
export REMOTE_NAME="$REMOTE_NAME"
export REMOTE_PATH="$REMOTES_PATH/$REMOTE_NAME"
export USER_SAYS_YES="$USER_SAYS_YES"
export BACKUP_BTCPAY_ARCHIVE_PATH="$BACKUP_BTCPAY_ARCHIVE_PATH"
export RESTART_FRONT_END="$RESTART_FRONT_END"
export RESTORE_CERTS="$RESTORE_CERTS"
# todo convert this to Trezor-T
@ -163,15 +166,14 @@ fi
export REMOTE_DEFINITION="$REMOTE_DEFINITION"
source "$REMOTE_DEFINITION"
export LXD_REMOTE_PASSWORD="$LXD_REMOTE_PASSWORD"
export DEPLOYMENT_STRING="$DEPLOYMENT_STRING"
# this is our password generation mechanism. Relying on GPG for secure password generation
# TODO see if this is a secure way to do it.
function new_pass {
gpg --gen-random --armor 1 25
}
function stub_site_definition {
mkdir -p "$SITE_PATH" "$PROJECT_PATH/sites"
@@ -195,7 +197,7 @@ DOMAIN_NAME="${DOMAIN_NAME}"
SITE_LANGUAGE_CODES="en"
DUPLICITY_BACKUP_PASSPHRASE="$(new_pass)"
DEPLOY_GHOST=true
DEPLOY_CLAMS=false
DEPLOY_NEXTCLOUD=false
DEPLOY_NOSTR=false
NOSTR_ACCOUNT_PUBKEY=
@@ -207,6 +209,12 @@ NEXTCLOUD_MYSQL_ROOT_PASSWORD="$(new_pass)"
GITEA_MYSQL_PASSWORD="$(new_pass)"
GITEA_MYSQL_ROOT_PASSWORD="$(new_pass)"
#GHOST_DEPLOY_SMTP=true
#MAILGUN_FROM_ADDRESS=false
#MAILGUN_SMTP_USERNAME=
#MAILGUN_SMTP_PASSWORD=
EOL
chmod 0744 "$SITE_DEFINITION_PATH"
@@ -218,9 +226,13 @@ EOL
}
PROJECT_NAME="$(lxc info | grep "project:" | awk '{print $2}')"
PROJECT_NAME="$(incus info | grep "project:" | awk '{print $2}')"
export PROJECT_NAME="$PROJECT_NAME"
export PROJECT_PATH="$PROJECTS_PATH/$PROJECT_NAME"
export SKIP_BTCPAY_SERVER="$SKIP_BTCPAY_SERVER"
export SKIP_WWW_SERVER="$SKIP_WWW_SERVER"
export SKIP_LNPLAY_SERVER="$SKIP_LNPLAY_SERVER"
mkdir -p "$PROJECT_PATH" "$REMOTE_PATH/projects"
@@ -246,10 +258,16 @@ WWW_SERVER_MAC_ADDRESS=
# WWW_SERVER_CPU_COUNT="6"
# WWW_SERVER_MEMORY_MB="4096"
BTCPAY_SERVER_MAC_ADDRESS=
# BTCPAY_SERVER_CPU_COUNT="4"
# BTCPAY_SERVER_MEMORY_MB="4096"
LNPLAY_SERVER_MAC_ADDRESS=
# LNPLAY_SERVER_CPU_COUNT="4"
# LNPLAY_SERVER_MEMORY_MB="4096"
# BITCOIN_CHAIN=mainnet
EOL
chmod 0744 "$PROJECT_DEFINITION_PATH"
@@ -266,31 +284,14 @@ if [ -z "$PRIMARY_DOMAIN" ]; then
exit 1
fi
if [ -z "$WWW_SERVER_MAC_ADDRESS" ]; then
echo "ERROR: the WWW_SERVER_MAC_ADDRESS is not specified. Check your project.conf."
exit 1
fi
if [ -z "$BTCPAYSERVER_MAC_ADDRESS" ]; then
echo "ERROR: the BTCPAYSERVER_MAC_ADDRESS is not specified. Check your project.conf."
exit 1
fi
source ./domain_list.sh
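# domain_list.sh is expected to assemble DOMAIN_LIST (a comma-separated list of
# PRIMARY_DOMAIN plus OTHER_SITES_LIST) and a matching DOMAIN_COUNT, e.g.
# DOMAIN_LIST="a.com,b.com" yields DOMAIN_COUNT=2.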
# let's provision our primary domain first.
export DOMAIN_NAME="$PRIMARY_DOMAIN"
export PRIMARY_DOMAIN="$PRIMARY_DOMAIN"
export BITCOIN_CHAIN="$BITCOIN_CHAIN"
export SITE_PATH="$SITES_PATH/$DOMAIN_NAME"
export PRIMARY_WWW_FQDN="$WWW_HOSTNAME.$DOMAIN_NAME"
export PRIMARY_SITE_PATH="$SITES_PATH/$PRIMARY_DOMAIN"
stub_site_definition
@@ -303,23 +304,28 @@ export UPDATE_BTCPAY="$UPDATE_BTCPAY"
VPS_HOSTNAME=
. ./base.sh
if ! incus image list --format csv | grep -q "$DOCKER_BASE_IMAGE_NAME"; then
# create the incus base image.
if [ "$SKIP_BASE_IMAGE_CREATION" = false ]; then
./create_base.sh
fi
fi
VMS_TO_PROVISION=""
if [ -n "$WWW_SERVER_MAC_ADDRESS" ] && [ "$SKIP_WWW_SERVER" = false ]; then
VMS_TO_PROVISION="www"
fi
if [ "$VIRTUAL_MACHINE" = www ] && [ "$SKIP_WWW" = true ]; then
continue
fi
if [ -n "$BTCPAY_SERVER_MAC_ADDRESS" ] && [ "$SKIP_BTCPAY_SERVER" = false ]; then
VMS_TO_PROVISION="$VMS_TO_PROVISION btcpayserver"
fi
if [ -n "$LNPLAY_SERVER_MAC_ADDRESS" ] || [ "$SKIP_LNPLAY_SERVER" = false ]; then
VMS_TO_PROVISION="$VMS_TO_PROVISION lnplayserver"
fi
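# e.g. with all three MAC addresses set and no --skip-* flags, this resolves to
# VMS_TO_PROVISION="www btcpayserver lnplayserver".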
for VIRTUAL_MACHINE in $VMS_TO_PROVISION; do
export VIRTUAL_MACHINE="$VIRTUAL_MACHINE"
FQDN=
@@ -336,74 +342,61 @@ for VIRTUAL_MACHINE in www btcpayserver; do
fi
# Goal is to get the macvlan interface.
INCUS_SS_CONFIG_LINE=
if incus network list --format csv --project default | grep incusbr0 | grep -q "ss-config"; then
INCUS_SS_CONFIG_LINE="$(incus network list --format csv --project default | grep incusbr0 | grep ss-config)"
fi
if [ -z "$LXD_SS_CONFIG_LINE" ]; then
if [ -z "$INCUS_SS_CONFIG_LINE" ]; then
echo "ERROR: the MACVLAN interface has not been specified. You may need to run 'ss-remote' again."
exit 1
fi
CONFIG_ITEMS="$(echo "$LXD_SS_CONFIG_LINE" | awk -F'"' '{print $2}')"
CONFIG_ITEMS="$(echo "$INCUS_SS_CONFIG_LINE" | awk -F'"' '{print $2}')"
DATA_PLANE_MACVLAN_INTERFACE="$(echo "$CONFIG_ITEMS" | cut -d ',' -f2)"
export DATA_PLANE_MACVLAN_INTERFACE="$DATA_PLANE_MACVLAN_INTERFACE"
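# e.g. a matching row containing '...,"ss-config,enp89s0",...' yields
# CONFIG_ITEMS="ss-config,enp89s0" and DATA_PLANE_MACVLAN_INTERFACE="enp89s0"
# (interface name illustrative).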
# Now let's switch to the new project to ensure new resources are created under the project scope.
if ! incus info | grep "project:" | grep -q "$PROJECT_NAME"; then
incus project switch "$PROJECT_NAME"
fi
# check if the OVN network exists in this project.
if ! lxc network list | grep -q "ss-ovn"; then
lxc network create ss-ovn --type=ovn network=lxdbr1 ipv6.address=none
if ! incus network list | grep -q "ss-ovn"; then
incus network create ss-ovn --type=ovn network=incusbr1 ipv6.address=none
fi
export MAC_ADDRESS_TO_PROVISION=
export VPS_HOSTNAME="$VPS_HOSTNAME"
export FQDN="$VPS_HOSTNAME.$DOMAIN_NAME"
if [ "$VIRTUAL_MACHINE" = www ]; then
if [ "$SKIP_WWW" = true ]; then
echo "INFO: Skipping WWW due to command line argument."
continue
fi
if [ "$VIRTUAL_MACHINE" = www ] && [ -n "$WWW_SERVER_MAC_ADDRESS" ]; then
FQDN="$WWW_HOSTNAME.$DOMAIN_NAME"
VPS_HOSTNAME="$WWW_HOSTNAME"
MAC_ADDRESS_TO_PROVISION="$WWW_SERVER_MAC_ADDRESS"
elif [ "$VIRTUAL_MACHINE" = btcpayserver ] || [ "$SKIP_BTCPAY" = true ]; then
FQDN="$BTCPAY_HOSTNAME.$DOMAIN_NAME"
VPS_HOSTNAME="$BTCPAY_HOSTNAME"
MAC_ADDRESS_TO_PROVISION="$BTCPAYSERVER_MAC_ADDRESS"
elif [ "$VIRTUAL_MACHINE" = btcpayserver ] && [ -n "$BTCPAY_SERVER_MAC_ADDRESS" ]; then
FQDN="$BTCPAY_SERVER_HOSTNAME.$DOMAIN_NAME"
VPS_HOSTNAME="$BTCPAY_SERVER_HOSTNAME"
MAC_ADDRESS_TO_PROVISION="$BTCPAY_SERVER_MAC_ADDRESS"
elif [ "$VIRTUAL_MACHINE" = lnplayserver ] && [ -n "$LNPLAY_SERVER_MAC_ADDRESS" ]; then
FQDN="$LNPLAY_SERVER_HOSTNAME.$DOMAIN_NAME"
VPS_HOSTNAME="$LNPLAY_SERVER_HOSTNAME"
MAC_ADDRESS_TO_PROVISION="$LNPLAY_SERVER_MAC_ADDRESS"
elif [ "$VIRTUAL_MACHINE" = "$BASE_IMAGE_VM_NAME" ]; then
export FQDN="$BASE_IMAGE_VM_NAME"
else
echo "ERROR: VIRTUAL_MACHINE not within allowable bounds."
exit
FQDN="$BASE_IMAGE_VM_NAME"
fi
export FQDN="$FQDN"
export INCUS_VM_NAME="${FQDN//./-}"
export MAC_ADDRESS_TO_PROVISION="$MAC_ADDRESS_TO_PROVISION"
export PROJECT_PATH="$PROJECT_PATH"
./deploy_vm.sh
if [ "$VIRTUAL_MACHINE" = www ]; then
# this tells our local docker client to target the remote endpoint via SSH
export DOCKER_HOST="ssh://ubuntu@$PRIMARY_WWW_FQDN"
# enable docker swarm mode so we can support docker stacks.
if docker info | grep -q "Swarm: inactive"; then
docker swarm init --advertise-addr enp6s0
fi
fi
done
# let's stub out the rest of our site definitions, if any.
@@ -415,18 +408,71 @@ for DOMAIN_NAME in ${OTHER_SITES_LIST//,/ }; do
stub_site_definition
done
# now let's run the vm-specific provisioning scripts.
if [ "$SKIP_BTCPAY_SERVER" = false ]; then
if [ -n "$BTCPAY_SERVER_MAC_ADDRESS" ]; then
export DOCKER_HOST="ssh://ubuntu@$BTCPAY_SERVER_FQDN"
./project/btcpayserver/go.sh
fi
fi
export DOMAIN_NAME="$PRIMARY_DOMAIN"
export SITE_PATH="$SITES_PATH/$DOMAIN_NAME"
if [ "$SKIP_BTCPAY" = false ]; then
./project/btcpayserver/go.sh
else
echo "INFO: Skipping the btcpayserver VM."
fi
if [ "$SKIP_WWW_SERVER" = false ]; then
# now let's run the www-specific provisioning scripts.
if [ -n "$WWW_SERVER_MAC_ADDRESS" ]; then
export DOCKER_HOST="ssh://ubuntu@$WWW_FQDN"
# enable docker swarm mode so we can support docker stacks.
if docker info | grep -q "Swarm: inactive"; then
docker swarm init --advertise-addr enp6s0
fi
./project/www/go.sh
fi
fi
# don't run lnplay stuff if the user specifies --skip-lnplayserver
if [ "$SKIP_LNPLAY_SERVER" = false ]; then
# now let's run the lnplayserver-specific provisioning scripts.
if [ -n "$LNPLAY_SERVER_MAC_ADDRESS" ]; then
export DOCKER_HOST="ssh://ubuntu@$LNPLAY_SERVER_FQDN"
LNPLAY_ENV_FILE="$PRIMARY_SITE_PATH/$LNPLAY_SERVER_FQDN/lnplay.conf"
if [ ! -f "$LNPLAY_ENV_FILE" ]; then
# and we have to set our environment file as well.
cat > "$LNPLAY_ENV_FILE" <<EOL
DOCKER_HOST=ssh://ubuntu@${LNPLAY_SERVER_FQDN}
BACKEND_FQDN=lnplay.${PRIMARY_DOMAIN}
FRONTEND_FQDN=remote.${PRIMARY_DOMAIN}
ENABLE_TLS=true
BTC_CHAIN=${BITCOIN_CHAIN}
CHANNEL_SETUP=none
LNPLAY_SERVER_PATH=${SITES_PATH}/${PRIMARY_DOMAIN}/lnplayserver
DEPLOY_PRISM_PLUGIN=true
EOL
fi
INCUS_LNPLAYSERVER_IMAGE_NAME="lnplayserver-$DOMAIN_NAME"
if ! incus image list -q --format csv | grep -q "$INCUS_LNPLAYSERVER_IMAGE_NAME"; then
# do all the docker image creation steps, but don't run services.
bash -c "./project/lnplay/up.sh -y --no-services --lnplay-conf-path=$LNPLAY_ENV_FILE"
# stop the instance so we can capture an image.
INCUS_VM_NAME="${LNPLAY_SERVER_FQDN//./-}"
incus stop "$INCUS_VM_NAME"
# create the incus image.
incus publish -q --public "$INCUS_VM_NAME" --alias="$INCUS_LNPLAYSERVER_IMAGE_NAME" --compression none
incus start "$INCUS_VM_NAME"
sleep 10
bash -c "./wait_for_ip.sh --incus-name=$INCUS_VM_NAME"
sleep 3
fi
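# if needed, the published image can be verified with 'incus image list | grep lnplayserver'.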
# bring up lnplay services.
bash -c "./project/lnplay/up.sh -y --lnplay-conf-path=$LNPLAY_ENV_FILE"
fi
fi

View File

@@ -1,101 +0,0 @@
#!/bin/bash
set -eu
cd "$(dirname "$0")"
. ./target.sh
# # As part of the install script, we pull down any other sovereign-stack git repos
# PROJECTS_SCRIPTS_REPO_URL="https://git.sovereign-stack.org/ss/project"
# PROJECTS_SCRIPTS_PATH="$(pwd)/deployment/project"
# if [ ! -d "$PROJECTS_SCRIPTS_PATH" ]; then
# git clone "$PROJECTS_SCRIPTS_REPO_URL" "$PROJECTS_SCRIPTS_PATH"
# else
# cd "$PROJECTS_SCRIPTS_PATH" || exit 1
# git -c advice.detachedHead=false pull origin main
# git checkout "$TARGET_PROJECT_GIT_COMMIT"
# cd - || exit 1
# fi
# check if there are any uncommitted changes. It's dangerous to
# alter production systems when you have commits to make or changes to stash.
if git update-index --refresh | grep -q "needs update"; then
echo "ERROR: You have uncommited changes! You MUST commit or stash all changes to continue."
exit 1
fi
echo "WARNING: this script backs up your existing remote and saves all data locally in the SSME."
echo " Then, all your VMs are destroyed on the remote resulting is destruction of user data."
echo " But then we re-create everything using the new codebase, then restore user data to the"
echo " newly provisioned VMs."
RESPONSE=
read -r -p "Are you sure you want to continue (y/n): ": RESPONSE
if [ "$RESPONSE" != "y" ]; then
echo "STOPPING."
exit 0
fi
. ./deployment_defaults.sh
. ./remote_env.sh
. ./project_env.sh
# Check to see if any of the VMs actually don't exist.
# (we only migrate instantiated vms)
for VM in www btcpayserver; do
LXD_NAME="$VM-${DOMAIN_NAME//./-}"
# if the VM doesn't exist, then we emit an error message and hard quit.
if ! lxc list --format csv | grep -q "$LXD_NAME"; then
echo "ERROR: there is no VM named '$LXD_NAME'. You probably need to run ss-up again."
exit 1
fi
done
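# e.g. VM=www with DOMAIN_NAME=example.com resolves to LXD_NAME=www-example-com.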
BTCPAY_RESTORE_ARCHIVE_PATH="$SITES_PATH/$PRIMARY_DOMAIN/backups/btcpayserver/$(date +%s).tar.gz"
echo "INFO: The BTCPAY_RESTORE_ARCHIVE_PATH for this migration will be: $BTCPAY_RESTORE_ARCHIVE_PATH"
# first, let's grab the GIT commit from the remote machine.
export DOMAIN_NAME="$PRIMARY_DOMAIN"
export SITE_PATH="$SITES_PATH/$PRIMARY_DOMAIN"
# source the site path so we know what features it has.
source "$SITE_PATH/site.conf"
source ./project/domain_env.sh
# now we want to switch the git HEAD of the project subdirectory to the
# version of code that was last used
GIT_COMMIT_ON_REMOTE_HOST="$(ssh ubuntu@$BTCPAY_FQDN cat /home/ubuntu/.ss-githead)"
cd project/
echo "INFO: switch the 'project' repo to commit prior commit '$GIT_COMMIT_ON_REMOTE_HOST'"
echo " This allows Sovereign Stack to can grab a backup using the version of the code"
echo " that was used when the deployment was created."
git checkout "$GIT_COMMIT_ON_REMOTE_HOST"
cd -
# run deploy, which backs up everything but doesn't restart any services.
bash -c "./up.sh --stop --backup-archive-path=$BTCPAY_RESTORE_ARCHIVE_PATH --backup-www --backup-btcpayserver --skip-base-image"
# call the down script (by default it is non-destructive of user data).
./down.sh
# next we switch back to the current version of Sovereign Stack scripts for bringing up the new version.
cd project/
echo "INFO: switching the 'project' repo back to the most recent commit '$TARGET_PROJECT_GIT_COMMIT'"
echo " That way new deployments will be instantiated using the latest codebase."
git checkout "$TARGET_PROJECT_GIT_COMMIT"
cd -
# TODO we can do some additional logic here. For example, if the user wants to provide a source/target project/remote,
# we can backup the source remote+project and restore it to the target remote+project. This will facilitate cross-device migrations
# However, if the source and target project/remote are the same, we don't really
# need to do any restorations (or backups for that matter, though we still grab one);
# we simply mount the existing data. That's the more common case where the user is simply upgrading the system in-place.
./up.sh

View File

@@ -2,13 +2,13 @@
set -e
INCUS_INSTANCE_NAME=
# grab any modifications from the command line.
for i in "$@"; do
case $i in
--incus-name=*)
INCUS_INSTANCE_NAME="${i#*=}"
shift
;;
*)
@@ -19,19 +19,19 @@ for i in "$@"; do
done
# if the invoker did not set the instance name, throw an error.
if [ -z "$LXC_INSTANCE_NAME" ]; then
echo "ERROR: The lxc instance name was not specified. Use '--lxc-name' when calling wait_for_lxc_ip.sh."
if [ -z "$INCUS_INSTANCE_NAME" ]; then
echo "ERROR: The instance name was not specified. Use '--incus-name' when calling wait_for_ip.sh."
exit 1
fi
if ! lxc list --format csv | grep -q "$LXC_INSTANCE_NAME"; then
echo "ERROR: the lxc instance '$LXC_INSTANCE_NAME' does not exist."
if ! incus list --format csv | grep -q "$INCUS_INSTANCE_NAME"; then
echo "ERROR: the instance '$INCUS_INSTANCE_NAME' does not exist."
exit 1
fi
IP_V4_ADDRESS=
while true; do
IP_V4_ADDRESS="$(lxc list "$LXC_INSTANCE_NAME" --format csv --columns=4 | grep enp5s0 | grep -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}')" || true
IP_V4_ADDRESS="$(incus list "$INCUS_INSTANCE_NAME" --format csv --columns=4 | grep enp5s0 | grep -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}')" || true
export IP_V4_ADDRESS="$IP_V4_ADDRESS"
if [ -n "$IP_V4_ADDRESS" ]; then
# give the machine extra time to spin up.
@@ -44,6 +44,8 @@ while true; do
done
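# note: '--columns=4' selects the IPv4 column of the csv output; the grep above then
# extracts the first dotted-quad address reported for enp5s0.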
# wait for cloud-init to complete before returning.
while incus exec "$INCUS_INSTANCE_NAME" -- [ ! -f /var/lib/cloud/instance/boot-finished ]; do
sleep 1
done
sleep 1

View File

@@ -1,6 +1,6 @@
#!/bin/bash
set -exu
cd "$(dirname "$0")"
# https://www.sovereign-stack.org/install/
@@ -11,42 +11,40 @@ if [ "$(hostname)" = ss-mgmt ]; then
exit 1
fi
# By default, we provision incus storage under / as a loop device. The admin
# can override this with CLI modifications.
# ensure the iptables forward policy is set to ACCEPT so your host can act as a router
# Note this is necessary if docker is running (or has been previously installed) on the
# same host running incus.
sudo iptables -F FORWARD
sudo iptables -P FORWARD ACCEPT
# run the incus install script.
sudo bash -c ./install_incus.sh
# run incus init
cat <<EOF | sudo incus admin init --preseed
config: {}
networks:
- config:
ipv4.address: auto
ipv4.dhcp: true
ipv4.nat: true
ipv6.address: none
description: "Default network bridge for ss-mgmt outbound network access."
name: incusbr0
type: bridge
project: default
storage_pools:
- config:
size: 30GiB
description: ""
name: sovereign-stack
driver: zfs
profiles:
- config: {}
description: ""
description: "Default profile for ss-mgmt."
devices:
enp5s0:
name: enp5s0
network: incusbr0
type: nic
root:
path: /
@@ -54,30 +52,29 @@ profiles:
type: disk
name: default
projects: []
cluster: null
EOF
. ./deployment/deployment_defaults.sh
. ./deployment/base.sh
# we need to get the base image. Import it if it's cached, else download it, then cache it.
if ! lxc image list | grep -q "$UBUNTU_BASE_IMAGE_NAME"; then
if ! incus image list | grep -q "$UBUNTU_BASE_IMAGE_NAME"; then
# if the image is cached locally, import it from disk; otherwise download it from ubuntu.
IMAGE_PATH="$HOME/ss/cache/ss-ubuntu-jammy"
# strip the path and extension so the identifier is just the image fingerprint.
IMAGE_IDENTIFIER=$(basename "$(find "$IMAGE_PATH" -name '*.qcow2' | head -n1)" .qcow2)
METADATA_FILE="$IMAGE_PATH/meta-$IMAGE_IDENTIFIER.tar.xz"
IMAGE_FILE="$IMAGE_PATH/$IMAGE_IDENTIFIER.qcow2"
if [ -d "$IMAGE_PATH" ] && [ -f "$METADATA_FILE" ] && [ -f "$IMAGE_FILE" ]; then
incus image import "$METADATA_FILE" "$IMAGE_FILE" --alias "$UBUNTU_BASE_IMAGE_NAME"
else
incus image copy "images:$BASE_INCUS_IMAGE" local: --alias "$UBUNTU_BASE_IMAGE_NAME" --vm --auto-update
mkdir -p "$IMAGE_PATH"
incus image export "$UBUNTU_BASE_IMAGE_NAME" "$IMAGE_PATH" --vm
fi
fi
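# the cache layout assumed above: $HOME/ss/cache/ss-ubuntu-jammy/ holds
# <identifier>.qcow2 alongside its metadata tarball meta-<identifier>.tar.xz.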
@@ -90,28 +87,25 @@ if [ ! -f "$SSH_PRIVKEY_PATH" ]; then
ssh-keygen -f "$SSH_PRIVKEY_PATH" -t rsa -b 4096
fi
chmod 700 "$HOME/.ssh"
chmod 600 "$HOME/.ssh/config"
# add SSH_PUBKEY_PATH to authorized_keys
grep -qxF "$(cat $SSH_PUBKEY_PATH)" "$SSH_PATH/authorized_keys" || cat "$SSH_PUBKEY_PATH" >> "$SSH_PATH/authorized_keys"
grep -qxF "$(cat "$SSH_PUBKEY_PATH")" "$SSH_PATH/authorized_keys" || cat "$SSH_PUBKEY_PATH" >> "$SSH_PATH/authorized_keys"
FROM_BUILT_IMAGE=false
if ! incus list --format csv | grep -q ss-mgmt; then
# TODO check to see if there's an existing ss-mgmt image to spawn from, otherwise do this.
if incus image list | grep -q ss-mgmt; then
FROM_BUILT_IMAGE=true
incus init ss-mgmt ss-mgmt --vm -c limits.cpu=4 -c limits.memory=4GiB --profile=default
else
lxc init "images:$BASE_LXC_IMAGE" ss-mgmt --vm -c limits.cpu=4 -c limits.memory=4GiB --profile=default
incus init "images:$BASE_INCUS_IMAGE" ss-mgmt --vm -c limits.cpu=4 -c limits.memory=4GiB --profile=default
fi
fi
# mount the pre-verified sovereign stack git repo into the new vm
if ! incus config device show ss-mgmt | grep -q ss-code; then
incus config device add ss-mgmt ss-code disk source="$(pwd)" path=/home/ubuntu/sovereign-stack
fi
# create the ~/ss path and mount it into the vm.
@@ -120,71 +114,71 @@ source ./deployment/base.sh
mkdir -p "$SS_ROOT_PATH"
if ! incus config device show ss-mgmt | grep -q ss-root; then
incus config device add ss-mgmt ss-root disk source="$SS_ROOT_PATH" path=/home/ubuntu/ss
fi
# if a ~/.bitcoin/testnet3/blocks directory exists, mount it in.
BITCOIN_DIR="$HOME/.bitcoin"
REMOTE_BITCOIN_CACHE_PATH="/home/ubuntu/ss/cache/bitcoin"
BITCOIN_TESTNET_BLOCKS_PATH="$BITCOIN_DIR/testnet3/blocks"
if [ -d "$BITCOIN_TESTNET_BLOCKS_PATH" ]; then
if ! incus config device show ss-mgmt | grep -q ss-testnet-blocks; then
incus config device add ss-mgmt ss-testnet-blocks disk source="$BITCOIN_TESTNET_BLOCKS_PATH" path=$REMOTE_BITCOIN_CACHE_PATH/testnet/blocks
fi
fi
# if a ~/.bitcoin/testnet3/chainstate directory exists, mount it in.
BITCOIN_TESTNET_CHAINSTATE_PATH="$BITCOIN_DIR/testnet3/chainstate"
if [ -d "$BITCOIN_TESTNET_CHAINSTATE_PATH" ]; then
if ! incus config device show ss-mgmt | grep -q ss-testnet-chainstate; then
incus config device add ss-mgmt ss-testnet-chainstate disk source="$BITCOIN_TESTNET_CHAINSTATE_PATH" path="$REMOTE_BITCOIN_CACHE_PATH/testnet/chainstate"
fi
fi
# if a ~/.bitcoin/blocks directory exists, mount it in.
BITCOIN_MAINNET_BLOCKS_PATH="$BITCOIN_DIR/blocks"
if [ -d "$BITCOIN_MAINNET_BLOCKS_PATH" ]; then
if ! incus config device show ss-mgmt | grep -q ss-mainnet-blocks; then
incus config device add ss-mgmt ss-mainnet-blocks disk source="$BITCOIN_MAINNET_BLOCKS_PATH" path="$REMOTE_BITCOIN_CACHE_PATH/mainnet/blocks"
fi
fi
# if a ~/.bitcoin/chainstate directory exists, mount it in.
BITCOIN_MAINNET_CHAINSTATE_PATH="$BITCOIN_DIR/chainstate"
if [ -d "$BITCOIN_MAINNET_CHAINSTATE_PATH" ]; then
if ! incus config device show ss-mgmt | grep -q ss-mainnet-chainstate; then
incus config device add ss-mgmt ss-mainnet-chainstate disk source="$BITCOIN_MAINNET_CHAINSTATE_PATH" path="$REMOTE_BITCOIN_CACHE_PATH/mainnet/chainstate"
fi
fi
# mount the ssh directory in there.
if [ -f "$SSH_PUBKEY_PATH" ]; then
if ! incus config device show ss-mgmt | grep -q ss-ssh; then
incus config device add ss-mgmt ss-ssh disk source="$HOME/.ssh" path=/home/ubuntu/.ssh
fi
fi
# start the vm if it's not already running
if lxc list --format csv | grep -q "ss-mgmt,STOPPED"; then
lxc start ss-mgmt
if incus list --format csv | grep -q "ss-mgmt,STOPPED"; then
incus start ss-mgmt
sleep 10
fi
# wait for the vm to have an IP address
. ./management/wait_for_ip.sh
# do some other preparations for user experience
incus file push ./management/bash_aliases ss-mgmt/home/ubuntu/.bash_aliases
incus file push ./management/bash_profile ss-mgmt/home/ubuntu/.bash_profile
incus file push ./management/bashrc ss-mgmt/home/ubuntu/.bashrc
incus file push ./management/motd ss-mgmt/etc/update-motd.d/sovereign-stack
# install SSH
incus exec ss-mgmt -- apt-get update
incus exec ss-mgmt -- apt-get install -y openssh-server
incus file push ./management/sshd_config ss-mgmt/etc/ssh/sshd_config
incus exec ss-mgmt -- sudo systemctl restart sshd
# add 'ss-manage' to the bare metal ~/.bashrc
ADDED_COMMAND=false
@@ -202,34 +196,19 @@ ssh-keyscan -H "$IP_V4_ADDRESS" >> "$SSH_HOME/known_hosts"
ssh "ubuntu@$IP_V4_ADDRESS" sudo chown -R ubuntu:ubuntu /home/ubuntu
if [ "$FROM_BUILT_IMAGE" = false ]; then
ssh "ubuntu@$IP_V4_ADDRESS" /home/ubuntu/sovereign-stack/management/provision.sh
incus stop ss-mgmt
if ! lxc image list | grep -q "ss-mgmt"; then
if ! incus image list | grep -q "ss-mgmt"; then
echo "Publishing image. Please wait, this may take a while..."
incus publish ss-mgmt --alias=ss-mgmt
fi
incus start ss-mgmt
fi
if [ "$ADDED_COMMAND" = true ]; then
echo "NOTICE! You need to run 'source ~/.bashrc' before continuing. After that, type 'ss-manage' to enter your management environment."
fi

install_incus.sh Executable file
View File

@@ -0,0 +1,75 @@
#!/bin/bash
set -exu
cd "$(dirname "$0")"
if [ $UID -ne 0 ]; then
echo "ERROR: run with sudo."
exit 1
fi
# install the Zabbly package signing key.
mkdir -p /etc/apt/keyrings/
cat <<EOF > /etc/apt/keyrings/zabbly.asc
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQGNBGTlYcIBDACYQoVXVyQ6Y3Of14GwEaiv/RstQ8jWnH441OtvDbD/VVT8yF0P
pUfypWjQS8aq0g32Qgb9H9+b8UAAKojA2W0szjJFlmmSq19YDMMmNC4AnfeZlKYM
61Zonna7fPaXmlsTlSiUeo/PGvmAXrkFURC9S8FbhZdWEcUpf9vcKAoEzV8qGA4J
xbKlj8EOjSkdq3OQ1hHjP8gynbbzMhZQwjbnWqoiPj35ed9EMn+0QcX+GmynGq6T
hBXdRdeQjZC6rmXzNF2opCyxqx3BJ0C7hUtpHegmeoH34wnJHCqGYkEKFAjlRLoW
tOzHY9J7OFvB6U7ENtnquj7lg2VQK+hti3uiHW+oide06QgjVw2irucCblQzphgo
iX5QJs7tgFFDsA9Ee0DZP6cu83hNFdDcXEZBc9MT5Iu0Ijvj7Oeym3DJpkCuIWgk
SeP56sp7333zrg73Ua7YZsZHRayAe/4YdNUua+90P4GD12TpTtJa4iRWRd7bis6m
tSkKRj7kxyTsxpEAEQEAAbQmWmFiYmx5IEtlcm5lbCBCdWlsZHMgPGluZm9AemFi
Ymx5LmNvbT6JAdQEEwEKAD4WIQRO/FkGlssVuHxzo62CzIeXyDjc/QUCZOVhwgIb
AwUJA8JnAAULCQgHAgYVCgkICwIEFgIDAQIeAQIXgAAKCRCCzIeXyDjc/W05C/4n
lGRTlyOETF2K8oWbjtan9wlttQ+pwymJCnP8T+JJDycGL8dPsGdG1ldHdorVZpFi
1P+Bem9bbiW73TpbX+WuCfP1g3WN7AVa2mYRfSVhsLNeBAMRgWgNW9JYsmg99lmY
aPsRYZdGu/PB+ffMIyWhjL3CKCbYS6lV5N5Mi4Lobyz/I1Euxpk2vJhhUqh786nJ
pQpDnvEl1CRANS6JD9bIvEdfatlAhFlrz1TTf6R7SlppyYI7tme4I/G3dnnHWYSG
cGRaLwpwobTq0UNSO71g7+at9eY8dh5nn2lZUvvxZvlbXoOoPxKUoeGVXqoq5F7S
QcMVAogYtyNlnLnsUfSPw6YFRaQ5o00h30bR3hk+YmJ47AJCRY9GIc/IEdSnd/Z5
Ea7CrP2Bo4zxPgcl8fe311FQRTRoWr19l5PXZgGjzy6siXTrYQi6GjLtqVB5SjJf
rrIIy1vZRyDL96WPu6fS+XQMpjsSygj+DBFk8OAvHhQhMCXHgT4BMyg4D5GE0665
AY0EZOVhwgEMAMIztf6WlRsweysb0tzktYE5E/GxIK1lwcD10Jzq3ovJJPa2Tg2t
J6ZBmMQfwU4OYO8lJxlgm7t6MYh41ZZaRhySCtbJiAXqK08LP9Gc1iWLRvKuMzli
NFSiFDFGT1D6kwucVfL/THxvZlQ559kK+LB4iXEKXz37r+MCX1K9uiv0wn63Vm0K
gD3HDgfXWYJcNyXXfJBe3/T5AhuSBOQcpa7Ow5n8zJ+OYg3FFKWHDBTSSZHpbJFr
ArMIGARz5/f+EVj9XGY4W/+ZJlxNh8FzrTLeRArmCWqKLPRG/KF36dTY7MDpOzlw
vu7frv+cgiXHZ2NfPrkH8oOl4L+ufze5KBGcN0QwFDcuwCkv/7Ft9Ta7gVaIBsK7
12oHInUJ6EkBovxpuaLlHlP8IfmZLZbbHzR2gR0e6IhLtrzd7urB+gXUtp6+wCL+
kWD14TTJhSQ+SFU8ajvUah7/1m2bxdjZNp9pzOPGkr/jEjCM0CpZiCY62SeIJqVc
4/ID9NYLAGmSIwARAQABiQG8BBgBCgAmFiEETvxZBpbLFbh8c6OtgsyHl8g43P0F
AmTlYcICGwwFCQPCZwAACgkQgsyHl8g43P0wEgv+LuknyXHpYpiUcJOl9Q5yLokd
o7tJwJ+9Fu7EDAfM7mPgyBj7Ad/v9RRP+JKWHqIYEjyrRnz9lmzciU+LT/CeoQu/
MgpU8wRI4gVtLkX2238amrTKKlVjQUUNHf7cITivUs/8e5W21JfwvcSzu5z4Mxyw
L6vMlBUAixtzZSXD6O7MO9uggHUZMt5gDSPXG2RcIgWm0Bd1yTHL7jZt67xBgZ4d
hUoelMN2XIDLv4SY78jbHAqVN6CLLtWrz0f5YdaeYj8OT6Ohr/iJQdlfVaiY4ikp
DzagLi0LvG9/GuB9eO6yLuojg45JEH8DC7NW5VbdUITxQe9NQ/j5kaRKTEq0fyZ+
qsrryTyvXghxK8oMUcI10l8d41qXDDPCA40kruuspCZSAle3zdqpYqiu6bglrgWr
Zr2Nm9ecm/kkqMIcyJ8e2mlkuufq5kVem0Oez+GIDegvwnK3HAqWQ9lzdWKvnLiE
gNkvg3bqIwZ/WoHBnSwOwwAzwarJl/gn8OG6CIeP
=8Uc6
-----END PGP PUBLIC KEY BLOCK-----
EOF
sh -c 'cat <<EOF > /etc/apt/sources.list.d/zabbly-incus-stable.sources
Enabled: yes
Types: deb
URIs: https://pkgs.zabbly.com/incus/stable
Suites: $(. /etc/os-release && echo ${VERSION_CODENAME})
Components: main
Architectures: $(dpkg --print-architecture)
Signed-By: /etc/apt/keyrings/zabbly.asc
EOF'
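# e.g. on Ubuntu 22.04 the Suites line above resolves to 'Suites: jammy'.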
apt-get update
# we '|| true' this here because the incus install has been observed to fail partway.
# TODO see if this can be fixed by installing JUST the incus client;
# none of the systemd/daemon stuff is necessarily needed.
apt-get install incus -y || true
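# a quick sanity check (illustrative): confirm the client landed on the PATH.
# incus --version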

View File

@@ -6,34 +6,34 @@ set -eu
cd "$(dirname "$0")"
# check to ensure dependencies are met.
if ! command -v incus >/dev/null 2>&1; then
echo "This script requires incus to be installed. Please run 'install.sh'."
exit 1
fi
if ! lxc remote get-default | grep -q "local"; then
lxc remote switch "local"
if ! incus remote get-default | grep -q "local"; then
incus remote switch "local"
fi
if ! incus list -q --format csv | grep -q ss-mgmt; then
echo "ERROR: the 'ss-mgmt' VM does not exist. You may need to run install.sh"
exit 1
fi
# if the mgmt machine doesn't exist, then warn the user to perform ./install.sh
if ! lxc list --format csv | grep -q "ss-mgmt"; then
if ! incus list --format csv | grep -q "ss-mgmt"; then
echo "ERROR: the management machine VM does not exist. You probably need to run './install.sh'."
echo "INFO: check out https://www.sovereign-stack.org/tag/code-lifecycle-management/ for more information."
fi
# if the machine does exist, let's make sure it's RUNNING.
if lxc list --format csv | grep -q "ss-mgmt,STOPPED"; then
if incus list --format csv | grep -q "ss-mgmt,STOPPED"; then
echo "INFO: The SSME was in a STOPPED state. Starting the environment. Please wait."
incus start ss-mgmt
sleep 30
fi
. ./management/wait_for_ip.sh
# let's ensure ~/.ssh/ssh_config is using the correct IP address for ss-mgmt.
ssh ubuntu@"$IP_V4_ADDRESS"

View File

@@ -1,11 +1,12 @@
#!/bin/bash
alias ss-help='cat /home/ubuntu/sovereign-stack/deployment/help.txt'
alias ss-show='/home/ubuntu/sovereign-stack/deployment/show.sh $@'
alias ss-remote='/home/ubuntu/sovereign-stack/deployment/remote.sh $@'
alias ss-up='/home/ubuntu/sovereign-stack/deployment/up.sh $@'
alias ss-down='/home/ubuntu/sovereign-stack/deployment/down.sh $@'
alias ss-reset='/home/ubuntu/sovereign-stack/deployment/reset.sh $@'
alias ss-stop='/home/ubuntu/sovereign-stack/deployment/stop.sh $@'
alias ss-start='/home/ubuntu/sovereign-stack/deployment/start.sh $@'
alias ss-restore='/home/ubuntu/sovereign-stack/deployment/restore.sh $@'
alias ll='ls -lah'

View File

@@ -5,7 +5,7 @@ cd "$(dirname "$0")"
# NOTE! This script MUST be executed as root.
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg lsb-release jq bc
sudo mkdir -m 0755 -p /etc/apt/keyrings
@@ -17,36 +17,26 @@ fi
# TODO REVIEW mgmt software requirements
sudo apt-get update
sudo apt-get install -y wait-for-it dnsutils rsync sshfs apt-transport-https docker-ce-cli libcanberra-gtk-module nano git gridsite-clients
sudo bash -c "$HOME/sovereign-stack/install_incus.sh"
sleep 10
sudo incus admin init --minimal
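# note: '--minimal' preseeds a basic local-only incus setup (default storage pool
# and bridge) without interactive prompts; only the incus client is exercised from here.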
# #apt install python3-pip python3-dev libusb-1.0-0-dev libudev-dev pinentry-curses for trezor stuff
# # for trezor installation
# #pip3 install setuptools wheel
# #pip3 install trezor_agent
# # ensure the trezor-t udev rules are in place.
# # if [ ! -f /etc/udev/rules.d/51-trezor.rules ]; then
# # sudo cp ./51-trezor.rules /etc/udev/rules.d/51-trezor.rules
# # fi
# add docker group
if ! grep -q "^docker:" /etc/group; then
sudo groupadd docker
fi
# add incus-admin group
if ! grep -q "^incus-admin:" /etc/group; then
sudo groupadd incus-admin
fi
if ! groups ubuntu | grep -q "\bdocker\b"; then
sudo usermod -aG docker ubuntu
fi
if ! groups ubuntu | grep -q "\bincus-admin\b"; then
sudo usermod -aG incus-admin ubuntu
fi

View File

@@ -5,7 +5,7 @@ set -e
IP_V4_ADDRESS=
while true; do
# wait for
if incus list ss-mgmt | grep -q enp5s0; then
break;
else
sleep 1
@@ -13,7 +13,7 @@ while true; do
done
while true; do
IP_V4_ADDRESS=$(incus list ss-mgmt --format csv --columns=4 | grep enp5s0 | grep -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}')
if [ -n "$IP_V4_ADDRESS" ]; then
# give the machine extra time to spin up.
break;
@@ -27,6 +27,6 @@ done
export IP_V4_ADDRESS="$IP_V4_ADDRESS"
# wait for the VM to complete its default cloud-init.
while incus exec ss-mgmt -- [ ! -f /var/lib/cloud/instance/boot-finished ]; do
sleep 1
done

View File

@@ -1,22 +0,0 @@
#!/bin/bash
set -eu
cd "$(dirname "$0")"
# this script will tag the repo then push it to origin
TAG_NAME="$(head -n 1 ./version.txt)"
TAG_MESSAGE="Creating tag $TAG_NAME on $(date)."
# create the git tag.
if ! git tag | grep -q "$TAG_NAME"; then
git tag -a "$TAG_NAME" -m "$TAG_MESSAGE" -s
fi
## note this will only work if you have permissions to update HEAD on https://git.sovereign-stack.org/ss/sovereign-stack.git
RESPONSE=
read -r -p " Would you like to push this to the main Sovereign Stack repo? (y) ": RESPONSE
if [ "$RESPONSE" = "y" ]; then
# optional; push to remote
git push --set-upstream origin --all
git push --set-upstream origin --tags
fi

View File

@@ -1,14 +1,17 @@
#!/bin/bash
set -exu
# this script uninstalls incus from the MANAGEMENT MACHINE
# if you want to remove incus from remote cluster hosts, run ss-reset.
PURGE_INCUS=false
# grab any modifications from the command line.
for i in "$@"; do
case $i in
--purge)
PURGE_INCUS=true
shift
;;
*)
@@ -19,72 +22,72 @@ for i in "$@"; do
done
# this script undoes install.sh
if ! command -v incus >/dev/null 2>&1; then
echo "This script requires incus to be installed. Please run 'install.sh'."
exit 1
fi
if ! lxc remote get-default | grep -q "local"; then
if ! incus remote get-default | grep -q "local"; then
echo "ERROR: You MUST be on the local remote when uninstalling the SSME."
echo "INFO: You can use 'lxc remote switch local' to do this."
echo "INFO: You can use 'incus remote switch local' to do this."
exit 1
fi
if ! lxc project list | grep -q "default (current)"; then
if ! incus project list | grep -q "default (current)"; then
echo "ERROR: You MUST be on the default project when uninstalling the SSME."
echo "INFO: You can use 'lxc project switch default' to do this."
echo "INFO: You can use 'incus project switch default' to do this."
exit 1
fi
if lxc list --format csv | grep -q "ss-mgmt"; then
if incus list --format csv | grep -q "ss-mgmt"; then
if lxc list --format csv -q | grep -q "ss-mgmt,RUNNING"; then
lxc stop ss-mgmt
if incus list --format csv -q | grep -q "ss-mgmt,RUNNING"; then
incus stop ss-mgmt
fi
if lxc config device list ss-mgmt -q | grep -q "ss-code"; then
lxc config device remove ss-mgmt ss-code
if incus config device list ss-mgmt -q | grep -q "ss-code"; then
incus config device remove ss-mgmt ss-code
fi
if lxc config device list ss-mgmt -q | grep -q "ss-root"; then
lxc config device remove ss-mgmt ss-root
if incus config device list ss-mgmt -q | grep -q "ss-root"; then
incus config device remove ss-mgmt ss-root
fi
if lxc config device list ss-mgmt -q | grep -q "ss-ssh"; then
lxc config device remove ss-mgmt ss-ssh
if incus config device list ss-mgmt -q | grep -q "ss-ssh"; then
incus config device remove ss-mgmt ss-ssh
fi
incus delete ss-mgmt
fi
if [ "$PURGE_LXD" = true ]; then
if [ "$PURGE_INCUS" = true ]; then
if incus profile device list default | grep -q root; then
incus profile device remove default root
fi
if incus profile device list default | grep -q enp5s0; then
incus profile device remove default enp5s0
fi
if incus network list --project default | grep -q incusbr0; then
incus network delete incusbr0
fi
# this file contains the UBUNTU_BASE_IMAGE_NAME
. ./deployment/base.sh
if lxc image list | grep -q "$UBUNTU_BASE_IMAGE_NAME"; then
lxc image delete "$UBUNTU_BASE_IMAGE_NAME"
if incus image list | grep -q "$UBUNTU_BASE_IMAGE_NAME"; then
incus image delete "$UBUNTU_BASE_IMAGE_NAME"
fi
if incus storage list --format csv | grep -q sovereign-stack; then
incus storage delete sovereign-stack
fi
if dpkg -l | grep -q incus; then
sudo apt purge incus -y
fi
fi

View File

@@ -1 +0,0 @@
v0.1.0