Compare commits


197 Commits

Author SHA1 Message Date
Derek Smith 4c1f7da382
Update project 2023-11-29 09:48:09 -05:00
Derek Smith 796c36e40e
Update project head. 2023-11-19 14:01:44 -05:00
Derek Smith 82fd312766
Bump LXD version. 2023-11-19 14:00:54 -05:00
Derek Smith 2fd61026ef
Increase timer. 2023-11-19 14:00:34 -05:00
Derek Smith f8d6d2acfc
Add the project git submodule back in there. 2023-09-13 10:40:25 -04:00
Derek Smith 0c253e9a49
Remove git submodules file. 2023-09-13 10:17:57 -04:00
Derek Smith 8d28a68bd3
Remove the project git submodule. 2023-09-13 10:17:21 -04:00
Derek Smith ffd4b607c2
deleted project path in prep for submodule fix 2023-09-13 09:14:29 -04:00
Derek Smith eaa4f0abb1
changes from tabconf 2023-09-05 22:01:57 -04:00
Derek Smith 334b138770
Update Version. 2023-08-20 21:23:13 -04:00
Derek Smith 68f487a5dc
Update to latest project head. 2023-08-20 20:47:27 -04:00
Derek Smith 88291a2ee0
Update project root. 2023-08-13 12:18:06 -04:00
Derek Smith 132e32b806
Update clams-server defaults + advance project. 2023-08-13 11:58:30 -04:00
Derek Smith 7819cbe564
Add deployment/project as submodule. 2023-08-12 12:51:46 -04:00
Derek Smith 22ecde022f
Removed submodules 2023-08-12 12:45:23 -04:00
Derek Smith 9d7c9a51f6
Remove project .gitignore. 2023-08-12 12:27:55 -04:00
Derek Smith 08235de798
Nitpicks. 2023-08-12 12:25:23 -04:00
Derek Smith 5369016265
WWW updates. 2023-08-12 12:22:34 -04:00
Derek Smith 0ca1fdbea9
Add domain_list.sh and stop.sh 2023-08-12 12:21:15 -04:00
Derek Smith d88814e6e4
Clams server updates. 2023-08-12 12:19:44 -04:00
Derek Smith ddcc119030
BTCPayserver upates. 2023-08-12 12:16:44 -04:00
Derek Smith 919acbf172
Add --skip logic for VMs. 2023-08-12 12:14:00 -04:00
Derek Smith f41601ad04
Remove project git clone logic in lie of submodule 2023-08-12 12:11:50 -04:00
Derek Smith d49f3f58b6
Switch to the lxd stable channel. 2023-08-11 10:28:06 -04:00
Derek Smith 62c7fd0504
Add iptables FORWARDING due to dockerd conflict. 2023-08-11 10:26:23 -04:00
Derek Smith e602f8be90
Add --disk-or-partition flag, default loop 2023-08-11 10:24:02 -04:00
Derek Smith d8903eb5bc
Add clams-server in lieu of existing integration. 2023-08-11 09:46:08 -04:00
Derek Smith 0b38d6d635
Update to grab git commit in docker info. 2023-05-16 22:08:07 -04:00
Derek Smith beb1d7b5a4
Update project ref. 2023-05-16 22:04:47 -04:00
Derek Smith 5d15d5f690
Various updates. 2023-05-16 22:02:32 -04:00
Derek Smith 6c4d262c43
New Version. 2023-04-14 13:39:38 -04:00
Derek Smith d571680ff1
Update target commit. 2023-04-14 13:17:42 -04:00
Derek Smith 6e6c55e801
Add remote/project checks on uninstall.sh 2023-04-14 13:16:41 -04:00
Derek Smith e762d05828
The rest. 2023-04-12 10:09:47 -04:00
Derek Smith bccef77d4e
Update env sourcing. 2023-04-12 10:07:41 -04:00
Derek Smith 0d8ece8362
Update comments. 2023-04-12 10:06:37 -04:00
Derek Smith d68d673c69
Reenable lxc image cache. 2023-04-12 10:05:49 -04:00
Derek Smith 4f466381f6
Refactor ssh; revert to rsa. 2023-04-12 10:05:25 -04:00
Derek Smith 6e2514b12f
Refactor vm readiness logic. 2023-04-12 10:03:46 -04:00
Derek Smith f615e14988
Fix typo and re-enable uncommited changes checking 2023-04-07 14:11:56 -04:00
Derek Smith 086577f32a
Remove debugging flag. 2023-04-07 14:04:43 -04:00
Derek Smith 879ec0c34f
Other updates. 2023-04-07 14:02:24 -04:00
Derek Smith 8b65f22797
Incoming from other repo. 2023-04-07 14:02:04 -04:00
Derek Smith 7604c95aa0
Move deploy.sh to main repo as up.sh 2023-04-07 14:01:31 -04:00
Derek Smith 754ebc81a4
Moved btcpay bashrc to project repo. 2023-04-07 14:01:11 -04:00
Derek Smith 8ec3af6703
Add skip-base-image flag. 2023-04-07 10:57:54 -04:00
Derek Smith 42a22f4942
Refactorings. 2023-04-07 10:41:18 -04:00
Derek Smith d6eb1f3031
Refactorings. 2023-04-07 10:23:04 -04:00
Derek Smith 200e6afc76
fix typo. 2023-04-06 15:22:32 -04:00
Derek Smith 8321345030
Update adjust vscode color. 2023-04-06 15:10:35 -04:00
Derek Smith b665857921
Update project ref. 2023-04-06 15:09:38 -04:00
Derek Smith 925d17d22c
Update project ref. 2023-04-06 11:14:32 -04:00
Derek Smith be69d5774d
Update ss location. 2023-04-05 20:16:23 -04:00
Derek Smith e4a19010c6
Updates 2023-04-05 20:08:57 -04:00
Derek Smith 9c0e29f772
Update comment. 2023-04-05 15:40:43 -04:00
Derek Smith 54be0f6b3c
Update project ref. 2023-04-05 12:09:31 -04:00
Derek Smith bf5ff6903e
Fix cert renewal logic. 2023-04-05 12:00:39 -04:00
Derek Smith a3b29e129a
Make shutdown process less harsh. 2023-04-05 11:11:11 -04:00
Derek Smith 6a0255a326
All other changes. 2023-04-04 16:25:58 -04:00
Derek Smith a1c50b083d
Comment all blockchain/chainstate stuff 2023-04-04 16:25:20 -04:00
Derek Smith 0f65419d55
Make volumes project-specific. 2023-04-04 16:24:39 -04:00
Derek Smith cae11df6a5
Update project ref. 2023-04-04 11:44:56 -04:00
Derek Smith 18bff0d2e9
Update target git commit. 2023-04-04 11:35:00 -04:00
Derek Smith 425bfd8069
Leftovers. 2023-04-04 11:34:48 -04:00
Derek Smith b82813231c
Remote update. 2023-04-04 11:34:20 -04:00
Derek Smith f2b02e474d
Add dependency check. 2023-04-04 11:33:50 -04:00
Derek Smith 5a54400831
Scope to correct project. 2023-04-04 11:32:54 -04:00
Derek Smith 69ab9d0a42
Supporting stuff for volume mount work. 2023-04-04 11:31:24 -04:00
Derek Smith 35afd1bbd4
Path updates. 2023-04-04 11:30:07 -04:00
Derek Smith a44ce22e00
Minor nostr updates. 2023-04-04 11:28:39 -04:00
Derek Smith 33a855e801
Rename ss-destroy to ss-down 2023-04-04 11:27:27 -04:00
Derek Smith 69161e726e
Improve install/uninstall.sh 2023-04-02 09:43:55 -04:00
Derek Smith 622741ce06
Add/update variable names. 2023-04-02 09:42:55 -04:00
Derek Smith a4301c0a1c
Remove iteration over PROJECTS. 2023-04-02 09:41:42 -04:00
Derek Smith d786227a5f
undo 2023-03-30 11:10:42 -04:00
Derek Smith 7d839e9724
Add removal of docker volume. 2023-03-30 11:08:34 -04:00
Derek Smith 441eff3f26
Remove unnecessary output. 2023-03-27 11:08:09 -04:00
Derek Smith 1b7e871758
Update target git ref 2023-03-27 11:07:52 -04:00
Derek Smith fbf16a38a0
Change to weekly base image creation. 2023-03-27 11:07:42 -04:00
Derek Smith a8dccf6b5c
Make clams browser-app non-default. 2023-03-27 11:07:05 -04:00
Derek Smith fc441f72f4
Switch to mysql for ghost. 2023-03-25 12:07:05 -04:00
Derek Smith efac5e4f45
Create weekly basse images. 2023-03-24 17:10:43 -04:00
Derek Smith f823ba361e
Improve output. 2023-03-24 17:10:04 -04:00
Derek Smith dc40e732ee
Scope destroy to external project. 2023-03-23 18:58:46 -04:00
Derek Smith dc269fe29c
Mindor updates. 2023-03-23 16:34:25 -04:00
Derek Smith d35a882d8b
Scope update invoke to project level. 2023-03-23 15:35:07 -04:00
Derek Smith 3884c24e48
Iterate over projects in show.sh 2023-03-23 14:25:31 -04:00
Derek Smith 2730bda32b
Run latest lxd for development. 2023-03-22 16:36:31 -04:00
Derek Smith 73eacb9125
Update project ref + bugfix. 2023-03-21 13:20:28 -04:00
Derek Smith 9723474b7b
Update project ref. 2023-03-20 20:14:08 -04:00
Derek Smith a595491ee6
Control flow, documentation and update PROJECT ref 2023-03-20 13:03:01 -04:00
Derek Smith 46dedcf6b2
Load cached lxd image at the management machine. 2023-03-20 13:02:20 -04:00
Derek Smith d5c2d67562
Fix bash_aliases in the SSME. 2023-03-20 13:01:50 -04:00
Derek Smith b5c2b0ef81
Update project ref. 2023-03-20 10:33:05 -04:00
Derek Smith f06a3545d2
Make management machine persistence mandatory. 2023-03-20 10:25:35 -04:00
Derek Smith 786d34df93
Improve control flow. 2023-03-19 16:42:48 -04:00
Derek Smith 5dc7194a62
Whitespace and documentation updates. 2023-03-19 13:49:57 -04:00
Derek Smith 1fdf96bfa2
Add --purge to reset.sh 2023-03-19 13:49:37 -04:00
Derek Smith 9e7d9d3a90
Unify device naming convention 2023-03-19 13:49:00 -04:00
Derek Smith ab08923543
Add mainnet data cache on ss-mgmt 2023-03-19 13:48:26 -04:00
Derek Smith 27a02534b6
Update project REF 2023-03-19 13:45:35 -04:00
Derek Smith 2992f34092
Update project ref. 2023-03-18 18:33:36 -04:00
Derek Smith f1abfd77af
Project Updates. 2023-03-18 18:18:21 -04:00
Derek Smith de6f078774
Update informational messages and whitespace. 2023-03-18 11:15:07 -04:00
Derek Smith 18fb5a1fe0
Move code block farther down. 2023-03-18 11:14:36 -04:00
Derek Smith 731639bc4b
Add default data plane. 2023-03-18 11:14:01 -04:00
Derek Smith a3ea78cfe2
Mount management machine .ssh into ss-mgmt 2023-03-18 11:12:14 -04:00
Derek Smith 8aef17fa00
Add jammy image disk caching. 2023-03-18 11:11:12 -04:00
Derek Smith 03d669750c
Formatting & add information messages. 2023-03-17 11:19:35 -04:00
Derek Smith 5f83e83fc8
Update project target commit. 2023-03-16 16:43:48 -04:00
Derek Smith 086aee9ed4
Updates for storage volumes. 2023-03-16 15:50:02 -04:00
Derek Smith 1a586e2399
Update project ref. 2023-03-15 19:22:32 -04:00
Derek Smith 1c2511af21
Create SS_ROOT_PATH 2023-03-15 19:21:02 -04:00
Derek Smith e12ead7ede
Installation/ss-mgmt provisioning improvements. 2023-03-15 19:19:32 -04:00
Derek Smith 1775f28cdb
Updated publish and changed version for tag. 2023-03-15 11:59:09 -04:00
Derek Smith cde7d287c5
Update projects commitment. 2023-03-15 11:23:29 -04:00
Derek Smith 16d4339af6
Prompt and help udpates. 2023-03-15 11:19:34 -04:00
Derek Smith 672be48b08
Added PROEJCT_NAME= 2023-03-14 11:45:29 -04:00
Derek Smith 7e8706f81c
Add USER_TARGET_PROJECT to update.sh 2023-03-14 11:44:22 -04:00
Derek Smith 5da08eab26
Add --project scope. 2023-03-14 11:37:33 -04:00
Derek Smith e2bfd5d090
Update commit hash. 2023-03-13 14:47:06 -04:00
Derek Smith a6ac567f12
Update target project git commit. 2023-03-13 14:44:13 -04:00
Derek Smith 34c1edf27e
Add project context switch during ss-update. 2023-03-13 13:54:47 -04:00
Derek Smith 180cd1fa8d
Rename site_definition to site.conf 2023-03-13 13:54:29 -04:00
Derek Smith afa6c530ff
Rename ss-migrate to ss-update. 2023-03-13 13:45:26 -04:00
Derek Smith cc6bdef20d
Update TARGET_PROJECT_GIT_COMMIT 2023-03-13 13:41:01 -04:00
Derek Smith 514ae6ce24
Rename to remote.conf 2023-03-13 13:40:47 -04:00
Derek Smith d283dfb353
Add iterate over DEPLOYMENT_STRING 2023-03-13 13:40:07 -04:00
Derek Smith c08260a2d4
Remove symlink. 2023-03-11 11:13:36 -05:00
Derek Smith efeb0261bc
Minor updates. 2023-03-11 11:12:19 -05:00
Derek Smith b2abf3fdf4
Update docker image versions. 2023-03-10 22:40:50 -05:00
Derek Smith 870d0b685c
Update project ref. 2023-03-10 21:13:40 -05:00
Derek Smith 6884154c04
Update documentation and links. 2023-03-10 21:12:39 -05:00
Derek Smith 8590e82411
Simplifiey and add comments. 2023-03-10 21:12:24 -05:00
Derek Smith 7a705828b7
Update migration warning message. 2023-03-10 11:14:55 -05:00
Derek Smith 867771c908
Remove Project 2023-03-09 15:38:50 -05:00
Derek Smith e205d1cc7a
Update gitignore 2023-03-09 15:37:53 -05:00
Derek Smith 7ba91f8bcb
Various updates. 2023-03-09 15:36:30 -05:00
Derek Smith 628df90d32
Update projects git ref and other monor updates. 2023-03-09 10:53:00 -05:00
Derek Smith bd3acd8ef4
Add farscapian.gpg 2023-03-09 10:16:11 -05:00
Derek Smith c3980df073
Various plumbing updates. 2023-03-09 09:58:56 -05:00
Derek Smith 9c518e47e2
Implement LXD projects. 2023-03-09 09:58:16 -05:00
Derek Smith f5deac4874
Standardize lxd version. 2023-03-09 09:56:38 -05:00
Derek Smith 493946c1f5
Rename cluster to remote. 2023-03-09 09:55:40 -05:00
Derek Smith 98866559bd
Move projects pull to install.sh 2023-03-06 19:04:56 -05:00
Derek Smith 03d7411a05
Update TARGET_PROJECT_GIT_COMMIT 2023-03-06 18:55:57 -05:00
Derek Smith eb67ec10a7
Update debugging info. 2023-03-06 15:42:17 -05:00
Derek Smith 2f16d2de0a
Wire up project repo. 2023-03-06 14:44:36 -05:00
Derek Smith 0510a36522
Delete project for repo refactor. 2023-03-06 14:33:25 -05:00
Derek Smith 49b435c113
Update publish.sh 2023-03-06 12:56:35 -05:00
Derek Smith 299446b7e8
Commit publish.sh 2023-03-06 12:54:25 -05:00
Derek Smith a19bbec9c1
Bump version tag to v0.0.24 2023-03-06 12:54:09 -05:00
Derek Smith ecd6b3a77d
Remove root .gitignore. 2023-03-06 12:53:51 -05:00
Derek Smith 9b8a2403eb
Remove temporary file. 2023-03-06 12:48:58 -05:00
Derek Smith 1de414dfc9
Organize files under project/ 2023-03-06 12:47:45 -05:00
Derek Smith 7ce1f0adbe
Stability improvements. 2023-03-06 11:47:27 -05:00
Derek Smith 8caae387b6
Simplify and refactor. 2023-03-05 11:50:22 -05:00
Derek Smith 610946e357
Improve cluster deployment 2023-03-05 11:49:53 -05:00
Derek Smith ddb0fbef57
Add clams integration. 2023-03-05 11:49:06 -05:00
Derek Smith 4a84fd24e5
Refactor project env 2023-03-05 11:45:27 -05:00
Derek Smith 16f88d964d Working. Clams integration stop. 2023-03-03 08:59:52 -05:00
Derek Smith b6e0e5ac4d Working on Dockerd 2023-03-02 09:46:17 -05:00
Derek Smith c3bab8c844
4 2023-02-01 16:16:59 -05:00
Derek Smith 48a26c0db0
1 2023-02-01 16:11:50 -05:00
Derek Smith 144b0209eb
2 2023-02-01 16:08:04 -05:00
Derek Smith 864926ffd0
3 2023-02-01 16:07:06 -05:00
Derek Smith a875137b05
45 2023-02-01 16:06:29 -05:00
Derek Smith faca4f41ce
4 2023-02-01 16:05:02 -05:00
Derek Smith ce66768038
3 2023-02-01 16:03:33 -05:00
Derek Smith 4124efe2ea
4 2023-02-01 16:03:07 -05:00
Derek Smith c832d1d050
1 2023-02-01 16:02:38 -05:00
Derek Smith bd3918ec93
Updates 2023-02-01 15:24:03 -05:00
Derek Smith 2c0645c951
Undo this commit. 2023-02-01 14:44:05 -05:00
Derek Smith d91ba02e7f
Added initial Clams build support. 2023-01-15 15:57:47 -05:00
Derek Smith bd129ed073
Remove onion artifacts. 2023-01-12 16:51:22 -05:00
Derek Smith 9d13ec3991
Simplify nostr deployment. 2023-01-07 17:04:00 -05:00
Derek Smith dbb48d7f99
Nostr config updates. 2023-01-07 10:45:44 -05:00
Derek Smith 12d7d33053
Simplify README.md 2023-01-07 10:45:20 -05:00
Derek Smith 902121a72d
Fix --no-backup-www bug. 2023-01-07 09:39:20 -05:00
Derek Smith 9a419ccc6e
Deploy nostr relay when -n NOSTR_ACCOUNT_PUBKEY. 2023-01-07 09:37:55 -05:00
Derek Smith 59ca96cb6c
migration and publish improvements. 2023-01-05 11:14:48 -05:00
Derek Smith 42aa3742fc
Delete reset now that migrate.sh is done. 2023-01-05 11:14:02 -05:00
Derek Smith 104e547a57
Make --restart-front-end default. Goal is opposite 2023-01-05 11:12:50 -05:00
Derek Smith 7a08b7cdcf
Bump Nextcloud to 25.0.2 2023-01-05 11:12:17 -05:00
Derek Smith fac6ab0ecf
Refactor 2023-01-05 11:11:25 -05:00
Derek Smith b23d60a6ad
Updated publish for support git remotes. 2022-12-26 10:34:08 -05:00
Derek Smith e0da0c511a
Updating publishing logic. 2022-12-26 10:31:09 -05:00
Derek Smith 3bc86d8aa3
Added comment + refactor. 2022-12-25 13:04:48 -05:00
Derek Smith a5ce7ceb4c
Add core-lightning mod via custom btcpay fragments 2022-12-25 13:04:20 -05:00
Derek Smith bceec32b66
Add bitcoin/lighting aliases to lxc profile stub 2022-12-25 13:02:22 -05:00
Derek Smith 8316360c70
Bump Ghost image. 2022-12-25 13:01:03 -05:00
Derek Smith 1288962d79
Add switch to restart nginx stack + cert renewal. 2022-12-25 13:00:45 -05:00
Derek Smith 1bc685fa74
Control flow updates. 2022-12-25 12:59:02 -05:00
Derek Smith 69d5564e44
Migration updates for vertical scaling of VMs. 2022-12-12 17:16:47 -05:00
Derek Smith bd9a76108b
Various updates. 2022-11-21 10:58:32 -05:00
Derek Smith 47191ba254
Removed unicode characters. 2022-11-16 12:23:02 -05:00
Derek Smith 36d6b273f1
Documentation and unicode updates. 2022-11-16 12:20:16 -05:00
73 changed files with 2472 additions and 3233 deletions

2
.gitignore vendored

@ -1 +1 @@
./reset.sh
archive

3
.gitmodules vendored Normal file

@ -0,0 +1,3 @@
[submodule "deployment/project"]
path = deployment/project
url = https://git.sovereign-stack.org/ss/project

18
.vscode/settings.json vendored

@ -10,13 +10,15 @@
"shellcheck.enableQuickFix": true,
"shellcheck.run": "onType",
"shellcheck.executablePath": "shellcheck",
"shellcheck.customArgs": [],
"shellcheck.ignorePatterns": {},
"shellcheck.exclude": [
// "SC1090",
// "SC1091",
"SC2029"
"shellcheck.customArgs": [
"-x"
],
"shellcheck.ignorePatterns": {},
// "shellcheck.exclude": [
// "SC1090",
// "SC1091",
// "SC2029"
// ],
"terminal.integrated.fontFamily": "monospace",
"workbench.colorCustomizations": {
"activityBar.background": "#1900a565",
@ -28,8 +30,8 @@
"titleBar.inactiveBackground": "#02972799",
"titleBar.activeForeground": "#e7e7e7",
"titleBar.inactiveForeground": "#e7e7e799",
"statusBar.background": "#ff9900",
"statusBarItem.hoverBackground": "#ff9900",
"statusBar.background": "#d68100",
"statusBarItem.hoverBackground": "#c77700",
"statusBar.foreground": "#000000"
}
}

1
NOTES

@ -1 +0,0 @@
Trezor MUST Use the "Crypto" firmware with shitcoin support in order for 2FA (WEBAUTHN) to work. Bummer.

README.md

@ -1,15 +1,3 @@
# Documentation
The Sovereign Stack scripts in this repository are meant to be cloned to and executed from your management machine.
You can update Sovereign Stack scripts on your management machine by running `git pull --all --tags`. Generally, you want to use ONLY signed git tags for your deployments. Use `git checkout v0.1.0` for example to switch to a specific version of Sovereign Stack. The scripts check to ensure that the code you're running on your management machine is GREATER THAN OR EQUAL TO the version of your deployments (TODO). The scripts work to bring your old deployments into line with the current Sovereign Stack version.
Once your management machine has checked out a specific version of Sovereign Stack, you will want to run the various scripts against your remotes. But before you can do that, you need to bring a bare-metal Ubuntu 22.04 cluster host under management (i.e., add it as a remote). Generally speaking you will run `ss-cluster` to bring a new bare-metal host under the management of your management machine. This can be run AFTER you have verified SSH access to the bare-metal hosts. The device SHOULD also have a DHCP reservation and DNS records in place.
After you have taken a machine under management, you can run `ss-deploy` against it. All Sovereign Stack scripts execute against your current lxc remote. (Run `lxc remote list` to see your remotes.) This will deploy Sovereign Stack software to your active remote in accordance with the various cluster, project, and site definitions. These files are stubbed out for the user automatically, and documentation guides the user through the process.
It is the responsibility of the management machine (i.e., the system owner) to run the scripts on a regular and ongoing basis to ensure active deployments stay up-to-date with the Sovereign Stack master branch.
By default (i.e., without any command line modifiers), Sovereign Stack scripts will back up active deployments, resulting in minimal downtime (zero downtime for Ghost; minimal for Nextcloud, Gitea, and BTCPay Server).
All other documentation for this project can be found at [sovereign-stack.org](https://www.sovereign-stack.org).
All documentation for this project can be found at [sovereign-stack.org](https://www.sovereign-stack.org). To get started with this code, check out [this post](https://www.sovereign-stack.org/get/).
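For orientation, the workflow the removed README text describes might look roughly like this on a management machine. This is only a hedged sketch: the tag, cluster name, and host FQDN are placeholders, and the `ss-cluster` arguments are inferred from the provisioning script later in this diff.

# fetch the latest Sovereign Stack code and pin a signed release tag
git pull --all --tags
git checkout v0.1.0

# bring a bare-metal Ubuntu 22.04 host under management (verify SSH access first)
ss-cluster mycluster host0.example.com

# confirm which lxc remote is active, then deploy against it
lxc remote list
ss-deploy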


@ -1,225 +0,0 @@
#!/bin/bash
set -eu
cd "$(dirname "$0")"
# This script is meant to be executed on the management machine.
# it reaches out to an SSH endpoint and provisions that machine
# to use LXD.
DATA_PLANE_MACVLAN_INTERFACE=
DISK_TO_USE=
# override the cluster name.
CLUSTER_NAME="${1:-}"
if [ -z "$CLUSTER_NAME" ]; then
echo "ERROR: The cluster name was not provided."
exit 1
fi
#shellcheck disable=SC1091
source ./defaults.sh
export CLUSTER_PATH="$CLUSTERS_DIR/$CLUSTER_NAME"
CLUSTER_DEFINITION="$CLUSTER_PATH/cluster_definition"
export CLUSTER_DEFINITION="$CLUSTER_DEFINITION"
mkdir -p "$CLUSTER_PATH"
if [ ! -f "$CLUSTER_DEFINITION" ]; then
# stub out a cluster_definition.
cat >"$CLUSTER_DEFINITION" <<EOL
#!/bin/bash
# see https://www.sovereign-stack.org/cluster_definition for more info!
export LXD_CLUSTER_PASSWORD="$(gpg --gen-random --armor 1 14)"
export SOVEREIGN_STACK_MAC_ADDRESS="CHANGE_ME_REQUIRED"
export PROJECT_NAME="public"
#export REGISTRY_URL="https://index.docker.io/v1/"
EOL
chmod 0744 "$CLUSTER_DEFINITION"
echo "We stubbed out a '$CLUSTER_DEFINITION' file for you."
echo "Use this file to customize your cluster deployment;"
echo "Check out 'https://www.sovereign-stack.org/cluster-definition' for an example."
exit 1
fi
source "$CLUSTER_DEFINITION"
if ! lxc remote list | grep -q "$CLUSTER_NAME"; then
FQDN="${2:-}"
shift
if [ -z "$FQDN" ]; then
echo "ERROR: The Fully Qualified Domain Name of the new cluster member was not set."
exit 1
fi
# let's check to ensure we have SSH access to the specified host.
if ! wait-for-it -t 5 "$FQDN:22"; then
echo "ERROR: We can't get an SSH connection to '$FQDN:22'. Ensure you have the host set up correctly."
exit 1
fi
# grab any modifications from the command line.
for i in "$@"; do
case $i in
--data-plane-interface=*)
DATA_PLANE_MACVLAN_INTERFACE="${i#*=}"
shift
;;
--disk=*)
DISK_TO_USE="${i#*=}"
shift
;;
*)
;;
esac
done
if [ -z "$DATA_PLANE_MACVLAN_INTERFACE" ]; then
echo "INFO: It looks like you didn't provide input on the command line for the data plane macvlan interface."
echo " We need to know which interface that is! Enter it here now."
echo ""
ssh "ubuntu@$FQDN" ip link
echo "Please enter the network interface that's dedicated to the Sovereign Stack data plane: "
read -r DATA_PLANE_MACVLAN_INTERFACE
fi
if [ -z "$DISK_TO_USE" ]; then
echo "INFO: It looks like the DISK_TO_USE has not been set. Enter it now."
echo ""
ssh "ubuntu@$FQDN" lsblk
echo "Please enter the disk or partition that Sovereign Stack will use to store data (default: loop): "
read -r DISK_TO_USE
# if the user just hit enter, fall back to the documented default of 'loop'.
if [ -z "$DISK_TO_USE" ]; then
DISK_TO_USE=loop
fi
fi
else
echo "ERROR: the cluster already exists! You need to go delete your lxd remote if you want to re-create your cluster."
echo " It's may also be helpful to reset/rename your cluster path."
exit 1
fi
# if the disk is loop-based, then we assume the / path exists.
if [ "$DISK_TO_USE" != loop ]; then
# ensure we actually have that disk/partition on the system.
if ssh "ubuntu@$FQDN" lsblk | grep -q "$DISK_TO_USE"; then
echo "ERROR: We could not the disk you specified. Please run this command again and supply a different disk."
echo "NOTE: You can always specify on the command line by adding the '--disk=/dev/sdd', for example."
exit 1
fi
fi
# The MGMT Plane IP is the IP address that the LXD API binds to, which happens
# to be the same as whichever SSH connection you're coming in on.
MGMT_PLANE_IP="$(ssh ubuntu@"$FQDN" env | grep SSH_CONNECTION | cut -d " " -f 3)"
IP_OF_MGMT_MACHINE="$(ssh ubuntu@"$FQDN" env | grep SSH_CLIENT | cut -d " " -f 1 )"
IP_OF_MGMT_MACHINE="${IP_OF_MGMT_MACHINE#*=}"
IP_OF_MGMT_MACHINE="$(echo "$IP_OF_MGMT_MACHINE" | cut -d: -f1)"
# error out if the cluster password is unset.
if [ -z "$LXD_CLUSTER_PASSWORD" ]; then
echo "ERROR: LXD_CLUSTER_PASSWORD must be set in your cluster_definition."
exit 1
fi
if ! command -v lxc >/dev/null 2>&1; then
if lxc profile list --format csv | grep -q sovereign-stack; then
lxc profile delete sovereign-stack
sleep 1
fi
if lxc network list --format csv | grep -q lxdbrSS; then
lxc network delete lxdbrSS
sleep 1
fi
fi
ssh -t "ubuntu@$FQDN" "
set -ex
# install ufw and allow SSH.
sudo apt update
sudo apt upgrade -y
sudo apt install ufw htop dnsutils nano -y
sudo ufw allow ssh
sudo ufw allow 8443/tcp comment 'allow LXD management'
# enable the host firewall
if sudo ufw status | grep -q 'Status: inactive'; then
sudo ufw enable
fi
# install lxd as a snap if it's not installed.
if ! snap list | grep -q lxd; then
sudo snap install lxd --candidate
sleep 4
fi
"
# if the DATA_PLANE_MACVLAN_INTERFACE is not specified, then we'll
# just attach VMs to the network interface used for the default route.
if [ -z "$DATA_PLANE_MACVLAN_INTERFACE" ]; then
DATA_PLANE_MACVLAN_INTERFACE="$(ssh -t ubuntu@"$FQDN" ip route | grep default | cut -d " " -f 5)"
fi
# stub out the lxd init file for the remote SSH endpoint.
CLUSTER_MASTER_LXD_INIT="$CLUSTER_PATH/lxdinit_profile.yml"
cat >"$CLUSTER_MASTER_LXD_INIT" <<EOF
config:
core.https_address: ${MGMT_PLANE_IP}:8443
core.trust_password: ${LXD_CLUSTER_PASSWORD}
images.auto_update_interval: 15
networks:
- name: lxdbrSS
type: bridge
config:
ipv4.address: 10.139.144.1/24
ipv4.nat: "false"
ipv4.dhcp: "false"
ipv6.address: "none"
dns.mode: "none"
#managed: true
description: ss-config,${DATA_PLANE_MACVLAN_INTERFACE:-},${DISK_TO_USE:-}
# lxdbrSS is an isolated inter-vm network segment with no outbound Internet access.
cluster:
server_name: ${CLUSTER_NAME}
enabled: true
member_config: []
cluster_address: ""
cluster_certificate: ""
server_address: ""
cluster_password: ""
cluster_certificate_path: ""
cluster_token: ""
EOF
# configure the LXD Daemon with our preseed.
cat "$CLUSTER_MASTER_LXD_INIT" | ssh "ubuntu@$FQDN" lxd init --preseed
# ensure the lxd service is available over the network, then add a lxc remote, then switch the active remote to it.
if wait-for-it -t 20 "$FQDN:8443"; then
# now create a remote on your local LXC client and switch to it.
# the software will now target the new cluster.
lxc remote add "$CLUSTER_NAME" "$FQDN" --password="$LXD_CLUSTER_PASSWORD" --protocol=lxd --auth-type=tls --accept-certificate
lxc remote switch "$CLUSTER_NAME"
echo "INFO: You have create a new cluster named '$CLUSTER_NAME'. Great! We switched your lxd remote to it."
else
echo "ERROR: Could not detect the LXD endpoint. Something went wrong."
exit 1
fi
echo "HINT: Now you can consider running 'ss-deploy'."


@ -1,113 +0,0 @@
#!/bin/bash
set -eu
export WWW_SERVER_MAC_ADDRESS=
export DEPLOY_WWW_SERVER=false
export DEPLOY_BTCPAY_SERVER=false
export DEPLOY_GHOST=false
export DEPLOY_NOSTR_RELAY=false
export DEPLOY_ONION_SITE=false
export DEPLOY_NEXTCLOUD=false
export DEPLOY_GITEA=false
export WWW_HOSTNAME="www"
export BTCPAY_HOSTNAME="btcpayserver"
export BTCPAY_HOSTNAME_IN_CERT="btcpay"
export NEXTCLOUD_HOSTNAME="nextcloud"
export GITEA_HOSTNAME="git"
export NOSTR_HOSTNAME="relay"
export SITE_LANGUAGE_CODES="en"
export LANGUAGE_CODE="en"
export NOSTR_ACCOUNT_PUBKEY=
# this is where the html is sourced from.
export SITE_HTML_PATH=
export BTCPAY_ADDITIONAL_HOSTNAMES=
export GHOST_MYSQL_PASSWORD=
export GHOST_MYSQL_ROOT_PASSWORD=
export NEXTCLOUD_MYSQL_PASSWORD=
export GITEA_MYSQL_PASSWORD=
export NEXTCLOUD_MYSQL_ROOT_PASSWORD=
export GITEA_MYSQL_ROOT_PASSWORD=
export DUPLICITY_BACKUP_PASSPHRASE=
#opt-add-fireflyiii;opt-add-zammad
export SSH_HOME="$HOME/.ssh"
export VLAN_INTERFACE=
export VM_NAME="sovereign-stack-base"
export DEV_MEMORY_MB="8096"
export DEV_CPU_COUNT="6"
export DOCKER_IMAGE_CACHE_FQDN="registry-1.docker.io"
export NEXTCLOUD_SPACE_GB=10
# first of all, if there are uncommitted changes, we quit. You better stash or commit!
# Remote VPS instances are tagged with your current git HEAD so we know which code revision
# was used when provisioning the VPS.
#LATEST_GIT_COMMIT="$(cat ./.git/refs/heads/master)"
#export LATEST_GIT_COMMIT="$LATEST_GIT_COMMIT"
# check if there are any uncommitted changes. It's dangerous to instantiate VMs using
# code that hasn't been committed.
# if git update-index --refresh | grep -q "needs update"; then
# echo "ERROR: You have uncommited changes! Better stash your work with 'git stash'."
# exit 1
# fi
BTC_CHAIN=regtest
export BTC_CHAIN="$BTC_CHAIN"
DEFAULT_DB_IMAGE="mariadb:10.9.3-jammy"
# run the docker stack.
export GHOST_IMAGE="ghost:5.20.0"
# TODO switch to mysql. May require intricate export work. THIS MUST BE COMPLETED BEFORE v1 RELEASE
#https://forum.ghost.org/t/how-to-migrate-from-mariadb-10-to-mysql-8/29575
export GHOST_DB_IMAGE="$DEFAULT_DB_IMAGE"
export NGINX_IMAGE="nginx:1.23.2"
# version of backup is 24.0.3
export NEXTCLOUD_IMAGE="nextcloud:25.0.0"
export NEXTCLOUD_DB_IMAGE="$DEFAULT_DB_IMAGE"
# TODO PIN the gitea version number.
export GITEA_IMAGE="gitea/gitea:latest"
export GITEA_DB_IMAGE="$DEFAULT_DB_IMAGE"
export NOSTR_RELAY_IMAGE="scsibug/nostr-rs-relay"
export SOVEREIGN_STACK_MAC_ADDRESS=
export WWW_SERVER_MAC_ADDRESS=
export BTCPAYSERVER_MAC_ADDRESS=
export CLUSTERS_DIR="$HOME/ss-clusters"
export PROJECTS_DIR="$HOME/ss-projects"
export SITES_PATH="$HOME/ss-sites"
# The base VM image.
export BASE_LXC_IMAGE="ubuntu/22.04/cloud"
# Deploy a registry cache on your management machine.
export DEPLOY_MGMT_REGISTRY=false
export OTHER_SITES_LIST=
export BTCPAY_ALT_NAMES=
export REMOTE_HOME="/home/ubuntu"
export BTCPAY_SERVER_APPPATH="$REMOTE_HOME/btcpayserver-docker"
export REMOTE_CERT_BASE_DIR="$REMOTE_HOME/.certs"
# this space is for OS, docker images, etc. DOES NOT INCLUDE USER DATA.
export ROOT_DISK_SIZE_GB=20
export REGISTRY_URL="https://index.docker.io/v1/"

410
deploy.sh

@ -1,410 +0,0 @@
#!/bin/bash
set -exu
cd "$(dirname "$0")"
RESPOSITORY_PATH="$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )"
export RESPOSITORY_PATH="$RESPOSITORY_PATH"
check_dependencies () {
for cmd in "$@"; do
if ! command -v "$cmd" >/dev/null 2>&1; then
echo "This script requires \"${cmd}\" to be installed. Please run 'install.sh'."
exit 1
fi
done
}
# Check system's dependencies
check_dependencies wait-for-it dig rsync sshfs lxc
# let's check to ensure the management machine is on the baseline Ubuntu 22.04
if ! lsb_release -d | grep -q "Ubuntu 22.04"; then
echo "ERROR: Your management machine is not running the Ubuntu 22.04 LTS baseline OS."
exit 1
fi
DOMAIN_NAME=
RUN_CERT_RENEWAL=false
SKIP_WWW=false
RESTORE_WWW=false
BACKUP_CERTS=false
BACKUP_APPS=false
BACKUP_BTCPAY=false
RESTORE_BTCPAY=false
BTCPAY_RESTORE_ARCHIVE_PATH=
BTCPAY_LOCAL_BACKUP_PATH=
SKIP_BTCPAY=false
UPDATE_BTCPAY=false
RECONFIGURE_BTCPAY_SERVER=false
CLUSTER_NAME="$(lxc remote get-default)"
STOP_SERVICES=false
# grab any modifications from the command line.
for i in "$@"; do
case $i in
--restore-www)
RESTORE_WWW=true
BACKUP_APPS=false
RUN_CERT_RENEWAL=false
shift
;;
--restore-btcpay)
RESTORE_BTCPAY=true
BACKUP_BTCPAY=false
RUN_CERT_RENEWAL=false
shift
;;
--backup-certs)
BACKUP_CERTS=true
shift
;;
--stop)
STOP_SERVICES=true
shift
;;
--domain=*)
DOMAIN_NAME="${i#*=}"
shift
;;
--update-btcpay)
UPDATE_BTCPAY=true
shift
;;
--skip-www)
SKIP_WWW=true
shift
;;
--skip-btcpay)
SKIP_BTCPAY=true
shift
;;
--backup-ghost)
BACKUP_APPS=true
shift
;;
--backup-btcpay)
BACKUP_BTCPAY=true
shift
;;
--restore-archive=*)
BTCPAY_RESTORE_ARCHIVE_PATH="${i#*=}"
shift
;;
--renew-certs)
RUN_CERT_RENEWAL=true
shift
;;
--reconfigure-btcpay)
RECONFIGURE_BTCPAY_SERVER=true
shift
;;
*)
echo "Unexpected option: $1"
exit 1
;;
esac
done
# do some CLI checking.
if [ "$RESTORE_BTCPAY" = true ] && [ ! -f "$BTCPAY_RESTORE_ARCHIVE_PATH" ]; then
echo "ERROR: The restoration archive is not specified. Ensure --restore-archive= is set on the command line."
exit 1
fi
# set up our default paths.
source ./defaults.sh
export CACHES_DIR="$HOME/ss-cache"
export DOMAIN_NAME="$DOMAIN_NAME"
export REGISTRY_DOCKER_IMAGE="registry:2"
export BTCPAY_RESTORE_ARCHIVE_PATH="$BTCPAY_RESTORE_ARCHIVE_PATH"
export RESTORE_WWW="$RESTORE_WWW"
export STOP_SERVICES="$STOP_SERVICES"
export BACKUP_CERTS="$BACKUP_CERTS"
export BACKUP_APPS="$BACKUP_APPS"
export RESTORE_BTCPAY="$RESTORE_BTCPAY"
export BACKUP_BTCPAY="$BACKUP_BTCPAY"
export RUN_CERT_RENEWAL="$RUN_CERT_RENEWAL"
export CLUSTER_NAME="$CLUSTER_NAME"
export CLUSTER_PATH="$CLUSTERS_DIR/$CLUSTER_NAME"
# ensure our cluster path is created.
mkdir -p "$CLUSTER_PATH"
# if an authorized_keys file does not exist, we'll stub one out with the current user.
# add additional id_rsa.pub entries manually for more administrative logins.
if [ ! -f "$CLUSTER_PATH/authorized_keys" ]; then
cat "$SSH_HOME/id_rsa.pub" >> "$CLUSTER_PATH/authorized_keys"
echo "INFO: Sovereign Stack just stubbed out '$CLUSTER_PATH/authorized_keys'. Go update it."
echo " Add ssh pubkeys for your various management machines, if any."
echo " By default we added your main ssh pubkey: '$SSH_HOME/id_rsa.pub'."
exit 1
fi
CLUSTER_DEFINITION="$CLUSTER_PATH/cluster_definition"
export CLUSTER_DEFINITION="$CLUSTER_DEFINITION"
#########################################
if [ ! -f "$CLUSTER_DEFINITION" ]; then
echo "ERROR: The cluster defintion could not be found. You may need to re-run 'ss-cluster create'."
exit 1
fi
source "$CLUSTER_DEFINITION"
# this is our password generation mechanism. Relying on GPG for secure password generation
function new_pass {
gpg --gen-random --armor 1 25
}
function instantiate_vms {
export BTC_CHAIN="$BTC_CHAIN"
export UPDATE_BTCPAY="$UPDATE_BTCPAY"
export RECONFIGURE_BTCPAY_SERVER="$RECONFIGURE_BTCPAY_SERVER"
# iterate over all our server endpoints and provision them if needed.
# www
VPS_HOSTNAME=
for VIRTUAL_MACHINE in www btcpayserver; do
FQDN=
export SITE_PATH="$SITES_PATH/$DOMAIN_NAME"
source "$SITE_PATH/site_definition"
source "$RESPOSITORY_PATH/domain_env.sh"
# VALIDATE THE INPUT from the ENVFILE
if [ -z "$DOMAIN_NAME" ]; then
echo "ERROR: DOMAIN_NAME not specified. Use the --domain-name= option."
exit 1
fi
# first let's get the DISK_TO_USE and DATA_PLANE_MACVLAN_INTERFACE from the ss-config,
# which is set up during LXD cluster creation (ss-cluster).
LXD_SS_CONFIG_LINE=
if lxc network list --format csv | grep lxdbrSS | grep -q ss-config; then
LXD_SS_CONFIG_LINE="$(lxc network list --format csv | grep lxdbrSS | grep ss-config)"
fi
if [ -z "$LXD_SS_CONFIG_LINE" ]; then
echo "ERROR: the MACVLAN interface has not been specified. You may need to run ss-cluster again."
exit 1
fi
CONFIG_ITEMS="$(echo "$LXD_SS_CONFIG_LINE" | awk -F'"' '{print $2}')"
DATA_PLANE_MACVLAN_INTERFACE="$(echo "$CONFIG_ITEMS" | cut -d ',' -f2)"
DISK_TO_USE="$(echo "$CONFIG_ITEMS" | cut -d ',' -f3)"
export DATA_PLANE_MACVLAN_INTERFACE="$DATA_PLANE_MACVLAN_INTERFACE"
export DISK_TO_USE="$DISK_TO_USE"
./deployment/create_lxc_base.sh
export MAC_ADDRESS_TO_PROVISION=
export VPS_HOSTNAME="$VPS_HOSTNAME"
export FQDN="$VPS_HOSTNAME.$DOMAIN_NAME"
# ensure the admin has set the MAC address for the base image.
if [ -z "$SOVEREIGN_STACK_MAC_ADDRESS" ]; then
echo "ERROR: SOVEREIGN_STACK_MAC_ADDRESS is undefined. Check your project definition."
exit 1
fi
DDNS_HOST=
if [ "$VIRTUAL_MACHINE" = www ]; then
if [ "$SKIP_WWW" = true ]; then
echo "INFO: Skipping WWW due to command line argument."
continue
fi
VPS_HOSTNAME="$WWW_HOSTNAME"
MAC_ADDRESS_TO_PROVISION="$WWW_SERVER_MAC_ADDRESS"
DDNS_HOST="$WWW_HOSTNAME"
ROOT_DISK_SIZE_GB="$((ROOT_DISK_SIZE_GB + NEXTCLOUD_SPACE_GB))"
elif [ "$VIRTUAL_MACHINE" = btcpayserver ] || [ "$SKIP_BTCPAY" = true ]; then
DDNS_HOST="$BTCPAY_HOSTNAME"
VPS_HOSTNAME="$BTCPAY_HOSTNAME"
MAC_ADDRESS_TO_PROVISION="$BTCPAYSERVER_MAC_ADDRESS"
if [ "$BTC_CHAIN" = mainnet ]; then
ROOT_DISK_SIZE_GB=150
elif [ "$BTC_CHAIN" = testnet ]; then
ROOT_DISK_SIZE_GB=70
fi
elif [ "$VIRTUAL_MACHINE" = "sovereign-stack" ]; then
DDNS_HOST="sovereign-stack-base"
ROOT_DISK_SIZE_GB=8
else
echo "ERROR: VIRTUAL_MACHINE not within allowable bounds."
exit 1
fi
export DDNS_HOST="$DDNS_HOST"
export FQDN="$DDNS_HOST.$DOMAIN_NAME"
export LXD_VM_NAME="${FQDN//./-}"
export VIRTUAL_MACHINE="$VIRTUAL_MACHINE"
export REMOTE_CERT_DIR="$REMOTE_CERT_BASE_DIR/$FQDN"
export MAC_ADDRESS_TO_PROVISION="$MAC_ADDRESS_TO_PROVISION"
export BTCPAY_LOCAL_BACKUP_PATH="$SITE_PATH/backups/btcpayserver/$BACKUP_TIMESTAMP"
export BTCPAY_LOCAL_BACKUP_ARCHIVE_PATH="$BTCPAY_LOCAL_BACKUP_PATH/$UNIX_BACKUP_TIMESTAMP.tar.gz"
./deployment/deploy_vms.sh
# if the local docker client isn't logged in, do so;
# this helps prevent docker pull errors since they throttle.
if [ ! -f "$HOME/.docker/config.json" ]; then
echo "$REGISTRY_PASSWORD" | docker login --username "$REGISTRY_USERNAME" --password-stdin
fi
# this tells our local docker client to target the remote endpoint via SSH
export DOCKER_HOST="ssh://ubuntu@$PRIMARY_WWW_FQDN"
# enable docker swarm mode so we can support docker stacks.
if docker info | grep -q "Swarm: inactive"; then
docker swarm init --advertise-addr enp6s0
fi
done
}
function stub_site_definition {
mkdir -p "$SITE_PATH" "$PROJECT_PATH/sites"
# create a symlink from the CLUSTERPATH/sites/DOMAIN_NAME to the ss-sites/domain name
if [ ! -d "$PROJECT_PATH/sites/$DOMAIN_NAME" ]; then
ln -s "$SITE_PATH" "$PROJECT_PATH/sites/$DOMAIN_NAME"
fi
if [ ! -f "$SITE_PATH/site_definition" ]; then
# check to see if the env file exists. exit if not.
SITE_DEFINITION_PATH="$SITE_PATH/site_definition"
if [ ! -f "$SITE_DEFINITION_PATH" ]; then
# stub out a site_definition with new passwords.
cat >"$SITE_DEFINITION_PATH" <<EOL
#!/bin/bash
export DOMAIN_NAME="${DOMAIN_NAME}"
#export BTCPAY_ALT_NAMES="tip,store,pay,send"
export SITE_LANGUAGE_CODES="en"
export DUPLICITY_BACKUP_PASSPHRASE="$(new_pass)"
export DEPLOY_GHOST=true
export DEPLOY_NEXTCLOUD=false
export DEPLOY_NOSTR_RELAY=false
export NOSTR_ACCOUNT_PUBKEY="NOSTR_IDENTITY_PUBKEY_GOES_HERE"
export DEPLOY_GITEA=false
#export DEPLOY_ONION_SITE=false
export GHOST_MYSQL_PASSWORD="$(new_pass)"
export GHOST_MYSQL_ROOT_PASSWORD="$(new_pass)"
export NEXTCLOUD_MYSQL_PASSWORD="$(new_pass)"
export NEXTCLOUD_MYSQL_ROOT_PASSWORD="$(new_pass)"
export GITEA_MYSQL_PASSWORD="$(new_pass)"
export GITEA_MYSQL_ROOT_PASSWORD="$(new_pass)"
EOL
chmod 0744 "$SITE_DEFINITION_PATH"
echo "INFO: we stubbed a new site_defintion for you at '$SITE_DEFINITION_PATH'. Go update it yo!"
exit 1
fi
fi
}
CURRENT_PROJECT="$(lxc info | grep "project:" | awk '{print $2}')"
PROJECT_PATH="$PROJECTS_DIR/$PROJECT_NAME"
mkdir -p "$PROJECT_PATH" "$CLUSTER_PATH/projects"
export PROJECT_PATH="$PROJECT_PATH"
# create a symlink from ./clusterpath/projects/project
if [ ! -d "$CLUSTER_PATH/projects/$PROJECT_NAME" ]; then
ln -s "$PROJECT_PATH" "$CLUSTER_PATH/projects/$PROJECT_NAME"
fi
# check if we need to provision a new lxc project.
if [ "$PROJECT_NAME" != "$CURRENT_PROJECT" ]; then
if ! lxc project list | grep -q "$PROJECT_NAME"; then
echo "INFO: The lxd project specified in the cluster_definition did not exist. We'll create one!"
lxc project create "$PROJECT_NAME"
fi
echo "INFO: switch to lxd project '$PROJECT_NAME'."
lxc project switch "$PROJECT_NAME"
fi
# check to see if the env file exists. exit if not.
PROJECT_DEFINITION_PATH="$PROJECT_PATH/project_definition"
if [ ! -f "$PROJECT_DEFINITION_PATH" ]; then
# stub out a project_definition
cat >"$PROJECT_DEFINITION_PATH" <<EOL
#!/bin/bash
# see https://www.sovereign-stack.org/project-definition for more info.
export WWW_SERVER_MAC_ADDRESS="CHANGE_ME_REQUIRED"
export BTCPAYSERVER_MAC_ADDRESS="CHANGE_ME_REQUIRED"
export BTC_CHAIN="regtest|testnet|mainnet"
export PRIMARY_DOMAIN="domain0.tld"
export OTHER_SITES_LIST="domain1.tld,domain2.tld,domain3.tld"
EOL
chmod 0744 "$PROJECT_DEFINITION_PATH"
echo "INFO: we stubbed a new project_defition for you at '$PROJECT_DEFINITION_PATH'. Go update it yo!"
echo "INFO: Learn more at https://www.sovereign-stack.org/project-definitions/"
exit 1
fi
# source the project definition.
source "$PROJECT_DEFINITION_PATH"
# the DOMAIN_LIST is a complete list of all our domains. We often iterate over this list.
export DOMAIN_LIST="${PRIMARY_DOMAIN},${OTHER_SITES_LIST}"
export DOMAIN_COUNT=$(("$(echo "$DOMAIN_LIST" | tr -cd , | wc -c)"+1))
# let's provision our primary domain first.
export DOMAIN_NAME="$PRIMARY_DOMAIN"
# we deploy the WWW and btcpay server under the PRIMARY_DOMAIN.
export DEPLOY_WWW_SERVER=true
export DEPLOY_BTCPAY_SERVER=true
export SITE_PATH="$SITES_PATH/$DOMAIN_NAME"
export PRIMARY_WWW_FQDN="$WWW_HOSTNAME.$DOMAIN_NAME"
stub_site_definition
# bring the VMs up under the primary domain name.
instantiate_vms
# let's stub out the rest of our site definitions, if any.
for DOMAIN_NAME in ${OTHER_SITES_LIST//,/ }; do
export DOMAIN_NAME="$DOMAIN_NAME"
export SITE_PATH="$SITES_PATH/$DOMAIN_NAME"
# stub out the site_definition if it doesn't exist.
stub_site_definition
done
# now let's run the www and btcpay-specific provisioning scripts.
if [ "$SKIP_WWW" = false ] && [ "$DEPLOY_BTCPAY_SERVER" = true ]; then
bash -c "./deployment/www/go.sh"
fi
if [ "$SKIP_BTCPAY" = false ] && [ "$DEPLOY_BTCPAY_SERVER" = true ]; then
bash -c "./deployment/btcpayserver/go.sh"
fi

9
deployment/base.sh Executable file

@ -0,0 +1,9 @@
#!/bin/bash
# The base VM image.
export LXD_UBUNTU_BASE_VERSION="jammy"
export BASE_IMAGE_VM_NAME="ss-base-${LXD_UBUNTU_BASE_VERSION//./-}"
export BASE_LXC_IMAGE="ubuntu/$LXD_UBUNTU_BASE_VERSION/cloud"
WEEK_NUMBER=$(date +%U)
export UBUNTU_BASE_IMAGE_NAME="ss-ubuntu-${LXD_UBUNTU_BASE_VERSION//./-}"
export DOCKER_BASE_IMAGE_NAME="ss-docker-${LXD_UBUNTU_BASE_VERSION//./-}-$WEEK_NUMBER"
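# note (an inference from the 'weekly base image' commits above): embedding the week number
# in DOCKER_BASE_IMAGE_NAME makes the name roll over weekly, forcing a periodic rebuild of the docker base image.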


@ -1,32 +0,0 @@
#!/bin/bash
set -e
cd "$(dirname "$0")"
# take the services down, create a backup archive, then pull it down.
# the script executed here from the BTCPAY repo will automatically take services down
# and bring them back up.
echo "INFO: Starting BTCPAY Backup script for host '$BTCPAY_FQDN'."
ssh "$BTCPAY_FQDN" "mkdir -p $REMOTE_HOME/backups; cd $REMOTE_HOME/; sudo BTCPAY_BASE_DIRECTORY=$REMOTE_HOME bash -c $BTCPAY_SERVER_APPPATH/btcpay-down.sh"
# TODO: not sure if this is necessary, but we want to give the VM additional time to take down all services
# so that processes can run shutdown procedures and leave files in the correct state.
sleep 10
# TODO enable encrypted archives
# TODO switch to btcpay-backup.sh when on LXD fully.
scp ./remote_scripts/btcpay-backup.sh "$BTCPAY_FQDN:$REMOTE_HOME/btcpay-backup.sh"
ssh "$BTCPAY_FQDN" "sudo cp $REMOTE_HOME/btcpay-backup.sh $BTCPAY_SERVER_APPPATH/btcpay-backup.sh && sudo chmod 0755 $BTCPAY_SERVER_APPPATH/btcpay-backup.sh"
ssh "$BTCPAY_FQDN" "cd $REMOTE_HOME/; sudo BTCPAY_BASE_DIRECTORY=$REMOTE_HOME BTCPAY_DOCKER_COMPOSE=$REMOTE_HOME/btcpayserver-docker/Generated/docker-compose.generated.yml bash -c $BTCPAY_SERVER_APPPATH/btcpay-backup.sh"
# next we pull the resulting backup archive down to our management machine.
ssh "$BTCPAY_FQDN" "sudo cp /var/lib/docker/volumes/backup_datadir/_data/backup.tar.gz $REMOTE_HOME/backups/btcpay.tar.gz"
ssh "$BTCPAY_FQDN" "sudo chown ubuntu:ubuntu $REMOTE_HOME/backups/btcpay.tar.gz"
mkdir -p "$BTCPAY_LOCAL_BACKUP_PATH"
scp "$BTCPAY_FQDN:$REMOTE_HOME/backups/btcpay.tar.gz" "$BTCPAY_LOCAL_BACKUP_ARCHIVE_PATH"
echo "INFO: Created backup archive '$BTCPAY_LOCAL_BACKUP_ARCHIVE_PATH' for host '$BTCPAY_FQDN'."


@ -1,6 +0,0 @@
# we append this text to the btcpay server /home/ubuntu/.bashrc so
# logged-in users have more convenient access to the various CLI tools below.
alias bitcoin-cli="bitcoin-cli.sh $@"
alias lightning-cli="bitcoin-lightning-cli.sh $@"


@ -1,60 +0,0 @@
#!/bin/bash
set -eu
cd "$(dirname "$0")"
export DOCKER_HOST="ssh://ubuntu@$BTCPAY_FQDN"
OPEN_URL=true
RUN_SERVICES=true
# we will re-run the btcpayserver provisioning scripts if directed to do so.
# if an update does occur, we grab another backup.
if [ "$UPDATE_BTCPAY" = true ]; then
# run the update.
ssh "$FQDN" "bash -c $BTCPAY_SERVER_APPPATH/btcpay-down.sh"
# btcpay-update.sh brings services back up, but does not take them down.
ssh "$FQDN" "sudo bash -c $BTCPAY_SERVER_APPPATH/btcpay-update.sh"
sleep 20
elif [ "$RESTORE_BTCPAY" = true ]; then
# run the update.
ssh "$FQDN" "bash -c $BTCPAY_SERVER_APPPATH/btcpay-down.sh"
sleep 15
./restore.sh
RUN_SERVICES=true
OPEN_URL=true
elif [ "$RECONFIGURE_BTCPAY_SERVER" == true ]; then
# the administrator may have indicated a reconfig;
# if so, we re-run setup script.
./stub_btcpay_setup.sh
RUN_SERVICES=true
OPEN_URL=true
fi
# if the script gets this far, then we grab a regular backup.
if [ "$BACKUP_BTCPAY" = true ]; then
# we just grab a regular backup
./backup_btcpay.sh
fi
if [ "$RUN_SERVICES" = true ]; then
# The default is to resume services, though admin may want to keep services off (eg., for a migration)
# we bring the services back up by default.
ssh "$FQDN" "bash -c $BTCPAY_SERVER_APPPATH/btcpay-up.sh"
OPEN_URL=true
fi
if [ "$OPEN_URL" = true ]; then
if wait-for-it -t 5 "$PRIMARY_WWW_FQDN:80"; then
xdg-open "http://$PRIMARY_WWW_FQDN" > /dev/null 2>&1
fi
fi


@ -1,117 +0,0 @@
#!/bin/bash -e
set -o pipefail -o errexit
# Please be aware of these important issues:
#
# - Old channel state is toxic and you can lose all your funds, if you or someone
# else closes a channel based on the backup with old state - and the state changes
# often! If you publish an old state (say from yesterday's backup) on chain, you
# WILL LOSE ALL YOUR FUNDS IN A CHANNEL, because the counterparty will publish a
# revocation key!
if [ "$(id -u)" != "0" ]; then
printf "\n🚨 This script must be run as root.\n"
printf "➡️ Use the command 'sudo su -' (include the trailing hypen) and try again.\n\n"
exit 1
fi
# preparation
docker_dir=$(docker volume inspect generated_btcpay_datadir --format="{{.Mountpoint}}" | sed -e "s%/volumes/.*%%g")
dbdump_name=postgres.sql.gz
btcpay_dir="$BTCPAY_BASE_DIRECTORY/btcpayserver-docker"
backup_dir="$docker_dir/volumes/backup_datadir/_data"
dbdump_path="$docker_dir/$dbdump_name"
backup_path="$backup_dir/backup.tar.gz"
# ensure backup dir exists
if [ ! -d "$backup_dir" ]; then
mkdir -p "$backup_dir"
fi
cd "$btcpay_dir"
. helpers.sh
dbcontainer=$(docker ps -a -q -f "name=postgres_1")
if [ -z "$dbcontainer" ]; then
printf "\n"
echo " Database container is not up and running. Starting BTCPay Server …"
docker volume create generated_postgres_datadir
docker-compose -f "$BTCPAY_DOCKER_COMPOSE" up -d postgres
printf "\n"
dbcontainer=$(docker ps -a -q -f "name=postgres_1")
if [ -z "$dbcontainer" ]; then
echo "🚨 Database container could not be started or found."
exit 1
fi
fi
printf "\n"
echo " Dumping database …"
{
docker exec "$dbcontainer" pg_dumpall -c -U postgres | gzip > "$dbdump_path"
echo "✅ Database dump done."
} || {
echo "🚨 Dumping failed. Please check the error message above."
exit 1
}
printf "\n Stopping BTCPay Server …\n\n"
btcpay_down
printf "\n"
cd $docker_dir
echo " Archiving files in $(pwd)"
{
tar \
--exclude="volumes/backup_datadir" \
--exclude="volumes/generated_bitcoin_datadir/_data/blocks" \
--exclude="volumes/generated_bitcoin_datadir/_data/chainstate" \
--exclude="volumes/generated_bitcoin_datadir/_data/debug.log" \
--exclude="volumes/generated_bitcoin_datadir/_data/testnet3/blocks" \
--exclude="volumes/generated_bitcoin_datadir/_data/testnet3/chainstate" \
--exclude="volumes/generated_bitcoin_datadir/_data/testnet3/debug.log" \
--exclude="volumes/generated_bitcoin_datadir/_data/regtest/blocks" \
--exclude="volumes/generated_bitcoin_datadir/_data/regtest/chainstate" \
--exclude="volumes/generated_bitcoin_datadir/_data/regtest/debug.log" \
--exclude="volumes/generated_postgres_datadir" \
--exclude="volumes/generated_tor_relay_datadir" \
--exclude="volumes/generated_clightning_bitcoin_datadir/_data/lightning-rpc" \
--exclude="**/logs/*" \
-cvzf "$backup_path" "$dbdump_name" volumes/generated_*
echo "✅ Archive done."
if [ ! -z "$BTCPAY_BACKUP_PASSPHRASE" ]; then
printf "\n"
echo "🔐 BTCPAY_BACKUP_PASSPHRASE is set, the backup will be encrypted."
{
gpg -o "$backup_path.gpg" --batch --yes -c --passphrase "$BTCPAY_BACKUP_PASSPHRASE" $backup_path
rm "$backup_path"
backup_path="$backup_path.gpg"
echo "✅ Encryption done."
} || {
echo "🚨 Encrypting failed. Please check the error message above."
printf "\n Restarting BTCPay Server …\n\n"
cd "$btcpay_dir"
#btcpay_up
exit 1
}
fi
} || {
echo "🚨 Archiving failed. Please check the error message above."
printf "\n Restarting BTCPay Server …\n\n"
cd $btcpay_dir
#btcpay_up
exit 1
}
printf "\n Restarting BTCPay Server …\n\n"
cd $btcpay_dir
#btcpay_up
printf "\n Cleaning up …\n\n"
rm $dbdump_path
printf "✅ Backup done => $backup_path\n\n"


@ -1,115 +0,0 @@
#!/bin/bash -e
set -o pipefail -o errexit
if [ "$(id -u)" != "0" ]; then
echo "ERROR: This script must be run as root."
echo "➡️ Use the command 'sudo su -' (include the trailing hypen) and try again."
exit 1
fi
backup_path=$1
if [ -z "$backup_path" ]; then
echo "ERROR: Usage: btcpay-restore.sh /path/to/backup.tar.gz"
exit 1
fi
if [ ! -f "$backup_path" ]; then
echo "ERROR: $backup_path does not exist."
exit 1
fi
if [[ "$backup_path" == *.gpg && -z "$BTCPAY_BACKUP_PASSPHRASE" ]]; then
echo "INFO: $backup_path is encrypted. Please provide the passphrase to decrypt it."
echo "INFO: Usage: BTCPAY_BACKUP_PASSPHRASE=t0pSeCrEt btcpay-restore.sh /path/to/backup.tar.gz.gpg"
exit 1
fi
# preparation
docker_dir=$(docker volume inspect generated_btcpay_datadir --format="{{.Mountpoint}}" | sed -e "s%/volumes/.*%%g")
restore_dir="$docker_dir/volumes/backup_datadir/_data/restore"
dbdump_name=postgres.sql.gz
btcpay_dir="$BTCPAY_BASE_DIRECTORY/btcpayserver-docker"
# ensure clean restore dir
echo "INFO: Cleaning restore directory $restore_dir."
rm -rf "$restore_dir"
mkdir -p "$restore_dir"
if [[ "$backup_path" == *.gpg ]]; then
echo "🔐 Decrypting backup file …"
{
gpg -o "${backup_path%.*}" --batch --yes --passphrase "$BTCPAY_BACKUP_PASSPHRASE" -d "$backup_path"
backup_path="${backup_path%.*}"
echo "SUCESS: Decryption done."
} || {
echo "🚨 Decryption failed. Please check the error message above."
exit 1
}
fi
cd "$restore_dir"
echo " Extracting files in $(pwd)"
tar -h -xvf "$backup_path" -C "$restore_dir"
# basic control checks
if [ ! -f "$dbdump_name" ]; then
echo "ERROR: '$dbdump_name' does not exist."
exit 1
fi
if [ ! -d "volumes" ]; then
echo "ERROR: volumes directory does not exist."
exit 1
fi
cd "$btcpay_dir"
. helpers.sh
cd "$restore_dir"
{
echo "INFO: Restoring volumes."
# ensure volumes dir exists
if [ ! -d "$docker_dir/volumes" ]; then
mkdir -p "$docker_dir/volumes"
fi
# copy volume directories over
cp -r volumes/* "$docker_dir/volumes/"
# ensure datadirs excluded in backup exist
mkdir -p "$docker_dir/volumes/generated_postgres_datadir/_data"
echo "✅ Volume restore done."
} || {
echo "🚨 Restoring volumes failed. Please check the error message above."
exit 1
}
{
echo "INFO: Starting database container"
docker-compose -f "$BTCPAY_DOCKER_COMPOSE" up -d postgres
dbcontainer=$(docker ps -a -q -f "name=postgres")
if [ -z "$dbcontainer" ]; then
echo "ERROR: Database container could not be started or found."
exit 1
fi
} || {
echo "ERROR: Starting database container failed. Please check the error message above."
exit 1
}
cd "$restore_dir"
{
echo "INFO: Restoring database..."
gunzip -c $dbdump_name | docker exec -i "$dbcontainer" psql -U postgres postgres -a
echo "SUCCESS: Database restore done."
} || {
echo "ERROR: Restoring database failed. Please check the error message above."
exit 1
}
echo "INFO: Cleaning up."
rm -rf "$restore_dir"
echo "SUCCESS: Restore done"


@ -1,29 +0,0 @@
#!/bin/bash
set -e
cd "$(dirname "$0")"
if [ -f "$BTCPAY_RESTORE_ARCHIVE_PATH" ]; then
# push the restoration archive to the remote server
echo "INFO: Restoring BTCPAY Server: $BTCPAY_RESTORE_ARCHIVE_PATH"
REMOTE_BACKUP_PATH="$REMOTE_HOME/backups/btcpayserver"
ssh "$FQDN" mkdir -p "$REMOTE_BACKUP_PATH"
REMOTE_BTCPAY_ARCHIVE_PATH="$REMOTE_BACKUP_PATH/btcpay.tar.gz"
scp "$BTCPAY_RESTORE_ARCHIVE_PATH" "$FQDN:$REMOTE_BTCPAY_ARCHIVE_PATH"
# we clean up any old containers first before restoring.
ssh "$FQDN" docker system prune -f
# push the modified restore script to the remote directory, set permissions, and execute.
scp ./remote_scripts/btcpay-restore.sh "$FQDN:$REMOTE_HOME/btcpay-restore.sh"
ssh "$FQDN" "sudo mv $REMOTE_HOME/btcpay-restore.sh $BTCPAY_SERVER_APPPATH/btcpay-restore.sh && sudo chmod 0755 $BTCPAY_SERVER_APPPATH/btcpay-restore.sh"
ssh "$FQDN" "cd $REMOTE_HOME/; sudo BTCPAY_BASE_DIRECTORY=$REMOTE_HOME BTCPAY_DOCKER_COMPOSE=$REMOTE_HOME/btcpayserver-docker/Generated/docker-compose.generated.yml bash -c '$BTCPAY_SERVER_APPPATH/btcpay-restore.sh $REMOTE_BTCPAY_ARCHIVE_PATH'"
# now, we're going to take things down because apparently this needs to be re-executed.
ssh "$FQDN" "bash -c $BTCPAY_SERVER_APPPATH/btcpay-down.sh"
else
echo "ERROR: File does not exist."
exit 1
fi


@ -1,72 +0,0 @@
#!/bin/bash
set -e
cd "$(dirname "$0")"
# export BTCPAY_FASTSYNC_ARCHIVE_FILENAME="utxo-snapshot-bitcoin-testnet-1445586.tar"
# BTCPAY_REMOTE_RESTORE_PATH="/var/lib/docker/volumes/generated_bitcoin_datadir/_data"
# This is the config for a basic proxy to the listening port 127.0.0.1:2368
# It also supports modern TLS, so SSL certs must be available.
#opt-add-nostr-relay;
cat > "$SITE_PATH/btcpay.sh" <<EOL
#!/bin/bash
set -e
cd "\$(dirname "\$0")"
# wait for cloud-init to complete yo
while [ ! -f /var/lib/cloud/instance/boot-finished ]; do
sleep 1
done
if [ ! -d "btcpayserver-docker" ]; then
echo "cloning btcpayserver-docker";
git clone -b master https://github.com/btcpayserver/btcpayserver-docker btcpayserver-docker;
git config --global --add safe.directory /home/ubuntu/btcpayserver-docker
else
cd ./btcpayserver-docker
git pull
git pull --all --tags
cd -
fi
cd btcpayserver-docker
export BTCPAY_HOST="${BTCPAY_USER_FQDN}"
export NBITCOIN_NETWORK="${BTC_CHAIN}"
export LIGHTNING_ALIAS="${PRIMARY_DOMAIN}"
export BTCPAYGEN_LIGHTNING="clightning"
export BTCPAYGEN_CRYPTO1="btc"
export BTCPAYGEN_ADDITIONAL_FRAGMENTS="opt-save-storage-s;opt-add-btctransmuter;"
export BTCPAYGEN_REVERSEPROXY="nginx"
export BTCPAY_ENABLE_SSH=false
export BTCPAY_BASE_DIRECTORY=${REMOTE_HOME}
export BTCPAYGEN_EXCLUDE_FRAGMENTS="nginx-https"
export REVERSEPROXY_DEFAULT_HOST="$BTCPAY_USER_FQDN"
if [ "\$NBITCOIN_NETWORK" != regtest ]; then
# run fast_sync if it's not been done before.
if [ ! -f /home/ubuntu/fast_sync_completed ]; then
cd ./contrib/FastSync
./load-utxo-set.sh
touch /home/ubuntu/fast_sync_completed
cd -
fi
fi
# run the setup script.
. ./btcpay-setup.sh -i
EOL
# send an updated ~/.bashrc so we have quicker access to cli tools
scp ./bashrc.txt "ubuntu@$FQDN:$REMOTE_HOME/.bashrc"
ssh "$BTCPAY_FQDN" "chown ubuntu:ubuntu $REMOTE_HOME/.bashrc"
ssh "$BTCPAY_FQDN" "chmod 0664 $REMOTE_HOME/.bashrc"
# send the setup script to the remote machine.
scp "$SITE_PATH/btcpay.sh" "ubuntu@$FQDN:$REMOTE_HOME/btcpay_setup.sh"
ssh "$BTCPAY_FQDN" "chmod 0744 $REMOTE_HOME/btcpay_setup.sh"
ssh "$BTCPAY_FQDN" "sudo bash -c $REMOTE_HOME/btcpay_setup.sh"
ssh "$BTCPAY_FQDN" "touch $REMOTE_HOME/btcpay.complete"


@ -3,59 +3,90 @@
set -eu
cd "$(dirname "$0")"
./stub_lxc_profile.sh sovereign-stack
. ./base.sh
# create the default storage pool if necessary
if ! lxc storage list --format csv | grep -q "sovereign-stack"; then
bash -c "./stub_lxc_profile.sh --lxd-hostname=$BASE_IMAGE_VM_NAME"
if [ "$DISK_TO_USE" != loop ]; then
# we omit putting a size here, so LXD will consume the entire disk if '/dev/sdb', or the partition if '/dev/sdb1'.
# TODO do some sanity/resource checking on DISK_TO_USE.
lxc storage create "sovereign-stack" zfs source="$DISK_TO_USE"
else
# if a disk is the default 'loop', then we create a zfs storage pool
# on top of the existing filesystem using a loop device, per LXD docs
lxc storage create "sovereign-stack" zfs
fi
if lxc list -q --project default | grep -q "$BASE_IMAGE_VM_NAME" ; then
lxc delete -f "$BASE_IMAGE_VM_NAME" --project=default
fi
# If our template doesn't exist, we create one.
if ! lxc image list --format csv "$VM_NAME" | grep -q "$VM_NAME"; then
# If the lxc VM does exist, then we will delete it (so we can start fresh)
if lxc list -q --format csv | grep -q "$VM_NAME"; then
lxc delete "$VM_NAME" --force
# let's download our base image.
if ! lxc image list --format csv --columns l | grep -q "$UBUNTU_BASE_IMAGE_NAME"; then
# copy the image down from canonical.
lxc image copy "images:$BASE_LXC_IMAGE" "$REMOTE_NAME": --alias "$UBUNTU_BASE_IMAGE_NAME" --public --vm --auto-update
fi
# remove the ssh known endpoint else we get warnings.
ssh-keygen -f "$SSH_HOME/known_hosts" -R "$VM_NAME"
# If the lxc VM does exist, then we will delete it (so we can start fresh)
if lxc list --format csv -q | grep -q "$UBUNTU_BASE_IMAGE_NAME"; then
# if there's no snapshot, we dispense with the old image and try again.
if ! lxc info "$BASE_IMAGE_VM_NAME" | grep -q "$UBUNTU_BASE_IMAGE_NAME"; then
lxc delete "$BASE_IMAGE_VM_NAME" --force
ssh-keygen -f "$SSH_HOME/known_hosts" -R "$BASE_IMAGE_VM_NAME"
fi
# let's download our base image.
if ! lxc image list --format csv --columns l | grep -q "ubuntu-base"; then
# if the image doesn't exist, download it from Ubuntu's image server
# TODO see if we can fetch this file from a more censorship-resistant source, e.g., ipfs
# we don't really need to cache this locally since it gets continually updated upstream.
lxc image copy "images:$BASE_LXC_IMAGE" "$CLUSTER_NAME": --alias "ubuntu-base" --public --vm
fi
# this vm is used temporarily during the base image build.
lxc init --profile="sovereign-stack" "ubuntu-base" "$VM_NAME" --vm
# let's PIN the HW address for now so we don't exhaust IP
# and so we can set DNS internally.
else
# the base image is ubuntu:22.04.
lxc init --profile="$BASE_IMAGE_VM_NAME" "$UBUNTU_BASE_IMAGE_NAME" "$BASE_IMAGE_VM_NAME" --vm --project=default
# TODO move this sovereign-stack-base construction VM to separate dedicated IP
lxc config set "$VM_NAME" "volatile.enp5s0.hwaddr=$SOVEREIGN_STACK_MAC_ADDRESS"
lxc config set "$BASE_IMAGE_VM_NAME" --project=default
lxc start "$VM_NAME"
sleep 10
# for CHAIN in mainnet testnet; do
# for DATA in blocks chainstate; do
# lxc storage volume attach ss-base "$CHAIN-$DATA" "$BASE_IMAGE_VM_NAME" "/home/ubuntu/bitcoin/$DATA"
# done
# done
lxc start "$BASE_IMAGE_VM_NAME" --project=default
sleep 25
while lxc exec "$BASE_IMAGE_VM_NAME" --project=default -- [ ! -f /var/lib/cloud/instance/boot-finished ]; do
sleep 1
done
# ensure the ssh service is listening at localhost
lxc exec "$BASE_IMAGE_VM_NAME" --project=default -- wait-for-it -t 100 127.0.0.1:22
# # If we have any chainstate or blocks in our SSME, let's push them to the
# # remote host as a zfs volume that way deployments can share a common history
# # of chainstate/blocks.
# for CHAIN in testnet mainnet; do
# for DATA in blocks chainstate; do
# # if the storage snapshot doesn't yet exist, create it.
# if ! lxc storage volume list ss-base -q --format csv -c n | grep -q "$CHAIN-$DATA/snap0"; then
# DATA_PATH="/home/ubuntu/.ss/cache/bitcoin/$CHAIN/$DATA"
# if [ -d "$DATA_PATH" ]; then
# COMPLETE_FILE_PATH="$DATA_PATH/complete"
# if lxc exec "$BASE_IMAGE_VM_NAME" -- [ ! -f "$COMPLETE_FILE_PATH" ]; then
# lxc file push --recursive --project=default "$DATA_PATH/" "$BASE_IMAGE_VM_NAME""$DATA_PATH/"
# lxc exec "$BASE_IMAGE_VM_NAME" -- su ubuntu - bash -c "echo $(date) > $COMPLETE_FILE_PATH"
# lxc exec "$BASE_IMAGE_VM_NAME" -- chown -R 999:999 "$DATA_PATH/$DATA"
# else
# echo "INFO: it appears as though $CHAIN/$DATA has already been initialized. Continuing."
# fi
# fi
# fi
# done
# done
# let's wait for the LXC vm remote machine to get an IP address.
./wait_for_lxc_ip.sh "$VM_NAME"
# stop the VM and get a snapshot.
lxc stop "$VM_NAME"
lxc publish "$CLUSTER_NAME:$VM_NAME" --alias "$VM_NAME" --public
lxc stop "$BASE_IMAGE_VM_NAME" --project=default
lxc snapshot "$BASE_IMAGE_VM_NAME" "$UBUNTU_BASE_IMAGE_NAME" --project=default
fi
echo "INFO: Publishing '$BASE_IMAGE_VM_NAME' as image '$DOCKER_BASE_IMAGE_NAME'. Please wait."
lxc publish --public "$BASE_IMAGE_VM_NAME/$UBUNTU_BASE_IMAGE_NAME" --project=default --alias="$DOCKER_BASE_IMAGE_NAME" --compression none
echo "INFO: Success creating the base image. Deleting artifacts from the build process."
lxc delete -f "$BASE_IMAGE_VM_NAME" --project=default
# # now let's get a snapshot of each of the blocks/chainstate directories.
# for CHAIN in testnet mainnet; do
# for DATA in blocks chainstate; do
# if ! lxc storage volume list ss-base -q --format csv -c n | grep -q "$CHAIN-$DATA/snap0"; then
# echo "INFO: Creating a snapshot 'ss-base/$CHAIN-$DATA/snap0'."
# lxc storage volume snapshot ss-base --project=default "$CHAIN-$DATA"
# fi
# done
# done
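As a quick sanity check after the publish step above, a sketch (the alias value comes from DOCKER_BASE_IMAGE_NAME in base.sh):
# sketch only: confirm the freshly published base image is listed in the default project
lxc image list --project default --format csv | grep "$DOCKER_BASE_IMAGE_NAME"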

deployment/deploy_vm.sh Executable file

@ -0,0 +1,130 @@
#!/bin/bash
set -eu
cd "$(dirname "$0")"
. ./base.sh
## This is a weird if clause since we need to LEFT-ALIGN the statement below.
SSH_STRING="Host ${FQDN}"
if ! grep -q "$SSH_STRING" "$SSH_HOME/config"; then
########## BEGIN
cat >> "$SSH_HOME/config" <<-EOF
${SSH_STRING}
HostName ${FQDN}
User ubuntu
EOF
###
fi
# if the machine doesn't exist, we create it.
if ! lxc list --format csv | grep -q "$LXD_VM_NAME"; then
# create a base image if needed and instantiate a VM.
if [ -z "$MAC_ADDRESS_TO_PROVISION" ]; then
echo "ERROR: You MUST define a MAC Address for all your machines in your project definition."
echo "INFO: IMPORTANT! You MUST have DHCP Reservations for these MAC addresses. You also need records established the DNS."
exit 1
fi
# TODO ensure we are only GROWING the volume--never shrinking per zfs volume docs.
VM_ID=
BACKUP_DISK_SIZE_GB=
SSDATA_DISK_SIZE_GB=
DOCKER_DISK_SIZE_GB=
if [ "$VIRTUAL_MACHINE" = www ]; then
if [ "$SKIP_WWW" = true ]; then
exit 0
fi
VM_ID="w"
BACKUP_DISK_SIZE_GB="$WWW_BACKUP_DISK_SIZE_GB"
SSDATA_DISK_SIZE_GB="$WWW_SSDATA_DISK_SIZE_GB"
DOCKER_DISK_SIZE_GB="$WWW_DOCKER_DISK_SIZE_GB"
fi
if [ "$VIRTUAL_MACHINE" = btcpayserver ]; then
if [ "$SKIP_BTCPAYSERVER" = true ]; then
exit 0
fi
VM_ID="b"
BACKUP_DISK_SIZE_GB="$BTCPAYSERVER_BACKUP_DISK_SIZE_GB"
SSDATA_DISK_SIZE_GB="$BTCPAYSERVER_SSDATA_DISK_SIZE_GB"
DOCKER_DISK_SIZE_GB="$BTCPAYSERVER_DOCKER_DISK_SIZE_GB"
fi
if [ "$VIRTUAL_MACHINE" = lnplayserver ]; then
if [ "$SKIP_LNPLAY_SERVER" = true ]; then
exit 0
fi
VM_ID="c"
BACKUP_DISK_SIZE_GB="$BTCPAYSERVER_BACKUP_DISK_SIZE_GB"
SSDATA_DISK_SIZE_GB="$BTCPAYSERVER_SSDATA_DISK_SIZE_GB"
DOCKER_DISK_SIZE_GB="$BTCPAYSERVER_DOCKER_DISK_SIZE_GB"
fi
DOCKER_VOLUME_NAME="$PRIMARY_DOMAIN_IDENTIFIER-$VM_ID""d"
if ! lxc storage volume list ss-base | grep -q "$DOCKER_VOLUME_NAME"; then
lxc storage volume create ss-base "$DOCKER_VOLUME_NAME" --type=block
fi
# TODO ensure we are only GROWING the volume--never shrinking
lxc storage volume set ss-base "$DOCKER_VOLUME_NAME" size="${DOCKER_DISK_SIZE_GB}GB"
SSDATA_VOLUME_NAME="$PRIMARY_DOMAIN_IDENTIFIER-$VM_ID""s"
if ! lxc storage volume list ss-base | grep -q "$SSDATA_VOLUME_NAME"; then
lxc storage volume create ss-base "$SSDATA_VOLUME_NAME" --type=filesystem
fi
# TODO ensure we are only GROWING the volume--never shrinking per zfs volume docs.
lxc storage volume set ss-base "$SSDATA_VOLUME_NAME" size="${SSDATA_DISK_SIZE_GB}GB"
BACKUP_VOLUME_NAME="$PRIMARY_DOMAIN_IDENTIFIER-$VM_ID""b"
if ! lxc storage volume list ss-base | grep -q "$BACKUP_VOLUME_NAME"; then
lxc storage volume create ss-base "$BACKUP_VOLUME_NAME" --type=filesystem
fi
lxc storage volume set ss-base "$BACKUP_VOLUME_NAME" size="${BACKUP_DISK_SIZE_GB}GB"
bash -c "./stub_lxc_profile.sh --vm=$VIRTUAL_MACHINE --lxd-hostname=$LXD_VM_NAME --ss-volume-name=$SSDATA_VOLUME_NAME --backup-volume-name=$BACKUP_VOLUME_NAME"
# now let's create a new VM to work with.
#lxc init --profile="$LXD_VM_NAME" "$BASE_IMAGE_VM_NAME" "$LXD_VM_NAME" --vm
lxc init "$DOCKER_BASE_IMAGE_NAME" "$LXD_VM_NAME" --vm --profile="$LXD_VM_NAME"
# let's PIN the HW address for now so we don't exhaust IP addresses
# and so we can set DNS internally.
lxc config set "$LXD_VM_NAME" "volatile.enp5s0.hwaddr=$MAC_ADDRESS_TO_PROVISION"
# attach the docker block device.
lxc storage volume attach ss-base "$DOCKER_VOLUME_NAME" "$LXD_VM_NAME"
# if [ "$VIRTUAL_MACHINE" = btcpayserver ]; then
# # attach any volumes
# for CHAIN in testnet mainnet; do
# for DATA in blocks chainstate; do
# MOUNT_PATH="/$CHAIN-$DATA"
# lxc config device add "$LXD_VM_NAME" "$CHAIN-$DATA" disk pool=ss-base source="$CHAIN-$DATA" path="$MOUNT_PATH"
# done
# done
# fi
lxc start "$LXD_VM_NAME"
sleep 10
bash -c "./wait_for_lxc_ip.sh --lxd-name=$LXD_VM_NAME"
# scan the remote machine and install its identity in our SSH known_hosts file.
ssh-keyscan -H "$FQDN" >> "$SSH_HOME/known_hosts"
ssh "$FQDN" "sudo chown ubuntu:ubuntu $REMOTE_DATA_PATH"
ssh "$FQDN" "sudo chown -R ubuntu:ubuntu $REMOTE_BACKUP_PATH"
fi
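To make the volume naming scheme above concrete, here is an illustrative example (the identifier 'a1b2c3' is hypothetical); for the www VM (VM_ID=w) the three volumes come out as:
# a1b2c3-wd  -> docker data (block volume, sized from WWW_DOCKER_DISK_SIZE_GB)
# a1b2c3-ws  -> ss-data (filesystem volume, sized from WWW_SSDATA_DISK_SIZE_GB)
# a1b2c3-wb  -> backups (filesystem volume, sized from WWW_BACKUP_DISK_SIZE_GB)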

@ -1,70 +0,0 @@
#!/bin/bash
set -eu
cd "$(dirname "$0")"
# let's make sure we have an ssh keypair. We just use $SSH_HOME/id_rsa
# TODO convert this to an SSH private key held on a Trezor; thus a Trezor-T would be required for
# login operations. This should be configurable, of course.
if [ ! -f "$SSH_HOME/id_rsa" ]; then
# generate a new SSH key for the base vm image.
ssh-keygen -f "$SSH_HOME/id_rsa" -t ecdsa -b 521 -N ""
fi
## This is a weird if clause since we need to LEFT-ALIGN the statement below.
SSH_STRING="Host ${FQDN}"
if ! grep -q "$SSH_STRING" "$SSH_HOME/config"; then
########## BEGIN
cat >> "$SSH_HOME/config" <<-EOF
${SSH_STRING}
HostName ${FQDN}
User ubuntu
EOF
###
fi
ssh-keygen -f "$SSH_HOME/known_hosts" -R "$FQDN"
# if the machine doesn't exist, we create it.
if ! lxc list --format csv | grep -q "$LXD_VM_NAME"; then
# create a base image if needed and instantiate a VM.
if [ -z "$MAC_ADDRESS_TO_PROVISION" ]; then
echo "ERROR: You MUST define a MAC Address for all your machines by setting WWW_SERVER_MAC_ADDRESS, BTCPAYSERVER_MAC_ADDRESS in your site defintion."
echo "INFO: IMPORTANT! You MUST have DHCP Reservations for these MAC addresses. You also need records established the DNS."
exit 1
fi
./stub_lxc_profile.sh "$LXD_VM_NAME"
# now let's create a new VM to work with.
lxc init --profile="$LXD_VM_NAME" "$VM_NAME" "$LXD_VM_NAME" --vm
# let's PIN the HW address for now so we don't exhaust IP addresses
# and so we can set DNS internally.
lxc config set "$LXD_VM_NAME" "volatile.enp5s0.hwaddr=$MAC_ADDRESS_TO_PROVISION"
lxc config device override "$LXD_VM_NAME" root size="${ROOT_DISK_SIZE_GB}GB"
lxc start "$LXD_VM_NAME"
./wait_for_lxc_ip.sh "$LXD_VM_NAME"
fi
# scan the remote machine and install its identity in our SSH known_hosts file.
ssh-keyscan -H -t ecdsa "$FQDN" >> "$SSH_HOME/known_hosts"
# create a directory to store backup archives. This is on all new vms.
ssh "$FQDN" mkdir -p "$REMOTE_HOME/backups"
# if this execution is for btcpayserver, then we run the stub/btcpay setup script
# but only if it hasn't been executed before.
if [ "$VIRTUAL_MACHINE" = btcpayserver ]; then
if [ "$(ssh "$BTCPAY_FQDN" [[ ! -f "$REMOTE_HOME/btcpay.complete" ]]; echo $?)" -eq 0 ]; then
./btcpayserver/stub_btcpay_setup.sh
fi
fi

@ -0,0 +1,56 @@
#!/bin/bash
set -eu
# file paths
export SSH_HOME="$HOME/.ssh"
export PASS_HOME="$HOME/.password-store" #TODO
export SS_ROOT_PATH="$HOME/ss"
export REMOTES_PATH="$SS_ROOT_PATH/remotes"
export PROJECTS_PATH="$SS_ROOT_PATH/projects"
export SITES_PATH="$SS_ROOT_PATH/sites"
export LXD_CONFIG_PATH="$SS_ROOT_PATH/lxd"
export SS_CACHE_PATH="$SS_ROOT_PATH/cache"
export REMOTE_HOME="/home/ubuntu"
export REMOTE_DATA_PATH="$REMOTE_HOME/ss-data"
export REMOTE_DATA_PATH_LETSENCRYPT="$REMOTE_DATA_PATH/letsencrypt"
export REMOTE_BACKUP_PATH="$REMOTE_HOME/backups"
export BTCPAY_SERVER_APPPATH="$REMOTE_DATA_PATH/btcpayserver-docker"
export BITCOIN_CHAIN=regtest
# this space is for OS, docker images, etc
# values here are fine for regtest generally. Later scripts adjust
# these values based on testnet/mainnet
export WWW_SSDATA_DISK_SIZE_GB=20
export WWW_BACKUP_DISK_SIZE_GB=50
export WWW_DOCKER_DISK_SIZE_GB=30
export BTCPAYSERVER_SSDATA_DISK_SIZE_GB=20
export BTCPAYSERVER_BACKUP_DISK_SIZE_GB=20
export BTCPAYSERVER_DOCKER_DISK_SIZE_GB=30
export LNPLAY_SERVER_SSDATA_DISK_SIZE_GB=20
export LNPLAY_SERVER_BACKUP_DISK_SIZE_GB=20
export LNPLAY_SSERVER_DOCKER_DISK_SIZE_GB=100
export WWW_HOSTNAME="www"
export BTCPAY_SERVER_HOSTNAME="btcpayserver"
export LNPLAY_SERVER_HOSTNAME="lnplayserver"
export BTCPAY_HOSTNAME_IN_CERT="btcpay"
export NEXTCLOUD_HOSTNAME="nextcloud"
export GITEA_HOSTNAME="git"
export NOSTR_HOSTNAME="relay"
export REGISTRY_URL="https://index.docker.io/v1"
export BTCPAY_SERVER_CPU_COUNT="4"
export BTCPAY_SERVER_MEMORY_MB="4096"
export WWW_SERVER_CPU_COUNT="4"
export WWW_SERVER_MEMORY_MB="4096"
export LNPLAY_SERVER_CPU_COUNT="4"
export LNPLAY_SERVER_MEMORY_MB="4096"
export DOCKER_IMAGE_CACHE_FQDN="registry-1.docker.io"

deployment/domain_list.sh Executable file

@ -0,0 +1,16 @@
#!/bin/bash
# the DOMAIN_LIST is a complete list of all our domains. We often iterate over this list.
DOMAIN_LIST="${PRIMARY_DOMAIN}"
if [ -n "$OTHER_SITES_LIST" ]; then
DOMAIN_LIST="${DOMAIN_LIST},${OTHER_SITES_LIST}"
fi
export DOMAIN_LIST="$DOMAIN_LIST"
export DOMAIN_COUNT=$(("$(echo "$DOMAIN_LIST" | tr -cd , | wc -c)"+1))
export OTHER_SITES_LIST="$OTHER_SITES_LIST"
export PRIMARY_WWW_FQDN="$WWW_HOSTNAME.$DOMAIN_NAME"
export BTCPAY_SERVER_FQDN="$BTCPAY_SERVER_HOSTNAME.$DOMAIN_NAME"
export LNPLAY_SERVER_FQDN="$LNPLAY_SERVER_HOSTNAME.$DOMAIN_NAME"
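For example (hypothetical domains), the list and count above resolve as:
# PRIMARY_DOMAIN=domain0.tld  OTHER_SITES_LIST=domain1.tld,domain2.tld
# DOMAIN_LIST=domain0.tld,domain1.tld,domain2.tld
# DOMAIN_COUNT=3   (two commas counted by 'tr -cd , | wc -c', plus one)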

deployment/down.sh Executable file

@ -0,0 +1,131 @@
#!/bin/bash
# https://www.sovereign-stack.org/ss-down/
set -exu
cd "$(dirname "$0")"
if lxc remote get-default -q | grep -q "local"; then
echo "ERROR: you are on the local lxc remote. Nothing to take down"
exit 1
fi
KEEP_DOCKER_VOLUME=true
OTHER_SITES_LIST=
SKIP_BTCPAYSERVER=false
SKIP_WWW=false
SKIP_LNPLAY_SERVER=false
BACKUP_WWW_APPS=true
# grab any modifications from the command line.
for i in "$@"; do
case $i in
--purge)
KEEP_DOCKER_VOLUME=false
shift
;;
--skip-btcpayserver)
SKIP_BTCPAYSERVER=true
shift
;;
--skip-wwwserver)
SKIP_WWW=true
shift
;;
--skip-lnplayserver)
SKIP_LNPLAY_SERVER=true
shift
;;
*)
echo "Unexpected option: $1"
exit 1
;;
esac
done
SERVERS=
if [ "$SKIP_BTCPAYSERVER" = false ]; then
SERVERS="btcpayserver"
fi
if [ "$SKIP_WWW" = false ]; then
SERVERS="www $SERVERS"
fi
if [ "$SKIP_LNPLAY_SERVER" = false ]; then
SERVERS="lnplayserver $SERVERS"
fi
. ./deployment_defaults.sh
. ./remote_env.sh
. ./project_env.sh
# let's bring down services on the remote deployment if necessary.
export DOMAIN_NAME="$PRIMARY_DOMAIN"
export SITE_PATH="$SITES_PATH/$PRIMARY_DOMAIN"
source "$SITE_PATH/site.conf"
source ./project/domain_env.sh
source ./domain_list.sh
for VIRTUAL_MACHINE in $SERVERS; do
LXD_NAME="$VIRTUAL_MACHINE-${PRIMARY_DOMAIN//./-}"
if lxc list | grep -q "$LXD_NAME"; then
bash -c "./stop.sh --server=$VIRTUAL_MACHINE"
if [ "$VIRTUAL_MACHINE" = www ] && [ "$BACKUP_WWW_APPS" = true ]; then
APP_LIST="letsencrypt ghost nextcloud gitea nostr"
echo "INFO: Backing up WWW apps."
for APP in $APP_LIST; do
bash -c "$(pwd)/project/www/backup_www.sh --app=$APP"
done
fi
lxc stop "$LXD_NAME"
lxc delete "$LXD_NAME"
fi
# remove the host's entry from our ssh known_hosts file, else we get warnings.
ssh-keygen -f "$SSH_HOME/known_hosts" -R "$VIRTUAL_MACHINE.$PRIMARY_DOMAIN" | exit
if lxc profile list | grep -q "$LXD_NAME"; then
lxc profile delete "$LXD_NAME"
fi
if [ "$KEEP_DOCKER_VOLUME" = false ]; then
# destroy the docker volume
VM_ID=w
if [ "$VIRTUAL_MACHINE" = btcpayserver ]; then
VM_ID="b"
elif [ "$VIRTUAL_MACHINE" = lnplayserver ]; then
VM_ID="c"
fi
# d for docker; b for backup; s for ss-data
for DATA in d b s; do
VOLUME_NAME="$PRIMARY_DOMAIN_IDENTIFIER-$VM_ID""$DATA"
if lxc storage volume list ss-base -q | grep -q "$VOLUME_NAME"; then
RESPONSE=
read -r -p "Are you sure you want to delete the '$VOLUME_NAME' volume intended for '$LXD_NAME'?": RESPONSE
if [ "$RESPONSE" = "y" ]; then
lxc storage volume delete ss-base "$VOLUME_NAME"
fi
fi
done
else
# we maintain the volumes
# TODO make a snapshot on all the zfs storage volumes.
echo "TODO: create snapshot of ZFS volumes and pull them to mgmt machine."
fi
done
if lxc network list -q | grep -q ss-ovn; then
lxc network delete ss-ovn
fi
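Illustrative invocations, assuming the ss-down alias described in help.txt points at this script:
ss-down            # stop services, delete VMs and profiles, keep the zfs data volumes
ss-down --purge    # additionally prompt to delete each docker/ss-data/backup volume per VM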

deployment/help.txt Normal file

@ -0,0 +1,15 @@
You are in the Sovereign Stack Management Environment (SSME). From here, you can issue several commands:
ss-remote - Take a remote SSH endpoint under management of Sovereign Stack.
ss-reset - The opposite of ss-remote; de-provisions an existing remote.
ss-up - Instantiate a deployment to your active project according to your
various remote.conf, project.conf, and site.conf files.
ss-down - Reverses ss-up. Takes the active project down. Non-destructive of user data,
unless you provide the --purge flag.
ss-update - This is just ss-down then ss-up.
ss-show - Show the lxd resources associated with the current remote.
For more information about all these topics, consult the Sovereign Stack website starting with:
- https://www.sovereign-stack.org/tag/deployment-management/
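A minimal end-to-end session might look like the following sketch (the host 'host00.domain.tld' is hypothetical; first runs of ss-remote and ss-up stub out remote.conf, project.conf, and site.conf for you to edit before re-running):
ss-remote development host00.domain.tld --disk=/dev/sdb
ss-up       # provision the base image and the www/btcpayserver/lnplayserver VMs
ss-show     # inspect the lxd resources that were created
ss-down     # take the deployment down without touching user data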

deployment/project Submodule

@ -0,0 +1 @@
Subproject commit 4a16abdf10e68b25cd846c7f6ceeb0055fbfb3d9

deployment/project_env.sh Executable file

@ -0,0 +1,55 @@
#!/bin/bash
set -eu
PROJECT_NAME="$(lxc info | grep "project:" | awk '{print $2}')"
export PROJECT_NAME="$PROJECT_NAME"
if [ "$PROJECT_NAME" = default ]; then
echo "ERROR: You are on the default project. Use 'lxc project list' and 'lxc project switch <project>'."
exit 1
fi
BITCOIN_CHAIN=$(echo "$PROJECT_NAME" | cut -d'-' -f2)
export PROJECT_PATH="$PROJECTS_PATH/$PROJECT_NAME"
export BITCOIN_CHAIN="$BITCOIN_CHAIN"
PROJECT_DEFINITION_PATH="$PROJECT_PATH/project.conf"
if [ ! -f "$PROJECT_DEFINITION_PATH" ]; then
echo "ERROR: 'project.conf' not found $PROJECT_DEFINITION_PATH not found."
exit 1
fi
source "$PROJECT_DEFINITION_PATH"
export PRIMARY_SITE_DEFINITION_PATH="$SITES_PATH/$PRIMARY_DOMAIN/site.conf"
if [ ! -f "$PRIMARY_SITE_DEFINITION_PATH" ]; then
echo "ERROR: the site definition does not exist."
exit 1
fi
if [ -z "$PRIMARY_DOMAIN" ]; then
echo "ERROR: The PRIMARY_DOMAIN is not specified. Check your remote definition at '$PRIMARY_SITE_DEFINITION_PATH'."
exit 1
fi
source "$PRIMARY_SITE_DEFINITION_PATH"
SHASUM_OF_PRIMARY_DOMAIN="$(echo -n "$PRIMARY_DOMAIN" | sha256sum | awk '{print $1;}' )"
export PRIMARY_DOMAIN_IDENTIFIER="${SHASUM_OF_PRIMARY_DOMAIN: -6}"
export WWW_SSDATA_DISK_SIZE_GB="$WWW_SSDATA_DISK_SIZE_GB"
export WWW_BACKUP_DISK_SIZE_GB="$WWW_BACKUP_DISK_SIZE_GB"
export WWW_DOCKER_DISK_SIZE_GB="$WWW_DOCKER_DISK_SIZE_GB"
export BTCPAYSERVER_SSDATA_DISK_SIZE_GB="$BTCPAYSERVER_SSDATA_DISK_SIZE_GB"
export BTCPAYSERVER_BACKUP_DISK_SIZE_GB="$BTCPAYSERVER_BACKUP_DISK_SIZE_GB"
export BTCPAYSERVER_DOCKER_DISK_SIZE_GB="$BTCPAYSERVER_DOCKER_DISK_SIZE_GB"
export LNPLAY_SERVER_SSDATA_DISK_SIZE_GB="$LNPLAY_SERVER_SSDATA_DISK_SIZE_GB"
export LNPLAY_SERVER_BACKUP_DISK_SIZE_GB="$LNPLAY_SERVER_BACKUP_DISK_SIZE_GB"
export LNPLAY_SSERVER_DOCKER_DISK_SIZE_GB="$LNPLAY_SSERVER_DOCKER_DISK_SIZE_GB"
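To illustrate the identifier derivation above with a hypothetical domain:
SHASUM="$(echo -n "example.com" | sha256sum | awk '{print $1;}')"
echo "${SHASUM: -6}"   # the last six hex characters of the digest become PRIMARY_DOMAIN_IDENTIFIER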

deployment/remote.sh Executable file

@ -0,0 +1,257 @@
#!/bin/bash
set -e
cd "$(dirname "$0")"
# This script is meant to be executed on the management machine.
# it reaches out to an SSH endpoint and provisions that machine
# to use LXD.
DATA_PLANE_MACVLAN_INTERFACE=
DISK_TO_USE=
# override the remote name.
REMOTE_NAME="${1:-}"
if [ -z "$REMOTE_NAME" ]; then
echo "ERROR: The remote name was not provided. Syntax is: 'ss-remote <remote_name> <remote01.domain.tld>'"
echo " for example: 'ss-remote development clusterhost00.domain.tld"
exit 1
fi
. ./deployment_defaults.sh
. ./base.sh
export REMOTE_PATH="$REMOTES_PATH/$REMOTE_NAME"
REMOTE_DEFINITION="$REMOTE_PATH/remote.conf"
export REMOTE_DEFINITION="$REMOTE_DEFINITION"
mkdir -p "$REMOTE_PATH"
if [ ! -f "$REMOTE_DEFINITION" ]; then
# stub out a remote.conf.
cat >"$REMOTE_DEFINITION" <<EOL
# https://www.sovereign-stack.org/ss-remote
LXD_REMOTE_PASSWORD="$(gpg --gen-random --armor 1 14)"
DEPLOYMENT_STRING="(dev|regtest),(staging|testnet)"
# REGISTRY_URL=http://registry.domain.tld:5000
EOL
chmod 0744 "$REMOTE_DEFINITION"
echo "We stubbed out a '$REMOTE_DEFINITION' file for you."
echo "Use this file to customize your remote deployment;"
echo "Check out 'https://www.sovereign-stack.org/ss-remote' for more information."
exit 1
fi
source "$REMOTE_DEFINITION"
if ! lxc remote list | grep -q "$REMOTE_NAME"; then
FQDN="${2:-}"
if [ -z "$FQDN" ]; then
echo "ERROR: You MUST provide the FQDN of the remote host."
exit 1
fi
shift
if [ -z "$FQDN" ]; then
echo "ERROR: The Fully Qualified Domain Name of the new remote member was not set."
exit 1
fi
# let's check to ensure we have SSH access to the specified host.
if ! wait-for-it -t 5 "$FQDN:22"; then
echo "ERROR: We can't get an SSH connection to '$FQDN:22'. Ensure you have the host set up correctly."
exit 1
fi
# grab any modifications from the command line.
for i in "$@"; do
case $i in
--data-plane-interface=*)
DATA_PLANE_MACVLAN_INTERFACE="${i#*=}"
shift
;;
--disk=*)
DISK_TO_USE="${i#*=}"
shift
;;
*)
;;
esac
done
# first let's copy our ssh pubkey to the remote server so we don't have to login constantly.
ssh-copy-id -i "$HOME/.ssh/id_rsa.pub" "ubuntu@$FQDN"
if [ -z "$DISK_TO_USE" ]; then
if ! ssh "ubuntu@$FQDN" lxc storage list -q | grep -q ss-base; then
echo "INFO: It looks like the DISK_TO_USE has not been set. Enter it now."
echo ""
ssh "ubuntu@$FQDN" lsblk --paths
echo "Please enter the disk or partition that Sovereign Stack will use to store data: "
read -r DISK_TO_USE
fi
fi
else
echo "ERROR: the remote already exists! You need to go delete your lxd remote if you want to re-create your remote."
echo " It's may also be helpful to reset/rename your remote path."
exit 1
fi
#ssh "ubuntu@$FQDN" 'sudo echo "ubuntu ALL=(ALL) NOPASSWD: /bin/su - a" >> /etc/sudoers'
# if the disk is loop-based, then we assume the / path exists.
if [ "$DISK_TO_USE" != loop ]; then
# ensure we actually have that disk/partition on the system.
if ! ssh "ubuntu@$FQDN" lsblk --paths | grep -q "$DISK_TO_USE"; then
echo "ERROR: We could not findthe disk you specified. Please run this command again and supply a different disk."
echo "NOTE: You can always specify on the command line by adding the '--disk=/dev/sdd', for example."
exit 1
fi
fi
# error out if the remote password is unset.
if [ -z "$LXD_REMOTE_PASSWORD" ]; then
echo "ERROR: LXD_REMOTE_PASSWORD must be set in your remote.conf file."
exit 1
fi
if ! command -v lxc >/dev/null 2>&1; then
if lxc profile list --format csv | grep -q "$BASE_IMAGE_VM_NAME"; then
lxc profile delete "$BASE_IMAGE_VM_NAME"
sleep 1
fi
if lxc network list --format csv -q --project default | grep -q lxdbr0; then
lxc network delete lxdbr0 --project default
sleep 1
fi
if lxc network list --format csv -q --project default | grep -q lxdbr1; then
lxc network delete lxdbr1 --project default
sleep 1
fi
fi
# install dependencies.
ssh -t "ubuntu@$FQDN" 'sudo apt update && sudo apt upgrade -y && sudo apt install htop dnsutils nano -y'
if ! ssh "ubuntu@$FQDN" snap list | grep -q lxd; then
ssh -t "ubuntu@$FQDN" 'sudo snap install lxd --channel=5.18/stable'
sleep 5
fi
# install OVN for the project-specific bridge networks
ssh -t "ubuntu@$FQDN" "sudo apt-get install -y ovn-host ovn-central && sudo ovs-vsctl set open_vswitch . external_ids:ovn-remote=unix:/var/run/ovn/ovnsb_db.sock external_ids:ovn-encap-type=geneve external_ids:ovn-encap-ip=127.0.0.1"
# if the user did not specify the interface, we just use whatever is used for the default route.
if [ -z "$DATA_PLANE_MACVLAN_INTERFACE" ]; then
DATA_PLANE_MACVLAN_INTERFACE="$(ssh ubuntu@"$FQDN" ip route | grep "default via" | awk '{print $5}')"
fi
export DATA_PLANE_MACVLAN_INTERFACE="$DATA_PLANE_MACVLAN_INTERFACE"
MGMT_PLANE_IP="$(ssh ubuntu@"$FQDN" env | grep SSH_CONNECTION | cut -d " " -f 3)"
IP_OF_MGMT_MACHINE="$(ssh ubuntu@"$FQDN" env | grep SSH_CLIENT | cut -d " " -f 1 )"
IP_OF_MGMT_MACHINE="${IP_OF_MGMT_MACHINE#*=}"
IP_OF_MGMT_MACHINE="$(echo "$IP_OF_MGMT_MACHINE" | cut -d: -f1)"
# run lxd init on the remote server.
cat <<EOF | ssh ubuntu@"$FQDN" lxd init --preseed
config:
core.https_address: ${MGMT_PLANE_IP}:8443
core.trust_password: ${LXD_REMOTE_PASSWORD}
core.dns_address: ${MGMT_PLANE_IP}
images.auto_update_interval: 15
networks:
- name: lxdbr0
description: "ss-config,${DATA_PLANE_MACVLAN_INTERFACE:-error}"
type: bridge
config:
ipv4.address: 10.9.9.1/24
ipv4.dhcp.ranges: 10.9.9.10-10.9.9.127
ipv4.nat: true
ipv6.address: none
dns.mode: managed
- name: lxdbr1
description: "Non-natting bridge needed for ovn networks."
type: bridge
config:
ipv4.address: 10.10.10.1/24
ipv4.dhcp.ranges: 10.10.10.10-10.10.10.63
ipv4.ovn.ranges: 10.10.10.64-10.10.10.254
ipv4.nat: false
ipv6.address: none
profiles:
- config: {}
description: "default profile for sovereign-stack instances."
devices:
root:
path: /
pool: ss-base
type: disk
name: default
cluster:
server_name: ${REMOTE_NAME}
enabled: true
member_config: []
cluster_address: ""
cluster_certificate: ""
server_address: ""
cluster_password: ""
cluster_certificate_path: ""
cluster_token: ""
EOF
# ensure the lxd service is available over the network, then add a lxc remote, then switch the active remote to it.
if wait-for-it -t 20 "$FQDN:8443"; then
# now create a remote on your local LXC client and switch to it.
# the software will now target the new remote.
lxc remote add "$REMOTE_NAME" "$FQDN" --password="$LXD_REMOTE_PASSWORD" --protocol=lxd --auth-type=tls --accept-certificate
lxc remote switch "$REMOTE_NAME"
echo "INFO: A new remote named '$REMOTE_NAME' has been created. Your LXC client has been switched to it."
else
echo "ERROR: Could not detect the LXD endpoint. Something went wrong."
exit 1
fi
# create the default storage pool if necessary
if ! lxc storage list --format csv | grep -q ss-base; then
if [ "$DISK_TO_USE" != loop ]; then
# we omit putting a size here, so LXD will consume the entire disk if '/dev/sdb' or the partition if '/dev/sdb1'.
# TODO do some sanity/resource checking on DISK_TO_USE. Implement full-disk encryption?
lxc storage create ss-base zfs source="$DISK_TO_USE"
else
# if a disk is the default 'loop', then we create a zfs storage pool
# on top of the existing filesystem using a loop device, per LXD docs
lxc storage create ss-base zfs
fi
# # create the testnet/mainnet blocks/chainstate subvolumes.
# for CHAIN in mainnet testnet; do
# for DATA in blocks chainstate; do
# if ! lxc storage volume list ss-base | grep -q "$CHAIN-$DATA"; then
# lxc storage volume create ss-base "$CHAIN-$DATA" --type=filesystem
# fi
# done
# done
else
echo "WARNING! The host '$FQDN' appears to have Sovereign Stack worksloads already provisioned."
echo "INFO: Here are your current Deployments."
lxc project list -q
fi

deployment/remote_env.sh Executable file

@ -0,0 +1,96 @@
#!/bin/bash
set -eu
CURRENT_REMOTE="$(lxc remote get-default)"
if echo "$CURRENT_REMOTE" | grep -q "production"; then
echo "WARNING: You are running a migration procedure on a production system."
echo ""
RESPONSE=
read -r -p " Are you sure you want to continue (y) ": RESPONSE
if [ "$RESPONSE" != "y" ]; then
echo "STOPPING."
exit 1
fi
# check if there are any uncommitted changes. It's dangerous to
# alter production systems when you have commits to make or changes to stash.
if git update-index --refresh | grep -q "needs update"; then
echo "ERROR: You have uncommitted changes! Better stash your work with 'git stash'."
exit 1
fi
fi
. ./deployment_defaults.sh
export REMOTE_PATH="$REMOTES_PATH/$CURRENT_REMOTE"
REMOTE_DEFINITION="$REMOTE_PATH/remote.conf"
export REMOTE_DEFINITION="$REMOTE_DEFINITION"
# ensure the remote definition exists.
if [ ! -f "$REMOTE_DEFINITION" ]; then
echo "ERROR: The remote definition could not be found. You may need to run 'ss-remote'."
echo "INFO: Consult https://www.sovereign-stack.org/ss-remote for more information."
exit 1
fi
source "$REMOTE_DEFINITION"
# ensure our projects are provisioned according to DEPLOYMENT_STRING
for PROJECT_CHAIN in ${DEPLOYMENT_STRING//,/ }; do
NO_PARENS="${PROJECT_CHAIN:1:${#PROJECT_CHAIN}-2}"
PROJECT_PREFIX=$(echo "$NO_PARENS" | cut -d'|' -f1)
BITCOIN_CHAIN=$(echo "$NO_PARENS" | cut -d'|' -f2)
PROJECT_NAME="$PROJECT_PREFIX-$BITCOIN_CHAIN"
# create the lxc project as specified by PROJECT_NAME
if ! lxc project list | grep -q "$PROJECT_NAME"; then
lxc project create "$PROJECT_NAME"
lxc project set "$PROJECT_NAME" features.networks=true features.images=false features.storage.volumes=true
lxc project switch "$PROJECT_NAME"
fi
# default values are already at regtest mode.
if [ "$BITCOIN_CHAIN" = testnet ]; then
WWW_SSDATA_DISK_SIZE_GB=30
WWW_BACKUP_DISK_SIZE_GB=30
WWW_DOCKER_DISK_SIZE_GB=50
BTCPAYSERVER_SSDATA_DISK_SIZE_GB=30
BTCPAYSERVER_BACKUP_DISK_SIZE_GB=30
BTCPAYSERVER_DOCKER_DISK_SIZE_GB=100
LNPLAY_SERVER_SSDATA_DISK_SIZE_GB=20
LNPLAY_SERVER_BACKUP_DISK_SIZE_GB=20
LNPLAY_SSERVER_DOCKER_DISK_SIZE_GB=20
elif [ "$BITCOIN_CHAIN" = mainnet ]; then
WWW_SSDATA_DISK_SIZE_GB=40
WWW_BACKUP_DISK_SIZE_GB=40
WWW_DOCKER_DISK_SIZE_GB=1000
BTCPAYSERVER_SSDATA_DISK_SIZE_GB=30
BTCPAYSERVER_BACKUP_DISK_SIZE_GB=30
BTCPAYSERVER_DOCKER_DISK_SIZE_GB=500
LNPLAY_SERVER_SSDATA_DISK_SIZE_GB=20
LNPLAY_SERVER_BACKUP_DISK_SIZE_GB=20
LNPLAY_SSERVER_DOCKER_DISK_SIZE_GB=400
fi
export WWW_SSDATA_DISK_SIZE_GB="$WWW_SSDATA_DISK_SIZE_GB"
export WWW_BACKUP_DISK_SIZE_GB="$WWW_BACKUP_DISK_SIZE_GB"
export WWW_DOCKER_DISK_SIZE_GB="$WWW_DOCKER_DISK_SIZE_GB"
export BTCPAYSERVER_SSDATA_DISK_SIZE_GB="$BTCPAYSERVER_SSDATA_DISK_SIZE_GB"
export BTCPAYSERVER_BACKUP_DISK_SIZE_GB="$BTCPAYSERVER_BACKUP_DISK_SIZE_GB"
export BTCPAYSERVER_DOCKER_DISK_SIZE_GB="$BTCPAYSERVER_DOCKER_DISK_SIZE_GB"
done
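To make the parsing above concrete: the default DEPLOYMENT_STRING stubbed by ss-remote, "(dev|regtest),(staging|testnet)", yields two lxc projects:
# (dev|regtest)      -> PROJECT_NAME=dev-regtest       BITCOIN_CHAIN=regtest
# (staging|testnet)  -> PROJECT_NAME=staging-testnet   BITCOIN_CHAIN=testnet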

deployment/reset.sh Executable file

@ -0,0 +1,86 @@
#!/bin/bash
set -e
cd "$(dirname "$0")"
PURGE_LXD=false
# grab any modifications from the command line.
for i in "$@"; do
case $i in
--purge)
PURGE_LXD=true
shift
;;
*)
echo "Unexpected option: $1"
exit 1
;;
esac
done
source ../defaults.sh
./down.sh
# these only get initialized upon creation, so we MUST delete here so they get recreated.
if lxc profile list | grep -q "$BASE_IMAGE_VM_NAME"; then
lxc profile delete "$BASE_IMAGE_VM_NAME"
fi
if lxc image list | grep -q "$BASE_IMAGE_VM_NAME"; then
lxc image rm "$BASE_IMAGE_VM_NAME"
fi
if lxc image list | grep -q "$DOCKER_BASE_IMAGE_NAME"; then
lxc image rm "$DOCKER_BASE_IMAGE_NAME"
fi
CURRENT_PROJECT="$(lxc info | grep "project:" | awk '{print $2}')"
if ! lxc info | grep -q "project: default"; then
lxc project switch default
lxc project delete "$CURRENT_PROJECT"
fi
if [ "$PURGE_LXD" = true ]; then
if lxc profile show default | grep -q "root:"; then
lxc profile device remove default root
fi
if lxc profile show default | grep -q "eth0:"; then
lxc profile device remove default eth0
fi
if lxc network list --format csv -q --project default | grep -q lxdbr0; then
lxc network delete lxdbr0 --project default
fi
if lxc network list --format csv -q --project default | grep -q lxdbr1; then
lxc network delete lxdbr1 --project default
fi
# # create the testnet/mainnet blocks/chainstate subvolumes.
# for CHAIN in mainnet testnet; do
# for DATA in blocks chainstate; do
# if lxc storage volume list ss-base | grep -q "$CHAIN-$DATA"; then
# lxc storage volume delete ss-base "$CHAIN-$DATA"
# fi
# done
# done
echo "WARNING: ss-basae NOT DELETED. NEED TO TEST THIS SCRIPT"
# if lxc storage list --format csv | grep -q ss-base; then
# lxc storage delete ss-base
# fi
CURRENT_REMOTE="$(lxc remote get-default)"
if ! lxc remote get-default | grep -q "local"; then
lxc remote switch local
lxc remote remove "$CURRENT_REMOTE"
echo "INFO: The remote '$CURRENT_REMOTE' has been removed! You are now controlling your local instance."
fi
fi

deployment/restore.sh Executable file

@ -0,0 +1,4 @@
#!/bin/bash
set -exu

deployment/show.sh Executable file

@ -0,0 +1,42 @@
#!/bin/bash
set -e
cd "$(dirname "$0")"
. ./deployment_defaults.sh
. ./remote_env.sh
echo "Global Settings:"
lxc image list
lxc storage list
echo
echo
PROJECT_NAME="$(lxc info | grep "project:" | awk '{print $2}')"
export PROJECT_NAME="$PROJECT_NAME"
export PROJECT_PATH="$PROJECTS_PATH/$PROJECT_NAME"
echo
echo
echo "Active project: $PROJECT_NAME"
echo "----------------------------------------------------------"
echo " Networks:"
lxc network list
echo
echo " Storage Volumes:"
lxc storage volume list ss-base
echo
echo " Profiles:"
lxc profile list
echo
echo " Instances (VMs):"
lxc list

deployment/stop.sh Executable file

@ -0,0 +1,66 @@
#!/bin/bash
# https://www.sovereign-stack.org/ss-down/
set -eu
cd "$(dirname "$0")"
if lxc remote get-default -q | grep -q "local"; then
echo "ERROR: you are on the local lxc remote. Nothing to take down"
exit 1
fi
SERVER_TO_STOP=
OTHER_SITES_LIST=
# grab any modifications from the command line.
for i in "$@"; do
case $i in
--server=*)
SERVER_TO_STOP="${i#*=}"
shift
;;
*)
echo "Unexpected option: $1"
exit 1
;;
esac
done
if [ -z "$SERVER_TO_STOP" ]; then
echo "ERROR: you MUST specify a server to stop with '--server=www' for example."
exit 1
fi
. ./deployment_defaults.sh
. ./remote_env.sh
. ./project_env.sh
# let's bring down services on the remote deployment if necessary.
export DOMAIN_NAME="$PRIMARY_DOMAIN"
export SITE_PATH="$SITES_PATH/$PRIMARY_DOMAIN"
source "$SITE_PATH/site.conf"
source ./project/domain_env.sh
source ./domain_list.sh
if [ "$SERVER_TO_STOP" = www ]; then
DOCKER_HOST="ssh://ubuntu@$PRIMARY_WWW_FQDN" ./project/www/stop_docker_stacks.sh
fi
if [ "$SERVER_TO_STOP" = btcpayserver ]; then
if wait-for-it -t 5 "$BTCPAY_SERVER_FQDN":22; then
ssh "$BTCPAY_SERVER_FQDN" "bash -c $BTCPAY_SERVER_APPPATH/btcpay-down.sh"
else
echo "ERROR: the remote BTCPAY Server is not available on ssh."
exit 1
fi
fi
if [ "$SERVER_TO_STOP" = lnplayserver ]; then
DOCKER_HOST="ssh://ubuntu@$LNPLAY_SERVER_FQDN" ./project/lnplay/down.sh
fi
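An illustrative invocation; down.sh calls this script the same way, once per server:
./stop.sh --server=www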

@ -1,11 +1,41 @@
#!/bin/bash
set -eu
cd "$(dirname "$0")"
LXD_HOSTNAME="$1"
VIRTUAL_MACHINE=base
LXD_HOSTNAME=
SSDATA_VOLUME_NAME=
BACKUP_VOLUME_NAME=
# grab any modifications from the command line.
for i in "$@"; do
case $i in
--lxd-hostname=*)
LXD_HOSTNAME="${i#*=}"
shift
;;
--vm=*)
VIRTUAL_MACHINE="${i#*=}"
shift
;;
--ss-volume-name=*)
SSDATA_VOLUME_NAME="${i#*=}"
shift
;;
--backup-volume-name=*)
BACKUP_VOLUME_NAME="${i#*=}"
shift
;;
*)
echo "Unexpected option: $1"
exit 1
;;
esac
done
# generate the custom cloud-init file. Cloud init installs and configures sshd
SSH_AUTHORIZED_KEY=$(<"$SSH_HOME/id_rsa.pub")
SSH_AUTHORIZED_KEY=$(<"$SSH_PUBKEY_PATH")
eval "$(ssh-agent -s)"
ssh-add "$SSH_HOME/id_rsa"
export SSH_AUTHORIZED_KEY="$SSH_AUTHORIZED_KEY"
@ -17,24 +47,55 @@ YAML_PATH="$PROJECT_PATH/cloud-init/$FILENAME"
# If we are deploying the www, we attach the vm to the underlay via macvlan.
cat > "$YAML_PATH" <<EOF
config:
limits.cpu: "${DEV_CPU_COUNT}"
limits.memory: "${DEV_MEMORY_MB}MB"
EOF
if [ "$VIRTUAL_MACHINE" = base ]; then
cat >> "$YAML_PATH" <<EOF
limits.cpu: 4
limits.memory: 4096MB
EOF
fi
if [ "$VIRTUAL_MACHINE" = www ]; then
cat >> "$YAML_PATH" <<EOF
limits.cpu: "${WWW_SERVER_CPU_COUNT}"
limits.memory: "${WWW_SERVER_MEMORY_MB}MB"
EOF
fi
if [ "$VIRTUAL_MACHINE" = btcpayserver ]; then
cat >> "$YAML_PATH" <<EOF
limits.cpu: "${BTCPAY_SERVER_CPU_COUNT}"
limits.memory: "${BTCPAY_SERVER_MEMORY_MB}MB"
EOF
# if VIRTUAL_MACHINE=sovereign-stack then we are building the base image.
if [ "$LXD_HOSTNAME" = "sovereign-stack" ]; then
elif [ "$VIRTUAL_MACHINE" = lnplayserver ]; then
cat >> "$YAML_PATH" <<EOF
limits.cpu: "${LNPLAY_SERVER_CPU_COUNT}"
limits.memory: "${LNPLAY_SERVER_MEMORY_MB}MB"
EOF
fi
. ./target.sh
# if VIRTUAL_MACHINE=base, then we are building the base image.
if [ "$VIRTUAL_MACHINE" = base ]; then
# this is for the base image only...
cat >> "$YAML_PATH" <<EOF
user.vendor-data: |
#cloud-config
apt_mirror: http://us.archive.ubuntu.com/ubuntu/
package_update: true
package_upgrade: false
package_reboot_if_required: false
preserve_hostname: false
fqdn: sovereign-stack
fqdn: ${BASE_IMAGE_VM_NAME}
packages:
- curl
@ -69,107 +130,47 @@ if [ "$LXD_HOSTNAME" = "sovereign-stack" ]; then
ssh_authorized_keys:
- ${SSH_AUTHORIZED_KEY}
EOF
if [ "$REGISTRY_URL" != "https://index.docker.io/v1" ]; then
cat >> "$YAML_PATH" <<EOF
write_files:
- path: /home/ubuntu/docker.asc
content: |
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBFit2ioBEADhWpZ8/wvZ6hUTiXOwQHXMAlaFHcPH9hAtr4F1y2+OYdbtMuth
lqqwp028AqyY+PRfVMtSYMbjuQuu5byyKR01BbqYhuS3jtqQmljZ/bJvXqnmiVXh
38UuLa+z077PxyxQhu5BbqntTPQMfiyqEiU+BKbq2WmANUKQf+1AmZY/IruOXbnq
L4C1+gJ8vfmXQt99npCaxEjaNRVYfOS8QcixNzHUYnb6emjlANyEVlZzeqo7XKl7
UrwV5inawTSzWNvtjEjj4nJL8NsLwscpLPQUhTQ+7BbQXAwAmeHCUTQIvvWXqw0N
cmhh4HgeQscQHYgOJjjDVfoY5MucvglbIgCqfzAHW9jxmRL4qbMZj+b1XoePEtht
ku4bIQN1X5P07fNWzlgaRL5Z4POXDDZTlIQ/El58j9kp4bnWRCJW0lya+f8ocodo
vZZ+Doi+fy4D5ZGrL4XEcIQP/Lv5uFyf+kQtl/94VFYVJOleAv8W92KdgDkhTcTD
G7c0tIkVEKNUq48b3aQ64NOZQW7fVjfoKwEZdOqPE72Pa45jrZzvUFxSpdiNk2tZ
XYukHjlxxEgBdC/J3cMMNRE1F4NCA3ApfV1Y7/hTeOnmDuDYwr9/obA8t016Yljj
q5rdkywPf4JF8mXUW5eCN1vAFHxeg9ZWemhBtQmGxXnw9M+z6hWwc6ahmwARAQAB
tCtEb2NrZXIgUmVsZWFzZSAoQ0UgZGViKSA8ZG9ja2VyQGRvY2tlci5jb20+iQI3
BBMBCgAhBQJYrefAAhsvBQsJCAcDBRUKCQgLBRYCAwEAAh4BAheAAAoJEI2BgDwO
v82IsskP/iQZo68flDQmNvn8X5XTd6RRaUH33kXYXquT6NkHJciS7E2gTJmqvMqd
tI4mNYHCSEYxI5qrcYV5YqX9P6+Ko+vozo4nseUQLPH/ATQ4qL0Zok+1jkag3Lgk
jonyUf9bwtWxFp05HC3GMHPhhcUSexCxQLQvnFWXD2sWLKivHp2fT8QbRGeZ+d3m
6fqcd5Fu7pxsqm0EUDK5NL+nPIgYhN+auTrhgzhK1CShfGccM/wfRlei9Utz6p9P
XRKIlWnXtT4qNGZNTN0tR+NLG/6Bqd8OYBaFAUcue/w1VW6JQ2VGYZHnZu9S8LMc
FYBa5Ig9PxwGQOgq6RDKDbV+PqTQT5EFMeR1mrjckk4DQJjbxeMZbiNMG5kGECA8
g383P3elhn03WGbEEa4MNc3Z4+7c236QI3xWJfNPdUbXRaAwhy/6rTSFbzwKB0Jm
ebwzQfwjQY6f55MiI/RqDCyuPj3r3jyVRkK86pQKBAJwFHyqj9KaKXMZjfVnowLh
9svIGfNbGHpucATqREvUHuQbNnqkCx8VVhtYkhDb9fEP2xBu5VvHbR+3nfVhMut5
G34Ct5RS7Jt6LIfFdtcn8CaSas/l1HbiGeRgc70X/9aYx/V/CEJv0lIe8gP6uDoW
FPIZ7d6vH+Vro6xuWEGiuMaiznap2KhZmpkgfupyFmplh0s6knymuQINBFit2ioB
EADneL9S9m4vhU3blaRjVUUyJ7b/qTjcSylvCH5XUE6R2k+ckEZjfAMZPLpO+/tF
M2JIJMD4SifKuS3xck9KtZGCufGmcwiLQRzeHF7vJUKrLD5RTkNi23ydvWZgPjtx
Q+DTT1Zcn7BrQFY6FgnRoUVIxwtdw1bMY/89rsFgS5wwuMESd3Q2RYgb7EOFOpnu
w6da7WakWf4IhnF5nsNYGDVaIHzpiqCl+uTbf1epCjrOlIzkZ3Z3Yk5CM/TiFzPk
z2lLz89cpD8U+NtCsfagWWfjd2U3jDapgH+7nQnCEWpROtzaKHG6lA3pXdix5zG8
eRc6/0IbUSWvfjKxLLPfNeCS2pCL3IeEI5nothEEYdQH6szpLog79xB9dVnJyKJb
VfxXnseoYqVrRz2VVbUI5Blwm6B40E3eGVfUQWiux54DspyVMMk41Mx7QJ3iynIa
1N4ZAqVMAEruyXTRTxc9XW0tYhDMA/1GYvz0EmFpm8LzTHA6sFVtPm/ZlNCX6P1X
zJwrv7DSQKD6GGlBQUX+OeEJ8tTkkf8QTJSPUdh8P8YxDFS5EOGAvhhpMBYD42kQ
pqXjEC+XcycTvGI7impgv9PDY1RCC1zkBjKPa120rNhv/hkVk/YhuGoajoHyy4h7
ZQopdcMtpN2dgmhEegny9JCSwxfQmQ0zK0g7m6SHiKMwjwARAQABiQQ+BBgBCAAJ
BQJYrdoqAhsCAikJEI2BgDwOv82IwV0gBBkBCAAGBQJYrdoqAAoJEH6gqcPyc/zY
1WAP/2wJ+R0gE6qsce3rjaIz58PJmc8goKrir5hnElWhPgbq7cYIsW5qiFyLhkdp
YcMmhD9mRiPpQn6Ya2w3e3B8zfIVKipbMBnke/ytZ9M7qHmDCcjoiSmwEXN3wKYI
mD9VHONsl/CG1rU9Isw1jtB5g1YxuBA7M/m36XN6x2u+NtNMDB9P56yc4gfsZVES
KA9v+yY2/l45L8d/WUkUi0YXomn6hyBGI7JrBLq0CX37GEYP6O9rrKipfz73XfO7
JIGzOKZlljb/D9RX/g7nRbCn+3EtH7xnk+TK/50euEKw8SMUg147sJTcpQmv6UzZ
cM4JgL0HbHVCojV4C/plELwMddALOFeYQzTif6sMRPf+3DSj8frbInjChC3yOLy0
6br92KFom17EIj2CAcoeq7UPhi2oouYBwPxh5ytdehJkoo+sN7RIWua6P2WSmon5
U888cSylXC0+ADFdgLX9K2zrDVYUG1vo8CX0vzxFBaHwN6Px26fhIT1/hYUHQR1z
VfNDcyQmXqkOnZvvoMfz/Q0s9BhFJ/zU6AgQbIZE/hm1spsfgvtsD1frZfygXJ9f
irP+MSAI80xHSf91qSRZOj4Pl3ZJNbq4yYxv0b1pkMqeGdjdCYhLU+LZ4wbQmpCk
SVe2prlLureigXtmZfkqevRz7FrIZiu9ky8wnCAPwC7/zmS18rgP/17bOtL4/iIz
QhxAAoAMWVrGyJivSkjhSGx1uCojsWfsTAm11P7jsruIL61ZzMUVE2aM3Pmj5G+W
9AcZ58Em+1WsVnAXdUR//bMmhyr8wL/G1YO1V3JEJTRdxsSxdYa4deGBBY/Adpsw
24jxhOJR+lsJpqIUeb999+R8euDhRHG9eFO7DRu6weatUJ6suupoDTRWtr/4yGqe
dKxV3qQhNLSnaAzqW/1nA3iUB4k7kCaKZxhdhDbClf9P37qaRW467BLCVO/coL3y
Vm50dwdrNtKpMBh3ZpbB1uJvgi9mXtyBOMJ3v8RZeDzFiG8HdCtg9RvIt/AIFoHR
H3S+U79NT6i0KPzLImDfs8T7RlpyuMc4Ufs8ggyg9v3Ae6cN3eQyxcK3w0cbBwsh
/nQNfsA6uu+9H7NhbehBMhYnpNZyrHzCmzyXkauwRAqoCbGCNykTRwsur9gS41TQ
M8ssD1jFheOJf3hODnkKU+HKjvMROl1DK7zdmLdNzA1cvtZH/nCC9KPj1z8QC47S
xx+dTZSx4ONAhwbS/LN3PoKtn8LPjY9NP9uDWI+TWYquS2U+KHDrBDlsgozDbs/O
jCxcpDzNmXpWQHEtHU7649OXHP7UeNST1mCUCH5qdank0V1iejF6/CfTFU4MfcrG
YT90qFF93M3v01BbxP+EIY2/9tiIPbrd
=0YYh
-----END PGP PUBLIC KEY BLOCK-----
- path: /etc/ssh/ssh_config
content: |
Port 22
ListenAddress 0.0.0.0
Protocol 2
ChallengeResponseAuthentication no
PasswordAuthentication no
UsePAM no
LogLevel INFO
- path: /etc/docker/daemon.json
permissions: 0644
owner: root
content: |
{
{
"registry-mirrors": [
"${REGISTRY_URL}"
],
"labels": [
"PROJECT_COMMIT=${TARGET_PROJECT_GIT_COMMIT}"
]
}
}
runcmd:
- cat /home/ubuntu/docker.asc | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
- sudo rm /home/ubuntu/docker.asc
- echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list
- sudo apt-get update
- sudo apt-get install -y docker-ce docker-ce-cli containerd.io
- echo "alias ll='ls -lah'" >> /home/ubuntu/.bash_profile
- sudo curl -s -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
- sudo chmod +x /usr/local/bin/docker-compose
- sudo apt-get install -y openssh-server
EOF
else
# all other machines.
fi
fi
if [ "$VIRTUAL_MACHINE" = base ]; then
cat >> "$YAML_PATH" <<EOF
runcmd:
- sudo mkdir -m 0755 -p /etc/apt/keyrings
- curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
- echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list
- sudo apt-get update
- sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
- sudo DEBIAN_FRONTEND=noninteractive apt-get install -y openssh-server
EOF
fi
if [ "$VIRTUAL_MACHINE" != base ]; then
# all other machines that are not the base image
cat >> "$YAML_PATH" <<EOF
user.vendor-data: |
#cloud-config
@ -178,88 +179,141 @@ else
package_upgrade: false
package_reboot_if_required: false
preserve_hostname: false
preserve_hostname: true
fqdn: ${FQDN}
resize_rootfs: false
disk_setup:
/dev/sdb:
table_type: 'gpt'
layout: true
overwrite: false
fs_setup:
- label: docker-data
filesystem: 'ext4'
device: '/dev/sdb1'
overwrite: false
mounts:
- [ sdb, /var/lib/docker ]
mount_default_fields: [ None, None, "auto", "defaults,nofail", "0", "2" ]
EOF
fi
if [ "$VIRTUAL_MACHINE" != base ]; then
cat >> "$YAML_PATH" <<EOF
user.network-config: |
version: 2
ethernets:
enp5s0:
dhcp4: true
dhcp4-overrides:
route-metric: 50
match:
macaddress: ${MAC_ADDRESS_TO_PROVISION}
set-name: enp5s0
enp6s0:
dhcp4: false
EOF
if [[ "$LXD_HOSTNAME" = $WWW_HOSTNAME-* ]]; then
cat >> "$YAML_PATH" <<EOF
addresses: [10.139.144.5/24]
nameservers:
addresses: [10.139.144.1]
EOF
fi
if [[ "$LXD_HOSTNAME" = $BTCPAY_HOSTNAME-* ]]; then
cat >> "$YAML_PATH" <<EOF
addresses: [10.139.144.10/24]
nameservers:
addresses: [10.139.144.1]
EOF
fi
fi
# If we are deploying the www, we attach the vm to the underlay via macvlan.
# TODO try to get DHCP working reliably.
if [ "$VIRTUAL_MACHINE" = btcpayserver ]; then
cat >> "$YAML_PATH" <<EOF
enp6s0:
addresses:
- 10.10.10.66/24
EOF
fi
if [ "$VIRTUAL_MACHINE" = www ]; then
cat >> "$YAML_PATH" <<EOF
enp6s0:
addresses:
- 10.10.10.65/24
EOF
fi
# All profiles get a root disk and cloud-init config.
cat >> "$YAML_PATH" <<EOF
description: Default LXD profile for ${FILENAME}
devices:
root:
path: /
pool: sovereign-stack
pool: ss-base
type: disk
config:
source: cloud-init:config
type: disk
EOF
# Stub out the network piece for the base image.
if [ "$LXD_HOSTNAME" = sovereign-stack ] ; then
if [ "$VIRTUAL_MACHINE" != base ]; then
cat >> "$YAML_PATH" <<EOF
ss-data:
path: ${REMOTE_DATA_PATH}
pool: ss-base
source: ${SSDATA_VOLUME_NAME}
type: disk
ss-backup:
path: ${REMOTE_BACKUP_PATH}
pool: ss-base
source: ${BACKUP_VOLUME_NAME}
type: disk
EOF
fi
# If we are deploying the www, we attach the vm to the underlay via macvlan.
cat >> "$YAML_PATH" <<EOF
enp5s0:
name: enp5s0
nictype: macvlan
parent: ${DATA_PLANE_MACVLAN_INTERFACE}
# Stub out the network piece for the base image.
if [ "$VIRTUAL_MACHINE" = base ]; then
cat >> "$YAML_PATH" <<EOF
enp6s0:
name: enp6s0
network: lxdbr0
type: nic
name: ${FILENAME}
EOF
else
# If we are deploying the www, we attach the vm to the underlay via macvlan.
cat >> "$YAML_PATH" <<EOF
# If we are deploying a VM that attaches to the network underlay.
cat >> "$YAML_PATH" <<EOF
enp5s0:
nictype: macvlan
parent: ${DATA_PLANE_MACVLAN_INTERFACE}
type: nic
EOF
if [ "$VIRTUAL_MACHINE" = www ] || [ "$VIRTUAL_MACHINE" = btcpayserver ]; then
cat >> "$YAML_PATH" <<EOF
enp6s0:
name: enp6s0
network: lxdbrSS
network: ss-ovn
type: nic
EOF
fi
name: ${FILENAME}
cat >> "$YAML_PATH" <<EOF
name: ${PRIMARY_DOMAIN}
EOF
fi
# let's create a profile for the BCM TYPE-1 VMs. This is per VM.
if ! lxc profile list --format csv | grep -q "$LXD_HOSTNAME"; then
lxc profile create "$LXD_HOSTNAME"
if [ "$VIRTUAL_MACHINE" = base ]; then
if ! lxc profile list --format csv --project default | grep -q "$LXD_HOSTNAME"; then
lxc profile create "$LXD_HOSTNAME" --project default
fi
# configure the profile with our generated cloud-init.yml file.
cat "$YAML_PATH" | lxc profile edit "$LXD_HOSTNAME" --project default
else
if ! lxc profile list --format csv | grep -q "$LXD_HOSTNAME"; then
lxc profile create "$LXD_HOSTNAME"
fi
# configure the profile with our generated cloud-init.yml file.
cat "$YAML_PATH" | lxc profile edit "$LXD_HOSTNAME"
fi
# configure the profile with our generated cloud-init.yml file.
cat "$YAML_PATH" | lxc profile edit "$LXD_HOSTNAME"

deployment/target.sh Executable file

@ -0,0 +1,3 @@
#!/bin/bash
export TARGET_PROJECT_GIT_COMMIT=ca069c7decdc74d2719a7f34927bda49159da2ae

deployment/up.sh Executable file

@ -0,0 +1,460 @@
#!/bin/bash
set -eu
cd "$(dirname "$0")"
. ./target.sh
# check to ensure dependencies are met.
for cmd in wait-for-it dig rsync sshfs lxc; do
if ! command -v "$cmd" >/dev/null 2>&1; then
echo "This script requires \"${cmd}\" to be installed. Please run 'install.sh'."
exit 1
fi
done
# do a spot check; if we are on production warn.
if lxc remote get-default | grep -q "production"; then
echo "WARNING: You are running command against a production system!"
echo ""
# check if there are any uncommitted changes. It's dangerous to
# alter production systems when you have commits to make or changes to stash.
if git update-index --refresh | grep -q "needs update"; then
echo "ERROR: You have uncommitted changes! You MUST commit or stash all changes to continue."
exit 1
fi
RESPONSE=
read -r -p " Are you sure you want to continue (y) ": RESPONSE
if [ "$RESPONSE" != "y" ]; then
echo "STOPPING."
exit 1
fi
fi
OTHER_SITES_LIST=
PRIMARY_DOMAIN=
RUN_CERT_RENEWAL=true
SKIP_BASE_IMAGE_CREATION=false
RESTORE_WWW=false
RESTORE_CERTS=false
BACKUP_CERTS=true
BACKUP_BTCPAY=true
SKIP_BTCPAYSERVER=false
SKIP_WWW=false
SKIP_LNPLAY_SERVER=false
BACKUP_BTCPAY_ARCHIVE_PATH=
RESTORE_BTCPAY=false
UPDATE_BTCPAY=false
REMOTE_NAME="$(lxc remote get-default)"
USER_SAYS_YES=false
WWW_SERVER_MAC_ADDRESS=
BTCPAY_SERVER_MAC_ADDRESS=
LNPLAY_SERVER_MAC_ADDRESS=
# grab any modifications from the command line.
for i in "$@"; do
case $i in
--restore-certs)
RESTORE_CERTS=true
shift
;;
--restore-wwwserver)
RESTORE_WWW=true
shift
;;
--restore-btcpay)
RESTORE_BTCPAY=true
shift
;;
--skip-btcpayserver)
SKIP_BTCPAYSERVER=true
shift
;;
--skip-wwwserver)
SKIP_WWW=true
shift
;;
--skip-lnplayserver)
SKIP_LNPLAY_SERVER=true
shift
;;
--backup-btcpayserver)
BACKUP_BTCPAY=true
shift
;;
--backup-archive-path=*)
BACKUP_BTCPAY_ARCHIVE_PATH="${i#*=}"
shift
;;
--update-btcpay)
UPDATE_BTCPAY=true
shift
;;
--skip-base-image)
SKIP_BASE_IMAGE_CREATION=true
shift
;;
--no-cert-renew)
RUN_CERT_RENEWAL=false
shift
;;
-y)
USER_SAYS_YES=true
shift
;;
*)
echo "Unexpected option: $1"
exit 1
;;
esac
done
if [ "$RESTORE_BTCPAY" = true ] && [ -z "$BACKUP_BTCPAY_ARCHIVE_PATH" ]; then
echo "ERROR: Use the '--backup-archive-path=/path/to/btcpay/archive.tar.gz' option when restoring btcpay server."
exit 1
fi
if [ "$RESTORE_BTCPAY" = true ] && [ ! -f "$BACKUP_BTCPAY_ARCHIVE_PATH" ]; then
echo "ERROR: The backup archive path you specified DOES NOT exist!"
exit 1
fi
. ./remote_env.sh
export REGISTRY_DOCKER_IMAGE="registry:2"
export BACKUP_CERTS="$BACKUP_CERTS"
export RESTORE_BTCPAY="$RESTORE_BTCPAY"
export RESTORE_WWW="$RESTORE_WWW"
export BACKUP_BTCPAY="$BACKUP_BTCPAY"
export RUN_CERT_RENEWAL="$RUN_CERT_RENEWAL"
export REMOTE_NAME="$REMOTE_NAME"
export REMOTE_PATH="$REMOTES_PATH/$REMOTE_NAME"
export USER_SAYS_YES="$USER_SAYS_YES"
export BACKUP_BTCPAY_ARCHIVE_PATH="$BACKUP_BTCPAY_ARCHIVE_PATH"
export RESTORE_CERTS="$RESTORE_CERTS"
# todo convert this to Trezor-T
SSH_PUBKEY_PATH="$SSH_HOME/id_rsa.pub"
export SSH_PUBKEY_PATH="$SSH_PUBKEY_PATH"
# ensure our remote path is created.
mkdir -p "$REMOTE_PATH"
REMOTE_DEFINITION="$REMOTE_PATH/remote.conf"
if [ ! -f "$REMOTE_DEFINITION" ]; then
echo "ERROR: The remote definition could not be found. You may need to re-run 'ss-remote'."
exit 1
fi
export REMOTE_DEFINITION="$REMOTE_DEFINITION"
source "$REMOTE_DEFINITION"
export LXD_REMOTE_PASSWORD="$LXD_REMOTE_PASSWORD"
export DEPLOYMENT_STRING="$DEPLOYMENT_STRING"
# this is our password generation mechanism. Relying on GPG for secure password generation
function new_pass {
gpg --gen-random --armor 1 25
}
function stub_site_definition {
mkdir -p "$SITE_PATH" "$PROJECT_PATH/sites"
# create a symlink from the PROJECT_PATH/sites/DOMAIN_NAME to the ss-sites/domain name
DOMAIN_SYMLINK_PATH="$PROJECT_PATH/sites/$DOMAIN_NAME"
if [ ! -L "$DOMAIN_SYMLINK_PATH" ]; then
ln -r -s "$SITE_PATH" "$DOMAIN_SYMLINK_PATH"
fi
if [ ! -f "$SITE_PATH/site.conf" ]; then
# check to see if the conf file exists; exit if not.
SITE_DEFINITION_PATH="$SITE_PATH/site.conf"
if [ ! -f "$SITE_DEFINITION_PATH" ]; then
# stub out a site.conf with new passwords.
cat >"$SITE_DEFINITION_PATH" <<EOL
# https://www.sovereign-stack.org/ss-up/#siteconf
DOMAIN_NAME="${DOMAIN_NAME}"
# BTCPAY_ALT_NAMES="tip,store,pay,send"
SITE_LANGUAGE_CODES="en"
DUPLICITY_BACKUP_PASSPHRASE="$(new_pass)"
DEPLOY_GHOST=true
DEPLOY_NEXTCLOUD=false
DEPLOY_NOSTR=false
NOSTR_ACCOUNT_PUBKEY=
DEPLOY_GITEA=false
GHOST_MYSQL_PASSWORD="$(new_pass)"
GHOST_MYSQL_ROOT_PASSWORD="$(new_pass)"
NEXTCLOUD_MYSQL_PASSWORD="$(new_pass)"
NEXTCLOUD_MYSQL_ROOT_PASSWORD="$(new_pass)"
GITEA_MYSQL_PASSWORD="$(new_pass)"
GITEA_MYSQL_ROOT_PASSWORD="$(new_pass)"
#GHOST_DEPLOY_SMTP=true
#MAILGUN_FROM_ADDRESS=false
#MAILGUN_SMTP_USERNAME=
#MAILGUN_SMTP_PASSWORD=
EOL
chmod 0744 "$SITE_DEFINITION_PATH"
echo "INFO: we stubbed a new site.conf for you at '$SITE_DEFINITION_PATH'. Go update it!"
exit 1
fi
fi
}
PROJECT_NAME="$(lxc info | grep "project:" | awk '{print $2}')"
export PROJECT_NAME="$PROJECT_NAME"
export PROJECT_PATH="$PROJECTS_PATH/$PROJECT_NAME"
export SKIP_BTCPAYSERVER="$SKIP_BTCPAYSERVER"
export SKIP_WWW="$SKIP_WWW"
export SKIP_LNPLAY_SERVER="$SKIP_LNPLAY_SERVER"
mkdir -p "$PROJECT_PATH" "$REMOTE_PATH/projects"
# create a symlink from ./remotepath/projects/project
PROJECT_SYMLINK="$REMOTE_PATH/projects/$PROJECT_NAME"
if [ ! -L "$PROJECT_SYMLINK" ]; then
ln -r -s "$PROJECT_PATH" "$PROJECT_SYMLINK"
fi
# check to see if the conf file exists; exit if not.
PROJECT_DEFINITION_PATH="$PROJECT_PATH/project.conf"
if [ ! -f "$PROJECT_DEFINITION_PATH" ]; then
# stub out a project.conf
cat >"$PROJECT_DEFINITION_PATH" <<EOL
# see https://www.sovereign-stack.org/ss-up/#projectconf for more info.
PRIMARY_DOMAIN="domain0.tld"
# OTHER_SITES_LIST="domain1.tld,domain2.tld,domain3.tld"
WWW_SERVER_MAC_ADDRESS=
# WWW_SSDATA_DISK_SIZE_GB=100
# WWW_SERVER_CPU_COUNT="6"
# WWW_SERVER_MEMORY_MB="4096"
BTCPAY_SERVER_MAC_ADDRESS=
# BTCPAY_SERVER_CPU_COUNT="4"
# BTCPAY_SERVER_MEMORY_MB="4096"
LNPLAY_SERVER_MAC_ADDRESS=
# LNPLAY_SERVER_CPU_COUNT="4"
# LNPLAY_SERVER_MEMORY_MB="4096"
EOL
chmod 0744 "$PROJECT_DEFINITION_PATH"
echo "INFO: we stubbed a new project.conf for you at '$PROJECT_DEFINITION_PATH'. Go update it!"
echo "INFO: Learn more at https://www.sovereign-stack.org/ss-up/"
exit 1
fi
. ./project_env.sh
if [ -z "$PRIMARY_DOMAIN" ]; then
echo "ERROR: The PRIMARY_DOMAIN is not specified. Check your project.conf."
exit 1
fi
if [ -z "$WWW_SERVER_MAC_ADDRESS" ]; then
echo "WARNING: the WWW_SERVER_MAC_ADDRESS is not specified. Check your project.conf."
fi
if [ -z "$BTCPAY_SERVER_MAC_ADDRESS" ]; then
echo "WARNING: the BTCPAY_SERVER_MAC_ADDRESS is not specified. Check your project.conf."
fi
if [ -z "$LNPLAY_SERVER_MAC_ADDRESS" ]; then
echo "WARNING: the LNPLAY_SERVER_MAC_ADDRESS is not specified. Check your project.conf."
fi
source ./domain_list.sh
# let's provision our primary domain first.
export DOMAIN_NAME="$PRIMARY_DOMAIN"
export PRIMARY_DOMAIN="$PRIMARY_DOMAIN"
export SITE_PATH="$SITES_PATH/$DOMAIN_NAME"
stub_site_definition
# bring the VMs up under the primary domain name.
export UPDATE_BTCPAY="$UPDATE_BTCPAY"
# iterate over all our server endpoints and provision them if needed.
# www
VPS_HOSTNAME=
. ./base.sh
if ! lxc image list --format csv | grep -q "$DOCKER_BASE_IMAGE_NAME"; then
# create the lxd base image.
if [ "$SKIP_BASE_IMAGE_CREATION" = false ]; then
./create_lxc_base.sh
fi
fi
for VIRTUAL_MACHINE in www btcpayserver lnplayserver; do
if [ "$VIRTUAL_MACHINE" = btcpayserver ] && [ -z "$BTCPAY_SERVER_MAC_ADDRESS" ]; then
continue
fi
if [ "$VIRTUAL_MACHINE" = lnplayserver ] && [ -z "$LNPLAY_SERVER_MAC_ADDRESS" ]; then
continue
fi
if [ "$VIRTUAL_MACHINE" = www ] && [ -z "$WWW_SERVER_MAC_ADDRESS" ]; then
continue
fi
export VIRTUAL_MACHINE="$VIRTUAL_MACHINE"
FQDN=
export SITE_PATH="$SITES_PATH/$DOMAIN_NAME"
source "$SITE_PATH/site.conf"
source ./project/domain_env.sh
# VALIDATE THE INPUT from the ENVFILE
if [ -z "$DOMAIN_NAME" ]; then
echo "ERROR: DOMAIN_NAME not specified in your site.conf."
exit 1
fi
# Goal is to get the macvlan interface.
LXD_SS_CONFIG_LINE=
if lxc network list --format csv --project=default | grep lxdbr0 | grep -q "ss-config"; then
LXD_SS_CONFIG_LINE="$(lxc network list --format csv --project=default | grep lxdbr0 | grep ss-config)"
fi
if [ -z "$LXD_SS_CONFIG_LINE" ]; then
echo "ERROR: the MACVLAN interface has not been specified. You may need to run 'ss-remote' again."
exit 1
fi
CONFIG_ITEMS="$(echo "$LXD_SS_CONFIG_LINE" | awk -F'"' '{print $2}')"
DATA_PLANE_MACVLAN_INTERFACE="$(echo "$CONFIG_ITEMS" | cut -d ',' -f2)"
export DATA_PLANE_MACVLAN_INTERFACE="$DATA_PLANE_MACVLAN_INTERFACE"
# Now let's switch to the new project to ensure new resources are created under the project scope.
if ! lxc info | grep "project:" | grep -q "$PROJECT_NAME"; then
lxc project switch "$PROJECT_NAME"
fi
# check if the OVN network exists in this project.
if ! lxc network list | grep -q "ss-ovn"; then
lxc network create ss-ovn --type=ovn network=lxdbr1 ipv6.address=none
fi
export MAC_ADDRESS_TO_PROVISION=
export VPS_HOSTNAME="$VPS_HOSTNAME"
export FQDN="$VPS_HOSTNAME.$DOMAIN_NAME"
if [ "$VIRTUAL_MACHINE" = www ] && [ -n "$WWW_SERVER_MAC_ADDRESS" ]; then
FQDN="$WWW_HOSTNAME.$DOMAIN_NAME"
VPS_HOSTNAME="$WWW_HOSTNAME"
MAC_ADDRESS_TO_PROVISION="$WWW_SERVER_MAC_ADDRESS"
elif [ "$VIRTUAL_MACHINE" = btcpayserver ] && [ -n "$BTCPAY_SERVER_MAC_ADDRESS" ]; then
FQDN="$BTCPAY_SERVER_HOSTNAME.$DOMAIN_NAME"
VPS_HOSTNAME="$BTCPAY_SERVER_HOSTNAME"
MAC_ADDRESS_TO_PROVISION="$BTCPAY_SERVER_MAC_ADDRESS"
elif [ "$VIRTUAL_MACHINE" = lnplayserver ] && [ -n "$LNPLAY_SERVER_MAC_ADDRESS" ]; then
FQDN="$LNPLAY_SERVER_HOSTNAME.$DOMAIN_NAME"
VPS_HOSTNAME="$LNPLAY_SERVER_HOSTNAME"
MAC_ADDRESS_TO_PROVISION="$LNPLAY_SERVER_MAC_ADDRESS"
elif [ "$VIRTUAL_MACHINE" = "$BASE_IMAGE_VM_NAME" ]; then
export FQDN="$BASE_IMAGE_VM_NAME"
else
echo "ERROR: VIRTUAL_MACHINE not within allowable bounds."
exit 1
fi
export FQDN="$FQDN"
export LXD_VM_NAME="${FQDN//./-}"
export MAC_ADDRESS_TO_PROVISION="$MAC_ADDRESS_TO_PROVISION"
export PROJECT_PATH="$PROJECT_PATH"
./deploy_vm.sh
done
# let's stub out the rest of our site definitions, if any.
for DOMAIN_NAME in ${OTHER_SITES_LIST//,/ }; do
export DOMAIN_NAME="$DOMAIN_NAME"
export SITE_PATH="$SITES_PATH/$DOMAIN_NAME"
# stub out the site definition if it doesn't exist.
stub_site_definition
done
if [ "$SKIP_BTCPAYSERVER" = false ]; then
if [ -n "$BTCPAY_SERVER_MAC_ADDRESS" ]; then
export DOCKER_HOST="ssh://ubuntu@$BTCPAY_SERVER_FQDN"
./project/btcpayserver/go.sh
fi
fi
if [ "$SKIP_WWW" = false ]; then
# now let's run the www and btcpay-specific provisioning scripts.
if [ -n "$WWW_SERVER_MAC_ADDRESS" ]; then
export DOCKER_HOST="ssh://ubuntu@$WWW_FQDN"
# enable docker swarm mode so we can support docker stacks.
if docker info | grep -q "Swarm: inactive"; then
docker swarm init --advertise-addr enp6s0
fi
./project/www/go.sh
fi
fi
# don't run lnplay stuff if the user specifies --skip-lnplayserver
if [ "$SKIP_LNPLAY_SERVER" = false ]; then
# now let's run the lnplay-specific provisioning scripts.
if [ -n "$LNPLAY_SERVER_MAC_ADDRESS" ]; then
export DOCKER_HOST="ssh://ubuntu@$LNPLAY_SERVER_FQDN"
# set the active env to our LNPLAY_SERVER_FQDN
cat >./project/lnplay/active_env.txt <<EOL
${LNPLAY_SERVER_FQDN}
EOL
LNPLAY_ENV_FILE=./project/lnplay/environments/"$LNPLAY_SERVER_FQDN"
# only stub out the file if it doesn't exist. otherwise we leave it be.
if [ ! -f "$LNPLAY_ENV_FILE" ]; then
# and we have to set our environment file as well.
cat > "$LNPLAY_ENV_FILE" <<EOL
DOCKER_HOST=ssh://ubuntu@${LNPLAY_SERVER_FQDN}
DOMAIN_NAME=${PRIMARY_DOMAIN}
ENABLE_TLS=true
BTC_CHAIN=${BITCOIN_CHAIN}
CLN_COUNT=200
CHANNEL_SETUP=none
LNPLAY_SERVER_PATH=${SITES_PATH}/${PRIMARY_DOMAIN}/lnplayserver
EOL
fi
bash -c "./project/lnplay/up.sh -y"
fi
fi
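Illustrative invocations using the flags parsed above (the archive path is hypothetical):
ss-up --skip-base-image --no-cert-renew
ss-up --restore-btcpay --backup-archive-path=/path/to/btcpay/archive.tar.gz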

@ -2,7 +2,33 @@
set -e
LXC_INSTANCE_NAME="$1"
LXC_INSTANCE_NAME=
# grab any modifications from the command line.
for i in "$@"; do
case $i in
--lxd-name=*)
LXC_INSTANCE_NAME="${i#*=}"
shift
;;
*)
echo "Unexpected option: $1"
exit 1
;;
esac
done
# if the invoker did not set the instance name, throw an error.
if [ -z "$LXC_INSTANCE_NAME" ]; then
echo "ERROR: The lxc instance name was not specified. Use '--lxc-name' when calling wait_for_lxc_ip.sh."
exit 1
fi
if ! lxc list --format csv | grep -q "$LXC_INSTANCE_NAME"; then
echo "ERROR: the lxc instance '$LXC_INSTANCE_NAME' does not exist."
exit 1
fi
IP_V4_ADDRESS=
while true; do
IP_V4_ADDRESS="$(lxc list "$LXC_INSTANCE_NAME" --format csv --columns=4 | grep enp5s0 | grep -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}')" || true
@ -10,7 +36,6 @@ while true; do
if [ -n "$IP_V4_ADDRESS" ]; then
# give the machine extra time to spin up.
wait-for-it -t 300 "$IP_V4_ADDRESS:22"
echo ""
break
else
sleep 1
@ -18,11 +43,9 @@ while true; do
fi
done
# Let's remove any entry in our known_hosts, then add it back.
# we are using the IP address here so we don't have to rely on external DNS
# configuration for the base image preparation.
ssh-keygen -R "$IP_V4_ADDRESS"
# wait for cloud-init to complete before returning.
while lxc exec "$LXC_INSTANCE_NAME" -- [ ! -f /var/lib/cloud/instance/boot-finished ]; do
sleep 1
done
ssh-keyscan -H -t ecdsa "$IP_V4_ADDRESS" >> "$SSH_HOME/known_hosts"
ssh "ubuntu@$IP_V4_ADDRESS" sudo chown -R ubuntu:ubuntu /home/ubuntu
sleep 1
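# Usage note: install.sh and manage.sh source this file (". ./management/wait_for_lxc_ip.sh")
# so that IP_V4_ADDRESS remains defined in the caller's shell. A standalone invocation is a
# sketch along these lines, assuming the named instance already exists on the local LXD remote:
#   ./wait_for_lxc_ip.sh --lxd-name=ss-mgmt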

View File

@ -1,36 +0,0 @@
#!/bin/bash
set -eu
cd "$(dirname "$0")"
# TODO: We are using extra space on the remote VPS at the moment for the duplicity backup files.
# we could eliminate that and simply save duplicity backups to the management machine running the script
# this could be done by using a local path and mounting it on the remote VPS.
# maybe something like https://superuser.com/questions/616182/how-to-mount-local-directory-to-remote-like-sshfs
# step 1: run duplicity on the remote system to backup all files to the remote system.
# --allow-source-mismatch
# if the source files to backup don't exist on the remote host, we return.
if ! ssh "$PRIMARY_WWW_FQDN" "[ -d $REMOTE_SOURCE_BACKUP_PATH ]"; then
echo "INFO: The path to backup does not exist. There's nothing to backup! That's ok, execution will continue."
exit 0
fi
ssh "$PRIMARY_WWW_FQDN" sudo PASSPHRASE="$DUPLICITY_BACKUP_PASSPHRASE" duplicity "$REMOTE_SOURCE_BACKUP_PATH" "file://$REMOTE_BACKUP_PATH"
ssh "$PRIMARY_WWW_FQDN" sudo chown -R ubuntu:ubuntu "$REMOTE_BACKUP_PATH"
SSHFS_PATH="/tmp/sshfs_temp"
mkdir -p "$SSHFS_PATH"
# now let's pull down the latest files from the backup directory.
# create a temp directory to serve as the mountpoint for the remote machine backups directory
sshfs "$PRIMARY_WWW_FQDN:$REMOTE_BACKUP_PATH" "$SSHFS_PATH"
# rsync the files from the remote server to our local backup path.
rsync -av "$SSHFS_PATH/" "$LOCAL_BACKUP_PATH/"
# step 4: unmount the SSHFS filesystem and cleanup.
umount "$SSHFS_PATH"
rm -rf "$SSHFS_PATH"

View File

@ -1,43 +0,0 @@
#!/bin/bash
set -ex
# let's do a refresh of the certificates. Let's Encrypt will not run if it's not time.
docker pull certbot/certbot:latest
# iterate over each domain and call certbot
for DOMAIN_NAME in ${DOMAIN_LIST//,/ }; do
export DOMAIN_NAME="$DOMAIN_NAME"
export SITE_PATH="$SITES_PATH/$DOMAIN_NAME"
# source the site path so we know what features it has.
source "$RESPOSITORY_PATH/reset_env.sh"
source "$SITE_PATH/site_definition"
source "$RESPOSITORY_PATH/domain_env.sh"
# on the lxd side, we expose ALL OUR services from one IP address, which terminates
# at a caching reverse proxy that runs nginx.
ssh "$PRIMARY_WWW_FQDN" sudo mkdir -p "$REMOTE_HOME/letsencrypt/$DOMAIN_NAME/_logs"
# this is the minimum required: www and btcpay.
DOMAIN_STRING="-d $DOMAIN_NAME -d $WWW_FQDN -d $BTCPAY_USER_FQDN"
if [ "$DEPLOY_NEXTCLOUD" = true ]; then DOMAIN_STRING="$DOMAIN_STRING -d $NEXTCLOUD_FQDN"; fi
if [ "$DEPLOY_GITEA" = true ]; then DOMAIN_STRING="$DOMAIN_STRING -d $GITEA_FQDN"; fi
if [ "$DEPLOY_NOSTR_RELAY" = true ]; then DOMAIN_STRING="$DOMAIN_STRING -d $NOSTR_FQDN"; fi
# if BTCPAY_ALT_NAMES has been set by the admin, iterate over the list
# and append the domain names to the certbot request
if [ -n "$BTCPAY_ALT_NAMES" ]; then
# append each alt name to the certbot domain string.
for ALT_NAME in ${BTCPAY_ALT_NAMES//,/ }; do
DOMAIN_STRING="$DOMAIN_STRING -d $ALT_NAME.$DOMAIN_NAME"
done
fi
GENERATE_CERT_STRING="docker run -it --rm --name certbot -p 80:80 -p 443:443 -v $REMOTE_HOME/letsencrypt/$DOMAIN_NAME:/etc/letsencrypt -v /var/lib/letsencrypt:/var/lib/letsencrypt -v $REMOTE_HOME/letsencrypt/$DOMAIN_NAME/_logs:/var/log/letsencrypt certbot/certbot certonly -v --noninteractive --agree-tos --key-type ecdsa --standalone --expand ${DOMAIN_STRING} --email $CERTIFICATE_EMAIL_ADDRESS"
# execute the certbot command that we dynamically generated.
eval "$GENERATE_CERT_STRING"
done
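# Rate-limit-friendly variant (a sketch): appending certbot's --dry-run flag to the generated
# command exercises the full challenge flow against the Let's Encrypt staging environment
# without consuming real issuances.
#   eval "$GENERATE_CERT_STRING --dry-run"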

View File

@ -1,141 +0,0 @@
#!/bin/bash
set -exu
cd "$(dirname "$0")"
# Create the nginx config file which covers all domains.
bash -c ./stub/nginx_config.sh
# redirect all docker commands to the remote host.
export DOCKER_HOST="ssh://ubuntu@$PRIMARY_WWW_FQDN"
for DOMAIN_NAME in ${DOMAIN_LIST//,/ }; do
export DOMAIN_NAME="$DOMAIN_NAME"
export SITE_PATH="$SITES_PATH/$DOMAIN_NAME"
# source the site path so we know what features it has.
source "$RESPOSITORY_PATH/reset_env.sh"
source "$SITE_PATH/site_definition"
source "$RESPOSITORY_PATH/domain_env.sh"
### Let's check to ensure all the required settings are set.
if [ "$DEPLOY_GHOST" = true ]; then
if [ -z "$GHOST_MYSQL_PASSWORD" ]; then
echo "ERROR: Ensure GHOST_MYSQL_PASSWORD is configured in your site_definition."
exit 1
fi
if [ -z "$GHOST_MYSQL_ROOT_PASSWORD" ]; then
echo "ERROR: Ensure GHOST_MYSQL_ROOT_PASSWORD is configured in your site_definition."
exit 1
fi
fi
if [ "$DEPLOY_GITEA" = true ]; then
if [ -z "$GITEA_MYSQL_PASSWORD" ]; then
echo "ERROR: Ensure GITEA_MYSQL_PASSWORD is configured in your site_definition."
exit 1
fi
if [ -z "$GITEA_MYSQL_ROOT_PASSWORD" ]; then
echo "ERROR: Ensure GITEA_MYSQL_ROOT_PASSWORD is configured in your site_definition."
exit 1
fi
fi
if [ "$DEPLOY_NEXTCLOUD" = true ]; then
if [ -z "$NEXTCLOUD_MYSQL_ROOT_PASSWORD" ]; then
echo "ERROR: Ensure NEXTCLOUD_MYSQL_ROOT_PASSWORD is configured in your site_definition."
exit 1
fi
if [ -z "$NEXTCLOUD_MYSQL_PASSWORD" ]; then
echo "ERROR: Ensure NEXTCLOUD_MYSQL_PASSWORD is configured in your site_definition."
exit 1
fi
fi
if [ "$DEPLOY_NOSTR_RELAY" = true ]; then
if [ -z "$NOSTR_ACCOUNT_PUBKEY" ]; then
echo "ERROR: Ensure NOSTR_ACCOUNT_PUBKEY is configured in your site_definition."
exit 1
fi
fi
if [ -z "$DUPLICITY_BACKUP_PASSPHRASE" ]; then
echo "ERROR: Ensure DUPLICITY_BACKUP_PASSPHRASE is configured in your site_definition."
exit 1
fi
if [ -z "$DOMAIN_NAME" ]; then
echo "ERROR: Ensure DOMAIN_NAME is configured in your site_definition."
exit 1
fi
if [ -z "$NOSTR_ACCOUNT_PUBKEY" ]; then
echo "ERROR: You MUST specify a Nostr public key. This is how you get all your social features."
echo "INFO: Go to your site_definition file and set the NOSTR_ACCOUNT_PUBKEY variable."
exit 1
fi
TOR_CONFIG_PATH=
done
./stop_docker_stacks.sh
# if [ "$DEPLOY_ONION_SITE" = true ]; then
# # ensure the tor image is built
# docker build -t tor:latest ./tor
# # if the tor folder doesn't exist, we provision a new one. Otherwise you need to restore.
# # this is how we generate a new torv3 endpoint.
# if ! ssh "$PRIMARY_WWW_FQDN" "[ -d $REMOTE_HOME/tor/www ]"; then
# ssh "$PRIMARY_WWW_FQDN" "mkdir -p $REMOTE_HOME/tor"
# TOR_CONFIG_PATH="$(pwd)/tor/torrc-init"
# export TOR_CONFIG_PATH="$TOR_CONFIG_PATH"
# docker stack deploy -c ./tor.yml torstack
# sleep 20
# docker stack rm torstack
# sleep 20
# fi
# ONION_ADDRESS="$(ssh "$PRIMARY_WWW_FQDN" sudo cat "${REMOTE_HOME}"/tor/www/hostname)"
# export ONION_ADDRESS="$ONION_ADDRESS"
# # # Since we run a separate ghost process, we create a new directory and symlink it to the original
# # if ! ssh "$PRIMARY_WWW_FQDN" "[ -L $REMOTE_HOME/tor_ghost ]"; then
# # ssh "$PRIMARY_WWW_FQDN" ln -s "$REMOTE_HOME/ghost_site/themes $REMOTE_HOME/tor_ghost/themes"
# # fi
# fi
# nginx gets deployed first since it "owns" the docker networks of downstream services.
./stub/nginx_yml.sh
# next run our application stub logic. These deploy the apps too if configured to do so.
./stub/ghost_yml.sh
./stub/nextcloud_yml.sh
./stub/gitea_yml.sh
./stub/nostr_yml.sh
# # start a browser session; point it to port 80 to ensure HTTPS redirect.
# # WWW_FQDN is in our certificate, so we resolve to that.
# wait-for-it -t 320 "$WWW_FQDN:80"
# wait-for-it -t 320 "$WWW_FQDN:443"
# # open browser tabs.
# if [ "$DEPLOY_GHOST" = true ]; then
# xdg-open "http://$WWW_FQDN" > /dev/null 2>&1
# fi
# if [ "$DEPLOY_NEXTCLOUD" = true ]; then
# xdg-open "http://$NEXTCLOUD_FQDN" > /dev/null 2>&1
# fi
# if [ "$DEPLOY_GITEA" = true ]; then
# xdg-open "http://$GITEA_FQDN" > /dev/null 2>&1
# fi
# if [ "$DEPLOY_BTCPAY_SERVER" = true ]; then
# xdg-open "http://$BTCPAY_USER_FQDN" > /dev/null 2>&1
# fi

View File

@ -1,31 +0,0 @@
#!/bin/bash
set -eu
cd "$(dirname "$0")"
FILE_COUNT="$(find "$LOCAL_BACKUP_PATH" -type f | wc -l)"
if [ "$FILE_COUNT" = 0 ]; then
echo "ERROR: there are no files in the local backup path '$LOCAL_BACKUP_PATH'."
echo "We're going to continue with execution."
exit 0
fi
RESPONSE=
read -r -p "Are you sure you want to restore the local path '$LOCAL_BACKUP_PATH' to the remote server at '$PRIMARY_WWW_FQDN' (y/n)": RESPONSE
if [ "$RESPONSE" != "y" ]; then
echo "STOPPING."
exit 0
fi
# delete the target backup path so we can push restoration files from the management machine.
ssh "$PRIMARY_WWW_FQDN" sudo rm -rf "$REMOTE_SOURCE_BACKUP_PATH"
# scp our local backup directory to the remote machine
ssh "$PRIMARY_WWW_FQDN" sudo mkdir -p "$REMOTE_BACKUP_PATH"
ssh "$PRIMARY_WWW_FQDN" sudo chown ubuntu:ubuntu "$REMOTE_BACKUP_PATH"
scp -r "$LOCAL_BACKUP_PATH" "$PRIMARY_WWW_FQDN:$REMOTE_BACKUP_PATH"
# now we run duplicity to restore the archive.
ssh "$PRIMARY_WWW_FQDN" sudo PASSPHRASE="$DUPLICITY_BACKUP_PASSPHRASE" duplicity --force restore "file://$REMOTE_BACKUP_PATH/$BACKUP_TIMESTAMP" "$REMOTE_SOURCE_BACKUP_PATH/"

View File

@ -1,100 +0,0 @@
#!/bin/bash
set -eu
cd "$(dirname "$0")"
# bring down ghost instances.
for DOMAIN_NAME in ${DOMAIN_LIST//,/ }; do
export DOMAIN_NAME="$DOMAIN_NAME"
export SITE_PATH="$SITES_PATH/$DOMAIN_NAME"
# source the site path so we know what features it has.
source "$RESPOSITORY_PATH/reset_env.sh"
source "$SITE_PATH/site_definition"
source "$RESPOSITORY_PATH/domain_env.sh"
### Stop all services.
for APP in ghost nextcloud gitea nostr; do
# back up each language for each app.
for LANGUAGE_CODE in ${SITE_LANGUAGE_CODES//,/ }; do
STACK_NAME="$DOMAIN_IDENTIFIER-$APP-$LANGUAGE_CODE"
if docker stack list --format "{{.Name}}" | grep -q "$STACK_NAME"; then
docker stack rm "$STACK_NAME"
sleep 2
fi
# these variables are used by both backup/restore scripts.
export APP="$APP"
export REMOTE_BACKUP_PATH="$REMOTE_HOME/backups/www/$APP/$DOMAIN_IDENTIFIER-$LANGUAGE_CODE"
export REMOTE_SOURCE_BACKUP_PATH="$REMOTE_HOME/$APP/$DOMAIN_NAME"
# ensure our local backup path exists so we can pull down the duplicity archive to the management machine.
export LOCAL_BACKUP_PATH="$SITE_PATH/backups/www/$APP/$BACKUP_TIMESTAMP"
# ensure our local backup path exists.
if [ ! -d "$LOCAL_BACKUP_PATH" ]; then
mkdir -p "$LOCAL_BACKUP_PATH"
fi
if [ "$RESTORE_WWW" = true ]; then
./restore_path.sh
#ssh "$PRIMARY_WWW_FQDN" sudo chown ubuntu:ubuntu "$REMOTE_HOME/$APP"
elif [ "$BACKUP_APPS" = true ]; then
# if we're not restoring, then we may or may not back up.
./backup_path.sh
fi
done
done
done
# remove the nginx stack
if docker stack list --format "{{.Name}}" | grep -q reverse-proxy; then
sleep 2
docker stack rm reverse-proxy
# wait for all docker containers to stop.
# TODO see if there's a way to check for this.
sleep 15
fi
#
if [ "$STOP_SERVICES" = true ]; then
echo "STOPPING as indicated by the --stop flag."
exit 1
fi
# generate the certs and grab a backup
if [ "$RUN_CERT_RENEWAL" = true ]; then
./generate_certs.sh
fi
# Back up each domain's certificates under /home/ubuntu/letsencrypt/domain
for DOMAIN_NAME in ${DOMAIN_LIST//,/ }; do
export DOMAIN_NAME="$DOMAIN_NAME"
source "$RESPOSITORY_PATH/domain_env.sh"
# these variables are used by both backup/restore scripts.
export APP="letsencrypt"
export REMOTE_BACKUP_PATH="$REMOTE_HOME/backups/www/$APP/$DOMAIN_IDENTIFIER"
export REMOTE_SOURCE_BACKUP_PATH="$REMOTE_HOME/$APP/$DOMAIN_NAME"
# ensure our local backup path exists so we can pull down the duplicity archive to the management machine.
export LOCAL_BACKUP_PATH="$SITE_PATH/backups/www/$APP/$BACKUP_TIMESTAMP"
mkdir -p "$LOCAL_BACKUP_PATH"
if [ "$RESTORE_WWW" = true ]; then
sleep 5
echo "STARTING restore_path.sh for letsencrypt."
./restore_path.sh
#ssh "$PRIMARY_WWW_FQDN" sudo chown ubuntu:ubuntu "$REMOTE_HOME/$APP"
elif [ "$BACKUP_APPS" = true ]; then
# if we're not restoring, then we may or may not back up.
./backup_path.sh
fi
done
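# Verification (a sketch): with DOCKER_HOST still pointed at the www VM by the caller,
# "docker stack ls" should show no application stacks once this script returns.
#   docker stack ls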

View File

@ -1,112 +0,0 @@
#!/bin/bash
set -eu
cd "$(dirname "$0")"
for DOMAIN_NAME in ${DOMAIN_LIST//,/ }; do
export DOMAIN_NAME="$DOMAIN_NAME"
export SITE_PATH="$SITES_PATH/$DOMAIN_NAME"
# source the site path so we know what features it has.
source "$RESPOSITORY_PATH/reset_env.sh"
source "$SITE_PATH/site_definition"
source "$RESPOSITORY_PATH/domain_env.sh"
# for each language specified in the site_definition, we spawn a separate ghost container
# at https://www.domain.com/$LANGUAGE_CODE
for LANGUAGE_CODE in ${SITE_LANGUAGE_CODES//,/ }; do
STACK_NAME="$DOMAIN_IDENTIFIER-$LANGUAGE_CODE"
# ensure directories on remote host exist so we can mount them into the containers.
ssh "$PRIMARY_WWW_FQDN" mkdir -p "$REMOTE_HOME/ghost/$DOMAIN_NAME"
ssh "$PRIMARY_WWW_FQDN" mkdir -p "$REMOTE_HOME/ghost/$DOMAIN_NAME/$LANGUAGE_CODE/ghost" "$REMOTE_HOME/ghost/$DOMAIN_NAME/$LANGUAGE_CODE/db"
export GHOST_STACK_TAG="ghost-$STACK_NAME"
export GHOST_DB_STACK_TAG="ghostdb-$STACK_NAME"
# todo append domain number or port number.
WEBSTACK_PATH="$SITE_PATH/webstack"
mkdir -p "$WEBSTACK_PATH"
export DOCKER_YAML_PATH="$WEBSTACK_PATH/ghost-$LANGUAGE_CODE.yml"
# here's the docker stack YAML for ghost.
cat > "$DOCKER_YAML_PATH" <<EOL
version: "3.8"
services:
EOL
# This is the ghost for HTTPS (not over Tor)
cat >>"$DOCKER_YAML_PATH" <<EOL
${GHOST_STACK_TAG}:
image: ${GHOST_IMAGE}
networks:
- ghostnet-${DOMAIN_IDENTIFIER}-${LANGUAGE_CODE}
- ghostdbnet-${DOMAIN_IDENTIFIER}-${LANGUAGE_CODE}
volumes:
- ${REMOTE_HOME}/ghost/${DOMAIN_NAME}/${LANGUAGE_CODE}/ghost:/var/lib/ghost/content
environment:
EOL
if [ "$LANGUAGE_CODE" = "en" ]; then
cat >>"$DOCKER_YAML_PATH" <<EOL
- url=https://${WWW_FQDN}
EOL
else
cat >>"$DOCKER_YAML_PATH" <<EOL
- url=https://${WWW_FQDN}/${LANGUAGE_CODE}
EOL
fi
cat >>"$DOCKER_YAML_PATH" <<EOL
- database__client=mysql
- database__connection__host=${GHOST_DB_STACK_TAG}
- database__connection__user=ghost
- database__connection__password=\${GHOST_MYSQL_PASSWORD}
- database__connection__database=ghost
- database__pool__min=0
- privacy__useStructuredData=true
deploy:
restart_policy:
condition: on-failure
${GHOST_DB_STACK_TAG}:
image: ${GHOST_DB_IMAGE}
networks:
- ghostdbnet-${DOMAIN_IDENTIFIER}-${LANGUAGE_CODE}
volumes:
- ${REMOTE_HOME}/ghost/${DOMAIN_NAME}/${LANGUAGE_CODE}/db:/var/lib/mysql
environment:
- MYSQL_ROOT_PASSWORD=\${GHOST_MYSQL_ROOT_PASSWORD}
- MYSQL_DATABASE=ghost
- MYSQL_USER=ghost
- MYSQL_PASSWORD=\${GHOST_MYSQL_PASSWORD}
deploy:
restart_policy:
condition: on-failure
EOL
cat >>"$DOCKER_YAML_PATH" <<EOL
networks:
EOL
if [ "$DEPLOY_GHOST" = true ]; then
GHOSTNET_NAME="ghostnet-$DOMAIN_IDENTIFIER-$LANGUAGE_CODE"
GHOSTDBNET_NAME="ghostdbnet-$DOMAIN_IDENTIFIER-$LANGUAGE_CODE"
cat >>"$DOCKER_YAML_PATH" <<EOL
${GHOSTNET_NAME}:
name: "reverse-proxy_ghostnet-$DOMAIN_IDENTIFIER-$LANGUAGE_CODE"
external: true
${GHOSTDBNET_NAME}:
EOL
fi
docker stack deploy -c "$DOCKER_YAML_PATH" "$DOMAIN_IDENTIFIER-ghost-$LANGUAGE_CODE"
sleep 2
done # language code
done # domain list
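# Post-deploy check (a sketch): replica counts for the two services defined above; a 0/1
# count usually points at a bad secret or an image that failed to pull.
#   docker stack services "$DOMAIN_IDENTIFIER-ghost-$LANGUAGE_CODE"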

View File

@ -1,88 +0,0 @@
#!/bin/bash
set -eu
cd "$(dirname "$0")"
for DOMAIN_NAME in ${DOMAIN_LIST//,/ }; do
export DOMAIN_NAME="$DOMAIN_NAME"
export SITE_PATH="$SITES_PATH/$DOMAIN_NAME"
# source the site path so we know what features it has.
source "$RESPOSITORY_PATH/reset_env.sh"
source "$SITE_PATH/site_definition"
source "$RESPOSITORY_PATH/domain_env.sh"
if [ "$DEPLOY_GITEA" = true ]; then
GITEA_PATH="$REMOTE_GITEA_PATH/$DOMAIN_NAME/${LANGUAGE_CODE}"
ssh "$PRIMARY_WWW_FQDN" mkdir -p "$GITEA_PATH/data" "$GITEA_PATH/db"
STACK_NAME="$DOMAIN_IDENTIFIER-$LANGUAGE_CODE"
export STACK_TAG="gitea-$STACK_NAME"
export DB_STACK_TAG="giteadb-$STACK_NAME"
export DOCKER_YAML_PATH="$SITE_PATH/webstack/gitea-en.yml"
NET_NAME="giteanet-$DOMAIN_IDENTIFIER"
DBNET_NAME="giteadbnet-$DOMAIN_IDENTIFIER"
# here's the docker stack YAML for gitea.
echo "" > "$DOCKER_YAML_PATH"
cat >>"$DOCKER_YAML_PATH" <<EOL
version: "3.8"
services:
${STACK_TAG}:
image: ${GITEA_IMAGE}
volumes:
- ${GITEA_PATH}/data:/data
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
environment:
- USER_UID=1000
- USER_GID=1000
- ROOT_URL=https://${GITEA_FQDN}
- GITEA__database__DB_TYPE=mysql
- GITEA__database__HOST=${DB_STACK_TAG}:3306
- GITEA__database__NAME=gitea
- GITEA__database__USER=gitea
- GITEA__PASSWD=\${GITEA_MYSQL_PASSWORD}
networks:
- ${NET_NAME}
- ${DBNET_NAME}
deploy:
restart_policy:
condition: on-failure
${DB_STACK_TAG}:
image: ${GITEA_DB_IMAGE}
networks:
- ${DBNET_NAME}
volumes:
- ${GITEA_PATH}/db:/var/lib/mysql
environment:
- MYSQL_ROOT_PASSWORD=\${GITEA_MYSQL_ROOT_PASSWORD}
- MYSQL_PASSWORD=\${GITEA_MYSQL_PASSWORD}
- MYSQL_DATABASE=gitea
- MYSQL_USER=gitea
deploy:
restart_policy:
condition: on-failure
networks:
EOL
cat >>"$DOCKER_YAML_PATH" <<EOL
${NET_NAME}:
name: "reverse-proxy_${NET_NAME}-${LANGUAGE_CODE}"
external: true
${DBNET_NAME}:
EOL
docker stack deploy -c "$DOCKER_YAML_PATH" "$DOMAIN_IDENTIFIER-gitea-$LANGUAGE_CODE"
sleep 1
fi
done

View File

@ -1,80 +0,0 @@
#!/bin/bash
set -eu
cd "$(dirname "$0")"
for DOMAIN_NAME in ${DOMAIN_LIST//,/ }; do
export DOMAIN_NAME="$DOMAIN_NAME"
export SITE_PATH="$SITES_PATH/$DOMAIN_NAME"
# source the site path so we know what features it has.
source "$RESPOSITORY_PATH/reset_env.sh"
source "$SITE_PATH/site_definition"
source "$RESPOSITORY_PATH/domain_env.sh"
# ensure remote directories exist
if [ "$DEPLOY_NEXTCLOUD" = true ]; then
ssh "$PRIMARY_WWW_FQDN" "mkdir -p $REMOTE_NEXTCLOUD_PATH/$DOMAIN_NAME/en/db"
ssh "$PRIMARY_WWW_FQDN" "mkdir -p $REMOTE_NEXTCLOUD_PATH/$DOMAIN_NAME/en/html"
sleep 2
WEBSTACK_PATH="$SITE_PATH/webstack"
mkdir -p "$WEBSTACK_PATH"
export DOCKER_YAML_PATH="$WEBSTACK_PATH/nextcloud-en.yml"
# here's the docker stack YAML for nextcloud.
cat > "$DOCKER_YAML_PATH" <<EOL
version: "3.8"
services:
${NEXTCLOUD_STACK_TAG}:
image: ${NEXTCLOUD_IMAGE}
networks:
- nextcloud-${DOMAIN_IDENTIFIER}-en
- nextclouddb-${DOMAIN_IDENTIFIER}-en
volumes:
- ${REMOTE_HOME}/nextcloud/${DOMAIN_NAME}/en/html:/var/www/html
environment:
- MYSQL_PASSWORD=\${NEXTCLOUD_MYSQL_PASSWORD}
- MYSQL_DATABASE=nextcloud
- MYSQL_USER=nextcloud
- MYSQL_HOST=${NEXTCLOUD_DB_STACK_TAG}
- NEXTCLOUD_TRUSTED_DOMAINS=${DOMAIN_NAME}
- OVERWRITEHOST=${NEXTCLOUD_FQDN}
- OVERWRITEPROTOCOL=https
- SERVERNAME=${NEXTCLOUD_FQDN}
deploy:
restart_policy:
condition: on-failure
${NEXTCLOUD_DB_STACK_TAG}:
image: ${NEXTCLOUD_DB_IMAGE}
command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW --innodb_read_only_compressed=OFF
networks:
- nextclouddb-${DOMAIN_IDENTIFIER}-en
volumes:
- ${REMOTE_HOME}/nextcloud/${DOMAIN_NAME}/en/db:/var/lib/mysql
environment:
- MARIADB_ROOT_PASSWORD=\${NEXTCLOUD_MYSQL_ROOT_PASSWORD}
- MYSQL_PASSWORD=\${NEXTCLOUD_MYSQL_PASSWORD}
- MYSQL_DATABASE=nextcloud
- MYSQL_USER=nextcloud
deploy:
restart_policy:
condition: on-failure
networks:
nextcloud-${DOMAIN_IDENTIFIER}-en:
name: "reverse-proxy_nextcloudnet-$DOMAIN_IDENTIFIER-$LANGUAGE_CODE"
external: true
nextclouddb-${DOMAIN_IDENTIFIER}-en:
EOL
docker stack deploy -c "$DOCKER_YAML_PATH" "$DOMAIN_IDENTIFIER-nextcloud-en"
fi
done

View File

@ -1,507 +0,0 @@
#!/bin/bash
set -exu
cd "$(dirname "$0")"
# here's the NGINX config; it covers ghost, nextcloud, gitea, btcpay, and the nostr relay.
NGINX_CONF_PATH="$PROJECT_PATH/nginx.conf"
# clear the existing nginx config.
echo "" > "$NGINX_CONF_PATH"
# iterate over all our domains and create the nginx config file.
iteration=0
echo "DOMAIN_LIST: $DOMAIN_LIST"
for DOMAIN_NAME in ${DOMAIN_LIST//,/ }; do
export DOMAIN_NAME="$DOMAIN_NAME"
export SITE_PATH="$SITES_PATH/$DOMAIN_NAME"
export CONTAINER_TLS_PATH="/etc/letsencrypt/${DOMAIN_NAME}/live/${DOMAIN_NAME}"
# source the site path so we know what features it has.
source "$RESPOSITORY_PATH/reset_env.sh"
source "$SITE_PATH/site_definition"
source "$RESPOSITORY_PATH/domain_env.sh"
if [ $iteration = 0 ]; then
cat >>"$NGINX_CONF_PATH" <<EOL
events {
worker_connections 1024;
}
http {
client_max_body_size 100m;
server_tokens off;
# next two sets commands and connection_upgrade block come from https://docs.btcpayserver.org/FAQ/Deployment/#can-i-use-an-existing-nginx-server-as-a-reverse-proxy-with-ssl-termination
# Needed to allow very long URLs to prevent issues while signing PSBTs
server_names_hash_bucket_size 128;
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
client_header_buffer_size 500k;
large_client_header_buffers 4 500k;
# Needed websocket support (used by Ledger hardware wallets)
map \$http_upgrade \$connection_upgrade {
default upgrade;
'' close;
}
# catch-all for port 80; redirect unmatched hostnames to the primary www FQDN.
server {
listen 80 default_server;
return 301 https://${WWW_FQDN}\$request_uri;
}
EOL
fi
# ghost http to https redirects.
cat >>"$NGINX_CONF_PATH" <<EOL
# http://${DOMAIN_NAME} redirect to https://${WWW_FQDN}
server {
listen 80;
listen [::]:80;
server_name ${DOMAIN_NAME};
location / {
# request MAY get another redirect at https://domain.tld for www.
return 301 https://${DOMAIN_NAME}\$request_uri;
}
}
EOL
cat >>"$NGINX_CONF_PATH" <<EOL
# http://${WWW_FQDN} redirect to https://${WWW_FQDN}
server {
listen 80;
listen [::]:80;
server_name ${WWW_FQDN};
return 301 https://${WWW_FQDN}\$request_uri;
}
EOL
# nextcloud http-to-https redirect
if [ "$DEPLOY_NEXTCLOUD" = true ]; then
cat >>"$NGINX_CONF_PATH" <<EOL
# http://${NEXTCLOUD_FQDN} redirect to https://${NEXTCLOUD_FQDN}
server {
listen 80;
listen [::]:80;
server_name ${NEXTCLOUD_FQDN};
return 301 https://${NEXTCLOUD_FQDN}\$request_uri;
}
EOL
fi
# gitea http to https redirect.
if [ "$DEPLOY_GITEA" = true ]; then
cat >>"$NGINX_CONF_PATH" <<EOL
# http://${GITEA_FQDN} redirect to https://${GITEA_FQDN}
server {
listen 80;
listen [::]:80;
server_name ${GITEA_FQDN};
return 301 https://${GITEA_FQDN}\$request_uri;
}
EOL
fi
# let's iterate over BTCPAY_ALT_NAMES and generate our SERVER_NAMES for btcpay server.
BTCPAY_SERVER_NAMES="$BTCPAY_USER_FQDN"
if [ -n "$BTCPAY_ALT_NAMES" ]; then
# append each alt name to the btcpay server_name list.
for ALT_NAME in ${BTCPAY_ALT_NAMES//,/ }; do
BTCPAY_SERVER_NAMES="$BTCPAY_SERVER_NAMES $ALT_NAME.$DOMAIN_NAME"
done
fi
# BTCPAY server http->https redirect
cat >>"$NGINX_CONF_PATH" <<EOL
# http://${BTCPAY_USER_FQDN} redirect to https://${BTCPAY_USER_FQDN}
server {
listen 80;
listen [::]:80;
server_name ${BTCPAY_SERVER_NAMES};
return 301 https://${BTCPAY_USER_FQDN}\$request_uri;
}
EOL
if [ "$iteration" = 0 ]; then
# TLS config for ghost.
cat >>"$NGINX_CONF_PATH" <<EOL
# global TLS settings
ssl_prefer_server_ciphers on;
ssl_protocols TLSv1.3;
ssl_session_timeout 1d;
ssl_session_cache shared:MozSSL:10m; # about 40000 sessions
ssl_session_tickets off;
add_header Strict-Transport-Security "max-age=63072000" always;
ssl_stapling on;
ssl_stapling_verify on;
resolver 198.54.117.10;
# TODO change resolver to local DNS resolver, or inherit from system.
# default server if hostname not specified.
server {
listen 443 default_server;
ssl_certificate $CONTAINER_TLS_PATH/fullchain.pem;
ssl_certificate_key $CONTAINER_TLS_PATH/privkey.pem;
ssl_trusted_certificate $CONTAINER_TLS_PATH/fullchain.pem;
return 403;
}
# maybe helps with Twitter cards.
#map \$http_user_agent \$og_prefix {
# ~*(googlebot|twitterbot)/ /open-graph;
#}
# this map allows us to route the clients request to the correct Ghost instance
# based on the clients browser language setting.
map \$http_accept_language \$lang {
default "";
~es es;
}
EOL
fi
cat >>"$NGINX_CONF_PATH" <<EOL
# https://${DOMAIN_NAME} redirect to https://${WWW_FQDN}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
ssl_certificate $CONTAINER_TLS_PATH/fullchain.pem;
ssl_certificate_key $CONTAINER_TLS_PATH/privkey.pem;
ssl_trusted_certificate $CONTAINER_TLS_PATH/fullchain.pem;
server_name ${DOMAIN_NAME};
EOL
if [ "$DEPLOY_NOSTR_RELAY" = true ]; then
cat >>"$NGINX_CONF_PATH" <<EOL
# We return a JSON object with name/pubkey mapping per NIP05.
# https://www.reddit.com/r/nostr/comments/rrzk76/nip05_mapping_usernames_to_dns_domains_by_fiatjaf/sssss
# TODO I'm not sure about the security of this Access-Control-Allow-Origin. Read up and restrict it if possible.
location = /.well-known/nostr.json {
add_header Content-Type application/json;
add_header Access-Control-Allow-Origin *;
return 200 '{ "names": { "_": "${NOSTR_ACCOUNT_PUBKEY}" } }';
}
EOL
fi
cat >>"$NGINX_CONF_PATH" <<EOL
# catch all; send request to ${WWW_FQDN}
location / {
return 301 https://${WWW_FQDN}\$request_uri;
}
}
#access_log /var/log/nginx/ghost-access.log;
#error_log /var/log/nginx/ghost-error.log;
EOL
if [ "$DEPLOY_NOSTR_RELAY" = true ]; then
cat >>"$NGINX_CONF_PATH" <<EOL
# wss://$NOSTR_FQDN server block
server {
listen 443 ssl;
server_name ${NOSTR_FQDN};
ssl_certificate $CONTAINER_TLS_PATH/fullchain.pem;
ssl_certificate_key $CONTAINER_TLS_PATH/privkey.pem;
ssl_trusted_certificate $CONTAINER_TLS_PATH/fullchain.pem;
keepalive_timeout 70;
location / {
# proxy websocket traffic to the nostr relay
proxy_pass http://nostr-${DOMAIN_IDENTIFIER}:8080;
proxy_http_version 1.1;
proxy_set_header Upgrade \$http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_set_header Host \$host;
}
}
EOL
fi
cat >>"$NGINX_CONF_PATH" <<EOL
# https server block for https://${BTCPAY_SERVER_NAMES}
server {
listen 443 ssl http2;
ssl_certificate $CONTAINER_TLS_PATH/fullchain.pem;
ssl_certificate_key $CONTAINER_TLS_PATH/privkey.pem;
ssl_trusted_certificate $CONTAINER_TLS_PATH/fullchain.pem;
server_name ${BTCPAY_SERVER_NAMES};
# Route everything to the real BTCPay server
location / {
# URL of BTCPay Server
proxy_pass http://10.139.144.10:80;
proxy_set_header Host \$http_host;
proxy_set_header X-Forwarded-Proto \$scheme;
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
# For websockets (used by Ledger hardware wallets)
proxy_set_header Upgrade \$http_upgrade;
}
}
EOL
echo " # set up cache paths for nginx caching" >>"$NGINX_CONF_PATH"
for LANGUAGE_CODE in ${SITE_LANGUAGE_CODES//,/ }; do
STACK_NAME="$DOMAIN_IDENTIFIER-$LANGUAGE_CODE"
cat >>"$NGINX_CONF_PATH" <<EOL
proxy_cache_path /tmp/${STACK_NAME} levels=1:2 keys_zone=${STACK_NAME}:600m max_size=100m inactive=24h;
EOL
done
# the open server block for the HTTPS listener for ghost
cat >>"$NGINX_CONF_PATH" <<EOL
# Main HTTPS listener for https://${WWW_FQDN}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
ssl_certificate $CONTAINER_TLS_PATH/fullchain.pem;
ssl_certificate_key $CONTAINER_TLS_PATH/privkey.pem;
ssl_trusted_certificate $CONTAINER_TLS_PATH/fullchain.pem;
server_name ${WWW_FQDN};
# Set the crawler policy.
location = /robots.txt {
add_header Content-Type text/plain;
return 200 "User-Agent: *\\nAllow: /\\n";
}
EOL
# # add the Onion-Location header if specified.
# if [ "$DEPLOY_ONION_SITE" = true ]; then
# cat >>"$NGINX_CONF_PATH" <<EOL
# add_header Onion-Location https://${ONION_ADDRESS}\$request_uri;
# EOL
# fi
for LANGUAGE_CODE in ${SITE_LANGUAGE_CODES//,/ }; do
STACK_NAME="$DOMAIN_IDENTIFIER-$LANGUAGE_CODE"
if [ "$LANGUAGE_CODE" = en ]; then
cat >>"$NGINX_CONF_PATH" <<EOL
location ~ ^/(ghost/|p/|private/) {
EOL
else
cat >>"$NGINX_CONF_PATH" <<EOL
location ~ ^/${LANGUAGE_CODE}/(ghost/|p/|private/) {
EOL
fi
cat >>"$NGINX_CONF_PATH" <<EOL
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header Host \$http_host;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto \$scheme;
proxy_intercept_errors on;
proxy_pass http://ghost-${STACK_NAME}:2368;
}
EOL
done
ROOT_SITE_LANGUAGE_CODES="$SITE_LANGUAGE_CODES"
for LANGUAGE_CODE in ${ROOT_SITE_LANGUAGE_CODES//,/ }; do
cat >>"$NGINX_CONF_PATH" <<EOL
# Location block serving https://${WWW_FQDN}/${LANGUAGE_CODE}, or https://${WWW_FQDN}/ for English.
EOL
if [ "$LANGUAGE_CODE" = en ]; then
cat >>"$NGINX_CONF_PATH" <<EOL
location / {
EOL
if (( "$LANGUAGE_CODE_COUNT" > 1 )); then
# we only need this clause if we know there is more than one language being rendered.
cat >>"$NGINX_CONF_PATH" <<EOL
# Redirect the user to the correct language using the map above.
if ( \$http_accept_language !~* '^en(.*)\$' ) {
#rewrite (.*) \$1/\$lang;
return 302 https://${WWW_FQDN}/\$lang;
}
EOL
fi
else
cat >>"$NGINX_CONF_PATH" <<EOL
location /${LANGUAGE_CODE} {
EOL
fi
cat >>"$NGINX_CONF_PATH" <<EOL
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header Host \$http_host;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto \$scheme;
proxy_intercept_errors on;
proxy_pass http://ghost-${DOMAIN_IDENTIFIER}-${LANGUAGE_CODE}:2368;
# https://stanislas.blog/2019/08/ghost-nginx-cache/ for nginx caching instructions
# Remove cookies which are useless for anonymous visitor and prevent caching
proxy_ignore_headers Set-Cookie Cache-Control;
proxy_hide_header Set-Cookie;
# Add header for cache status (miss or hit)
add_header X-Cache-Status \$upstream_cache_status;
proxy_cache ${DOMAIN_IDENTIFIER}-${LANGUAGE_CODE};
# Default TTL: 5 seconds
proxy_cache_valid 5s;
# Cache 404 pages for 1h
proxy_cache_valid 404 1h;
# use conditional GET requests to refresh the content from origin servers
proxy_cache_revalidate on;
proxy_buffering on;
# Allows starting a background subrequest to update an expired cache item,
# while a stale cached response is returned to the client.
proxy_cache_background_update on;
# Bypass cache for errors
proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;
}
EOL
done
# this is the closing server block for the ghost HTTPS segment
cat >>"$NGINX_CONF_PATH" <<EOL
}
EOL
if [ "$DEPLOY_NEXTCLOUD" = true ]; then
cat >>"$NGINX_CONF_PATH" <<EOL
# TLS listener for ${NEXTCLOUD_FQDN}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
ssl_certificate $CONTAINER_TLS_PATH/fullchain.pem;
ssl_certificate_key $CONTAINER_TLS_PATH/privkey.pem;
ssl_trusted_certificate $CONTAINER_TLS_PATH/fullchain.pem;
server_name ${NEXTCLOUD_FQDN};
location / {
proxy_headers_hash_max_size 512;
proxy_headers_hash_bucket_size 64;
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header Host \$host;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto \$scheme;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://${NEXTCLOUD_STACK_TAG}:80;
}
# https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/reverse_proxy_configuration.html
location /.well-known/carddav {
return 301 \$scheme://\$host/remote.php/dav;
}
location /.well-known/caldav {
return 301 \$scheme://\$host/remote.php/dav;
}
}
EOL
fi
# TODO this MIGHT be part of the solution for Twitter Cards.
# location /contents {
# resolver 127.0.0.11 ipv6=off valid=5m;
# proxy_set_header X-Real-IP \$remote_addr;
# proxy_set_header Host \$http_host;
# proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
# proxy_set_header X-Forwarded-Proto \$scheme;
# proxy_intercept_errors on;
# proxy_pass http://ghost-${DOMAIN_IDENTIFIER}-${SITE_LANGUAGE_CODES}::2368\$og_prefix\$request_uri;
# }
# this piece is for GITEA.
if [ "$DEPLOY_GITEA" = true ]; then
cat >>"$NGINX_CONF_PATH" <<EOL
# TLS listener for ${GITEA_FQDN}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
ssl_certificate $CONTAINER_TLS_PATH/fullchain.pem;
ssl_certificate_key $CONTAINER_TLS_PATH/privkey.pem;
ssl_trusted_certificate $CONTAINER_TLS_PATH/fullchain.pem;
server_name ${GITEA_FQDN};
location / {
proxy_headers_hash_max_size 512;
proxy_headers_hash_bucket_size 64;
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header Host \$host;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto \$scheme;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://gitea-${DOMAIN_IDENTIFIER}-en:3000;
}
}
EOL
fi
iteration=$((iteration+1))
done
# add the closing brace.
cat >>"$NGINX_CONF_PATH" <<EOL
}
EOL
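# Quick sanity check (a sketch): list the server_name directives that were emitted so you can
# confirm every expected FQDN (www, btcpay, nextcloud, gitea, nostr) received a server block.
#   grep -n 'server_name' "$NGINX_CONF_PATH"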

View File

@ -1,133 +0,0 @@
#!/bin/bash
set -eu
cd "$(dirname "$0")"
#https://github.com/fiatjaf/expensive-relay
# NOSTR RELAY WHICH REQUIRES PAYMENTS.
DOCKER_YAML_PATH="$PROJECT_PATH/nginx.yml"
cat > "$DOCKER_YAML_PATH" <<EOL
version: "3.8"
services:
nginx:
image: ${NGINX_IMAGE}
ports:
- 0.0.0.0:443:443
- 0.0.0.0:80:80
networks:
EOL
for DOMAIN_NAME in ${DOMAIN_LIST//,/ }; do
export DOMAIN_NAME="$DOMAIN_NAME"
export SITE_PATH="$SITES_PATH/$DOMAIN_NAME"
# source the site path so we know what features it has.
source "$RESPOSITORY_PATH/reset_env.sh"
source "$SITE_PATH/site_definition"
source "$RESPOSITORY_PATH/domain_env.sh"
for LANGUAGE_CODE in ${SITE_LANGUAGE_CODES//,/ }; do
# We create another ghost instance under /
cat >> "$DOCKER_YAML_PATH" <<EOL
- ghostnet-$DOMAIN_IDENTIFIER-$LANGUAGE_CODE
EOL
if [ "$LANGUAGE_CODE" = en ]; then
if [ "$DEPLOY_GITEA" = "true" ]; then
cat >> "$DOCKER_YAML_PATH" <<EOL
- giteanet-$DOMAIN_IDENTIFIER-en
EOL
fi
if [ "$DEPLOY_NEXTCLOUD" = "true" ]; then
cat >> "$DOCKER_YAML_PATH" <<EOL
- nextcloudnet-$DOMAIN_IDENTIFIER-en
EOL
fi
if [ "$DEPLOY_NOSTR_RELAY" = "true" ]; then
cat >> "$DOCKER_YAML_PATH" <<EOL
- nostrnet-$DOMAIN_IDENTIFIER-en
EOL
fi
fi
done
done
cat >> "$DOCKER_YAML_PATH" <<EOL
volumes:
- ${REMOTE_HOME}/letsencrypt:/etc/letsencrypt:ro
configs:
- source: nginx-config
target: /etc/nginx/nginx.conf
deploy:
restart_policy:
condition: on-failure
configs:
nginx-config:
file: ${PROJECT_PATH}/nginx.conf
EOL
################ NETWORKS SECTION
cat >> "$DOCKER_YAML_PATH" <<EOL
networks:
EOL
for DOMAIN_NAME in ${DOMAIN_LIST//,/ }; do
export DOMAIN_NAME="$DOMAIN_NAME"
export SITE_PATH="$SITES_PATH/$DOMAIN_NAME"
# source the site path so we know what features it has.
source "$RESPOSITORY_PATH/reset_env.sh"
source "$SITE_PATH/site_definition"
source "$RESPOSITORY_PATH/domain_env.sh"
# for each language specified in the site_definition, we spawn a separate ghost container
# at https://www.domain.com/$LANGUAGE_CODE
for LANGUAGE_CODE in ${SITE_LANGUAGE_CODES//,/ }; do
cat >> "$DOCKER_YAML_PATH" <<EOL
ghostnet-$DOMAIN_IDENTIFIER-$LANGUAGE_CODE:
attachable: true
EOL
if [ "$LANGUAGE_CODE" = en ]; then
if [ "$DEPLOY_GITEA" = true ]; then
cat >> "$DOCKER_YAML_PATH" <<EOL
giteanet-$DOMAIN_IDENTIFIER-en:
attachable: true
EOL
fi
if [ "$DEPLOY_NEXTCLOUD" = true ]; then
cat >> "$DOCKER_YAML_PATH" <<EOL
nextcloudnet-$DOMAIN_IDENTIFIER-en:
attachable: true
EOL
fi
if [ "$DEPLOY_NOSTR_RELAY" = true ]; then
cat >> "$DOCKER_YAML_PATH" <<EOL
nostrnet-$DOMAIN_IDENTIFIER-en:
attachable: true
EOL
fi
fi
done
done
docker stack deploy -c "$DOCKER_YAML_PATH" "reverse-proxy"
# iterate over all our domains and create the nginx config file.

View File

@ -1,83 +0,0 @@
#!/bin/bash
set -exu
cd "$(dirname "$0")"
docker pull "$NOSTR_RELAY_IMAGE"
for DOMAIN_NAME in ${DOMAIN_LIST//,/ }; do
export DOMAIN_NAME="$DOMAIN_NAME"
export SITE_PATH="$SITES_PATH/$DOMAIN_NAME"
# source the site path so we know what features it has.
source "$RESPOSITORY_PATH/reset_env.sh"
source "$SITE_PATH/site_definition"
source "$RESPOSITORY_PATH/domain_env.sh"
if [ "$DEPLOY_NOSTR_RELAY" = true ]; then
REMOTE_NOSTR_PATH="$REMOTE_HOME/nostr"
NOSTR_PATH="$REMOTE_NOSTR_PATH/$DOMAIN_NAME"
NOSTR_CONFIG_PATH="$SITE_PATH/webstack/nostr.config"
ssh "$PRIMARY_WWW_FQDN" mkdir -p "$NOSTR_PATH/data" "$NOSTR_PATH/db"
export STACK_TAG="nostr-$DOMAIN_IDENTIFIER"
export DOCKER_YAML_PATH="$SITE_PATH/webstack/nostr.yml"
NET_NAME="nostrnet-$DOMAIN_IDENTIFIER"
DBNET_NAME="nostrdbnet-$DOMAIN_IDENTIFIER"
# here's the NGINX config. We support ghost and nextcloud.
echo "" > "$DOCKER_YAML_PATH"
cat >>"$DOCKER_YAML_PATH" <<EOL
version: "3.8"
services:
${STACK_TAG}:
image: ${NOSTR_RELAY_IMAGE}
volumes:
- ${NOSTR_PATH}/data:/usr/src/app/db
# environment:
# - USER_UID=1000
networks:
- ${NET_NAME}
configs:
- source: nostr-config
target: /usr/src/app/config.toml
deploy:
restart_policy:
condition: on-failure
networks:
${NET_NAME}:
name: "reverse-proxy_${NET_NAME}-en"
external: true
configs:
nostr-config:
file: ${NOSTR_CONFIG_PATH}
EOL
# documentation: https://git.sr.ht/~gheartsfield/nostr-rs-relay/tree/0.7.0/item/config.toml
cat >>"$NOSTR_CONFIG_PATH" <<EOL
[info]
relay_url = "wss://${NOSTR_FQDN}/"
name = "${DOMAIN_NAME}"
# administrative contact pubkey TODO
#pubkey = ""
[options]
reject_future_seconds = 1800
[limits]
messages_per_sec = 3
#max_event_bytes = 131072
EOL
docker stack deploy -c "$DOCKER_YAML_PATH" "$DOMAIN_IDENTIFIER-nostr-$LANGUAGE_CODE"
sleep 1
fi
done
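# Reachability check (a sketch): nostr relays answer an HTTP request carrying the
# application/nostr+json Accept header with their NIP-11 information document, so once the
# stack and the wss:// nginx block are up you can verify end to end with:
#   curl -s -H 'Accept: application/nostr+json' "https://$NOSTR_FQDN" | jq .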

View File

@ -1,23 +0,0 @@
# # tor config
# if [ "$DEPLOY_ONION_SITE" = true ]; then
# cat >>"$NGINX_CONF_PATH" <<EOL
# # server listener for tor v3 onion endpoint
# server {
# listen 443 ssl http2;
# listen [::]:443 ssl http2;
# server_name ${ONION_ADDRESS};
# #access_log /var/log/nginx/tor-www.log;
# # administration not allowed over tor interface.
# location /ghost { deny all; }
# location / {
# proxy_set_header X-Forwarded-For 1.1.1.1;
# proxy_set_header X-Forwarded-Proto https;
# proxy_set_header X-Real-IP 1.1.1.1;
# proxy_set_header Host \$http_host;
# proxy_pass http://tor-ghost:2368;
# }
# }
# EOL
# fi

View File

@ -1,149 +0,0 @@
# if [ "$DEPLOY_NEXTCLOUD" = true ]; then
# cat >>"$DOCKER_YAML_PATH" <<EOL
# nextcloud-db:
# image: ${NEXTCLOUD_DB_IMAGE}
# command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW --log-bin --innodb_read_only_compressed=OFF
# networks:
# - nextclouddb-net
# volumes:
# - ${REMOTE_HOME}/nextcloud/db/data:/var/lib/mysql
# environment:
# - MARIADB_ROOT_PASSWORD=\${NEXTCLOUD_MYSQL_ROOT_PASSWORD}
# - MYSQL_PASSWORD=\${NEXTCLOUD_MYSQL_PASSWORD}
# - MYSQL_DATABASE=nextcloud
# - MYSQL_USER=nextcloud
# deploy:
# restart_policy:
# condition: on-failure
# nextcloud:
# image: ${NEXTCLOUD_IMAGE}
# networks:
# - nextclouddb-net
# - nextcloud-net
# volumes:
# - ${REMOTE_HOME}/nextcloud/html:/var/www/html
# environment:
# - MYSQL_PASSWORD=\${NEXTCLOUD_MYSQL_PASSWORD}
# - MYSQL_DATABASE=nextcloud
# - MYSQL_USER=nextcloud
# - MYSQL_HOST=nextcloud-db
# - NEXTCLOUD_TRUSTED_DOMAINS=${DOMAIN_NAME}
# - OVERWRITEHOST=${NEXTCLOUD_FQDN}
# - OVERWRITEPROTOCOL=https
# - SERVERNAME=${NEXTCLOUD_FQDN}
# deploy:
# restart_policy:
# condition: on-failure
# EOL
# fi
# if [ "$DEPLOY_ONION_SITE" = true ]; then
# cat >>"$DOCKER_YAML_PATH" <<EOL
# # a hidden service that routes to the nginx container at http://onionurl.onion server block
# tor-onion:
# image: tor:latest
# networks:
# - tor-net
# volumes:
# - ${REMOTE_HOME}/tor:/var/lib/tor
# - tor-logs:/var/log/tor
# configs:
# - source: tor-config
# target: /etc/tor/torrc
# mode: 0644
# deploy:
# mode: replicated
# replicas: 1
# restart_policy:
# condition: on-failure
# tor-ghost:
# image: ${GHOST_IMAGE}
# networks:
# - ghostdb-net
# - ghost-net
# volumes:
# - ${REMOTE_HOME}/tor_ghost:/var/lib/ghost/content
# environment:
# - url=https://${ONION_ADDRESS}
# - database__client=mysql
# - database__connection__host=ghostdb
# - database__connection__user=ghost
# - database__connection__password=\${GHOST_MYSQL_PASSWORD}
# - database__connection__database=ghost
# deploy:
# restart_policy:
# condition: on-failure
# EOL
# fi
# if [ "$DEPLOY_ONION_SITE" = true ]; then
# cat >>"$DOCKER_YAML_PATH" <<EOL
# - torghost-net
# EOL
# fi
# if [ "$DEPLOY_NEXTCLOUD" = true ]; then
# cat >>"$DOCKER_YAML_PATH" <<EOL
# - nextcloud-net
# EOL
# fi
# if [ "$DEPLOY_ONION_SITE" = true ]; then
# cat >>"$DOCKER_YAML_PATH" <<EOL
# - tor-net
# EOL
# fi
# if [ "$DEPLOY_ONION_SITE" = true ]; then
# cat >>"$DOCKER_YAML_PATH" <<EOL
# volumes:
# tor-data:
# tor-logs:
# EOL
# fi
# #-------------------------
# if [ "$DEPLOY_NEXTCLOUD" = true ]; then
# cat >>"$DOCKER_YAML_PATH" <<EOL
# nextclouddb-net:
# nextcloud-net:
# EOL
# fi
# if [ "$DEPLOY_ONION_SITE" = true ]; then
# cat >>"$DOCKER_YAML_PATH" <<EOL
# tor-net:
# torghost-net:
# EOL
# fi
# # -------------------------------
# if [ "$DEPLOY_ONION_SITE" = true ]; then
# cat >>"$DOCKER_YAML_PATH" <<EOL
# tor-config:
# file: $(pwd)/tor/torrc
# EOL
# fi
# # -----------------------------

View File

@ -1,11 +0,0 @@
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y tor
#COPY ./torrc /etc/tor/torrc
#RUN chown root:root /etc/tor/torrc
#RUN chmod 0644 /etc/tor/torrc
#RUN mkdir /data
#VOLUME /data
# RUN chown 1000:1000 -R /data
#USER 1000:1000
CMD tor -f /etc/tor/torrc

View File

@ -1,8 +0,0 @@
# we configure a hidden service at the torv3 onion address that listens on 443 and forwards to nginx:443
SocksPort 0
HiddenServiceDir /var/lib/tor/www
HiddenServiceVersion 3
HiddenServicePort 443 nginx:443
Log info file /var/log/tor/tor.log

View File

@ -1,5 +0,0 @@
HiddenServiceDir /var/lib/tor/www
HiddenServiceVersion 3
HiddenServicePort 443 127.0.0.1:443
Log info file /var/log/tor/tor.log

View File

@ -1,32 +0,0 @@
#!/bin/bash
set -e
export NEXTCLOUD_FQDN="$NEXTCLOUD_HOSTNAME.$DOMAIN_NAME"
export BTCPAY_FQDN="$BTCPAY_HOSTNAME.$DOMAIN_NAME"
export BTCPAY_USER_FQDN="$BTCPAY_HOSTNAME_IN_CERT.$DOMAIN_NAME"
export WWW_FQDN="$WWW_HOSTNAME.$DOMAIN_NAME"
export GITEA_FQDN="$GITEA_HOSTNAME.$DOMAIN_NAME"
export NOSTR_FQDN="$NOSTR_HOSTNAME.$DOMAIN_NAME"
export ADMIN_ACCOUNT_USERNAME="info"
export CERTIFICATE_EMAIL_ADDRESS="$ADMIN_ACCOUNT_USERNAME@$DOMAIN_NAME"
export REMOTE_NEXTCLOUD_PATH="$REMOTE_HOME/nextcloud"
export REMOTE_GITEA_PATH="$REMOTE_HOME/gitea"
export BTC_CHAIN="$BTC_CHAIN"
export BTCPAY_ADDITIONAL_HOSTNAMES="$BTCPAY_ADDITIONAL_HOSTNAMES"
BACKUP_TIMESTAMP="$(date +"%Y-%m")"
UNIX_BACKUP_TIMESTAMP="$(date +%s)"
SHASUM_OF_DOMAIN="$(echo -n "$DOMAIN_NAME" | sha256sum | awk '{print $1;}' )"
export DOMAIN_IDENTIFIER="${SHASUM_OF_DOMAIN: -6}"
echo "$DOMAIN_IDENTIFIER" > "$SITE_PATH/domain_id"
export BACKUP_TIMESTAMP="$BACKUP_TIMESTAMP"
export UNIX_BACKUP_TIMESTAMP="$UNIX_BACKUP_TIMESTAMP"
export LANGUAGE_CODE_COUNT=$(("$(echo "$SITE_LANGUAGE_CODES" | tr -cd , | wc -c)"+1))
STACK_NAME="$DOMAIN_IDENTIFIER-en"
export NEXTCLOUD_STACK_TAG="nextcloud-$STACK_NAME"
export NEXTCLOUD_DB_STACK_TAG="nextclouddb-$STACK_NAME"
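# Worked example (a sketch): DOMAIN_IDENTIFIER is the last six hex characters of the domain's
# sha256 digest, which keeps stack names short while staying stable across runs. For a
# hypothetical domain:
#   echo -n "example.com" | sha256sum | awk '{print $1}' | tail -c 7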

14
farscapian.gpg Normal file
View File

@ -0,0 +1,14 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
mFIEAAAAABMIKoZIzj0DAQcCAwQ3hQeRT8HLyQEtKJ5C3dKilfWhSpqkPFtfuE0I
i+MNLavAM7tL9gShij7tEcyZe0Iy2hc38TizSlQJciIdgtHUtCJEZXJlayBTbWl0
aCA8ZGVyZWtAZmFyc2NhcGlhbi5jb20+iIAEExMIABwFAgAAAAACCwkCGwMEFQgJ
CgQWAgMBAheAAh4BABYJELRD5TChThyQCxpUUkVaT1ItR1BHXMcA/2k4QtiV0eNQ
299XW4Wvoac1Be6+WTPRIaC/PYnd0pR7AP4hi5ou6uyKtqkfhLtRQHN/9ny3MBEG
whGxb/bCIzOdILhWBAAAAAASCCqGSM49AwEHAgMEI0VBpCTeIpfdH2UcWiSPYGAJ
Z1Rsp0uKf6HzZnpGRAdCTNgCh+pVBibP0Cz0pNdM7IfHSfS+OP4/Lb1B5N9BSAMB
CAeIbQQYEwgACQUCAAAAAAIbDAAWCRC0Q+UwoU4ckAsaVFJFWk9SLUdQRxM4AQCw
m24svH13uNAebQurOloy/1qZgNdXANBQQ05oi1tEyAD/eGFFVdgs5L6Hpg/GJLvo
X8bd1+1sa2d9TldbgfNfRA0=
=vZGY
-----END PGP PUBLIC KEY BLOCK-----

View File

@ -3,68 +3,247 @@
set -eu
cd "$(dirname "$0")"
# let's check to ensure the management machine is running the baseline Ubuntu 22.04 LTS OS.
if ! lsb_release -d | grep -q "Ubuntu 22.04 LTS"; then
echo "ERROR: Your machine is not running the Ubuntu 22.04 LTS baseline OS on your management machine."
# https://www.sovereign-stack.org/install/
# this script is not meant to be executed from the SSME; let's check and abort if so.
if [ "$(hostname)" = ss-mgmt ]; then
echo "ERROR: This command is meant to be executed from the bare metal management machine -- not the SSME."
exit 1
fi
if [ ! -f /usr/share/keyrings/docker-archive-keyring.gpg ]; then
cat ./certs/docker.gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
DISK_OR_PARTITION=
DISK=loop
# grab any modifications from the command line.
for i in "$@"; do
case $i in
--disk-or-partition=*)
DISK_OR_PARTITION="${i#*=}"
shift
;;
*)
echo "Unexpected option: $1"
exit 1
;;
esac
done
# ensure the iptables forward policy is set to ACCEPT so your host can act as a router
# Note this is necessary if docker is running (or has been previously installed) on the
# same host running LXD.
sudo iptables -F FORWARD
sudo iptables -P FORWARD ACCEPT
# if the user didn't specify the disk or partition, we create a loop device under
# the user's home directory. If the user does specify a disk or partition, we will
# create the ZFS pool there.
if [ -z "$DISK_OR_PARTITION" ]; then
DISK="$DISK_OR_PARTITION"
fi
sudo apt-get update
export DISK="$DISK"
# TODO REVIEW management machine software requirements
# is docker-ce actually needed here? prefer to move docker registry
# to a host on SERVERS LAN so that it can operate
# TODO document which dependencies are required by what software, e.g., trezor, docker, etc.
sudo apt-get install -y wait-for-it dnsutils rsync sshfs curl gnupg \
apt-transport-https ca-certificates lsb-release \
docker-ce-cli docker-ce containerd.io docker-compose-plugin \
python3-pip python3-dev libusb-1.0-0-dev libudev-dev pinentry-curses \
libcanberra-gtk-module
# for trezor installation
pip3 install setuptools wheel
pip3 install trezor_agent
if [ ! -f /etc/udev/rules.d/51-trezor.rules ]; then
sudo cp ./51-trezor.rules /etc/udev/rules.d/51-trezor.rules
fi
# install lxd as a snap if it's not installed. We only really use the client part of this package
# on the management machine.
# install lxd snap and initialize it
if ! snap list | grep -q lxd; then
sudo snap install lxd --candidate
sudo snap install lxd --channel=5.18/stable
sleep 5
# run lxd init
cat <<EOF | lxd init --preseed
config: {}
networks:
- config:
ipv4.address: auto
ipv4.dhcp: true
ipv4.nat: true
ipv6.address: none
description: "Default network bridge for ss-mgmt outbound network access."
name: lxdbr0
type: bridge
storage_pools:
- config:
source: ${DISK}
description: ""
name: sovereign-stack
driver: zfs
profiles:
- config: {}
description: ""
devices:
enp5s0:
name: enp5s0
network: lxdbr0
type: nic
root:
path: /
pool: sovereign-stack
type: disk
name: default
projects: []
EOF
fi
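# Verification (a sketch): "lxc storage list" and "lxc profile show default" confirm that the
# preseed above created the sovereign-stack zfs pool and attached enp5s0 and the root disk to
# the default profile.
#   lxc storage list
#   lxc profile show default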
# make ss-deploy available to the user
if ! groups | grep -q docker; then
sudo groupadd docker
. ./deployment/deployment_defaults.sh
. ./deployment/base.sh
# we need to get the base image. Import it if it's cached, else download it and then cache it.
if ! lxc image list | grep -q "$UBUNTU_BASE_IMAGE_NAME"; then
# if the image is cached locally, import it from disk, otherwise download it from ubuntu
IMAGE_PATH="$HOME/ss/cache/ss-ubuntu-jammy"
IMAGE_IDENTIFIER=$(find "$IMAGE_PATH" | grep ".qcow2" | head -n1 | cut -d "." -f1)
METADATA_FILE="$IMAGE_PATH/meta-$IMAGE_IDENTIFIER.tar.xz"
IMAGE_FILE="$IMAGE_PATH/$IMAGE_IDENTIFIER.qcow2"
if [ -d "$IMAGE_PATH" ] && [ -f "$METADATA_FILE" ] && [ -f "$IMAGE_FILE" ]; then
lxc image import "$METADATA_FILE" "$IMAGE_FILE" --alias "$UBUNTU_BASE_IMAGE_NAME"
else
lxc image copy "images:$BASE_LXC_IMAGE" local: --alias "$UBUNTU_BASE_IMAGE_NAME" --vm --auto-update
mkdir -p "$IMAGE_PATH"
lxc image export "$UBUNTU_BASE_IMAGE_NAME" "$IMAGE_PATH" --vm
fi
fi
sudo usermod -aG docker "$USER"
# if the ss-mgmt VM doesn't exist, create it.
SSH_PATH="$HOME/.ssh"
SSH_PRIVKEY_PATH="$SSH_PATH/id_rsa"
SSH_PUBKEY_PATH="$SSH_PRIVKEY_PATH.pub"
# make the Sovereign Stack commands available to the user.
# we use ~/.bashrc
if [ ! -f "$SSH_PRIVKEY_PATH" ]; then
ssh-keygen -f "$SSH_PRIVKEY_PATH" -t rsa -b 4096
fi
chmod 700 "$HOME/.ssh"
chmod 600 "$HOME/.ssh/config"
# add SSH_PUBKEY_PATH to authorized_keys
grep -qxF "$(cat $SSH_PUBKEY_PATH)" "$SSH_PATH/authorized_keys" || cat "$SSH_PUBKEY_PATH" >> "$SSH_PATH/authorized_keys"
FROM_BUILT_IMAGE=false
if ! lxc list --format csv | grep -q ss-mgmt; then
# TODO check to see if there's an existing ss-mgmt image to spawn from, otherwise do this.
if lxc image list | grep -q ss-mgmt; then
FROM_BUILT_IMAGE=true
lxc init ss-mgmt ss-mgmt --vm -c limits.cpu=4 -c limits.memory=4GiB --profile=default
else
lxc init "images:$BASE_LXC_IMAGE" ss-mgmt --vm -c limits.cpu=4 -c limits.memory=4GiB --profile=default
fi
fi
# mount the pre-verified sovereign stack git repo into the new vm
if ! lxc config device show ss-mgmt | grep -q ss-code; then
lxc config device add ss-mgmt ss-code disk source="$(pwd)" path=/home/ubuntu/sovereign-stack
fi
# create the ~/ss path and mount it into the vm.
source ./deployment/deployment_defaults.sh
source ./deployment/base.sh
mkdir -p "$SS_ROOT_PATH"
if ! lxc config device show ss-mgmt | grep -q ss-root; then
lxc config device add ss-mgmt ss-root disk source="$SS_ROOT_PATH" path=/home/ubuntu/ss
fi
# if a ~/.bitcoin/testnet3/blocks directory exists, mount it in.
BITCOIN_DIR="$HOME/.bitcoin"
REMOTE_BITCOIN_CACHE_PATH="/home/ubuntu/ss/cache/bitcoin"
BITCOIN_TESTNET_BLOCKS_PATH="$BITCOIN_DIR/testnet3/blocks"
if [ -d "$BITCOIN_TESTNET_BLOCKS_PATH" ]; then
if ! lxc config device show ss-mgmt | grep -q ss-testnet-blocks; then
lxc config device add ss-mgmt ss-testnet-blocks disk source="$BITCOIN_TESTNET_BLOCKS_PATH" path=$REMOTE_BITCOIN_CACHE_PATH/testnet/blocks
fi
fi
# if a ~/.bitcoin/testnet3/chainstate directory exists, mount it in.
BITCOIN_TESTNET_CHAINSTATE_PATH="$BITCOIN_DIR/testnet3/chainstate"
if [ -d "$BITCOIN_TESTNET_CHAINSTATE_PATH" ]; then
if ! lxc config device show ss-mgmt | grep -q ss-testnet-chainstate; then
lxc config device add ss-mgmt ss-testnet-chainstate disk source="$BITCOIN_TESTNET_CHAINSTATE_PATH" path=$REMOTE_BITCOIN_CACHE_PATH/testnet/chainstate
fi
fi
# if a ~/.bitcoin/blocks dir exists, mount it in.
BITCOIN_MAINNET_BLOCKS_PATH="$BITCOIN_DIR/blocks"
if [ -d "$BITCOIN_MAINNET_BLOCKS_PATH" ]; then
if ! lxc config device show ss-mgmt | grep -q ss-mainnet-blocks; then
lxc config device add ss-mgmt ss-mainnet-blocks disk source="$BITCOIN_MAINNET_BLOCKS_PATH" path=$REMOTE_BITCOIN_CACHE_PATH/mainnet/blocks
fi
fi
# if a ~/.bitcoin/chainstate directory exists, mount it in.
BITCOIN_MAINNET_CHAINSTATE_PATH="$BITCOIN_DIR/chainstate"
if [ -d "$BITCOIN_MAINNET_CHAINSTATE_PATH" ]; then
if ! lxc config device show ss-mgmt | grep -q ss-mainnet-chainstate; then
lxc config device add ss-mgmt ss-mainnet-chainstate disk source="$BITCOIN_MAINNET_CHAINSTATE_PATH" path=$REMOTE_BITCOIN_CACHE_PATH/mainnet/chainstate
fi
fi
# mount the ssh directory in there.
if [ -f "$SSH_PUBKEY_PATH" ]; then
if ! lxc config device show ss-mgmt | grep -q ss-ssh; then
lxc config device add ss-mgmt ss-ssh disk source="$HOME/.ssh" path=/home/ubuntu/.ssh
fi
fi
# start the vm if it's not already running
if lxc list --format csv | grep -q "ss-mgmt,STOPPED"; then
lxc start ss-mgmt
sleep 10
fi
# wait for the vm to have an IP address
. ./management/wait_for_lxc_ip.sh
# do some other preparations for user experience
lxc file push ./management/bash_aliases ss-mgmt/home/ubuntu/.bash_aliases
lxc file push ./management/bash_profile ss-mgmt/home/ubuntu/.bash_profile
lxc file push ./management/bashrc ss-mgmt/home/ubuntu/.bashrc
lxc file push ./management/motd ss-mgmt/etc/update-motd.d/sovereign-stack
# install SSH
lxc exec ss-mgmt apt-get update
lxc exec ss-mgmt -- apt-get install -y openssh-server
lxc file push ./management/sshd_config ss-mgmt/etc/ssh/sshd_config
lxc exec ss-mgmt -- sudo systemctl restart sshd
# add 'ss-manage' to the bare metal ~/.bashrc
ADDED_COMMAND=false
if ! < "$HOME/.bashrc" grep -q "ss-deploy"; then
echo "alias ss-deploy='/home/$USER/sovereign-stack/deploy.sh \$@'" >> "$HOME/.bashrc"
if ! < "$HOME/.bashrc" grep -q "ss-manage"; then
echo "alias ss-manage='$(pwd)/manage.sh \$@'" >> "$HOME/.bashrc"
ADDED_COMMAND=true
fi
if ! < "$HOME/.bashrc" grep -q "ss-cluster"; then
echo "alias ss-cluster='/home/$USER/sovereign-stack/cluster.sh \$@'" >> "$HOME/.bashrc"
ADDED_COMMAND=true
fi
# Let's remove any entry in our known_hosts, then add it back.
# we are using IP address here so we don't have to rely on external DNS
# configuration for the base image preparation.
ssh-keygen -R "$IP_V4_ADDRESS"
if ! < "$HOME/.bashrc" grep -q "ss-projects"; then
echo "alias ss-projects='/home/$USER/sovereign-stack/projects.sh \$@'" >> "$HOME/.bashrc"
ADDED_COMMAND=true
ssh-keyscan -H "$IP_V4_ADDRESS" >> "$SSH_HOME/known_hosts"
ssh "ubuntu@$IP_V4_ADDRESS" sudo chown -R ubuntu:ubuntu /home/ubuntu
if [ "$FROM_BUILT_IMAGE" = false ]; then
ssh "ubuntu@$IP_V4_ADDRESS" /home/ubuntu/sovereign-stack/management/provision.sh
lxc stop ss-mgmt
if ! lxc image list | grep -q "ss-mgmt"; then
echo "Publishing image. Please wait, this may take a while..."
lxc publish ss-mgmt --alias=ss-mgmt
fi
lxc start ss-mgmt
fi
if [ "$ADDED_COMMAND" = true ]; then
echo "WARNING! You need to run 'source ~/.bashrc' before continuing."
fi
echo "NOTICE! You need to run 'source ~/.bashrc' before continuing. After that, type 'ss-manage' to enter your management environment."
fi

39
manage.sh Executable file
View File

@ -0,0 +1,39 @@
#!/bin/bash
# https://www.sovereign-stack.org/ss-manage/
set -eu
cd "$(dirname "$0")"
# check to ensure dependencies are met.
if ! command -v lxc >/dev/null 2>&1; then
echo "This script requires 'lxd/lxc' to be installed. Please run 'install.sh'."
exit 1
fi
if ! lxc remote get-default | grep -q "local"; then
lxc remote switch "local"
fi
if ! lxc list -q --format csv | grep -q ss-mgmt; then
echo "ERROR: the 'ss-mgmt' VM does not exist. You may need to run install.sh"
exit 1
fi
# if the mgmt machine doesn't exist, then warn the user to perform ./install.sh
if ! lxc list --format csv | grep -q "ss-mgmt"; then
echo "ERROR: the management machine VM does not exist. You probably need to run './install.sh'."
echo "INFO: check out https://www.sovereign-stack.org/tag/code-lifecycle-management/ for more information."
fi
# if the machine does exist, let's make sure it's RUNNING.
if lxc list --format csv | grep -q "ss-mgmt,STOPPED"; then
echo "INFO: The SSME was in a STOPPED state. Starting the environment. Please wait."
lxc start ss-mgmt
sleep 30
fi
. ./management/wait_for_lxc_ip.sh
# ssh into the management environment at its current IP address.
ssh ubuntu@"$IP_V4_ADDRESS"

View File

@ -7,10 +7,6 @@
# put this into /usr/lib/udev/rules.d or /lib/udev/rules.d
# depending on your distribution
# Trezor
SUBSYSTEM=="usb", ATTR{idVendor}=="534c", ATTR{idProduct}=="0001", MODE="0660", GROUP="plugdev", TAG+="uaccess", TAG+="udev-acl", SYMLINK+="trezor%n"
KERNEL=="hidraw*", ATTRS{idVendor}=="534c", ATTRS{idProduct}=="0001", MODE="0660", GROUP="plugdev", TAG+="uaccess", TAG+="udev-acl"
# Trezor v2
SUBSYSTEM=="usb", ATTR{idVendor}=="1209", ATTR{idProduct}=="53c0", MODE="0660", GROUP="plugdev", TAG+="uaccess", TAG+="udev-acl", SYMLINK+="trezor%n"
SUBSYSTEM=="usb", ATTR{idVendor}=="1209", ATTR{idProduct}=="53c1", MODE="0660", GROUP="plugdev", TAG+="uaccess", TAG+="udev-acl", SYMLINK+="trezor%n"

12
management/bash_aliases Normal file
View File

@ -0,0 +1,12 @@
#!/bin/bash
alias ss-help='cat /home/ubuntu/sovereign-stack/deployment/help.txt'
alias ss-show='/home/ubuntu/sovereign-stack/deployment/show.sh $@'
alias ss-remote='/home/ubuntu/sovereign-stack/deployment/remote.sh $@'
alias ss-up='/home/ubuntu/sovereign-stack/deployment/up.sh $@'
alias ss-down='/home/ubuntu/sovereign-stack/deployment/down.sh $@'
alias ss-reset='/home/ubuntu/sovereign-stack/deployment/reset.sh $@'
alias ss-stop='/home/ubuntu/sovereign-stack/deployment/stop.sh $@'
alias ss-start='/home/ubuntu/sovereign-stack/deployment/start.sh $@'
alias ss-restore='/home/ubuntu/sovereign-stack/deployment/restore.sh $@'
alias ll='ls -lah'

6
management/bash_profile Normal file
View File

@ -0,0 +1,6 @@
#!/bin/bash
# this wires up the aliases for remote ssh sessions.
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi

117
management/bashrc Normal file
View File

@ -0,0 +1,117 @@
# ~/.bashrc: executed by bash(1) for non-login shells.
# see /usr/share/doc/bash/examples/startup-files (in the package bash-doc)
# for examples
# If not running interactively, don't do anything
case $- in
*i*) ;;
*) return;;
esac
# don't put duplicate lines or lines starting with space in the history.
# See bash(1) for more options
HISTCONTROL=ignoreboth
# append to the history file, don't overwrite it
shopt -s histappend
# for setting history length see HISTSIZE and HISTFILESIZE in bash(1)
HISTSIZE=1000
HISTFILESIZE=2000
# check the window size after each command and, if necessary,
# update the values of LINES and COLUMNS.
shopt -s checkwinsize
# If set, the pattern "**" used in a pathname expansion context will
# match all files and zero or more directories and subdirectories.
#shopt -s globstar
# make less more friendly for non-text input files, see lesspipe(1)
[ -x /usr/bin/lesspipe ] && eval "$(SHELL=/bin/sh lesspipe)"
# set variable identifying the chroot you work in (used in the prompt below)
if [ -z "${debian_chroot:-}" ] && [ -r /etc/debian_chroot ]; then
debian_chroot=$(cat /etc/debian_chroot)
fi
# set a fancy prompt (non-color, unless we know we "want" color)
case "$TERM" in
xterm-color|*-256color) color_prompt=yes;;
esac
# uncomment for a colored prompt, if the terminal has the capability; turned
# off by default to not distract the user: the focus in a terminal window
# should be on the output of commands, not on the prompt
#force_color_prompt=yes
if [ -n "$force_color_prompt" ]; then
if [ -x /usr/bin/tput ] && tput setaf 1 >&/dev/null; then
# We have color support; assume it's compliant with Ecma-48
# (ISO/IEC-6429). (Lack of such support is extremely rare, and such
# a case would tend to support setf rather than setaf.)
color_prompt=yes
else
color_prompt=
fi
fi
if [ "$color_prompt" = yes ]; then
PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '
else
PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
fi
unset color_prompt force_color_prompt
# If this is an xterm set the title to user@host:dir
case "$TERM" in
xterm*|rxvt*)
PS1="\[\e]0;${debian_chroot:+($debian_chroot)}\u@\h: \w\a\]$PS1"
;;
*)
;;
esac
# enable color support of ls and also add handy aliases
if [ -x /usr/bin/dircolors ]; then
test -r ~/.dircolors && eval "$(dircolors -b ~/.dircolors)" || eval "$(dircolors -b)"
alias ls='ls --color=auto'
#alias dir='dir --color=auto'
#alias vdir='vdir --color=auto'
alias grep='grep --color=auto'
alias fgrep='fgrep --color=auto'
alias egrep='egrep --color=auto'
fi
# colored GCC warnings and errors
export GCC_COLORS='error=01;31:warning=01;35:note=01;36:caret=01;32:locus=01:quote=01'
# some more ls aliases
alias ll='ls -alF'
alias la='ls -A'
alias l='ls -CF'
# Add an "alert" alias for long running commands. Use like so:
# sleep 10; alert
alias alert='notify-send --urgency=low -i "$([ $? = 0 ] && echo terminal || echo error)" "$(history|tail -n1|sed -e '\''s/^\s*[0-9]\+\s*//;s/[;&|]\s*alert$//'\'')"'
# Alias definitions.
# You may want to put all your additions into a separate file like
# ~/.bash_aliases, instead of adding them here directly.
# See /usr/share/doc/bash-doc/examples in the bash-doc package.
if [ -f ~/.bash_aliases ]; then
. ~/.bash_aliases
fi
# enable programmable completion features (you don't need to enable
# this, if it's already enabled in /etc/bash.bashrc and /etc/profile
# sources /etc/bash.bashrc).
if ! shopt -oq posix; then
if [ -f /usr/share/bash-completion/bash_completion ]; then
. /usr/share/bash-completion/bash_completion
elif [ -f /etc/bash_completion ]; then
. /etc/bash_completion
fi
fi

4
management/motd Normal file
View File

@ -0,0 +1,4 @@
#!/bin/bash
echo "Welcome to the management environment. Run 'ss-help' to get started."

53
management/provision.sh Executable file
View File

@ -0,0 +1,53 @@
#!/bin/bash
set -e
cd "$(dirname "$0")"
# NOTE: this script needs root privileges; each privileged step below is invoked via sudo.
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg lsb-release jq bc
sudo mkdir -m 0755 -p /etc/apt/keyrings
# add the docker gpg key to keyring for docker-ce-cli
if [ ! -f /etc/apt/keyrings/docker.gpg ]; then
cat /home/ubuntu/sovereign-stack/certs/docker.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null 2>&1
fi
# TODO REVIEW mgmt software requirements
sudo apt-get update
sudo apt-get install -y wait-for-it dnsutils rsync sshfs apt-transport-https docker-ce-cli libcanberra-gtk-module snapd nano git
sleep 10
# #apt install python3-pip python3-dev libusb-1.0-0-dev libudev-dev pinentry-curses for trezor stuff
# # for trezor installation
# #pip3 install setuptools wheel
# #pip3 install trezor_agent
# # ensure the trezor-t udev rules are in place.
# # if [ ! -f /etc/udev/rules.d/51-trezor.rules ]; then
# # sudo cp ./51-trezor.rules /etc/udev/rules.d/51-trezor.rules
# # fi
# install lxd via snap if it isn't already present (htop is installed alongside it)
if ! snap list | grep -q lxd; then
sudo snap install htop
sudo snap install lxd --channel=5.18/stable
sleep 6
# We just do an auto initialization. All we are using is the LXD client inside the management environment.
sudo lxd init --auto
fi
# run an lxc command so we don't see a warning upon first invocation
lxc list > /dev/null 2>&1
# add groups for docker and lxd
if ! groups ubuntu | grep -q docker; then
sudo addgroup docker
sudo usermod -aG docker ubuntu
sudo usermod -aG lxd ubuntu
fi
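Because the comment above notes that only the LXD client is used inside the management environment, here is a minimal sketch of pointing that client at an actual cluster member rather than the auto-initialized local daemon; the remote name and address are placeholders, and depending on the LXD version the add step prompts for a trust password or token.
lxc remote add ss-remote https://192.168.1.2:8443 --accept-certificate
lxc remote switch ss-remote
lxc list   # now lists instances on the remote rather than the local snap daemon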

116
management/sshd_config Normal file
View File

@ -0,0 +1,116 @@
# This is the sshd server system-wide configuration file. See
# sshd_config(5) for more information.
# This sshd was compiled with PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games
# The strategy used for options in the default sshd_config shipped with
# OpenSSH is to specify options with their default value where
# possible, but leave them commented. Uncommented options override the
# default value.
Include /etc/ssh/sshd_config.d/*.conf
Port 22
#AddressFamily any
ListenAddress 0.0.0.0
#ListenAddress ::
#HostKey /etc/ssh/ssh_host_rsa_key
#HostKey /etc/ssh/ssh_host_ecdsa_key
#HostKey /etc/ssh/ssh_host_ed25519_key
# Ciphers and keying
#RekeyLimit default none
# Logging
#SyslogFacility AUTH
#LogLevel INFO
# Authentication:
#LoginGraceTime 2m
#PermitRootLogin prohibit-password
#StrictModes yes
#MaxAuthTries 6
#MaxSessions 10
#PubkeyAuthentication yes
# Expect .ssh/authorized_keys2 to be disregarded by default in future.
#AuthorizedKeysFile .ssh/authorized_keys .ssh/authorized_keys2
#AuthorizedPrincipalsFile none
#AuthorizedKeysCommand none
#AuthorizedKeysCommandUser nobody
# For this to work you will also need host keys in /etc/ssh/ssh_known_hosts
#HostbasedAuthentication no
# Change to yes if you don't trust ~/.ssh/known_hosts for
# HostbasedAuthentication
#IgnoreUserKnownHosts no
# Don't read the user's ~/.rhosts and ~/.shosts files
#IgnoreRhosts yes
# To disable tunneled clear text passwords, change to no here!
#PasswordAuthentication yes
#PermitEmptyPasswords no
# Change to yes to enable challenge-response passwords (beware issues with
# some PAM modules and threads)
KbdInteractiveAuthentication no
# Kerberos options
#KerberosAuthentication no
#KerberosOrLocalPasswd yes
#KerberosTicketCleanup yes
#KerberosGetAFSToken no
# GSSAPI options
#GSSAPIAuthentication no
#GSSAPICleanupCredentials yes
#GSSAPIStrictAcceptorCheck yes
#GSSAPIKeyExchange no
# Set this to 'yes' to enable PAM authentication, account processing,
# and session processing. If this is enabled, PAM authentication will
# be allowed through the KbdInteractiveAuthentication and
# PasswordAuthentication. Depending on your PAM configuration,
# PAM authentication via KbdInteractiveAuthentication may bypass
# the setting of "PermitRootLogin without-password".
# If you just want the PAM account and session checks to run without
# PAM authentication, then enable this but set PasswordAuthentication
# and KbdInteractiveAuthentication to 'no'.
UsePAM yes
#AllowAgentForwarding yes
#AllowTcpForwarding yes
#GatewayPorts no
X11Forwarding yes
#X11DisplayOffset 10
#X11UseLocalhost yes
#PermitTTY yes
#PrintMotd no
#PrintLastLog yes
#TCPKeepAlive yes
#PermitUserEnvironment no
#Compression delayed
#ClientAliveInterval 0
#ClientAliveCountMax 3
#UseDNS no
#PidFile /run/sshd.pid
#MaxStartups 10:30:100
#PermitTunnel no
#ChrootDirectory none
#VersionAddendum none
# no default banner path
#Banner none
# Allow client to pass locale environment variables
AcceptEnv LANG LC_*
# override default of no subsystems
Subsystem sftp /usr/lib/openssh/sftp-server
PrintMotd yes

32
management/wait_for_lxc_ip.sh Executable file
View File

@ -0,0 +1,32 @@
#!/bin/bash
set -e
IP_V4_ADDRESS=
while true; do
# wait for the enp5s0 interface to appear on the ss-mgmt VM
if lxc list ss-mgmt | grep -q enp5s0; then
break;
else
sleep 1
fi
done
while true; do
IP_V4_ADDRESS=$(lxc list ss-mgmt --format csv --columns=4 | grep enp5s0 | grep -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}')
if [ -n "$IP_V4_ADDRESS" ]; then
# the interface has an IPv4 address; stop waiting.
break;
else
sleep 1
printf '.'
fi
done
export IP_V4_ADDRESS="$IP_V4_ADDRESS"
# wait for the VM to complete its default cloud-init.
while lxc exec ss-mgmt -- [ ! -f /var/lib/cloud/instance/boot-finished ]; do
sleep 1
done
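Since export only propagates IP_V4_ADDRESS to child processes of this script, the caller presumably sources it rather than executing it; a minimal sketch of that usage (the calling script is an assumption):
# e.g. in install.sh (assumed caller), source the script so the variable survives
. ./management/wait_for_lxc_ip.sh
ssh ubuntu@"$IP_V4_ADDRESS"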

View File

@ -1,12 +0,0 @@
# move all migration logic into this script.
if machine exists, then
get backup.
don't restart services.
Then
delete machine.
Then re-run the script with the --restore option.

22
publish.sh Executable file
View File

@ -0,0 +1,22 @@
#!/bin/bash
set -eu
cd "$(dirname "$0")"
# this script tags the repo and then optionally pushes the tag to origin
TAG_NAME="$(head -n 1 ./version.txt)"
TAG_MESSAGE="Creating tag $TAG_NAME on $(date)."
# create the git tag.
if ! git tag | grep -q "$TAG_NAME"; then
git tag -a "$TAG_NAME" -m "$TAG_MESSAGE" -s
fi
## note this will only work if you have permissions to update HEAD on https://git.sovereign-stack.org/ss/sovereign-stack.git
RESPONSE=
read -r -p "Would you like to push this to the main Sovereign Stack repo? (y): " RESPONSE
if [ "$RESPONSE" = "y" ]; then
# optional; push to remote
git push --set-upstream origin --all
git push --set-upstream origin --tags
fi
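Because the tag is created with -s (GPG-signed), a quick sanity check before answering the push prompt might look like the sketch below; both commands are standard git and assume the signing key is available in your keyring.
TAG_NAME="$(head -n 1 ./version.txt)"
git tag -v "$TAG_NAME"            # verify the tag's GPG signature
git show --no-patch "$TAG_NAME"   # inspect the annotated tag and the commit it points to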

View File

@ -1,39 +0,0 @@
#!/bin/bash
set -e
SSH_ENDPOINT_HOSTNAME="atlantis"
SSH_ENDPOINT_DOMAIN_NAME="ancapistan.io"
TEST_DOMAIN="ancapistan.casa"
CLUSTER_NAME="development"
export LXD_VM_NAME="${TEST_DOMAIN//./-}"
if [ -n "$TEST_DOMAIN" ]; then
lxc delete --force www-"$LXD_VM_NAME"
lxc delete --force btcpay-"$LXD_VM_NAME"
lxc delete --force sovereign-stack
lxc delete --force sovereign-stack-base
lxc profile delete www-"$LXD_VM_NAME"
lxc profile delete btcpay-"$LXD_VM_NAME"
fi
lxc profile delete sovereign-stack
lxc image rm sovereign-stack-base
lxc image rm ubuntu-base
lxc network delete lxdbrSS
lxc storage delete sovereign-stack
lxc remote switch "local"
lxc remote remove "$CLUSTER_NAME"
source "$HOME/.bashrc"
./cluster.sh create "$CLUSTER_NAME" "$SSH_ENDPOINT_HOSTNAME.$SSH_ENDPOINT_DOMAIN_NAME"
#--data-plane-interface=enp89s0
#./deploy.sh

View File

@ -1,23 +0,0 @@
#!/bin/bash
set -eu
export DOMAIN_NAME=
export DUPLICITY_BACKUP_PASSPHRASE=
export BTCPAY_HOSTNAME_IN_CERT=
export DEPLOY_GHOST=true
export DEPLOY_NEXTCLOUD=false
export DEPLOY_NOSTR_RELAY=true
export NOSTR_ACCOUNT_PUBKEY=
export DEPLOY_GITEA=false
export DEPLOY_ONION_SITE=false
export GHOST_MYSQL_PASSWORD=
export GHOST_MYSQL_ROOT_PASSWORD=
export NEXTCLOUD_MYSQL_PASSWORD=
export NEXTCLOUD_MYSQL_ROOT_PASSWORD=
export GITEA_MYSQL_PASSWORD=
export GITEA_MYSQL_ROOT_PASSWORD=
export LANGUAGE_CODE="en"
SCRIPT_DIR="$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )"
source "$SCRIPT_DIR/defaults.sh"

View File

@ -1,7 +0,0 @@
#!/bin/bash
lxc list
lxc network list
lxc profile list
lxc storage list
lxc image list

View File

@ -1,17 +0,0 @@
FROM ubuntu:22.04
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y wait-for-it dnsutils rsync sshfs snapd lxd-client
RUN mkdir /sovereign-stack
COPY ./deployment /sovereign-stack
WORKDIR /sovereign-stack
RUN mkdir /domain
VOLUME /domain
ENV SITE_PATH=/domain
COPY ./entrypoint.sh /entrypoint.sh
RUN chmod 0744 /entrypoint.sh
CMD /entrypoint.sh

View File

@ -1,8 +0,0 @@
#!/bin/bash
if [ -z "$DOMAIN_NAME" ]; then
echo "ERROR: DOMAIN_NAME not defined.".
exit 1
fi
/sovereign-stack/deploy.sh --domain="$DOMAIN_NAME"

View File

@ -1,32 +0,0 @@
version: "3.8"
services:
# a hidden service that routes to the nginx container at http://onionurl.onion server block
tor-onion:
image: tor:latest
networks:
- tor-net
volumes:
- ${REMOTE_HOME}/tor:/var/lib/tor
- tor-logs:/var/log/tor
configs:
- source: tor-config
target: /etc/tor/torrc
mode: 0644
deploy:
mode: replicated
replicas: 1
restart_policy:
condition: on-failure
volumes:
tor-data:
tor-logs:
networks:
tor-net:
attachable: true
configs:
tor-config:
file: ${TOR_CONFIG_PATH}

90
uninstall.sh Executable file
View File

@ -0,0 +1,90 @@
#!/bin/bash
set -eu
PURGE_LXD=false
# grab any modifications from the command line.
for i in "$@"; do
case $i in
--purge)
PURGE_LXD=true
shift
;;
*)
echo "Unexpected option: $1"
exit 1
;;
esac
done
# this script undoes install.sh
if ! command -v lxc >/dev/null 2>&1; then
echo "This script requires 'lxc' to be installed. Please run 'install.sh'."
exit 1
fi
if ! lxc remote get-default | grep -q "local"; then
echo "ERROR: You MUST be on the local remote when uninstalling the SSME."
echo "INFO: You can use 'lxc remote switch local' to do this."
exit 1
fi
if ! lxc project list | grep -q "default (current)"; then
echo "ERROR: You MUST be on the default project when uninstalling the SSME."
echo "INFO: You can use 'lxc project switch default' to do this."
exit 1
fi
if lxc list --format csv | grep -q "ss-mgmt"; then
if lxc list --format csv -q | grep -q "ss-mgmt,RUNNING"; then
lxc stop ss-mgmt
fi
if lxc config device list ss-mgmt -q | grep -q "ss-code"; then
lxc config device remove ss-mgmt ss-code
fi
if lxc config device list ss-mgmt -q | grep -q "ss-root"; then
lxc config device remove ss-mgmt ss-root
fi
if lxc config device list ss-mgmt -q | grep -q "ss-ssh"; then
lxc config device remove ss-mgmt ss-ssh
fi
lxc delete ss-mgmt
fi
if [ "$PURGE_LXD" = true ]; then
if lxc profile device list default | grep -q root; then
lxc profile device remove default root
fi
if lxc profile device list default | grep -q enp5s0; then
lxc profile device remove default enp5s0
fi
if lxc network list --project default | grep -q lxdbr0; then
lxc network delete lxdbr0
fi
# this file defines UBUNTU_BASE_IMAGE_NAME
. ./deployment/base.sh
if lxc image list | grep -q "$UBUNTU_BASE_IMAGE_NAME"; then
lxc image delete "$UBUNTU_BASE_IMAGE_NAME"
fi
if lxc storage list --format csv | grep -q sovereign-stack; then
lxc storage delete sovereign-stack
fi
if snap list | grep -q lxd; then
sudo snap remove lxd
fi
fi
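A short usage sketch based on the flags the script itself defines:
./uninstall.sh            # stops and deletes the ss-mgmt VM and its attached devices
./uninstall.sh --purge    # additionally removes lxdbr0, the base image, the sovereign-stack storage pool, and the lxd snap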

1
version.txt Normal file
View File

@ -0,0 +1 @@
23.08