Compare commits

282 Commits
list ... incus

Author SHA1 Message Date
Derek Smith 29f5b56223
Update project. 2024-02-15 10:22:48 -05:00
Derek Smith 1a0ef1914d
Update project. 2024-02-15 10:09:59 -05:00
Derek Smith aa30705933
Update project head. 2024-02-15 10:05:16 -05:00
Derek Smith f3137873f0
Update project head. 2024-02-11 20:31:35 -05:00
Derek Smith afcc6b5fec
Make LNPLAY server updates. 2024-02-11 20:31:27 -05:00
Derek Smith 01c86c6ed0
Switch to using loop device instead of partition. 2024-02-11 20:14:38 -05:00
Derek Smith cb93e58591
Update project head. 2024-02-09 11:33:58 -05:00
Derek Smith e34a765c69
DISABLE reset.sh 2024-02-09 11:33:44 -05:00
Derek Smith 091e34462e
Update remote.sh for incus. 2024-02-09 11:33:24 -05:00
Derek Smith 10801851c7
Check for prod. 2024-02-09 11:33:01 -05:00
Derek Smith ea7bd1d42c
Add some snapshot logic there. 2024-02-09 11:32:33 -05:00
Derek Smith c457118aec
Fix down script bug. 2024-02-09 11:31:35 -05:00
Derek Smith 5828fd1a38
Run incus init in a pseudo shell 2024-02-09 11:31:09 -05:00
Derek Smith 1973ee54f8
Wire up lnplayserver deployment correctly. 2024-02-09 11:30:42 -05:00
Derek Smith 17e9398588
Nitpicks. 2024-02-09 11:28:29 -05:00
Derek Smith 0edb94cdea
Remove target.sh 2024-02-09 11:26:10 -05:00
Derek Smith 355adb13ab
Revert to apt-get per Zabbly install instructions 2023-12-19 11:29:45 -05:00
Derek Smith 17d7e9c75a
Do not stub out LNPLAY_ENV_FILE if exists 2023-12-19 09:53:59 -05:00
Derek Smith 575deff1d2
Starting of zfs storage volume-based backups. 2023-12-19 09:53:29 -05:00
Derek Smith 6fef36096e
Nits 2023-12-19 09:53:01 -05:00
Derek Smith 0c6e00deb2
Switch to apt and force true on incus install. 2023-12-19 09:51:11 -05:00
Derek Smith 8b6db37380
Restart lnplayserver after image publish. 2023-12-17 16:52:05 -05:00
Derek Smith 5dd4de442d
Fix mac address defaults. 2023-12-17 16:51:21 -05:00
Derek Smith 7abf79e0a7
Spawn lnplay from image if exists. 2023-12-17 16:50:41 -05:00
Derek Smith 56916a7d33
Nitpicks 2023-12-17 16:50:24 -05:00
Derek Smith c4183a0b63
Install urlencode in ss-mgmt. 2023-12-17 16:49:54 -05:00
Derek Smith 4fde7a8a70
Update project. 2023-12-15 16:47:27 -05:00
Derek Smith 7007891e1d
Disable www duplicity backups; switching to storag 2023-12-15 16:47:13 -05:00
Derek Smith a33a0bc459
Update project. 2023-12-14 19:54:25 -05:00
Derek Smith 6f83bf5195
Fix execution bug. 2023-12-14 19:54:18 -05:00
Derek Smith aaab6f3515
Only allow volume sizing after creation. 2023-12-14 19:54:09 -05:00
Derek Smith 6f1f2df6dd
Update mainnet storage volume sizes. 2023-12-14 16:19:42 -05:00
Derek Smith f3ec53521d
Nitpicks and update project. 2023-12-14 12:52:16 -05:00
Derek Smith 042cdb5b7e
Fix storage volume size 2023-12-14 12:51:52 -05:00
Derek Smith 882b5c6bf0
Export BITCOIN_CHAIN 2023-12-14 12:51:18 -05:00
Derek Smith 60efc0a6fc
Remove lnplay storage volumes. 2023-12-14 12:50:49 -05:00
Derek Smith 50d65ebe7c
Fix Server SKIP flags. 2023-12-14 12:47:16 -05:00
Derek Smith 9d7d08b0ed
Update project. 2023-12-12 15:10:07 -05:00
Derek Smith 1eff32efed
Update incus snapshot sytax. 2023-12-12 15:09:19 -05:00
Derek Smith 3f30fa11e2
Fix storage volume names. 2023-12-12 15:09:07 -05:00
Derek Smith 7512cbfd90
Updates Undo 2023-11-30 16:34:44 -05:00
Derek Smith 3004a167c8
Update project 2023-11-29 21:34:29 -05:00
Derek Smith 57e75a88ec
More incus updates. 2023-11-29 21:33:45 -05:00
Derek Smith a2c29b189e
Incus updates for ss-mgmt 2023-11-29 14:04:24 -05:00
Derek Smith 44c2859a84
Update project. 2023-11-29 11:44:40 -05:00
Derek Smith 19997bcf89
Remove install_incus.sh from tabconf branch. 2023-09-23 12:33:31 -06:00
Derek Smith 27e0dc6ccf
rename to incusbr0 2023-09-23 10:26:51 -06:00
Derek Smith f75f8d1bf7
Update project head. 2023-09-22 17:55:38 -06:00
Derek Smith a273488646
Initial Switch to incus. 2023-09-22 17:46:07 -06:00
Derek Smith 559d5f11f4
Scope to default. 2023-09-15 19:52:15 -04:00
Derek Smith 3f15800635
Ensure default project. 2023-09-15 19:34:53 -04:00
Derek Smith a44832c9ba
Add controls over base image creation. 2023-09-15 19:28:48 -04:00
Derek Smith 7e76d5ca43
Update project. 2023-09-15 10:42:50 -04:00
Derek Smith 643fc70873
Update project 2023-09-15 10:28:44 -04:00
Derek Smith 65dfcf7aae
Update project. 2023-09-15 10:25:53 -04:00
Derek Smith a2050f4796
Use '-q' with lxc init commands. 2023-09-15 09:51:25 -04:00
Derek Smith e5bdd0d6a5
Ensure initial image copy goes to default project. 2023-09-15 09:46:32 -04:00
Derek Smith 972a2df31e
Update project 2023-09-15 09:38:24 -04:00
Derek Smith e78656d191
Enable debuggin on create_lxc_base.sh 2023-09-15 09:32:49 -04:00
Derek Smith ab95b46745
Enable debugging. 2023-09-15 09:30:01 -04:00
Derek Smith 636d4c873d
Update project. 2023-09-15 09:26:28 -04:00
Derek Smith 75086610ca
Update project/ remove warnings. 2023-09-14 22:35:59 -04:00
Derek Smith c02ed8e6f2
Update project head. 2023-09-14 22:27:34 -04:00
Derek Smith 9ffeb57981
Update project head. 2023-09-14 22:19:08 -04:00
Derek Smith 9182c7eb0a
Update project head. 2023-09-14 16:57:06 -04:00
Derek Smith 8fa432672a
Update project head. 2023-09-14 16:46:49 -04:00
Derek Smith f952b05de3
Update project head. 2023-09-14 12:58:06 -04:00
Derek Smith c02bb690e8
Update project head. 2023-09-14 12:04:25 -04:00
Derek Smith 855a91beb7
Update project head. 2023-09-14 09:49:05 -04:00
Derek Smith 8780f9e908
SKIP proj provis if is DEPLOYMENT_STRING is null 2023-09-14 09:48:55 -04:00
Derek Smith f8d6d2acfc
Add the project git submodule back in there. 2023-09-13 10:40:25 -04:00
Derek Smith 0c253e9a49
Remove git submodules file. 2023-09-13 10:17:57 -04:00
Derek Smith 8d28a68bd3
Remove the project git submodule. 2023-09-13 10:17:21 -04:00
Derek Smith ffd4b607c2
deleted project path in prep for submodule fix 2023-09-13 09:14:29 -04:00
Derek Smith eaa4f0abb1
changes from tabconf 2023-09-05 22:01:57 -04:00
Derek Smith 334b138770
Update Version. 2023-08-20 21:23:13 -04:00
Derek Smith 68f487a5dc
Update to latest project head. 2023-08-20 20:47:27 -04:00
Derek Smith 88291a2ee0
Update project root. 2023-08-13 12:18:06 -04:00
Derek Smith 132e32b806
Update clams-server defaults + advance project. 2023-08-13 11:58:30 -04:00
Derek Smith 7819cbe564
Add deployment/project as submodule. 2023-08-12 12:51:46 -04:00
Derek Smith 22ecde022f
Removed submodules 2023-08-12 12:45:23 -04:00
Derek Smith 9d7c9a51f6
Remove project .gitignore. 2023-08-12 12:27:55 -04:00
Derek Smith 08235de798
Nitpicks. 2023-08-12 12:25:23 -04:00
Derek Smith 5369016265
WWW updates. 2023-08-12 12:22:34 -04:00
Derek Smith 0ca1fdbea9
Add domain_list.sh and stop.sh 2023-08-12 12:21:15 -04:00
Derek Smith d88814e6e4
Clams server updates. 2023-08-12 12:19:44 -04:00
Derek Smith ddcc119030
BTCPayserver upates. 2023-08-12 12:16:44 -04:00
Derek Smith 919acbf172
Add --skip logic for VMs. 2023-08-12 12:14:00 -04:00
Derek Smith f41601ad04
Remove project git clone logic in lie of submodule 2023-08-12 12:11:50 -04:00
Derek Smith d49f3f58b6
Switch to the lxd stable channel. 2023-08-11 10:28:06 -04:00
Derek Smith 62c7fd0504
Add iptables FORWARDING due to dockerd conflict. 2023-08-11 10:26:23 -04:00
Derek Smith e602f8be90
Add --disk-or-partition flag, default loop 2023-08-11 10:24:02 -04:00
Derek Smith d8903eb5bc
Add clams-server in lieu of existing integration. 2023-08-11 09:46:08 -04:00
Derek Smith 0b38d6d635
Update to grab git commit in docker info. 2023-05-16 22:08:07 -04:00
Derek Smith beb1d7b5a4
Update project ref. 2023-05-16 22:04:47 -04:00
Derek Smith 5d15d5f690
Various updates. 2023-05-16 22:02:32 -04:00
Derek Smith 6c4d262c43
New Version. 2023-04-14 13:39:38 -04:00
Derek Smith d571680ff1
Update target commit. 2023-04-14 13:17:42 -04:00
Derek Smith 6e6c55e801
Add remote/project checks on uninstall.sh 2023-04-14 13:16:41 -04:00
Derek Smith e762d05828
The rest. 2023-04-12 10:09:47 -04:00
Derek Smith bccef77d4e
Update env sourcing. 2023-04-12 10:07:41 -04:00
Derek Smith 0d8ece8362
Update comments. 2023-04-12 10:06:37 -04:00
Derek Smith d68d673c69
Reenable lxc image cache. 2023-04-12 10:05:49 -04:00
Derek Smith 4f466381f6
Refactor ssh; revert to rsa. 2023-04-12 10:05:25 -04:00
Derek Smith 6e2514b12f
Refactor vm readiness logic. 2023-04-12 10:03:46 -04:00
Derek Smith f615e14988
Fix typo and re-enable uncommited changes checking 2023-04-07 14:11:56 -04:00
Derek Smith 086577f32a
Remove debugging flag. 2023-04-07 14:04:43 -04:00
Derek Smith 879ec0c34f
Other updates. 2023-04-07 14:02:24 -04:00
Derek Smith 8b65f22797
Incoming from other repo. 2023-04-07 14:02:04 -04:00
Derek Smith 7604c95aa0
Move deploy.sh to main repo as up.sh 2023-04-07 14:01:31 -04:00
Derek Smith 754ebc81a4
Moved btcpay bashrc to project repo. 2023-04-07 14:01:11 -04:00
Derek Smith 8ec3af6703
Add skip-base-image flag. 2023-04-07 10:57:54 -04:00
Derek Smith 42a22f4942
Refactorings. 2023-04-07 10:41:18 -04:00
Derek Smith d6eb1f3031
Refactorings. 2023-04-07 10:23:04 -04:00
Derek Smith 200e6afc76
fix typo. 2023-04-06 15:22:32 -04:00
Derek Smith 8321345030
Update adjust vscode color. 2023-04-06 15:10:35 -04:00
Derek Smith b665857921
Update project ref. 2023-04-06 15:09:38 -04:00
Derek Smith 925d17d22c
Update project ref. 2023-04-06 11:14:32 -04:00
Derek Smith be69d5774d
Update ss location. 2023-04-05 20:16:23 -04:00
Derek Smith e4a19010c6
Updates 2023-04-05 20:08:57 -04:00
Derek Smith 9c0e29f772
Update comment. 2023-04-05 15:40:43 -04:00
Derek Smith 54be0f6b3c
Update project ref. 2023-04-05 12:09:31 -04:00
Derek Smith bf5ff6903e
Fix cert renewal logic. 2023-04-05 12:00:39 -04:00
Derek Smith a3b29e129a
Make shutdown process less harsh. 2023-04-05 11:11:11 -04:00
Derek Smith 6a0255a326
All other changes. 2023-04-04 16:25:58 -04:00
Derek Smith a1c50b083d
Comment all blockchain/chainstate stuff 2023-04-04 16:25:20 -04:00
Derek Smith 0f65419d55
Make volumes project-specific. 2023-04-04 16:24:39 -04:00
Derek Smith cae11df6a5
Update project ref. 2023-04-04 11:44:56 -04:00
Derek Smith 18bff0d2e9
Update target git commit. 2023-04-04 11:35:00 -04:00
Derek Smith 425bfd8069
Leftovers. 2023-04-04 11:34:48 -04:00
Derek Smith b82813231c
Remote update. 2023-04-04 11:34:20 -04:00
Derek Smith f2b02e474d
Add dependency check. 2023-04-04 11:33:50 -04:00
Derek Smith 5a54400831
Scope to correct project. 2023-04-04 11:32:54 -04:00
Derek Smith 69ab9d0a42
Supporting stuff for volume mount work. 2023-04-04 11:31:24 -04:00
Derek Smith 35afd1bbd4
Path updates. 2023-04-04 11:30:07 -04:00
Derek Smith a44ce22e00
Minor nostr updates. 2023-04-04 11:28:39 -04:00
Derek Smith 33a855e801
Rename ss-destroy to ss-down 2023-04-04 11:27:27 -04:00
Derek Smith 69161e726e
Improve install/uninstall.sh 2023-04-02 09:43:55 -04:00
Derek Smith 622741ce06
Add/update variable names. 2023-04-02 09:42:55 -04:00
Derek Smith a4301c0a1c
Remove iteration over PROJECTS. 2023-04-02 09:41:42 -04:00
Derek Smith d786227a5f
undo 2023-03-30 11:10:42 -04:00
Derek Smith 7d839e9724
Add removal of docker volume. 2023-03-30 11:08:34 -04:00
Derek Smith 441eff3f26
Remove unnecessary output. 2023-03-27 11:08:09 -04:00
Derek Smith 1b7e871758
Update target git ref 2023-03-27 11:07:52 -04:00
Derek Smith fbf16a38a0
Change to weekly base image creation. 2023-03-27 11:07:42 -04:00
Derek Smith a8dccf6b5c
Make clams browser-app non-default. 2023-03-27 11:07:05 -04:00
Derek Smith fc441f72f4
Switch to mysql for ghost. 2023-03-25 12:07:05 -04:00
Derek Smith efac5e4f45
Create weekly basse images. 2023-03-24 17:10:43 -04:00
Derek Smith f823ba361e
Improve output. 2023-03-24 17:10:04 -04:00
Derek Smith dc40e732ee
Scope destroy to external project. 2023-03-23 18:58:46 -04:00
Derek Smith dc269fe29c
Mindor updates. 2023-03-23 16:34:25 -04:00
Derek Smith d35a882d8b
Scope update invoke to project level. 2023-03-23 15:35:07 -04:00
Derek Smith 3884c24e48
Iterate over projects in show.sh 2023-03-23 14:25:31 -04:00
Derek Smith 2730bda32b
Run latest lxd for development. 2023-03-22 16:36:31 -04:00
Derek Smith 73eacb9125
Update project ref + bugfix. 2023-03-21 13:20:28 -04:00
Derek Smith 9723474b7b
Update project ref. 2023-03-20 20:14:08 -04:00
Derek Smith a595491ee6
Control flow, documentation and update PROJECT ref 2023-03-20 13:03:01 -04:00
Derek Smith 46dedcf6b2
Load cached lxd image at the management machine. 2023-03-20 13:02:20 -04:00
Derek Smith d5c2d67562
Fix bash_aliases in the SSME. 2023-03-20 13:01:50 -04:00
Derek Smith b5c2b0ef81
Update project ref. 2023-03-20 10:33:05 -04:00
Derek Smith f06a3545d2
Make management machine persistence mandatory. 2023-03-20 10:25:35 -04:00
Derek Smith 786d34df93
Improve control flow. 2023-03-19 16:42:48 -04:00
Derek Smith 5dc7194a62
Whitespace and documentation updates. 2023-03-19 13:49:57 -04:00
Derek Smith 1fdf96bfa2
Add --purge to reset.sh 2023-03-19 13:49:37 -04:00
Derek Smith 9e7d9d3a90
Unify device naming convention 2023-03-19 13:49:00 -04:00
Derek Smith ab08923543
Add mainnet data cache on ss-mgmt 2023-03-19 13:48:26 -04:00
Derek Smith 27a02534b6
Update project REF 2023-03-19 13:45:35 -04:00
Derek Smith 2992f34092
Update project ref. 2023-03-18 18:33:36 -04:00
Derek Smith f1abfd77af
Project Updates. 2023-03-18 18:18:21 -04:00
Derek Smith de6f078774
Update informational messages and whitespace. 2023-03-18 11:15:07 -04:00
Derek Smith 18fb5a1fe0
Move code block farther down. 2023-03-18 11:14:36 -04:00
Derek Smith 731639bc4b
Add default data plane. 2023-03-18 11:14:01 -04:00
Derek Smith a3ea78cfe2
Mount management machine .ssh into ss-mgmt 2023-03-18 11:12:14 -04:00
Derek Smith 8aef17fa00
Add jammy image disk caching. 2023-03-18 11:11:12 -04:00
Derek Smith 03d669750c
Formatting & add information messages. 2023-03-17 11:19:35 -04:00
Derek Smith 5f83e83fc8
Update project target commit. 2023-03-16 16:43:48 -04:00
Derek Smith 086aee9ed4
Updates for storage volumes. 2023-03-16 15:50:02 -04:00
Derek Smith 1a586e2399
Update project ref. 2023-03-15 19:22:32 -04:00
Derek Smith 1c2511af21
Create SS_ROOT_PATH 2023-03-15 19:21:02 -04:00
Derek Smith e12ead7ede
Installation/ss-mgmt provisioning improvements. 2023-03-15 19:19:32 -04:00
Derek Smith 1775f28cdb
Updated publish and changed version for tag. 2023-03-15 11:59:09 -04:00
Derek Smith cde7d287c5
Update projects commitment. 2023-03-15 11:23:29 -04:00
Derek Smith 16d4339af6
Prompt and help udpates. 2023-03-15 11:19:34 -04:00
Derek Smith 672be48b08
Added PROEJCT_NAME= 2023-03-14 11:45:29 -04:00
Derek Smith 7e8706f81c
Add USER_TARGET_PROJECT to update.sh 2023-03-14 11:44:22 -04:00
Derek Smith 5da08eab26
Add --project scope. 2023-03-14 11:37:33 -04:00
Derek Smith e2bfd5d090
Update commit hash. 2023-03-13 14:47:06 -04:00
Derek Smith a6ac567f12
Update target project git commit. 2023-03-13 14:44:13 -04:00
Derek Smith 34c1edf27e
Add project context switch during ss-update. 2023-03-13 13:54:47 -04:00
Derek Smith 180cd1fa8d
Rename site_definition to site.conf 2023-03-13 13:54:29 -04:00
Derek Smith afa6c530ff
Rename ss-migrate to ss-update. 2023-03-13 13:45:26 -04:00
Derek Smith cc6bdef20d
Update TARGET_PROJECT_GIT_COMMIT 2023-03-13 13:41:01 -04:00
Derek Smith 514ae6ce24
Rename to remote.conf 2023-03-13 13:40:47 -04:00
Derek Smith d283dfb353
Add iterate over DEPLOYMENT_STRING 2023-03-13 13:40:07 -04:00
Derek Smith c08260a2d4
Remove symlink. 2023-03-11 11:13:36 -05:00
Derek Smith efeb0261bc
Minor updates. 2023-03-11 11:12:19 -05:00
Derek Smith b2abf3fdf4
Update docker image versions. 2023-03-10 22:40:50 -05:00
Derek Smith 870d0b685c
Update project ref. 2023-03-10 21:13:40 -05:00
Derek Smith 6884154c04
Update documentation and links. 2023-03-10 21:12:39 -05:00
Derek Smith 8590e82411
Simplifiey and add comments. 2023-03-10 21:12:24 -05:00
Derek Smith 7a705828b7
Update migration warning message. 2023-03-10 11:14:55 -05:00
Derek Smith 867771c908
Remove Project 2023-03-09 15:38:50 -05:00
Derek Smith e205d1cc7a
Update gitignore 2023-03-09 15:37:53 -05:00
Derek Smith 7ba91f8bcb
Various updates. 2023-03-09 15:36:30 -05:00
Derek Smith 628df90d32
Update projects git ref and other monor updates. 2023-03-09 10:53:00 -05:00
Derek Smith bd3acd8ef4
Add farscapian.gpg 2023-03-09 10:16:11 -05:00
Derek Smith c3980df073
Various plumbing updates. 2023-03-09 09:58:56 -05:00
Derek Smith 9c518e47e2
Implement LXD projects. 2023-03-09 09:58:16 -05:00
Derek Smith f5deac4874
Standardize lxd version. 2023-03-09 09:56:38 -05:00
Derek Smith 493946c1f5
Rename cluster to remote. 2023-03-09 09:55:40 -05:00
Derek Smith 98866559bd
Move projects pull to install.sh 2023-03-06 19:04:56 -05:00
Derek Smith 03d7411a05
Update TARGET_PROJECT_GIT_COMMIT 2023-03-06 18:55:57 -05:00
Derek Smith eb67ec10a7
Update debugging info. 2023-03-06 15:42:17 -05:00
Derek Smith 2f16d2de0a
Wire up project repo. 2023-03-06 14:44:36 -05:00
Derek Smith 0510a36522
Delete project for repo refactor. 2023-03-06 14:33:25 -05:00
Derek Smith 49b435c113
Update publish.sh 2023-03-06 12:56:35 -05:00
Derek Smith 299446b7e8
Commit publish.sh 2023-03-06 12:54:25 -05:00
Derek Smith a19bbec9c1
Bump version tag to v0.0.24 2023-03-06 12:54:09 -05:00
Derek Smith ecd6b3a77d
Remove root .gitignore. 2023-03-06 12:53:51 -05:00
Derek Smith 9b8a2403eb
Remove temporary file. 2023-03-06 12:48:58 -05:00
Derek Smith 1de414dfc9
Organize files under project/ 2023-03-06 12:47:45 -05:00
Derek Smith 7ce1f0adbe
Stability improvements. 2023-03-06 11:47:27 -05:00
Derek Smith 8caae387b6
Simplify and refactor. 2023-03-05 11:50:22 -05:00
Derek Smith 610946e357
Improve cluster deployment 2023-03-05 11:49:53 -05:00
Derek Smith ddb0fbef57
Add clams integration. 2023-03-05 11:49:06 -05:00
Derek Smith 4a84fd24e5
Refactor project env 2023-03-05 11:45:27 -05:00
Derek Smith 16f88d964d
Working. Clams integration stop. 2023-03-03 08:59:52 -05:00
Derek Smith b6e0e5ac4d
Working on Dockerd 2023-03-02 09:46:17 -05:00
Derek Smith c3bab8c844
4 2023-02-01 16:16:59 -05:00
Derek Smith 48a26c0db0
1 2023-02-01 16:11:50 -05:00
Derek Smith 144b0209eb
2 2023-02-01 16:08:04 -05:00
Derek Smith 864926ffd0
3 2023-02-01 16:07:06 -05:00
Derek Smith a875137b05
45 2023-02-01 16:06:29 -05:00
Derek Smith faca4f41ce
4 2023-02-01 16:05:02 -05:00
Derek Smith ce66768038
3 2023-02-01 16:03:33 -05:00
Derek Smith 4124efe2ea
4 2023-02-01 16:03:07 -05:00
Derek Smith c832d1d050
1 2023-02-01 16:02:38 -05:00
Derek Smith bd3918ec93
Updates 2023-02-01 15:24:03 -05:00
Derek Smith 2c0645c951
Undo this commit. 2023-02-01 14:44:05 -05:00
Derek Smith d91ba02e7f
Added initial Clams build support. 2023-01-15 15:57:47 -05:00
Derek Smith bd129ed073
Remove onion artifacts. 2023-01-12 16:51:22 -05:00
Derek Smith 9d13ec3991
Simplify nostr deployment. 2023-01-07 17:04:00 -05:00
Derek Smith dbb48d7f99
Nostr config updates. 2023-01-07 10:45:44 -05:00
Derek Smith 12d7d33053
Simplify README.md 2023-01-07 10:45:20 -05:00
Derek Smith 902121a72d
Fix --no-backup-www bug. 2023-01-07 09:39:20 -05:00
Derek Smith 9a419ccc6e
Deploy nostr relay when -n NOSTR_ACCOUNT_PUBKEY. 2023-01-07 09:37:55 -05:00
Derek Smith 59ca96cb6c
migration and publish improvements. 2023-01-05 11:14:48 -05:00
Derek Smith 42aa3742fc
Delete reset now that migrate.sh is done. 2023-01-05 11:14:02 -05:00
Derek Smith 104e547a57
Make --restart-front-end default. Goal is opposite 2023-01-05 11:12:50 -05:00
Derek Smith 7a08b7cdcf
Bump Nextcloud to 25.0.2 2023-01-05 11:12:17 -05:00
Derek Smith fac6ab0ecf
Refactor 2023-01-05 11:11:25 -05:00
Derek Smith b23d60a6ad
Updated publish for support git remotes. 2022-12-26 10:34:08 -05:00
Derek Smith e0da0c511a
Updating publishing logic. 2022-12-26 10:31:09 -05:00
Derek Smith 3bc86d8aa3
Added comment + refactor. 2022-12-25 13:04:48 -05:00
Derek Smith a5ce7ceb4c
Add core-lightning mod via custom btcpay fragments 2022-12-25 13:04:20 -05:00
Derek Smith bceec32b66
Add bitcoin/lighting aliases to lxc profile stub 2022-12-25 13:02:22 -05:00
Derek Smith 8316360c70
Bump Ghost image. 2022-12-25 13:01:03 -05:00
Derek Smith 1288962d79
Add switch to restart nginx stack + cert renewal. 2022-12-25 13:00:45 -05:00
Derek Smith 1bc685fa74
Control flow updates. 2022-12-25 12:59:02 -05:00
Derek Smith 69d5564e44
Migration updates for vertical scaling of VMs. 2022-12-12 17:16:47 -05:00
Derek Smith bd9a76108b
Various updates. 2022-11-21 10:58:32 -05:00
Derek Smith 47191ba254
Removed unicode characters. 2022-11-16 12:23:02 -05:00
Derek Smith 36d6b273f1
Documentation and unicode updates. 2022-11-16 12:20:16 -05:00
Derek Smith 6948856bc9
Bump nginx version and other updates. 2022-11-15 16:00:08 -05:00
Derek Smith b9d8dddbc6
nostr updates 2022-11-15 15:59:08 -05:00
Derek Smith 070df55bf3
Clean up migration and NOSTR updates. 2022-11-13 20:27:27 -05:00
Derek Smith d6b46241b2
Refactoring/cleaup. 2022-11-13 20:26:51 -05:00
Derek Smith e6acdb9df3
Added NOSTR RELAY stack. 2022-11-13 20:23:48 -05:00
Derek Smith 25139b514c
Refactors from AWS removal. 2022-11-05 19:52:45 -04:00
Derek Smith 57b7985009
Improve cluster init script. 2022-11-05 19:51:58 -04:00
Derek Smith 3883cb4d66
Misc updates/whitespace. 2022-11-05 19:50:59 -04:00
Derek Smith e5b92f212c
Simplify nginx stub script related to btcpay. 2022-11-05 19:49:24 -04:00
Derek Smith 148644110b
Registry changes on mgmt. 2022-11-05 19:47:47 -04:00
Derek Smith ddad272b98
Updated letsencrypt cert renewal logic. 2022-11-05 19:45:00 -04:00
Derek Smith 3c1b985870
Updated BTCPAY_ALT_NAMES 2022-11-02 14:54:02 -04:00
Derek Smith 08b158da09
Removed all AWS functionality. Big day. 2022-10-30 22:24:25 -04:00
Derek Smith e5164ebc7e
Before removing AWS functionality. 2022-10-30 22:09:44 -04:00
Derek Smith 47ff2522d3
Minor formatting and stability updates. 2022-10-30 12:59:24 -04:00
Derek Smith 3616c9aecb
Suppress error messages and timing. 2022-10-29 11:34:17 -04:00
Derek Smith 257034761d
Updates to btcpayserver backup/restore. 2022-10-27 16:07:00 -04:00
Derek Smith 5455237f1f
Working on btcpay backup/restore. 2022-10-26 21:41:14 -04:00
Derek Smith a3db31986b
BTCPay and documentation updates. 2022-10-26 19:33:43 -04:00
76 changed files with 2631 additions and 3713 deletions

.gitignore vendored (2 changed lines)

@@ -1 +1 @@
-./reset.sh
+archive

.gitmodules vendored, new file (3 changed lines)

@@ -0,0 +1,3 @@
[submodule "deployment/project"]
path = deployment/project
url = https://git.sovereign-stack.org/ss/project
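
For anyone checking out this revision, the new submodule has to be initialized before deployment/project is populated. A minimal sketch using only the stock git CLI; the path and URL come from the .gitmodules entry above, and <repo-url> is a placeholder for this repository's own clone URL:

# In an existing checkout, fetch and check out the deployment/project submodule.
git submodule update --init deployment/project
# Or clone fresh with all submodules included.
git clone --recurse-submodules <repo-url>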

.vscode/settings.json vendored (18 changed lines)

@@ -10,13 +10,15 @@
 "shellcheck.enableQuickFix": true,
 "shellcheck.run": "onType",
 "shellcheck.executablePath": "shellcheck",
-"shellcheck.customArgs": [],
-"shellcheck.ignorePatterns": {},
-"shellcheck.exclude": [
-// "SC1090",
-// "SC1091",
-"SC2029"
-],
+"shellcheck.customArgs": [
+"-x"
+],
+"shellcheck.ignorePatterns": {},
+// "shellcheck.exclude": [
+// "SC1090",
+// "SC1091",
+// "SC2029"
+// ],
 "terminal.integrated.fontFamily": "monospace",
 "workbench.colorCustomizations": {
 "activityBar.background": "#1900a565",
@@ -28,8 +30,8 @@
 "titleBar.inactiveBackground": "#02972799",
 "titleBar.activeForeground": "#e7e7e7",
 "titleBar.inactiveForeground": "#e7e7e799",
-"statusBar.background": "#ff9900",
-"statusBarItem.hoverBackground": "#ff9900",
+"statusBar.background": "#d68100",
+"statusBarItem.hoverBackground": "#c77700",
 "statusBar.foreground": "#000000"
 }
 }
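
The -x added to shellcheck.customArgs above corresponds to shellcheck's --external-sources option, which lets it follow scripts pulled in with source rather than skipping them. A rough command-line equivalent of what the editor extension now runs; the script path is only an illustrative example:

# Follow source'd files while linting a script from this repository.
shellcheck -x deployment/base.sh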

NOTES (1 changed line)

@@ -1 +0,0 @@
Trezor MUST Use the "Crypto" firmware with shitcoin support in order for 2FA (WEBAUTHN) to work. Bummer.


@@ -1,3 +1,3 @@
 # Documentation
-All documentation for this project can be found at the [sovereign-stack.org](https://www.sovereign-stack.org).
+All documentation for this project can be found at [sovereign-stack.org](https://www.sovereign-stack.org). To get started with this code, check out [this post](https://www.sovereign-stack.org/get/).


@@ -1,229 +0,0 @@
#!/bin/bash
set -eux
cd "$(dirname "$0")"
# This script is meant to be executed on the management machine.
# it reaches out to an SSH endpoint and provisions that machine
# to use LXD.
DATA_PLANE_MACVLAN_INTERFACE=
DISK_TO_USE=loop
# override the cluster name.
CLUSTER_NAME="${1:-}"
if [ -z "$CLUSTER_NAME" ]; then
echo "ERROR: The cluster name was not provided."
exit 1
fi
#shellcheck disable=SC1091
source ./defaults.sh
export CLUSTER_PATH="$CLUSTERS_DIR/$CLUSTER_NAME"
CLUSTER_DEFINITION="$CLUSTER_PATH/cluster_definition"
export CLUSTER_DEFINITION="$CLUSTER_DEFINITION"
mkdir -p "$CLUSTER_PATH"
if [ ! -f "$CLUSTER_DEFINITION" ]; then
# stub out a cluster_definition.
cat >"$CLUSTER_DEFINITION" <<EOL
#!/bin/bash
# see https://www.sovereign-stack.org/cluster_definition for more info!
export LXD_CLUSTER_PASSWORD="$(gpg --gen-random --armor 1 14)"
export SOVEREIGN_STACK_MAC_ADDRESS="CHANGE_ME_REQUIRED"
export PROJECT_NAME="public"
export REGISTRY_URL="http://$(hostname).$(resolvectl status | grep 'DNS Domain:' | awk '{ print $3 }'):5000"
export REGISTRY_USERNAME="CHANGE_ME"
export REGISTRY_PASSWORD="CHANGE_ME"
EOL
chmod 0744 "$CLUSTER_DEFINITION"
echo "We stubbed out a '$CLUSTER_DEFINITION' file for you."
echo "Use this file to customize your cluster deployment;"
echo "Check out 'https://www.sovereign-stack.org/cluster-definition' for an example."
exit 1
fi
source "$CLUSTER_DEFINITION"
if ! lxc remote list | grep -q "$CLUSTER_NAME"; then
FQDN="${2:-}"
shift
if [ -z "$FQDN" ]; then
echo "ERROR: The Fully Qualified Domain Name of the new cluster member was not set."
exit 1
fi
# let's check to ensure we have SSH access to the specified host.
if ! wait-for-it -t 5 "$FQDN:22"; then
echo "ERROR: We can't get an SSH connection to '$FQDN:22'. Ensure you have the host set up correctly."
exit 1
fi
# grab any modifications from the command line.
for i in "$@"; do
case $i in
--data-plane-interface=*)
DATA_PLANE_MACVLAN_INTERFACE="${i#*=}"
shift
;;
--disk=*)
DISK_TO_USE="${i#*=}"
shift
;;
*)
;;
esac
done
# if [ -z "$DATA_PLANE_MACVLAN_INTERFACE" ]; then
# echo "INFO: It looks like you didn't provide input on the command line for the data plane macvlan interface."
# echo " We need to know which interface that is! Enter it here now."
# echo ""
# ssh "ubuntu@$FQDN" ip link
# echo "Please enter the network interface that's dedicated to the Sovereign Stack data plane: "
# read -r DATA_PLANE_MACVLAN_INTERFACE
# fi
# if [ -z "$DISK_TO_USE" ]; then
# echo "INFO: It looks like the DISK_TO_USE has not been set. Enter it now."
# echo ""
# ssh "ubuntu@$FQDN" lsblk
# USER_DISK=
# echo "Please enter the disk or partition that Sovereign Stack will use to store data (default: loop): "
# read -r USER_DISK
# fi
else
echo "ERROR: the cluster already exists! You need to go delete your lxd remote if you want to re-create your cluster."
echo "       It may also be helpful to reset or rename your cluster path."
exit 1
fi
# ensure we actually have that interface on the system.
echo "DATA_PLANE_MACVLAN_INTERFACE: $DATA_PLANE_MACVLAN_INTERFACE"
if ! ssh "ubuntu@$FQDN" ip link | grep "$DATA_PLANE_MACVLAN_INTERFACE" | grep -q ",UP"; then
echo "ERROR: We could not find your interface in our list of available interfaces. Please run this command again."
echo "NOTE: You can always specify on the command line by adding the '--data-plane-interface=eth0', for example."
exit 1
fi
# if the disk is loop-based, then we assume the / path exists.
if [ "$DISK_TO_USE" != loop ]; then
# ensure we actually have that disk/partition on the system.
if ! ssh "ubuntu@$FQDN" lsblk | grep -q "$DISK_TO_USE"; then
echo "ERROR: We could not find the disk you specified. Please run this command again and supply a different disk."
echo "NOTE: You can always specify on the command line by adding the '--disk=/dev/sdd', for example."
exit 1
fi
fi
# The MGMT Plane IP is the IP address that the LXD API binds to, which happens
# to be the same as whichever SSH connection you're coming in on.
MGMT_PLANE_IP="$(ssh ubuntu@"$FQDN" env | grep SSH_CONNECTION | cut -d " " -f 3)"
IP_OF_MGMT_MACHINE="$(ssh ubuntu@"$FQDN" env | grep SSH_CLIENT | cut -d " " -f 1 )"
IP_OF_MGMT_MACHINE="${IP_OF_MGMT_MACHINE#*=}"
IP_OF_MGMT_MACHINE="$(echo "$IP_OF_MGMT_MACHINE" | cut -d: -f1)"
# error out if the cluster password is unset.
if [ -z "$LXD_CLUSTER_PASSWORD" ]; then
echo "ERROR: LXD_CLUSTER_PASSWORD must be set in your cluster_definition."
exit 1
fi
if ! command -v lxc >/dev/null 2>&1; then
if lxc profile list --format csv | grep -q sovereign-stack; then
lxc profile delete sovereign-stack
sleep 1
fi
if lxc network list --format csv | grep -q lxdbrSS; then
lxc network delete lxdbrSS
sleep 1
fi
fi
ssh -t "ubuntu@$FQDN" "
# set host firewall policy.
# allow LXD API from management network.
# sudo ufw allow from ${IP_OF_MGMT_MACHINE}/32 proto tcp to $MGMT_PLANE_IP port 8443
# enable it.
# if sudo ufw status | grep -q 'Status: inactive'; then
# sudo ufw enable
# fi
# install lxd as a snap if it's not installed. We only really use the LXC part of this package.
if ! snap list | grep -q lxd; then
sudo -A snap install lxd
sleep 4
fi
"
# if the DATA_PLANE_MACVLAN_INTERFACE is not specified, then we'll
# just attach VMs to the network interface used for the default route.
if [ -z "$DATA_PLANE_MACVLAN_INTERFACE" ]; then
DATA_PLANE_MACVLAN_INTERFACE="$(ssh -t ubuntu@"$FQDN" ip route | grep default | cut -d " " -f 5)"
fi
# stub out the lxd init file for the remote SSH endpoint.
CLUSTER_MASTER_LXD_INIT="$CLUSTER_PATH/lxdinit_profile.yml"
cat >"$CLUSTER_MASTER_LXD_INIT" <<EOF
config:
core.https_address: ${MGMT_PLANE_IP}:8443
core.trust_password: ${LXD_CLUSTER_PASSWORD}
images.auto_update_interval: 15
networks:
- name: lxdbrSS
type: bridge
config:
ipv4.address: 10.139.144.1/24
ipv4.nat: "false"
ipv4.dhcp: "false"
ipv6.address: "none"
dns.mode: "none"
#managed: true
description: ss-config,${DATA_PLANE_MACVLAN_INTERFACE:-},${DISK_TO_USE:-}
# lxdbrSS is an isolated bridge; no Internet access.
cluster:
server_name: ${CLUSTER_NAME}
enabled: true
member_config: []
cluster_address: ""
cluster_certificate: ""
server_address: ""
cluster_password: ""
cluster_certificate_path: ""
cluster_token: ""
EOF
# configure the LXD Daemon with our preseed.
cat "$CLUSTER_MASTER_LXD_INIT" | ssh "ubuntu@$FQDN" lxd init --preseed
# ensure the lxd service is available over the network, then add a lxc remote, then switch the active remote to it.
if wait-for-it -t 20 "$FQDN:8443"; then
# now create a remote on your local LXC client and switch to it.
# the software will now target the new cluster.
lxc remote add "$CLUSTER_NAME" "$FQDN" --password="$LXD_CLUSTER_PASSWORD" --protocol=lxd --auth-type=tls --accept-certificate
lxc remote switch "$CLUSTER_NAME"
echo "INFO: You have created a new cluster named '$CLUSTER_NAME'. Great! We switched your lxd remote to it."
else
echo "ERROR: Could not detect the LXD endpoint. Something went wrong."
exit 1
fi
echo "HINT: Now you can consider running 'ss-deploy'."


@@ -1,152 +0,0 @@
#!/bin/bash
set -eu
export DEPLOY_WWW_SERVER=false
export WWW_SERVER_MAC_ADDRESS=
export DEPLOY_BTCPPAY_SERVER=false
export DEPLOY_GHOST=true
export DEPLOY_NOSTR_RELAY=true
export DEPLOY_ONION_SITE=false
export DEPLOY_NEXTCLOUD=false
export DEPLOY_GITEA=false
export WWW_HOSTNAME="www"
export BTCPAY_HOSTNAME="btcpay"
export BTCPAY_HOSTNAME_IN_CERT="tip"
export NEXTCLOUD_HOSTNAME="nextcloud"
export GITEA_HOSTNAME="git"
export NOSTR_HOSTNAME="relay"
export NOSTR_ACCOUNT_PUBKEY=
# used by 'aws' deployments only; planned deprecation
export DDNS_PASSWORD=
# this is where the html is sourced from.
export SITE_HTML_PATH=
export BTCPAY_ADDITIONAL_HOSTNAMES=
# enter your AWS Access Key and Secret Access Key here.
export AWS_ACCESS_KEY=
export AWS_SECRET_ACCESS_KEY=
# if overridden, the app will be deployed to proxy $BTCPAY_HOSTNAME.$DOMAIN_NAME requests to the URL specified.
# this is useful when you want to outsource your BTCPAY fullnode/lightning node.
#export BTCPAY_HANDLER_URL=
export SMTP_SERVER="smtp.mailgun.org"
export SMTP_PORT="587"
# default AWS region and AMI (free-tier AMI ubuntu 20.10)
export AWS_REGION="us-east-1"
# AMI NAME:
# ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-20220420
export AWS_AMI_ID="ami-09d56f8956ab235b3"
WWW_INSTANCE_TYPE="t2.small"
BTCPAY_INSTANCE_TYPE="t2.medium"
# goal will be to keep any particular instance to run AT OR BELOW t2.medium.
# other options are t2.small, micro, nano; micro is the free-tier eligible.
# [1=vCPUs, 1=Mem(GiB)]
# nano [1,0.5], micro [1,1] (free-tier eligible), small [1,2], medium [2,4], large [2,8], xlarge [4,16], 2xlarge [8,32]
export WWW_INSTANCE_TYPE="$WWW_INSTANCE_TYPE"
export BTCPAY_INSTANCE_TYPE="$BTCPAY_INSTANCE_TYPE"
# TODO REMOVE SMTP_PASSWORD ONCE VERIFIED NO LONGER NEEDED
#export SMTP_PASSWORD=
export GHOST_MYSQL_PASSWORD=
export GHOST_MYSQL_ROOT_PASSWORD=
export NEXTCLOUD_MYSQL_PASSWORD=
export GITEA_MYSQL_PASSWORD=
export NEXTCLOUD_MYSQL_ROOT_PASSWORD=
export GITEA_MYSQL_ROOT_PASSWORD=
export DUPLICITY_BACKUP_PASSPHRASE=
#opt-add-fireflyiii;opt-add-zammad
export SSH_HOME="$HOME/.ssh"
export VLAN_INTERFACE=
export VM_NAME="sovereign-stack-base"
export DEV_MEMORY_MB="4096"
export DEV_CPU_COUNT="4"
export DOCKER_IMAGE_CACHE_FQDN="registry-1.docker.io"
export NEXTCLOUD_SPACE_GB=10
# first of all, if there are uncommitted changes, we quit. You better stash or commit!
# Remote VPS instances are tagged with your current git HEAD so we know which code revision
# was used when provisioning the VPS.
#LATEST_GIT_COMMIT="$(cat ./.git/refs/heads/master)"
#export LATEST_GIT_COMMIT="$LATEST_GIT_COMMIT"
# check if there are any uncommitted changes. It's dangerous to instantiate VMs using
# code that hasn't been committed.
# if git update-index --refresh | grep -q "needs update"; then
# echo "ERROR: You have uncommitted changes! Better stash your work with 'git stash'."
# exit 1
# fi
# TODO
# 1 add check for ~/.aws/credentials and stub one out
# 2 ensure install.sh has been run by checking for tor, docker-machine, lxd, wait-for-it, etc.
# 3 pretty much just run the install script if anything is awry
# 4 maybe check to ensure all the CNAME and A+ records are there first so we can quit before machine creation.
BTC_CHAIN=regtest
export BTC_CHAIN="$BTC_CHAIN"
DEFAULT_DB_IMAGE="mariadb:10.9.3-jammy"
# run the docker stack.
export GHOST_IMAGE="ghost:5.18.0"
# TODO switch to mysql. May require intricate export work. THIS MUST BE COMPLETED BEFORE v1 RELEASE
#https://forum.ghost.org/t/how-to-migrate-from-mariadb-10-to-mysql-8/29575
export GHOST_DB_IMAGE="$DEFAULT_DB_IMAGE"
export NGINX_IMAGE="nginx:1.23.1"
# version of backup is 24.0.3
export NEXTCLOUD_IMAGE="nextcloud:25.0.0"
export NEXTCLOUD_DB_IMAGE="$DEFAULT_DB_IMAGE"
# TODO PIN the gitea version number.
export GITEA_IMAGE="gitea/gitea:latest"
export GITEA_DB_IMAGE="$DEFAULT_DB_IMAGE"
export SOVEREIGN_STACK_MAC_ADDRESS=
export WWW_SERVER_MAC_ADDRESS=
export BTCPAYSERVER_MAC_ADDRESS=
export CLUSTERS_DIR="$HOME/ss-clusters"
export PROJECTS_DIR="$HOME/ss-projects"
export SITES_PATH="$HOME/ss-sites"
# The base VM image.
export BASE_LXC_IMAGE="ubuntu/22.04/cloud"
# Deploy a registry cache on your management machine.
export DEPLOY_MGMT_REGISTRY=true
export OTHER_SITES_LIST=
export REMOTE_HOME="/home/ubuntu"
export BTCPAY_SERVER_APPPATH="$REMOTE_HOME/btcpayserver-docker"
export REMOTE_CERT_BASE_DIR="$REMOTE_HOME/.certs"
# this space is for OS, docker images, etc. DOES NOT INCLUDE USER DATA.
export ROOT_DISK_SIZE_GB=20

deploy.sh (553 changed lines)

@@ -1,553 +0,0 @@
#!/bin/bash
set -eux
cd "$(dirname "$0")"
check_dependencies () {
for cmd in "$@"; do
if ! command -v "$cmd" >/dev/null 2>&1; then
echo "This script requires \"${cmd}\" to be installed. Please run 'install.sh'."
exit 1
fi
done
}
# Check system's dependencies
check_dependencies wait-for-it dig rsync sshfs lxc docker-machine
# TODO remove dependency on Docker-machine. That's what we use to provision VM on 3rd party vendors. Looking for LXD endpoint.
# let's check to ensure the management machine is on the baseline Ubuntu 22.04
if ! lsb_release -d | grep -q "Ubuntu 22.04"; then
echo "ERROR: Your management machine is not running the Ubuntu 22.04 LTS baseline OS."
exit 1
fi
DOMAIN_NAME=
RESTORE_ARCHIVE=
VPS_HOSTING_TARGET=lxd
RUN_CERT_RENEWAL=false
RESTORE_WWW=false
BACKUP_CERTS=true
BACKUP_APPS=true
BACKUP_BTCPAY=false
RESTORE_BTCPAY=false
MIGRATE_WWW=false
MIGRATE_BTCPAY=false
USER_SKIP_WWW=false
USER_SKIP_BTCPAY=false
UPDATE_BTCPAY=false
RECONFIGURE_BTCPAY_SERVER=false
DEPLOY_BTCPAY_SERVER=true
CLUSTER_NAME="$(lxc remote get-default)"
STOP_SERVICES=false
# grab any modifications from the command line.
for i in "$@"; do
case $i in
--aws)
VPS_HOSTING_TARGET=aws
shift
;;
--restore-www)
RESTORE_WWW=true
BACKUP_APPS=false
RUN_CERT_RENEWAL=false
shift
;;
--restore-btcpay)
RESTORE_BTCPAY=true
BACKUP_BTCPAY=false
RUN_CERT_RENEWAL=false
shift
;;
--backup-certs)
BACKUP_CERTS=true
shift
;;
--stop)
STOP_SERVICES=true
shift
;;
--archive=*)
RESTORE_ARCHIVE="${i#*=}"
shift
;;
--domain=*)
DOMAIN_NAME="${i#*=}"
shift
;;
--update-btcpay)
UPDATE_BTCPAY=true
shift
;;
--skip-www)
USER_SKIP_WWW=true
shift
;;
--skip-btcpay)
USER_SKIP_BTCPAY=true
shift
;;
--backup-ghost)
BACKUP_APPS=true
shift
;;
--backup-btcpay)
BACKUP_BTCPAY=true
shift
;;
--migrate-www)
MIGRATE_WWW=true
RUN_CERT_RENEWAL=false
shift
;;
--migrate-btcpay)
MIGRATE_BTCPAY=true
RUN_CERT_RENEWAL=false
shift
;;
--renew-certs)
RUN_CERT_RENEWAL=true
shift
;;
--reconfigure-btcpay)
RECONFIGURE_BTCPAY_SERVER=true
shift
;;
*)
echo "Unexpected option: $1"
exit 1
;;
esac
done
# set up our default paths.
source ./defaults.sh
export CACHES_DIR="$HOME/ss-cache"
export DOMAIN_NAME="$DOMAIN_NAME"
export REGISTRY_DOCKER_IMAGE="registry:2"
export RESTORE_ARCHIVE="$RESTORE_ARCHIVE"
export RESTORE_WWW="$RESTORE_WWW"
export STOP_SERVICES="$STOP_SERVICES"
export BACKUP_CERTS="$BACKUP_CERTS"
export BACKUP_APPS="$BACKUP_APPS"
export RESTORE_BTCPAY="$RESTORE_BTCPAY"
export BACKUP_BTCPAY="$BACKUP_BTCPAY"
export MIGRATE_WWW="$MIGRATE_WWW"
export MIGRATE_BTCPAY="$MIGRATE_BTCPAY"
export RUN_CERT_RENEWAL="$RUN_CERT_RENEWAL"
if [ "$VPS_HOSTING_TARGET" = aws ]; then
if [ -z "$DOMAIN_NAME" ]; then
echo "ERROR: Please specify a domain name with --domain= when using --aws."
exit 1
fi
CLUSTER_NAME="docker-machine"
fi
export CLUSTER_NAME="$CLUSTER_NAME"
export CLUSTER_PATH="$CLUSTERS_DIR/$CLUSTER_NAME"
# ensure our cluster path is created.
mkdir -p "$CLUSTER_PATH"
# if an authorized_keys file does not exist, we'll stub one out with the current user.
# add additional id_rsa.pub entries manually for more administrative logins.
if [ ! -f "$CLUSTER_PATH/authorized_keys" ]; then
cat "$SSH_HOME/id_rsa.pub" >> "$CLUSTER_PATH/authorized_keys"
echo "INFO: Sovereign Stack just stubbed out '$CLUSTER_PATH/authorized_keys'. Go update it."
echo " Add ssh pubkeys for your various management machines, if any."
echo " By default we added your main ssh pubkey: '$SSH_HOME/id_rsa.pub'."
exit 1
fi
if [ "$VPS_HOSTING_TARGET" = lxd ]; then
CLUSTER_DEFINITION="$CLUSTER_PATH/cluster_definition"
export CLUSTER_DEFINITION="$CLUSTER_DEFINITION"
#########################################
if [ ! -f "$CLUSTER_DEFINITION" ]; then
echo "ERROR: The cluster definition could not be found. You may need to re-run 'ss-cluster create'."
exit 1
fi
source "$CLUSTER_DEFINITION"
###########################3
# # This section is done to the management machine. We deploy a registry pull through cache on port 5000
# if ! docker volume list | grep -q registry_data; then
# docker volume create registry_data
# fi
# if the registry URL isn't defined, then we just use the upstream dockerhub.
# recommended to run a registry cache on your management machine though.
if [ -n "$REGISTRY_URL" ]; then
cat > "$CLUSTER_PATH/registry.yml" <<EOL
version: 0.1
http:
addr: 0.0.0.0:5000
host: ${REGISTRY_URL}
proxy:
remoteurl: ${REGISTRY_URL}
username: ${REGISTRY_USERNAME}
password: ${REGISTRY_PASSWORD}
EOL
# enable docker swarm mode so we can support docker stacks.
if docker info | grep -q "Swarm: inactive"; then
docker swarm init
fi
mkdir -p "${CACHES_DIR}/registry_images"
# run a docker registry pull through cache on the management machine.
if [ "$DEPLOY_MGMT_REGISTRY" = true ]; then
if ! docker stack list | grep -q registry; then
docker stack deploy -c management/registry_mirror.yml registry
fi
fi
fi
fi
# this is our password generation mechanism. Relying on GPG for secure password generation
function new_pass {
gpg --gen-random --armor 1 25
}
function instantiate_vms {
export VPS_HOSTING_TARGET="$VPS_HOSTING_TARGET"
export BTC_CHAIN="$BTC_CHAIN"
export UPDATE_BTCPAY="$UPDATE_BTCPAY"
export RECONFIGURE_BTCPAY_SERVER="$RECONFIGURE_BTCPAY_SERVER"
# iterate over all our server endpoints and provision them if needed.
# www
VPS_HOSTNAME=
for VIRTUAL_MACHINE in www btcpayserver; do
FQDN=
export SITE_PATH="$SITES_PATH/$DOMAIN_NAME"
source "$SITE_PATH/site_definition"
source ./domain_env.sh
# VALIDATE THE INPUT from the ENVFILE
if [ -z "$DOMAIN_NAME" ]; then
echo "ERROR: DOMAIN_NAME not specified. Use the --domain= option."
exit 1
fi
if [ "$VPS_HOSTING_TARGET" = lxd ]; then
# first let's get the DISK_TO_USE and DATA_PLANE_MACVLAN_INTERFACE from the ss-config
# which is set up during LXD cluster creation ss-cluster.
LXD_SS_CONFIG_LINE=
if lxc network list --format csv | grep lxdbrSS | grep ss-config; then
LXD_SS_CONFIG_LINE="$(lxc network list --format csv | grep lxdbrSS | grep ss-config)"
fi
if [ -z "$LXD_SS_CONFIG_LINE" ]; then
echo "ERROR: the MACVLAN interface has not been specified. You may need to run ss-cluster again."
exit 1
fi
CONFIG_ITEMS="$(echo "$LXD_SS_CONFIG_LINE" | awk -F'"' '{print $2}')"
DATA_PLANE_MACVLAN_INTERFACE="$(echo "$CONFIG_ITEMS" | cut -d ',' -f2)"
DISK_TO_USE="$(echo "$CONFIG_ITEMS" | cut -d ',' -f3)"
export DATA_PLANE_MACVLAN_INTERFACE="$DATA_PLANE_MACVLAN_INTERFACE"
export DISK_TO_USE="$DISK_TO_USE"
./deployment/create_lxc_base.sh
fi
export MAC_ADDRESS_TO_PROVISION=
export VPS_HOSTNAME="$VPS_HOSTNAME"
export FQDN="$VPS_HOSTNAME.$DOMAIN_NAME"
# ensure the admin has set the MAC address for the base image.
if [ -z "$SOVEREIGN_STACK_MAC_ADDRESS" ]; then
echo "ERROR: SOVEREIGN_STACK_MAC_ADDRESS is undefined. Check your project definition."
exit 1
fi
DDNS_HOST=
MIGRATE_VPS=false
if [ "$VIRTUAL_MACHINE" = www ]; then
if [ "$DEPLOY_WWW_SERVER" = false ] || [ "$USER_SKIP_WWW" = true ]; then
continue
fi
VPS_HOSTNAME="$WWW_HOSTNAME"
MAC_ADDRESS_TO_PROVISION="$WWW_SERVER_MAC_ADDRESS"
DDNS_HOST="$WWW_HOSTNAME"
ROOT_DISK_SIZE_GB="$((ROOT_DISK_SIZE_GB + NEXTCLOUD_SPACE_GB))"
if [ "$MIGRATE_WWW" = true ]; then
MIGRATE_VPS=true
fi
elif [ "$VIRTUAL_MACHINE" = btcpayserver ] || [ "$USER_SKIP_BTCPAY" = true ]; then
if [ "$DEPLOY_BTCPAY_SERVER" = false ]; then
continue
fi
DDNS_HOST="$BTCPAY_HOSTNAME"
VPS_HOSTNAME="$BTCPAY_HOSTNAME"
MAC_ADDRESS_TO_PROVISION="$BTCPAYSERVER_MAC_ADDRESS"
if [ "$BTC_CHAIN" = mainnet ]; then
ROOT_DISK_SIZE_GB=150
elif [ "$BTC_CHAIN" = testnet ]; then
ROOT_DISK_SIZE_GB=70
fi
if [ "$MIGRATE_BTCPAY" = true ]; then
MIGRATE_VPS=true
fi
elif [ "$VIRTUAL_MACHINE" = "sovereign-stack" ]; then
DDNS_HOST="sovereign-stack-base"
ROOT_DISK_SIZE_GB=8
else
echo "ERROR: VIRTUAL_MACHINE not within allowable bounds."
exit
fi
export DDNS_HOST="$DDNS_HOST"
export FQDN="$DDNS_HOST.$DOMAIN_NAME"
export LXD_VM_NAME="${FQDN//./-}"
export VIRTUAL_MACHINE="$VIRTUAL_MACHINE"
export REMOTE_CERT_DIR="$REMOTE_CERT_BASE_DIR/$FQDN"
export MAC_ADDRESS_TO_PROVISION="$MAC_ADDRESS_TO_PROVISION"
# This next section of if statements is our sanity checking area.
if [ "$VPS_HOSTING_TARGET" = aws ]; then
# we require DDNS on AWS to set the public DNS to the right host.
if [ -z "$DDNS_PASSWORD" ]; then
echo "ERROR: Ensure DDNS_PASSWORD is configured in your site_definition."
exit 1
fi
fi
MACHINE_EXISTS=false
if [ "$VPS_HOSTING_TARGET" = aws ] && docker-machine ls -q | grep -q "$FQDN"; then
MACHINE_EXISTS=true
fi
if [ "$VPS_HOSTING_TARGET" = lxd ] && lxc list --format csv | grep -q "$FQDN"; then
MACHINE_EXISTS=true
fi
if [ "$MACHINE_EXISTS" = true ]; then
# we delete the machine if the user has directed us to
if [ "$MIGRATE_VPS" = true ]; then
# if the RESTORE_ARCHIVE is not set, then
if [ -z "$RESTORE_ARCHIVE" ]; then
RESTORE_ARCHIVE="$LOCAL_BACKUP_PATH/$UNIX_BACKUP_TIMESTAMP.tar.gz"
fi
# get a backup of the machine. This is what we restore to the new VPS.
echo "INFO: Machine exists. Since we're going to delete it, let's grab a backup. "
echo " We don't need to restore services since we're deleting it."
./deployment/deploy_vms.sh
# delete the remote VPS.
if [ "$VPS_HOSTING_TARGET" = aws ]; then
RESPONSE=
read -r -p "Do you want to continue with deleting '$FQDN' (y/n)": RESPONSE
if [ "$RESPONSE" = y ]; then
docker-machine rm -f "$FQDN"
else
echo "STOPPING the migration. User entered something other than 'y'."
exit 1
fi
elif [ "$VPS_HOSTING_TARGET" = lxd ]; then
lxc delete --force "$LXD_VM_NAME"
fi
# Then we run the script again to re-instantiate a new VPS, restoring all user data
# if restore directory doesn't exist, then we end up with a new site.
echo "INFO: Recreating the remote VPS then restoring user data."
sleep 2
./deployment/deploy_vms.sh
else
./deployment/deploy_vms.sh
fi
else
if [ "$MIGRATE_VPS" = true ]; then
echo "INFO: User has indicated to delete the machine, but it doesn't exist."
echo " Going to create it anyway."
fi
# The machine does not exist. Let's bring it into existence, restoring from latest backup.
echo "Machine does not exist."
./deployment/deploy_vms.sh
fi
# if the local docker client isn't logged in, do so;
# this helps prevent docker pull errors, since Docker Hub throttles unauthenticated pulls.
if [ ! -f "$HOME/.docker/config.json" ]; then
echo "$REGISTRY_PASSWORD" | docker login --username "$REGISTRY_USERNAME" --password-stdin
fi
# this tells our local docker client to target the remote endpoint via SSH
export DOCKER_HOST="ssh://ubuntu@$PRIMARY_WWW_FQDN"
# enable docker swarm mode so we can support docker stacks.
if docker info | grep -q "Swarm: inactive"; then
docker swarm init --advertise-addr enp6s0
fi
done
}
function stub_site_definition {
mkdir -p "$SITE_PATH" "$PROJECT_PATH/sites"
# create a symlink from the CLUSTERPATH/sites/DOMAIN_NAME to the ss-sites/domain name
if [ ! -d "$PROJECT_PATH/sites/$DOMAIN_NAME" ]; then
ln -s "$SITE_PATH" "$PROJECT_PATH/sites/$DOMAIN_NAME"
fi
if [ ! -f "$SITE_PATH/site_definition" ]; then
# check to see if the env file exists; exit if not.
SITE_DEFINITION_PATH="$SITE_PATH/site_definition"
if [ ! -f "$SITE_DEFINITION_PATH" ]; then
# stub out a site_definition with new passwords.
cat >"$SITE_DEFINITION_PATH" <<EOL
#!/bin/bash
export DOMAIN_NAME="${DOMAIN_NAME}"
export SITE_LANGUAGE_CODES="en"
export DUPLICITY_BACKUP_PASSPHRASE="$(new_pass)"
#export BTCPAY_HOSTNAME_IN_CERT="store"
export DEPLOY_GHOST=true
export DEPLOY_NEXTCLOUD=false
export DEPLOY_NOSTR_RELAY=true
export NOSTR_ACCOUNT_PUBKEY="NOSTR_IDENTITY_PUBKEY_GOES_HERE"
export DEPLOY_GITEA=false
#export DEPLOY_ONION_SITE=false
export GHOST_MYSQL_PASSWORD="$(new_pass)"
export GHOST_MYSQL_ROOT_PASSWORD="$(new_pass)"
export NEXTCLOUD_MYSQL_PASSWORD="$(new_pass)"
export NEXTCLOUD_MYSQL_ROOT_PASSWORD="$(new_pass)"
export GITEA_MYSQL_PASSWORD="$(new_pass)"
export GITEA_MYSQL_ROOT_PASSWORD="$(new_pass)"
EOL
chmod 0744 "$SITE_DEFINITION_PATH"
echo "INFO: we stubbed a new site_definition for you at '$SITE_DEFINITION_PATH'. Go update it yo!"
exit 1
fi
fi
}
function stub_project_definition {
# check to see if the env file exists; exit if not.
PROJECT_DEFINITION_PATH="$PROJECT_PATH/project_definition"
if [ ! -f "$PROJECT_DEFINITION_PATH" ]; then
# stub out a project_definition.
cat >"$PROJECT_DEFINITION_PATH" <<EOL
#!/bin/bash
# see https://www.sovereign-stack.org/project-definition for more info.
export DEPLOY_WWW_SERVER=true
export WWW_SERVER_MAC_ADDRESS="CHANGE_ME_REQUIRED"
export DEPLOY_BTCPAY_SERVER=true
export BTCPAYSERVER_MAC_ADDRESS="CHANGE_ME_REQUIRED"
# export BTC_CHAIN=mainnet
export PRIMARY_DOMAIN="CHANGE_ME"
export OTHER_SITES_LIST=""
EOL
chmod 0744 "$PROJECT_DEFINITION_PATH"
echo "INFO: we stubbed a new project_definition for you at '$PROJECT_DEFINITION_PATH'. Go update it yo!"
echo "INFO: Learn more at https://www.sovereign-stack.org/project-definitions/"
exit 1
fi
# source the project definition.
source "$PROJECT_DEFINITION_PATH"
}
# let's iterate over the user-supplied domain list and provision each domain.
if [ "$VPS_HOSTING_TARGET" = lxd ]; then
CURRENT_PROJECT="$(lxc info | grep "project:" | awk '{print $2}')"
PROJECT_PATH="$PROJECTS_DIR/$PROJECT_NAME"
mkdir -p "$PROJECT_PATH" "$CLUSTER_PATH/projects"
export PROJECT_PATH="$PROJECT_PATH"
# create a symlink from ./clusterpath/projects/project
if [ ! -d "$CLUSTER_PATH/projects/$PROJECT_NAME" ]; then
ln -s "$PROJECT_PATH" "$CLUSTER_PATH/projects/$PROJECT_NAME"
fi
# check if we need to provision a new lxc project.
if [ "$PROJECT_NAME" != "$CURRENT_PROJECT" ]; then
if ! lxc project list | grep -q "$PROJECT_NAME"; then
echo "INFO: The lxd project specified in the cluster_definition did not exist. We'll create one!"
lxc project create "$PROJECT_NAME"
fi
echo "INFO: switching to lxd project '$PROJECT_NAME'."
lxc project switch "$PROJECT_NAME"
fi
# stub out the project definition if needed.
stub_project_definition
# the DOMAIN_LIST is a complete list of all our domains. We often iterate over this list.
export DOMAIN_LIST="${PRIMARY_DOMAIN},${OTHER_SITES_LIST}"
export DOMAIN_COUNT=$(("$(echo $DOMAIN_LIST | tr -cd , | wc -c)"+1))
# let's provision our primary domain first.
export DOMAIN_NAME="$PRIMARY_DOMAIN"
export SITE_PATH="$SITES_PATH/$DOMAIN_NAME"
export PRIMARY_WWW_FQDN="$WWW_HOSTNAME.$DOMAIN_NAME"
stub_site_definition
# bring the vms up under the primary domain name.
instantiate_vms
# let's stub out the rest of our site definitions, if any.
for DOMAIN_NAME in ${OTHER_SITES_LIST//,/ }; do
export DOMAIN_NAME="$DOMAIN_NAME"
export SITE_PATH="$SITES_PATH/$DOMAIN_NAME"
# stub out the site_definition if it doesn't exist.
stub_site_definition
done
# now let's run the www and btcpay-specific provisioning scripts.
bash -c "./deployment/www/go.sh"
bash -c "./deployment/btcpayserver/go.sh"
elif [ "$VPS_HOSTING_TARGET" = aws ]; then
stub_site_definition
# if we're on AWS, we can just provision each system separately.
run_domain
fi

deployment/base.sh, new executable file (9 changed lines)

@@ -0,0 +1,9 @@
#!/bin/bash
# The base VM image.
export INCUS_UBUNTU_BASE_VERSION="jammy"
export BASE_IMAGE_VM_NAME="ss-base-${INCUS_UBUNTU_BASE_VERSION//./-}"
export BASE_INCUS_IMAGE="ubuntu/$INCUS_UBUNTU_BASE_VERSION/cloud"
WEEK_NUMBER=$(date +%U)
export UBUNTU_BASE_IMAGE_NAME="ss-ubuntu-${INCUS_UBUNTU_BASE_VERSION//./-}"
export DOCKER_BASE_IMAGE_NAME="ss-docker-${INCUS_UBUNTU_BASE_VERSION//./-}-$WEEK_NUMBER"
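
To make the naming convention concrete, sourcing this new file yields values along these lines (a sketch; the week number comes from date +%U at run time, so the 07 suffix below is only an example):

source ./deployment/base.sh
echo "$BASE_INCUS_IMAGE"        # ubuntu/jammy/cloud
echo "$BASE_IMAGE_VM_NAME"      # ss-base-jammy
echo "$UBUNTU_BASE_IMAGE_NAME"  # ss-ubuntu-jammy
echo "$DOCKER_BASE_IMAGE_NAME"  # e.g. ss-docker-jammy-07 (07 = current week number)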


@@ -1,22 +0,0 @@
#!/bin/bash
set -exo
cd "$(dirname "$0")"
# take the services down, create a backup archive, then pull it down.
# the script executed here from the BTCPAY repo will automatically take services down
# and bring them back up.
ssh "$FQDN" "mkdir -p $REMOTE_HOME/backups; cd $REMOTE_HOME/; sudo BTCPAY_BASE_DIRECTORY=$REMOTE_HOME bash -c $BTCPAY_SERVER_APPPATH/btcpay-down.sh"
# TODO enable encrypted archives
# TODO switch to btcpay-backup.sh when on LXD fully.
scp ./remote_scripts/btcpay-backup.sh "$FQDN:$REMOTE_HOME/btcpay-backup.sh"
ssh "$FQDN" "sudo cp $REMOTE_HOME/btcpay-backup.sh $BTCPAY_SERVER_APPPATH/btcpay-backup.sh && sudo chmod 0755 $BTCPAY_SERVER_APPPATH/btcpay-backup.sh"
ssh "$FQDN" "cd $REMOTE_HOME/; sudo BTCPAY_BASE_DIRECTORY=$REMOTE_HOME BTCPAY_DOCKER_COMPOSE=$REMOTE_HOME/btcpayserver-docker/Generated/docker-compose.generated.yml bash -c $BTCPAY_SERVER_APPPATH/btcpay-backup.sh"
# next we pull the resulting backup archive down to our management machine.
ssh "$FQDN" "sudo cp /var/lib/docker/volumes/backup_datadir/_data/backup.tar.gz $REMOTE_HOME/backups/btcpay.tar.gz"
ssh "$FQDN" "sudo chown ubuntu:ubuntu $REMOTE_HOME/backups/btcpay.tar.gz"
scp "$FQDN:$REMOTE_HOME/backups/btcpay.tar.gz" "$LOCAL_BACKUP_PATH/$1.tar.gz"


@@ -1,2 +0,0 @@
# we append this text to the btcpay server /home/ubuntu/.bashrc so
# logged in users have more common access to the variou


@@ -1,72 +0,0 @@
#!/bin/bash
set -eux
cd "$(dirname "$0")"
export DOCKER_HOST="ssh://ubuntu@$BTCPAY_FQDN"
OPEN_URL=true
RUN_SERVICES=true
# we will re-run the btcpayserver provisioning scripts if directed to do so.
# if an update does occur, we grab another backup.
if [ "$UPDATE_BTCPAY" = true ]; then
# run the update.
ssh "$FQDN" "bash -c $BTCPAY_SERVER_APPPATH/btcpay-down.sh"
# btcpay-update.sh brings services back up, but does not take them down.
ssh "$FQDN" "sudo bash -c $BTCPAY_SERVER_APPPATH/btcpay-update.sh"
sleep 20
elif [ "$RESTORE_BTCPAY" = true ]; then
# run the update.
ssh "$FQDN" "bash -c $BTCPAY_SERVER_APPPATH/btcpay-down.sh"
sleep 10
./restore.sh
RUN_SERVICES=true
OPEN_URL=true
# if this is a new VM from a migration procedure, then we can now run setup.
if [ "$MIGRATE_BTCPAY" = true ]; then
./stub_btcpay_setup.sh
fi
elif [ "$RECONFIGURE_BTCPAY_SERVER" == true ]; then
# the administrator may have indicated a reconfig;
# if so, we re-run setup script.
./stub_btcpay_setup.sh
RUN_SERVICES=true
OPEN_URL=true
fi
# if the script gets this far, then we grab a regular backup.
if [ "$BACKUP_BTCPAY" = true ]; then
# we just grab a regular backup
./backup.sh "$UNIX_BACKUP_TIMESTAMP"
fi
if [ "$RUN_SERVICES" = true ]; then
# The default is to resume services, though admin may want to keep services off (eg., for a migration)
# we bring the services back up by default.
ssh "$FQDN" "bash -c $BTCPAY_SERVER_APPPATH/btcpay-up.sh"
OPEN_URL=true
fi
if [ "$OPEN_URL" = true ]; then
if [ "$VPS_HOSTING_TARGET" = lxd ]; then
if wait-for-it -t 5 "$PRIMARY_WWW_FQDN:80"; then
xdg-open "http://$PRIMARY_WWW_FQDN" > /dev/null 2>&1
fi
else
if wait-for-it -t 5 "$FQDN:443"; then
xdg-open "https://$FQDN" > /dev/null 2>&1
fi
fi
fi


@@ -1,117 +0,0 @@
#!/bin/bash -e
set -o pipefail -o errexit -x
# Please be aware of these important issues:
#
# - Old channel state is toxic and you can lose all your funds, if you or someone
# else closes a channel based on the backup with old state - and the state changes
# often! If you publish an old state (say from yesterday's backup) on chain, you
# WILL LOSE ALL YOUR FUNDS IN A CHANNEL, because the counterparty will publish a
# revocation key!
if [ "$(id -u)" != "0" ]; then
printf "\n🚨 This script must be run as root.\n"
printf "➡️ Use the command 'sudo su -' (include the trailing hyphen) and try again.\n\n"
exit 1
fi
# preparation
docker_dir=$(docker volume inspect generated_btcpay_datadir --format="{{.Mountpoint}}" | sed -e "s%/volumes/.*%%g")
dbdump_name=postgres.sql.gz
btcpay_dir="$BTCPAY_BASE_DIRECTORY/btcpayserver-docker"
backup_dir="$docker_dir/volumes/backup_datadir/_data"
dbdump_path="$docker_dir/$dbdump_name"
backup_path="$backup_dir/backup.tar.gz"
# ensure backup dir exists
if [ ! -d "$backup_dir" ]; then
mkdir -p "$backup_dir"
fi
cd "$btcpay_dir"
. helpers.sh
dbcontainer=$(docker ps -a -q -f "name=postgres_1")
if [ -z "$dbcontainer" ]; then
printf "\n"
echo " Database container is not up and running. Starting BTCPay Server …"
docker volume create generated_postgres_datadir
docker-compose -f "$BTCPAY_DOCKER_COMPOSE" up -d postgres
printf "\n"
dbcontainer=$(docker ps -a -q -f "name=postgres_1")
if [ -z "$dbcontainer" ]; then
echo "🚨 Database container could not be started or found."
exit 1
fi
fi
printf "\n"
echo " Dumping database …"
{
docker exec "$dbcontainer" pg_dumpall -c -U postgres | gzip > "$dbdump_path"
echo "✅ Database dump done."
} || {
echo "🚨 Dumping failed. Please check the error message above."
exit 1
}
printf "\n Stopping BTCPay Server …\n\n"
btcpay_down
printf "\n"
cd $docker_dir
echo " Archiving files in $(pwd)"
{
tar \
--exclude="volumes/backup_datadir" \
--exclude="volumes/generated_bitcoin_datadir/_data/blocks" \
--exclude="volumes/generated_bitcoin_datadir/_data/chainstate" \
--exclude="volumes/generated_bitcoin_datadir/_data/debug.log" \
--exclude="volumes/generated_bitcoin_datadir/_data/testnet3/blocks" \
--exclude="volumes/generated_bitcoin_datadir/_data/testnet3/chainstate" \
--exclude="volumes/generated_bitcoin_datadir/_data/testnet3/debug.log" \
--exclude="volumes/generated_bitcoin_datadir/_data/regtest/blocks" \
--exclude="volumes/generated_bitcoin_datadir/_data/regtest/chainstate" \
--exclude="volumes/generated_bitcoin_datadir/_data/regtest/debug.log" \
--exclude="volumes/generated_postgres_datadir" \
--exclude="volumes/generated_tor_relay_datadir" \
--exclude="volumes/generated_clightning_bitcoin_datadir/_data/lightning-rpc" \
--exclude="**/logs/*" \
-cvzf "$backup_path" "$dbdump_name" volumes/generated_*
echo "✅ Archive done."
if [ ! -z "$BTCPAY_BACKUP_PASSPHRASE" ]; then
printf "\n"
echo "🔐 BTCPAY_BACKUP_PASSPHRASE is set, the backup will be encrypted."
{
gpg -o "$backup_path.gpg" --batch --yes -c --passphrase "$BTCPAY_BACKUP_PASSPHRASE" $backup_path
rm "$backup_path"
backup_path="$backup_path.gpg"
echo "✅ Encryption done."
} || {
echo "🚨 Encrypting failed. Please check the error message above."
printf "\n Restarting BTCPay Server …\n\n"
cd "$btcpay_dir"
#btcpay_up
exit 1
}
fi
} || {
echo "🚨 Archiving failed. Please check the error message above."
printf "\n Restarting BTCPay Server …\n\n"
cd $btcpay_dir
#btcpay_up
exit 1
}
printf "\n Restarting BTCPay Server …\n\n"
cd "$btcpay_dir"
#btcpay_up
printf "\n Cleaning up …\n\n"
rm "$dbdump_path"
printf "✅ Backup done => $backup_path\n\n"


@ -1,118 +0,0 @@
#!/bin/bash -e
set -o pipefail -o errexit -x
if [ "$(id -u)" != "0" ]; then
echo "ERROR: This script must be run as root."
echo "➡️ Use the command 'sudo su -' (include the trailing hypen) and try again."
exit 1
fi
backup_path=$1
if [ -z "$backup_path" ]; then
echo "ERROR: Usage: btcpay-restore.sh /path/to/backup.tar.gz"
exit 1
fi
if [ ! -f "$backup_path" ]; then
echo "ERROR: $backup_path does not exist."
exit 1
fi
if [[ "$backup_path" == *.gpg && -z "$BTCPAY_BACKUP_PASSPHRASE" ]]; then
echo "INFO: $backup_path is encrypted. Please provide the passphrase to decrypt it."
echo "INFO: Usage: BTCPAY_BACKUP_PASSPHRASE=t0pSeCrEt btcpay-restore.sh /path/to/backup.tar.gz.gpg"
exit 1
fi
# preparation
docker_dir=$(docker volume inspect generated_btcpay_datadir --format="{{.Mountpoint}}" | sed -e "s%/volumes/.*%%g")
restore_dir="$docker_dir/volumes/backup_datadir/_data/restore"
dbdump_name=postgres.sql.gz
btcpay_dir="$BTCPAY_BASE_DIRECTORY/btcpayserver-docker"
# ensure clean restore dir
echo "INFO: Cleaning restore directory $restore_dir."
rm -rf "$restore_dir"
mkdir -p "$restore_dir"
if [[ "$backup_path" == *.gpg ]]; then
echo "🔐 Decrypting backup file …"
{
gpg -o "${backup_path%.*}" --batch --yes --passphrase "$BTCPAY_BACKUP_PASSPHRASE" -d "$backup_path"
backup_path="${backup_path%.*}"
echo "SUCESS: Decryption done."
} || {
echo "🚨 Decryption failed. Please check the error message above."
exit 1
}
fi
cd "$restore_dir"
echo " Extracting files in $(pwd)"
tar -h -xvf "$backup_path" -C "$restore_dir"
# basic control checks
if [ ! -f "$dbdump_name" ]; then
echo "ERROR: '$dbdump_name' does not exist."
exit 1
fi
if [ ! -d "volumes" ]; then
echo "ERROR: volumes directory does not exist."
exit 1
fi
cd "$btcpay_dir"
. helpers.sh
#echo "INFO: Stopping BTCPay Server."
#btcpay_down
cd "$restore_dir"
{
echo "INFO: Restoring volumes."
# ensure volumes dir exists
if [ ! -d "$docker_dir/volumes" ]; then
mkdir -p "$docker_dir/volumes"
fi
# copy volume directories over
cp -r volumes/* "$docker_dir/volumes/"
# ensure datadirs excluded in backup exist
mkdir -p "$docker_dir/volumes/generated_postgres_datadir/_data"
echo "✅ Volume restore done."
} || {
echo "🚨 Restoring volumes failed. Please check the error message above."
exit 1
}
{
echo "INFO: Starting database container"
docker-compose -f "$BTCPAY_DOCKER_COMPOSE" up -d postgres
dbcontainer=$(docker ps -a -q -f "name=postgres")
if [ -z "$dbcontainer" ]; then
echo "ERROR: Database container could not be started or found."
exit 1
fi
} || {
echo "ERROR: Starting database container failed. Please check the error message above."
exit 1
}
cd "$restore_dir"
{
echo "INFO: Restoring database..."
gunzip -c "$dbdump_name" | docker exec -i "$dbcontainer" psql -U postgres postgres -a
echo "SUCCESS: Database restore done."
} || {
echo "ERROR: Restoring database failed. Please check the error message above."
exit 1
}
echo "INFO: Cleaning up."
rm -rf "$restore_dir"
echo "SUCCESS: Restore done"


@ -1,27 +0,0 @@
#!/bin/bash
set -ex
cd "$(dirname "$0")"
if [ -f "$RESTORE_ARCHIVE" ]; then
# push the restoration archive to the remote server
echo "INFO: Restoring BTCPAY Server: $RESTORE_ARCHIVE"
ssh "$FQDN" mkdir -p "$REMOTE_BACKUP_PATH"
REMOTE_BTCPAY_ARCHIVE_PATH="$REMOTE_HOME/backups/btcpay.tar.gz"
scp "$RESTORE_ARCHIVE" "$FQDN:$REMOTE_BTCPAY_ARCHIVE_PATH"
# we clean up any old containers first before restoring.
ssh "$FQDN" docker system prune -f
# push the modified restore script to the remote directory, set permissions, and execute.
scp ./remote_scripts/btcpay-restore.sh "$FQDN:$REMOTE_HOME/btcpay-restore.sh"
ssh "$FQDN" "sudo mv $REMOTE_HOME/btcpay-restore.sh $BTCPAY_SERVER_APPPATH/btcpay-restore.sh && sudo chmod 0755 $BTCPAY_SERVER_APPPATH/btcpay-restore.sh"
ssh "$FQDN" "cd $REMOTE_HOME/; sudo BTCPAY_BASE_DIRECTORY=$REMOTE_HOME BTCPAY_DOCKER_COMPOSE=$REMOTE_HOME/btcpayserver-docker/Generated/docker-compose.generated.yml bash -c '$BTCPAY_SERVER_APPPATH/btcpay-restore.sh $REMOTE_BTCPAY_ARCHIVE_PATH'"
# now, we're going to take things down because apparently this needs to be re-executed.
ssh "$FQDN" "bash -c $BTCPAY_SERVER_APPPATH/btcpay-down.sh"
else
echo "ERROR: File does not exist."
exit 1
fi


@ -1,81 +0,0 @@
#!/bin/bash
set -ex
cd "$(dirname "$0")"
# export BTCPAY_FASTSYNC_ARCHIVE_FILENAME="utxo-snapshot-bitcoin-testnet-1445586.tar"
# BTCPAY_REMOTE_RESTORE_PATH="/var/lib/docker/volumes/generated_bitcoin_datadir/_data"
# This is the config for a basic proxy to the listening port 127.0.0.1:2368
# It also supports modern TLS, so SSL certs must be available.
cat > "$SITE_PATH/btcpay.sh" <<EOL
#!/bin/bash
set -ex
cd "\$(dirname "\$0")"
# wait for cloud-init to complete
while [ ! -f /var/lib/cloud/instance/boot-finished ]; do
sleep 1
done
if [ ! -d "btcpayserver-docker" ]; then
echo "cloning btcpayserver-docker";
git clone -b master https://github.com/btcpayserver/btcpayserver-docker btcpayserver-docker;
git config --global --add safe.directory /home/ubuntu/btcpayserver-docker
else
cd ./btcpayserver-docker
git pull
git pull --all --tags
cd -
fi
cd btcpayserver-docker
export BTCPAY_HOST="${BTCPAY_USER_FQDN}"
export NBITCOIN_NETWORK="${BTC_CHAIN}"
export LIGHTNING_ALIAS="${DOMAIN_NAME}"
export BTCPAYGEN_LIGHTNING="clightning"
export BTCPAYGEN_CRYPTO1="btc"
export BTCPAYGEN_ADDITIONAL_FRAGMENTS="opt-save-storage-s;opt-add-btctransmuter;opt-add-nostr-relay;"
export BTCPAYGEN_REVERSEPROXY="nginx"
export BTCPAY_ENABLE_SSH=false
export BTCPAY_BASE_DIRECTORY=${REMOTE_HOME}
EOL
# can add opt-add-tor-relay; in BTCPAYGEN_ADDITIONAL_FRAGMENTS
if [ "$VPS_HOSTING_TARGET" = lxd ]; then
cat >> "$SITE_PATH/btcpay.sh" <<EOL
export BTCPAYGEN_EXCLUDE_FRAGMENTS="nginx-https"
export REVERSEPROXY_DEFAULT_HOST="$BTCPAY_USER_FQDN"
EOL
elif [ "$VPS_HOSTING_TARGET" = aws ]; then
cat >> "$SITE_PATH/btcpay.sh" <<EOL
export BTCPAY_ADDITIONAL_HOSTS="${BTCPAY_ADDITIONAL_HOSTNAMES}"
export LETSENCRYPT_EMAIL="${CERTIFICATE_EMAIL_ADDRESS}"
EOL
fi
cat >> "$SITE_PATH/btcpay.sh" <<EOL
if [ "\$NBITCOIN_NETWORK" != regtest ]; then
# run fast_sync if it's not been done before.
if [ ! -f /home/ubuntu/fast_sync_completed ]; then
cd ./contrib/FastSync
./load-utxo-set.sh
touch /home/ubuntu/fast_sync_completed
cd -
fi
fi
# run the setup script.
. ./btcpay-setup.sh -i
EOL
# send the setup script to the remote machine.
scp "$SITE_PATH/btcpay.sh" "ubuntu@$FQDN:$REMOTE_HOME/btcpay_setup.sh"
ssh "$BTCPAY_FQDN" "chmod 0744 $REMOTE_HOME/btcpay_setup.sh"
ssh "$BTCPAY_FQDN" "sudo bash -c ./btcpay_setup.sh"
ssh "$BTCPAY_FQDN" "touch $REMOTE_HOME/btcpay.complete"

deployment/create_base.sh Executable file

@ -0,0 +1,102 @@
#!/bin/bash
set -exu
cd "$(dirname "$0")"
. ./base.sh
bash -c "./stub_profile.sh --incus-hostname=$BASE_IMAGE_VM_NAME"
if incus list -q --project default | grep -q "$BASE_IMAGE_VM_NAME" ; then
incus delete -f "$BASE_IMAGE_VM_NAME" --project default
fi
# let's download our base image.
if ! incus image list --format csv --columns l --project default | grep -q "$UBUNTU_BASE_IMAGE_NAME"; then
# copy the image down from canonical.
incus image copy "images:$BASE_INCUS_IMAGE" "$REMOTE_NAME": --alias "$UBUNTU_BASE_IMAGE_NAME" --public --vm --auto-update --target-project default
fi
# If the VM does exist, then we will delete it (so we can start fresh)
if incus list --format csv -q --project default | grep -q "$UBUNTU_BASE_IMAGE_NAME"; then
# if there's no snapshot, we dispense with the old image and try again.
if ! incus info "$BASE_IMAGE_VM_NAME" --project default | grep -q "$UBUNTU_BASE_IMAGE_NAME"; then
incus delete "$BASE_IMAGE_VM_NAME" --force --project default
ssh-keygen -f "$SSH_HOME/known_hosts" -R "$BASE_IMAGE_VM_NAME"
fi
else
if ! incus list --project default | grep -q "$BASE_IMAGE_VM_NAME"; then
# the base image is ubuntu:22.04.
script -q -c "incus init -q --profile=$BASE_IMAGE_VM_NAME $UBUNTU_BASE_IMAGE_NAME $BASE_IMAGE_VM_NAME --vm --project default" /dev/null
fi
if incus info "$BASE_IMAGE_VM_NAME" --project default | grep -q "Status: STOPPED"; then
# TODO move this sovereign-stack-base construction VM to separate dedicated IP
incus config set "$BASE_IMAGE_VM_NAME" --project default
incus start "$BASE_IMAGE_VM_NAME" --project default
sleep 15
fi
# for CHAIN in mainnet testnet; do
# for DATA in blocks chainstate; do
# incus storage volume attach ss-base "$CHAIN-$DATA" "$BASE_IMAGE_VM_NAME" "/home/ubuntu/bitcoin/$DATA"
# done
# done
if incus info "$BASE_IMAGE_VM_NAME" --project default | grep -q "Status: RUNNING"; then
while incus exec "$BASE_IMAGE_VM_NAME" --project default -- [ ! -f /var/lib/cloud/instance/boot-finished ]; do
sleep 1
done
# ensure the ssh service is listening at localhost
incus exec "$BASE_IMAGE_VM_NAME" --project default -- wait-for-it -t 100 127.0.0.1:22
# # If we have any chainstate or blocks in our SSME, let's push them to the
# # remote host as a zfs volume that way deployments can share a common history
# # of chainstate/blocks.
# for CHAIN in testnet mainnet; do
# for DATA in blocks chainstate; do
# # if the storage snapshot doesn't yet exist, create it.
# if ! incus storage volume list ss-base -q --format csv -c n | grep -q "$CHAIN-$DATA/snap0"; then
# DATA_PATH="/home/ubuntu/.ss/cache/bitcoin/$CHAIN/$DATA"
# if [ -d "$DATA_PATH" ]; then
# COMPLETE_FILE_PATH="$DATA_PATH/complete"
# if incus exec "$BASE_IMAGE_VM_NAME" -- [ ! -f "$COMPLETE_FILE_PATH" ]; then
# incus file push --recursive --project default "$DATA_PATH/" "$BASE_IMAGE_VM_NAME""$DATA_PATH/"
# incus exec "$BASE_IMAGE_VM_NAME" -- su ubuntu - bash -c "echo $(date) > $COMPLETE_FILE_PATH"
# incus exec "$BASE_IMAGE_VM_NAME" -- chown -R 999:999 "$DATA_PATH/$DATA"
# else
# echo "INFO: it appears as though $CHAIN/$DATA has already been initialized. Continuing."
# fi
# fi
# fi
# done
# done
# stop the VM and get a snapshot.
incus stop "$BASE_IMAGE_VM_NAME" --project default
fi
incus snapshot create "$BASE_IMAGE_VM_NAME" "$UBUNTU_BASE_IMAGE_NAME" --project default
fi
echo "INFO: Publishing '$BASE_IMAGE_VM_NAME' as image '$DOCKER_BASE_IMAGE_NAME'. Please wait."
incus publish -q --public "$BASE_IMAGE_VM_NAME/$UBUNTU_BASE_IMAGE_NAME" \
--project default --alias="$DOCKER_BASE_IMAGE_NAME" \
--compression none
echo "INFO: Success creating the base image. Deleting artifacts from the build process."
incus delete -f "$BASE_IMAGE_VM_NAME" --project default
# # now let's get a snapshot of each of the blocks/chainstate directories.
# for CHAIN in testnet mainnet; do
# for DATA in blocks chainstate; do
# if ! incus storage volume list ss-base -q --format csv -c n | grep -q "$CHAIN-$DATA/snap0"; then
# echo "INFO: Creating a snapshot 'ss-base/$CHAIN-$DATA/snap0'."
# incus storage volume snapshot ss-base --project default "$CHAIN-$DATA"
# fi
# done
# done
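After create_base.sh finishes, a quick sanity check (a sketch, run from the SSME shell against the active remote) is to confirm the published image alias exists; the alias is the DOCKER_BASE_IMAGE_NAME used in the publish step above:
# confirm the freshly published base image is present in the default project.
incus image list --format csv --project default | grep "$DOCKER_BASE_IMAGE_NAME"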


@ -1,61 +0,0 @@
#!/bin/bash
set -exu
cd "$(dirname "$0")"
./stub_lxc_profile.sh sovereign-stack
# create the default storage pool if necessary
if ! lxc storage list --format csv | grep -q "sovereign-stack"; then
if [ "$DISK_TO_USE" != loop ]; then
# we omit putting a size here, so LXD will consume the entire disk if '/dev/sdb' or partition if '/dev/sdb1'.
# TODO do some sanity/resource checking on DISK_TO_USE.
lxc storage create "sovereign-stack" zfs source="$DISK_TO_USE"
else
# if a disk is the default 'loop', then we create a zfs storage pool
# on top of the existing filesystem using a loop device, per LXD docs
lxc storage create "sovereign-stack" zfs
fi
fi
# If our template doesn't exist, we create one.
if ! lxc image list --format csv "$VM_NAME" | grep -q "$VM_NAME"; then
# If the lxc VM does exist, then we will delete it (so we can start fresh)
if lxc list -q --format csv | grep -q "$VM_NAME"; then
lxc delete "$VM_NAME" --force
# remove the ssh known endpoint else we get warnings.
ssh-keygen -f "$SSH_HOME/known_hosts" -R "$VM_NAME"
fi
# let's download our base image.
if ! lxc image list --format csv --columns l | grep -q "ubuntu-base"; then
# if the image doesn't exist, download it from Ubuntu's image server
# TODO see if we can fetch this file from a more censorship-resistant source, e.g., ipfs
# we don't really need to cache this locally since it gets continually updated upstream.
lxc image copy "images:$BASE_LXC_IMAGE" "$CLUSTER_NAME": --alias "ubuntu-base" --public --vm
fi
# this vm is used temporarily.
lxc init --profile="sovereign-stack" "ubuntu-base" "$VM_NAME" --vm
# let's PIN the HW address for now so we don't exhaust IP
# and so we can set DNS internally.
# TODO move this sovereign-stack-base construction VM to separate dedicated IP
lxc config set "$VM_NAME" "volatile.enp5s0.hwaddr=$SOVEREIGN_STACK_MAC_ADDRESS"
lxc start "$VM_NAME"
sleep 10
# let's wait for the LXC vm remote machine to get an IP address.
./wait_for_lxc_ip.sh "$VM_NAME"
# stop the VM and get a snapshot.
lxc stop "$VM_NAME"
lxc publish "$CLUSTER_NAME:$VM_NAME" --alias "$VM_NAME" --public
fi

deployment/deploy_vm.sh Executable file

@ -0,0 +1,122 @@
#!/bin/bash
set -exu
cd "$(dirname "$0")"
. ./base.sh
## This is a weird if clause since we need to LEFT-ALIGN the statement below.
SSH_STRING="Host ${FQDN}"
if ! grep -q "$SSH_STRING" "$SSH_HOME/config"; then
########## BEGIN
cat >> "$SSH_HOME/config" <<-EOF
${SSH_STRING}
HostName ${FQDN}
User ubuntu
EOF
###
fi
# if the machine doesn't exist, we create it.
if ! incus list --format csv | grep -q "$INCUS_VM_NAME"; then
# create a base image if needed and instantiate a VM.
if [ -z "$MAC_ADDRESS_TO_PROVISION" ]; then
echo "ERROR: You MUST define a MAC Address for all your machines in your project definition."
echo "INFO: IMPORTANT! You MUST have DHCP Reservations for these MAC addresses. You also need records established the DNS."
exit 1
fi
# TODO ensure we are only GROWING the volume--never shrinking per zfs volume docs.
BACKUP_DISK_SIZE_GB=
SSDATA_DISK_SIZE_GB=
DOCKER_DISK_SIZE_GB=
if [ "$VIRTUAL_MACHINE" = www ]; then
if [ "$SKIP_WWW_SERVER" = true ]; then
exit 0
fi
BACKUP_DISK_SIZE_GB="$WWW_BACKUP_DISK_SIZE_GB"
SSDATA_DISK_SIZE_GB="$WWW_SSDATA_DISK_SIZE_GB"
DOCKER_DISK_SIZE_GB="$WWW_DOCKER_DISK_SIZE_GB"
fi
if [ "$VIRTUAL_MACHINE" = btcpayserver ]; then
if [ "$SKIP_BTCPAY_SERVER" = true ]; then
exit 0
fi
BACKUP_DISK_SIZE_GB="$BTCPAYSERVER_BACKUP_DISK_SIZE_GB"
SSDATA_DISK_SIZE_GB="$BTCPAYSERVER_SSDATA_DISK_SIZE_GB"
DOCKER_DISK_SIZE_GB="$BTCPAYSERVER_DOCKER_DISK_SIZE_GB"
fi
SSDATA_VOLUME_NAME=
BACKUP_VOLUME_NAME=
if [ "$VIRTUAL_MACHINE" != lnplayserver ]; then
DOCKER_VOLUME_NAME="$VIRTUAL_MACHINE-docker"
if ! incus storage volume list ss-base | grep -q "$DOCKER_VOLUME_NAME"; then
incus storage volume create ss-base "$DOCKER_VOLUME_NAME" --type=block
incus storage volume set ss-base "$DOCKER_VOLUME_NAME" size="${DOCKER_DISK_SIZE_GB}GB"
fi
SSDATA_VOLUME_NAME="$VIRTUAL_MACHINE-ss-data"
if ! incus storage volume list ss-base | grep -q "$SSDATA_VOLUME_NAME"; then
incus storage volume create ss-base "$SSDATA_VOLUME_NAME" --type=filesystem
incus storage volume set ss-base "$SSDATA_VOLUME_NAME" size="${SSDATA_DISK_SIZE_GB}GB"
fi
BACKUP_VOLUME_NAME="$VIRTUAL_MACHINE-backup"
if ! incus storage volume list ss-base | grep -q "$BACKUP_VOLUME_NAME"; then
incus storage volume create ss-base "$BACKUP_VOLUME_NAME" --type=filesystem
incus storage volume set ss-base "$BACKUP_VOLUME_NAME" size="${BACKUP_DISK_SIZE_GB}GB"
fi
fi
bash -c "./stub_profile.sh --vm=$VIRTUAL_MACHINE --incus-hostname=$INCUS_VM_NAME --ss-volume-name=$SSDATA_VOLUME_NAME --backup-volume-name=$BACKUP_VOLUME_NAME"
INCUS_LNPLAYSERVER_IMAGE_NAME="lnplayserver-$DOMAIN_NAME"
if ! incus image list -q --format csv | grep -q "$INCUS_LNPLAYSERVER_IMAGE_NAME"; then
script -q -c "incus init -q $DOCKER_BASE_IMAGE_NAME $INCUS_VM_NAME --vm --profile=$INCUS_VM_NAME" /dev/null
elif [ "$VIRTUAL_MACHINE" = lnplayserver ]; then
script -q -c "incus init -q $INCUS_LNPLAYSERVER_IMAGE_NAME $INCUS_VM_NAME --vm --profile=$INCUS_VM_NAME" /dev/null
fi
# let's PIN the HW address for now so we don't exhaust IP
# and so we can set DNS internally.
incus config set "$INCUS_VM_NAME" "volatile.enp5s0.hwaddr=$MAC_ADDRESS_TO_PROVISION"
if [ "$VIRTUAL_MACHINE" != lnplayserver ]; then
# attach the docker block device.
incus storage volume attach ss-base "$DOCKER_VOLUME_NAME" "$INCUS_VM_NAME"
fi
# if [ "$VIRTUAL_MACHINE" = btcpayserver ]; then
# # attach any volumes
# for CHAIN in testnet mainnet; do
# for DATA in blocks chainstate; do
# MOUNT_PATH="/$CHAIN-$DATA"
# incus config device add "$INCUS_VM_NAME" "$CHAIN-$DATA" disk pool=ss-base source="$CHAIN-$DATA" path="$MOUNT_PATH"
# done
# done
# fi
incus start "$INCUS_VM_NAME"
sleep 15
bash -c "./wait_for_ip.sh --incus-name=$INCUS_VM_NAME"
# scan the remote machine and install its identity in our SSH known_hosts file.
ssh-keyscan -H "$FQDN" >> "$SSH_HOME/known_hosts"
if [ "$VIRTUAL_MACHINE" != lnplayserver ]; then
ssh "$FQDN" "sudo chown ubuntu:ubuntu $REMOTE_DATA_PATH"
ssh "$FQDN" "sudo chown -R ubuntu:ubuntu $REMOTE_BACKUP_PATH"
fi
fi
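Once deploy_vm.sh returns, the new instance and its attached volumes can be inspected with plain incus commands; a minimal sketch:
# confirm the VM is listed and its block/filesystem volumes exist on the ss-base pool.
incus list | grep "$INCUS_VM_NAME"
incus storage volume list ss-base | grep "$VIRTUAL_MACHINE"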


@ -1,73 +0,0 @@
#!/bin/bash
set -eux
cd "$(dirname "$0")"
# let's make sure we have an ssh keypair. We just use $SSH_HOME/id_rsa
# TODO convert this to an SSH private key held on Trezor. Thus a Trezor-T is required for
# login operations. This should be configurable of course.
if [ ! -f "$SSH_HOME/id_rsa" ]; then
# generate a new SSH key for the base vm image.
ssh-keygen -f "$SSH_HOME/id_rsa" -t ecdsa -b 521 -N ""
fi
## This is a weird if clause since we need to LEFT-ALIGN the statement below.
SSH_STRING="Host ${FQDN}"
if ! grep -q "$SSH_STRING" "$SSH_HOME/config"; then
########## BEGIN
cat >> "$SSH_HOME/config" <<-EOF
${SSH_STRING}
HostName ${FQDN}
User ubuntu
EOF
###
fi
function prepare_host {
# scan the remote machine and install its identity in our SSH known_hosts file.
ssh-keyscan -H -t ecdsa "$FQDN" >> "$SSH_HOME/known_hosts"
# create a directory to store backup archives. This is on all new vms.
ssh "$FQDN" mkdir -p "$REMOTE_HOME/backups"
# if this execution is for btcpayserver, then we run the stub/btcpay setup script
# but only if it hasn't been executed before.
if [ "$VIRTUAL_MACHINE" = btcpayserver ]; then
if [ "$(ssh "$BTCPAY_FQDN" [[ ! -f "$REMOTE_HOME/btcpay.complete" ]]; echo $?)" -eq 0 ]; then
./btcpayserver/stub_btcpay_setup.sh
fi
fi
}
# when set to true, this flag indicates that a new VPS was created during THIS script run.
if [ "$VPS_HOSTING_TARGET" = aws ]; then
# let's create the remote VPS if needed.
if ! docker-machine ls -q --filter name="$FQDN" | grep -q "$FQDN"; then
./provision_vps.sh
prepare_host
fi
elif [ "$VPS_HOSTING_TARGET" = lxd ]; then
ssh-keygen -f "$SSH_HOME/known_hosts" -R "$FQDN"
# if the machine doesn't exist, we create it.
if ! lxc list --format csv | grep -q "$LXD_VM_NAME"; then
# create a base image if needed and instantiate a VM.
if [ -z "$MAC_ADDRESS_TO_PROVISION" ]; then
echo "ERROR: You MUST define a MAC Address for all your machines by setting WWW_SERVER_MAC_ADDRESS, BTCPAYSERVER_MAC_ADDRESS in your site defintion."
echo "INFO: IMPORTANT! You MUST have DHCP Reservations for these MAC addresses. You also need static DNS entries."
exit 1
fi
./provision_lxc.sh
fi
prepare_host
fi


@ -0,0 +1,50 @@
#!/bin/bash
set -eu
# file paths
export SSH_HOME="$HOME/.ssh"
export PASS_HOME="$HOME/.password-store" #TODO
export SS_ROOT_PATH="$HOME/ss"
export REMOTES_PATH="$SS_ROOT_PATH/remotes"
export PROJECTS_PATH="$SS_ROOT_PATH/projects"
export SITES_PATH="$SS_ROOT_PATH/sites"
export INCUS_CONFIG_PATH="$SS_ROOT_PATH/incus"
export SS_CACHE_PATH="$SS_ROOT_PATH/cache"
export REMOTE_HOME="/home/ubuntu"
export REMOTE_DATA_PATH="$REMOTE_HOME/ss-data"
export REMOTE_DATA_PATH_LETSENCRYPT="$REMOTE_DATA_PATH/letsencrypt"
export REMOTE_BACKUP_PATH="$REMOTE_HOME/backups"
export BTCPAY_SERVER_APPPATH="$REMOTE_DATA_PATH/btcpayserver-docker"
export BITCOIN_CHAIN=regtest
# this space is for OS, docker images, etc
# values here are fine for regtest generally. Later scripts adjust
# these values based on testnet/mainnet
export WWW_SSDATA_DISK_SIZE_GB=20
export WWW_BACKUP_DISK_SIZE_GB=50
export WWW_DOCKER_DISK_SIZE_GB=30
export BTCPAYSERVER_SSDATA_DISK_SIZE_GB=20
export BTCPAYSERVER_BACKUP_DISK_SIZE_GB=20
export BTCPAYSERVER_DOCKER_DISK_SIZE_GB=30
export WWW_HOSTNAME="www"
export BTCPAY_SERVER_HOSTNAME="btcpayserver"
export LNPLAY_SERVER_HOSTNAME="lnplayserver"
export BTCPAY_HOSTNAME_IN_CERT="btcpay"
export NEXTCLOUD_HOSTNAME="nextcloud"
export GITEA_HOSTNAME="git"
export NOSTR_HOSTNAME="relay"
export REGISTRY_URL="https://index.docker.io/v1"
export BTCPAY_SERVER_CPU_COUNT="4"
export BTCPAY_SERVER_MEMORY_MB="4096"
export WWW_SERVER_CPU_COUNT="4"
export WWW_SERVER_MEMORY_MB="4096"
export LNPLAY_SERVER_CPU_COUNT="4"
export LNPLAY_SERVER_MEMORY_MB="4096"
export DOCKER_IMAGE_CACHE_FQDN="registry-1.docker.io"
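These exports are only defaults; project_env.sh (below) bumps the disk sizes for testnet and mainnet. A quick way to see what is currently in effect inside the SSME shell (a sketch):
# print the resource-related defaults currently exported.
env | grep -E 'CPU_COUNT|MEMORY_MB|DISK_SIZE_GB|BITCOIN_CHAIN'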

deployment/domain_list.sh Executable file

@ -0,0 +1,17 @@
#!/bin/bash
set -exu
# the DOMAIN_LIST is a complete list of all our domains. We often iterate over this list.
DOMAIN_LIST="${PRIMARY_DOMAIN}"
if [ -n "$OTHER_SITES_LIST" ]; then
DOMAIN_LIST="${DOMAIN_LIST},${OTHER_SITES_LIST}"
fi
export DOMAIN_LIST="$DOMAIN_LIST"
export DOMAIN_COUNT=$(("$(echo "$DOMAIN_LIST" | tr -cd , | wc -c)"+1))
export OTHER_SITES_LIST="$OTHER_SITES_LIST"
export PRIMARY_WWW_FQDN="$WWW_HOSTNAME.$PRIMARY_DOMAIN"
export BTCPAY_SERVER_FQDN="$BTCPAY_SERVER_HOSTNAME.$PRIMARY_DOMAIN"
export LNPLAY_SERVER_FQDN="$LNPLAY_SERVER_HOSTNAME.$PRIMARY_DOMAIN"
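As a concrete illustration with hypothetical inputs PRIMARY_DOMAIN=domain.tld and OTHER_SITES_LIST=site2.tld,site3.tld, the script exports:
# DOMAIN_LIST=domain.tld,site2.tld,site3.tld
# DOMAIN_COUNT=3
# PRIMARY_WWW_FQDN=www.domain.tld
# BTCPAY_SERVER_FQDN=btcpayserver.domain.tld
# LNPLAY_SERVER_FQDN=lnplayserver.domain.tld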

deployment/down.sh Executable file

@ -0,0 +1,137 @@
#!/bin/bash
# https://www.sovereign-stack.org/ss-down/
set -exu
cd "$(dirname "$0")"
if incus remote get-default -q | grep -q "local"; then
echo "ERROR: you are on the local incus remote. Nothing to take down"
exit 1
fi
KEEP_ZFS_STORAGE_VOLUMES=true
OTHER_SITES_LIST=
SKIP_BTCPAY_SERVER=false
SKIP_WWW_SERVER=false
SKIP_LNPLAY_SERVER=false
BACKUP_WWW_APPS=true
WWW_SERVER_MAC_ADDRESS=
BTCPAY_SERVER_MAC_ADDRESS=
LNPLAY_SERVER_MAC_ADDRESS=
# grab any modifications from the command line.
for i in "$@"; do
case $i in
--purge)
KEEP_ZFS_STORAGE_VOLUMES=false
shift
;;
--skip-btcpayserver)
SKIP_BTCPAY_SERVER=true
shift
;;
--skip-wwwserver)
SKIP_WWW_SERVER=true
shift
;;
--skip-lnplayserver)
SKIP_LNPLAY_SERVER=true
shift
;;
*)
echo "Unexpected option: $1"
exit 1
;;
esac
done
. ./deployment_defaults.sh
. ./remote_env.sh
. ./project_env.sh
# let's bring down services on the remote deployment if necessary.
export DOMAIN_NAME="$PRIMARY_DOMAIN"
export SITE_PATH="$SITES_PATH/$PRIMARY_DOMAIN"
source "$SITE_PATH/site.conf"
source ./project/domain_env.sh
source ./domain_list.sh
SERVERS=
if [ "$SKIP_WWW_SERVER" = false ] && [ -n "$WWW_SERVER_MAC_ADDRESS" ]; then
SERVERS="www $SERVERS"
fi
if [ "$SKIP_BTCPAY_SERVER" = false ] && [ -n "$BTCPAY_SERVER_MAC_ADDRESS" ]; then
SERVERS="$SERVERS btcpayserver"
fi
if [ "$SKIP_LNPLAY_SERVER" = false ] && [ -n "$LNPLAY_SERVER_MAC_ADDRESS" ]; then
SERVERS="$SERVERS lnplayserver"
fi
for VIRTUAL_MACHINE in $SERVERS; do
INCUS_VM_NAME="$VIRTUAL_MACHINE-${PRIMARY_DOMAIN//./-}"
if incus list | grep -q "$INCUS_VM_NAME"; then
bash -c "./stop.sh --server=$VIRTUAL_MACHINE"
incus stop "$INCUS_VM_NAME"
incus delete "$INCUS_VM_NAME"
fi
# remove the ssh known endpoint else we get warnings.
ssh-keygen -f "$SSH_HOME/known_hosts" -R "$VIRTUAL_MACHINE.$PRIMARY_DOMAIN" || true
if incus profile list | grep -q "$INCUS_VM_NAME"; then
incus profile delete "$INCUS_VM_NAME"
fi
if [ "$KEEP_ZFS_STORAGE_VOLUMES" = false ]; then
# d for docker; b for backup; s for ss-data
for DATA in docker backup ss-data; do
VOLUME_NAME="$VIRTUAL_MACHINE-$DATA"
if incus storage volume list ss-base -q | grep -q "$VOLUME_NAME"; then
RESPONSE=
read -r -p "Are you sure you want to delete the '$VOLUME_NAME' volume intended for '$INCUS_VM_NAME'?": RESPONSE
if [ "$RESPONSE" = "y" ]; then
incus storage volume delete ss-base "$VOLUME_NAME"
fi
fi
done
fi
done
echo "SERVERS: $SERVERS"
echo "BACKUP_WWW_APPS: $BACKUP_WWW_APPS"
# let's grab a snapshot of the www-ss-data volume.
if [ "$BACKUP_WWW_APPS" = true ]; then
SNAPSHOT_ID=$(cat /dev/urandom | tr -dc 'a-zA-Z' | fold -w 6 | head -n 1)
incus storage volume snapshot create ss-base www-ss-data "$SNAPSHOT_ID"
BACKUP_LOCATION="$HOME/ss/backups"
mkdir -p "$BACKUP_LOCATION"
#incus storage volume export ss-base "www-ss-data" "$BACKUP_LOCATION/project-$(incus project list --format csv | grep "(current)" | awk '{print $1}')_www-ss-data_""$(date +%s)"".tar.gz"
#incus storage volume snapshot delete ss-base "www-ss-data" "$SNAPSHOT_ID"
fi
if [[ "$SERVERS" == *"www"* && "$SERVERS" == *"btcpay"* ]]; then
if incus network list -q | grep -q ss-ovn; then
incus network delete ss-ovn
fi
fi
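Two hedged invocation sketches based on the flags parsed above (run via the ss-down alias described in help.txt below):
# take down only the www VM, leaving btcpayserver and lnplayserver running:
ss-down --skip-btcpayserver --skip-lnplayserver
# take everything down and also prompt to delete the zfs storage volumes:
ss-down --purge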

deployment/help.txt Normal file

@ -0,0 +1,15 @@
You are in the Sovereign Stack Management Environment (SSME). From here, you can issue several commands:
ss-remote - Take a remote SSH endpoint under management of Sovereign Stack.
ss-reset - The opposite of ss-remote; de-provisions an existing remote.
ss-up - Instantiate a deployment to your active project according to your
various project.conf and site.conf files.
ss-down - Reverses ss-up. Takes the active project down. Non-destructive of user data,
unless you provide the --purge flag.
ss-update - This is just ss-down then ss-up.
ss-show - show the incus resources associated with the current remote.
For more information about all these topics, consult the Sovereign Stack website starting with:
- https://www.sovereign-stack.org/tag/deployment-management/
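A typical first-run sequence might look like this (remote, host, and project names are placeholders; see remote.sh and down.sh in this changeset for the available flags):
ss-remote development clusterhost00.domain.tld
incus project switch <project>   # project_env.sh refuses to run on the 'default' project
ss-up
ss-show
ss-down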

deployment/project Submodule

@ -0,0 +1 @@
Subproject commit e8470d789a3811e2fe3f6818fd9a6fea859ba71c

deployment/project_env.sh Executable file

@ -0,0 +1,65 @@
#!/bin/bash
set -eu
PROJECT_NAME="$(incus info | grep "project:" | awk '{print $2}')"
export PROJECT_NAME="$PROJECT_NAME"
if [ "$PROJECT_NAME" = default ]; then
echo "ERROR: You are on the default project. Use 'incus project list' and 'incus project switch <project>'."
exit 1
fi
export PROJECT_PATH="$PROJECTS_PATH/$PROJECT_NAME"
PROJECT_DEFINITION_PATH="$PROJECT_PATH/project.conf"
if [ ! -f "$PROJECT_DEFINITION_PATH" ]; then
echo "ERROR: 'project.conf' not found $PROJECT_DEFINITION_PATH not found."
exit 1
fi
source "$PROJECT_DEFINITION_PATH"
export PRIMARY_SITE_DEFINITION_PATH="$SITES_PATH/$PRIMARY_DOMAIN/site.conf"
if [ -z "$PRIMARY_DOMAIN" ]; then
echo "ERROR: The PRIMARY_DOMAIN is not specified. Check your remote definition at '$PRIMARY_SITE_DEFINITION_PATH'."
exit 1
fi
SHASUM_OF_PRIMARY_DOMAIN="$(echo -n "$PRIMARY_DOMAIN" | sha256sum | awk '{print $1;}' )"
export PRIMARY_DOMAIN_IDENTIFIER="${SHASUM_OF_PRIMARY_DOMAIN: -6}"
# default values are already at regtest mode.
if [ "$BITCOIN_CHAIN" = testnet ]; then
WWW_SSDATA_DISK_SIZE_GB=30
WWW_BACKUP_DISK_SIZE_GB=30
WWW_DOCKER_DISK_SIZE_GB=50
BTCPAYSERVER_SSDATA_DISK_SIZE_GB=30
BTCPAYSERVER_BACKUP_DISK_SIZE_GB=30
BTCPAYSERVER_DOCKER_DISK_SIZE_GB=100
elif [ "$BITCOIN_CHAIN" = mainnet ]; then
WWW_SSDATA_DISK_SIZE_GB=40
WWW_BACKUP_DISK_SIZE_GB=40
WWW_DOCKER_DISK_SIZE_GB=50
BTCPAYSERVER_SSDATA_DISK_SIZE_GB=30
BTCPAYSERVER_BACKUP_DISK_SIZE_GB=30
BTCPAYSERVER_DOCKER_DISK_SIZE_GB=300
fi
export WWW_SSDATA_DISK_SIZE_GB="$WWW_SSDATA_DISK_SIZE_GB"
export WWW_BACKUP_DISK_SIZE_GB="$WWW_BACKUP_DISK_SIZE_GB"
export WWW_DOCKER_DISK_SIZE_GB="$WWW_DOCKER_DISK_SIZE_GB"
export BTCPAYSERVER_SSDATA_DISK_SIZE_GB="$BTCPAYSERVER_SSDATA_DISK_SIZE_GB"
export BTCPAYSERVER_BACKUP_DISK_SIZE_GB="$BTCPAYSERVER_BACKUP_DISK_SIZE_GB"
export BTCPAYSERVER_DOCKER_DISK_SIZE_GB="$BTCPAYSERVER_DOCKER_DISK_SIZE_GB"
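The script expects a project.conf under $PROJECT_PATH; a hypothetical minimal example follows (all values are placeholders, and depending on your layout some keys may live in site.conf instead):
# ~/ss/projects/<project>/project.conf  (hypothetical)
PRIMARY_DOMAIN="domain.tld"
BITCOIN_CHAIN="testnet"
WWW_SERVER_MAC_ADDRESS="00:16:3e:aa:bb:01"
BTCPAY_SERVER_MAC_ADDRESS="00:16:3e:aa:bb:02"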


@ -1,59 +0,0 @@
#!/bin/bash
set -eux
cd "$(dirname "$0")"
./stub_lxc_profile.sh "$LXD_VM_NAME"
function run_ddns {
# now that the VM has an IP, we can update the DNS record. TODO add additional DNS providers here; namecheap only atm.
DDNS_STRING="$VPS_HOSTNAME"
if [ "$VPS_HOSTNAME" = www ]; then
# next update our DDNS record. TODO enable local/remote name provider.
DDNS_STRING="@"
fi
# if the DNS record is incorrect, we run DDNS to get it corrected.
if [ "$(getent hosts "$FQDN" | awk '{ print $1 }')" != "$IP_V4_ADDRESS" ]; then
curl "https://dynamicdns.park-your-domain.com/update?host=$DDNS_STRING&domain=$DOMAIN_NAME&password=$DDNS_PASSWORD&ip=$IP_V4_ADDRESS"
DDNS_SLEEP_SECONDS=60
while true; do
# we test the www CNAME here so we can be assured the underlying record has been corrected.
if [[ "$(getent hosts "$FQDN" | awk '{ print $1 }')" == "$IP_V4_ADDRESS" ]]; then
echo ""
echo "SUCCESS: The DNS appears to be configured correctly."
# we're waiting here to allow dns records to stale out.
# this is necessary for certificate renewal; letsencrypt might have stale records
echo "INFO: Waiting $DDNS_SLEEP_SECONDS seconds to allow cached DNS records to expire."
sleep "$DDNS_SLEEP_SECONDS";
break;
fi
printf "." && sleep 2;
done
fi
}
# now let's create a new VM to work with.
lxc init --profile="$LXD_VM_NAME" "$VM_NAME" "$LXD_VM_NAME" --vm
# let's PIN the HW address for now so we don't exhaust IP
# and so we can set DNS internally.
lxc config set "$LXD_VM_NAME" "volatile.enp5s0.hwaddr=$MAC_ADDRESS_TO_PROVISION"
lxc config device override "$LXD_VM_NAME" root size="${ROOT_DISK_SIZE_GB}GB"
lxc start "$LXD_VM_NAME"
./wait_for_lxc_ip.sh "$LXD_VM_NAME"
if [ "$VPS_HOSTING_TARGET" = aws ]; then
run_ddns
# remove any existing SSH identities for the host, then add it back.
ssh-keygen -R "$IP_V4_ADDRESS"
fi


@ -1,84 +0,0 @@
#!/bin/bash
set -eux
cd "$(dirname "$0")"
if [ ! -f "$HOME/.aws/credentials" ]; then
# TODO write a credential file baseline
echo "ERROR: Please update your '$HOME/.aws/credentials' file before continuing."
mkdir -p "$HOME/.aws"
touch "$HOME/.aws/credentials"
# stub out a credentials file for the user to fill in.
cat >"$HOME/.aws/credentials" <<EOL
#!/bin/bash
# enter your AWS Access Key and Secret Access Key here.
export AWS_ACCESS_KEY=
export AWS_SECRET_ACCESS_KEY=
EOL
exit 1
fi
source "$HOME/.aws/credentials"
if [ -z "$AWS_ACCESS_KEY" ]; then
echo "ERROR: AWS_ACCESS_KEY is not set."
exit 1
fi
if [ -z "$AWS_SECRET_ACCESS_KEY" ]; then
echo "ERROR: AWS_SECRET_ACCESS_KEY is not set."
exit 1
fi
# Note, we assume the script has already made sure the machine doesn't exist.
if [ "$VIRTUAL_MACHINE" = www ] || [ "$VIRTUAL_MACHINE" = certonly ]; then
# creates a public VM in AWS and provisions the bcm website.
docker-machine create --driver amazonec2 \
--amazonec2-open-port 80 \
--amazonec2-open-port 443 \
--amazonec2-access-key "$AWS_ACCESS_KEY" \
--amazonec2-secret-key "$AWS_SECRET_ACCESS_KEY" \
--amazonec2-region "$AWS_REGION" \
--amazonec2-ami "$AWS_AMI_ID" \
--amazonec2-root-size "$ROOT_DISK_SIZE_GB" \
--amazonec2-instance-type "$WWW_INSTANCE_TYPE" \
"$FQDN"
# --engine-label commit="$LATEST_GIT_COMMIT" \
elif [ "$VIRTUAL_MACHINE" = btcpayserver ]; then
# creates a public VM in AWS and provisions the BTCPay Server.
docker-machine create --driver amazonec2 \
--amazonec2-open-port 80 \
--amazonec2-open-port 443 \
--amazonec2-open-port 9735 \
--amazonec2-access-key "$AWS_ACCESS_KEY" \
--amazonec2-secret-key "$AWS_SECRET_ACCESS_KEY" \
--amazonec2-region "$AWS_REGION" \
--amazonec2-ami "$AWS_AMI_ID" \
--amazonec2-root-size "$ROOT_DISK_SIZE_GB" \
--amazonec2-instance-type "$BTCPAY_INSTANCE_TYPE" \
"$FQDN"
# --engine-label commit="$LATEST_GIT_COMMIT" \
fi
docker-machine scp "$CLUSTER_PATH/authorized_keys" "$FQDN:$REMOTE_HOME/authorized_keys"
docker-machine ssh "$FQDN" "cat $REMOTE_HOME/authorized_keys >> $REMOTE_HOME/.ssh/authorized_keys"
# we have to ensure ubuntu is able to do sudo less docker commands.
docker-machine ssh "$FQDN" sudo usermod -aG docker ubuntu
# we restart so dockerd starts with fresh group membership.
docker-machine ssh "$FQDN" sudo systemctl restart docker
# TODO INSTALL DOCKER COMPOSE
# let's wire up the DNS so subsequent ssh commands resolve to the VPS.
./run_ddns.sh
# remove the SSH hostname from known_hosts as we'll
# todo why do we need this again?
ssh-keygen -f "$SSH_HOME/known_hosts" -R "$FQDN"

deployment/remote.sh Executable file

@ -0,0 +1,242 @@
#!/bin/bash
set -exu
cd "$(dirname "$0")"
# This script is meant to be executed on the management machine.
# it reaches out to an SSH endpoint and provisions that machine
# to use incus.
DATA_PLANE_MACVLAN_INTERFACE=
DISK_TO_USE=
# override the remote name.
REMOTE_NAME="${1:-}"
if [ -z "$REMOTE_NAME" ]; then
echo "ERROR: The remote name was not provided. Syntax is: 'ss-remote <remote_name> <remote01.domain.tld>'"
echo " for example: 'ss-remote development clusterhost00.domain.tld"
exit 1
fi
. ./deployment_defaults.sh
. ./base.sh
export REMOTE_PATH="$REMOTES_PATH/$REMOTE_NAME"
REMOTE_DEFINITION="$REMOTE_PATH/remote.conf"
export REMOTE_DEFINITION="$REMOTE_DEFINITION"
mkdir -p "$REMOTE_PATH"
if [ ! -f "$REMOTE_DEFINITION" ]; then
# stub out a remote.conf.
cat >"$REMOTE_DEFINITION" <<EOL
# https://www.sovereign-stack.org/ss-remote
# REGISTRY_URL=http://registry.domain.tld:5000
EOL
chmod 0744 "$REMOTE_DEFINITION"
echo "We stubbed out a '$REMOTE_DEFINITION' file for you."
echo "Use this file to customize your remote deployment;"
echo "Check out 'https://www.sovereign-stack.org/ss-remote' for more information."
exit 1
fi
source "$REMOTE_DEFINITION"
if ! incus remote list | grep -q "$REMOTE_NAME"; then
FQDN="${2:-}"
if [ -z "$FQDN" ]; then
echo "ERROR: You MUST provide the FQDN of the remote host."
exit 1
fi
shift
if [ -z "$FQDN" ]; then
echo "ERROR: The Fully Qualified Domain Name of the new remote member was not set."
exit 1
fi
# let's check to ensure we have SSH access to the specified host.
if ! wait-for-it -t 5 "$FQDN:22"; then
echo "ERROR: We can't get an SSH connection to '$FQDN:22'. Ensure you have the host set up correctly."
exit 1
fi
# grab any modifications from the command line.
for i in "$@"; do
case $i in
--data-plane-interface=*)
DATA_PLANE_MACVLAN_INTERFACE="${i#*=}"
shift
;;
--disk=*)
DISK_TO_USE="${i#*=}"
shift
;;
*)
;;
esac
done
# first let's copy our ssh pubkey to the remote server so we don't have to login constantly.
ssh-copy-id -i "$HOME/.ssh/id_rsa.pub" "ubuntu@$FQDN"
if [ -z "$DISK_TO_USE" ]; then
if ! ssh "ubuntu@$FQDN" incus storage list -q | grep -q ss-base; then
echo "INFO: It looks like the DISK_TO_USE has not been set. Enter it now."
echo ""
ssh "ubuntu@$FQDN" lsblk --paths
echo "Please enter the disk or partition that Sovereign Stack will use to store data: "
read -r DISK_TO_USE
fi
fi
else
echo "ERROR: the remote already exists! You need to go delete your incus remote if you want to re-create your remote."
echo " It's may also be helpful to reset/rename your remote path."
exit 1
fi
#ssh "ubuntu@$FQDN" 'sudo echo "ubuntu ALL=(ALL) NOPASSWD: /bin/su - a" >> /etc/sudoers'
# if the disk is loop-based, then we assume the / path exists.
if [ "$DISK_TO_USE" != loop ]; then
# ensure we actually have that disk/partition on the system.
if ! ssh "ubuntu@$FQDN" lsblk --paths | grep -q "$DISK_TO_USE"; then
echo "ERROR: We could not findthe disk you specified. Please run this command again and supply a different disk."
echo "NOTE: You can always specify on the command line by adding the '--disk=/dev/sdd', for example."
exit 1
fi
fi
if ! command -v incus >/dev/null 2>&1; then
if incus profile list --format csv | grep -q "$BASE_IMAGE_VM_NAME"; then
incus profile delete "$BASE_IMAGE_VM_NAME"
sleep 1
fi
if incus network list --format csv -q --project default | grep -q incusbr0; then
incus network delete incusbr0 --project default
sleep 1
fi
if incus network list --format csv -q --project default | grep -q incusbr1; then
incus network delete incusbr1 --project default
sleep 1
fi
fi
# install dependencies.
ssh -t "ubuntu@$FQDN" 'sudo apt update && sudo apt upgrade -y && sudo apt install htop dnsutils nano zfsutils-linux -y'
REMOTE_SCRIPT_PATH="$REMOTE_HOME/install_incus.sh"
scp ../install_incus.sh "ubuntu@$FQDN:$REMOTE_SCRIPT_PATH"
ssh -t "ubuntu@$FQDN" "chmod +x $REMOTE_SCRIPT_PATH"
ssh -t "ubuntu@$FQDN" "sudo bash -c $REMOTE_SCRIPT_PATH"
ssh -t "ubuntu@$FQDN" "sudo adduser ubuntu incus-admin"
# install OVN for the project-specific bridge networks
ssh -t "ubuntu@$FQDN" "sudo apt-get install -y ovn-host ovn-central && sudo ovs-vsctl set open_vswitch . external_ids:ovn-remote=unix:/var/run/ovn/ovnsb_db.sock external_ids:ovn-encap-type=geneve external_ids:ovn-encap-ip=127.0.0.1"
# if the user did not specify the interface, we just use whatever is used for the default route.
if [ -z "$DATA_PLANE_MACVLAN_INTERFACE" ]; then
DATA_PLANE_MACVLAN_INTERFACE="$(ssh ubuntu@"$FQDN" ip route | grep "default via" | awk '{print $5}')"
fi
export DATA_PLANE_MACVLAN_INTERFACE="$DATA_PLANE_MACVLAN_INTERFACE"
MGMT_PLANE_IP="$(ssh ubuntu@"$FQDN" env | grep SSH_CONNECTION | cut -d " " -f 3)"
IP_OF_MGMT_MACHINE="$(ssh ubuntu@"$FQDN" env | grep SSH_CLIENT | cut -d " " -f 1 )"
IP_OF_MGMT_MACHINE="${IP_OF_MGMT_MACHINE#*=}"
IP_OF_MGMT_MACHINE="$(echo "$IP_OF_MGMT_MACHINE" | cut -d: -f1)"
# run incus admin init on the remote server.
cat <<EOF | ssh ubuntu@"$FQDN" incus admin init --preseed
config:
core.https_address: ${MGMT_PLANE_IP}:8443
core.dns_address: ${MGMT_PLANE_IP}
images.auto_update_interval: 15
networks:
- name: incusbr0
description: "ss-config,${DATA_PLANE_MACVLAN_INTERFACE:-error}"
type: bridge
config:
ipv4.address: 10.9.9.1/24
ipv4.dhcp.ranges: 10.9.9.10-10.9.9.127
ipv4.nat: true
ipv6.address: none
dns.mode: managed
- name: incusbr1
description: "Non-natting bridge needed for ovn networks."
type: bridge
config:
ipv4.address: 10.10.10.1/24
ipv4.dhcp.ranges: 10.10.10.10-10.10.10.63
ipv4.ovn.ranges: 10.10.10.64-10.10.10.254
ipv4.nat: false
ipv6.address: none
profiles:
- config: {}
description: "default profile for sovereign-stack instances."
devices:
root:
path: /
pool: ss-base
type: disk
name: default
EOF
ssh ubuntu@"$FQDN" incus project list -q >> /dev/null
# ensure the incus service is available over the network, then add a incus remote, then switch the active remote to it.
if wait-for-it -t 20 "$FQDN:8443"; then
# before we add the remote, we need a trust token from the incus server
INCUS_CERT_TRUST_TOKEN=$(ssh ubuntu@"$FQDN" incus config trust add ss-mgmt | tail -n 1)
# now create a remote on your local incus client and switch to it.
# the software will now target the new remote.
incus remote add "$REMOTE_NAME" "$FQDN" --auth-type=tls --accept-certificate --token="$INCUS_CERT_TRUST_TOKEN"
incus remote switch "$REMOTE_NAME"
echo "INFO: A new remote named '$REMOTE_NAME' has been created. Your incus client has been switched to it."
else
echo "ERROR: Could not detect the incus endpoint. Something went wrong."
exit 1
fi
# create the default storage pool if necessary
if ! incus storage list --format csv | grep -q ss-base; then
if [ "$DISK_TO_USE" != loop ]; then
# we omit putting a size here, so incus will consume the entire disk if '/dev/sdb' or partition if '/dev/sdb1'.
# TODO do some sanity/resource checking on DISK_TO_USE. Implement full-disk encryption?
incus storage create ss-base zfs source="$DISK_TO_USE"
else
# if a disk is the default 'loop', then we create a zfs storage pool
# on top of the existing filesystem using a loop device, per incus docs
incus storage create ss-base zfs
fi
# # create the testnet/mainnet blocks/chainstate subvolumes.
# for CHAIN in mainnet testnet; do
# for DATA in blocks chainstate; do
# if ! incus storage volume list ss-base | grep -q "$CHAIN-$DATA"; then
# incus storage volume create ss-base "$CHAIN-$DATA" --type=filesystem
# fi
# done
# done
fi
echo "INFO: completed remote.sh."

deployment/remote_env.sh Executable file

@ -0,0 +1,52 @@
#!/bin/bash
set -eu
CURRENT_REMOTE="$(incus remote get-default)"
DEPLOYMENT_STRING=
SS_ROOT_PATH="$HOME/ss"
REMOTES_PATH="$SS_ROOT_PATH/remotes"
PROJECTS_PATH="$SS_ROOT_PATH/projects"
SITES_PATH="$SS_ROOT_PATH/sites"
INCUS_CONFIG_PATH="$SS_ROOT_PATH/incus"
SS_CACHE_PATH="$SS_ROOT_PATH/cache"
if echo "$CURRENT_REMOTE" | grep -q "prod"; then
echo "WARNING: You are running a migration procedure on a production system."
echo ""
RESPONSE=
read -r -p " Are you sure you want to continue (y) ": RESPONSE
if [ "$RESPONSE" != "y" ]; then
echo "STOPPING."
exit 1
fi
# check if there are any uncommitted changes. It's dangerous to
# alter production systems when you have commits to make or changes to stash.
if git update-index --refresh | grep -q "needs update"; then
echo "ERROR: You have uncommited changes! Better stash your work with 'git stash'."
exit 1
fi
fi
. ./deployment_defaults.sh
export REMOTE_PATH="$REMOTES_PATH/$CURRENT_REMOTE"
REMOTE_DEFINITION="$REMOTE_PATH/remote.conf"
export REMOTE_DEFINITION="$REMOTE_DEFINITION"
# ensure the remote definition exists.
if [ ! -f "$REMOTE_DEFINITION" ]; then
echo "ERROR: The remote definition could not be found. You may need to run 'ss-remote'."
echo "INFO: Consult https://www.sovereign-stack.org/ss-remote for more information."
exit 1
fi
source "$REMOTE_DEFINITION"

deployment/reset.sh Executable file

@ -0,0 +1,88 @@
#!/bin/bash
set -exu
cd "$(dirname "$0")"
echo "WARNING: THIS SCRIPT NEEDS WORK"
exit 1
PURGE_INCUS=false
# grab any modifications from the command line.
for i in "$@"; do
case $i in
--purge)
PURGE_INCUS=true
shift
;;
*)
echo "Unexpected option: $1"
exit 1
;;
esac
done
./down.sh
# these only get initialized upon creation, so we MUST delete here so they get recreated.
if incus profile list | grep -q "$BASE_IMAGE_VM_NAME"; then
incus profile delete "$BASE_IMAGE_VM_NAME"
fi
if incus image list | grep -q "$BASE_IMAGE_VM_NAME"; then
incus image rm "$BASE_IMAGE_VM_NAME"
fi
if incus image list | grep -q "$DOCKER_BASE_IMAGE_NAME"; then
incus image rm "$DOCKER_BASE_IMAGE_NAME"
fi
CURRENT_PROJECT="$(incus info | grep "project:" | awk '{print $2}')"
if ! incus info | grep -q "project: default"; then
incus project switch default
incus project delete "$CURRENT_PROJECT"
fi
if [ "$PURGE_INCUS" = true ]; then
if incus profile show default | grep -q "root:"; then
incus profile device remove default root
fi
if incus profile show default | grep -q "eth0:"; then
incus profile device remove default eth0
fi
if incus network list --format csv -q --project default | grep -q incusbr0; then
incus network delete incusbr0 --project default
fi
if incus network list --format csv -q --project default | grep -q incusbr1; then
incus network delete incusbr1 --project default
fi
# # create the testnet/mainnet blocks/chainstate subvolumes.
# for CHAIN in mainnet testnet; do
# for DATA in blocks chainstate; do
# if incus storage volume list ss-base | grep -q "$CHAIN-$DATA"; then
# incus storage volume delete ss-base "$CHAIN-$DATA"
# fi
# done
# done
echo "WARNING: ss-basae NOT DELETED. NEED TO TEST THIS SCRIPT"
# if incus storage list --format csv | grep -q ss-base; then
# incus storage delete ss-base
# fi
CURRENT_REMOTE="$(incus remote get-default)"
if ! incus remote get-default | grep -q "local"; then
incus remote switch local
incus remote remove "$CURRENT_REMOTE"
echo "INFO: The remote '$CURRENT_REMOTE' has been removed! You are now controlling your local instance."
fi
fi

deployment/restore.sh Executable file

@ -0,0 +1,4 @@
#!/bin/bash
set -exu


@ -1,38 +0,0 @@
#!/bin/bash
set -eux
DDNS_STRING=
# for the www stack, we register only the domain name so our URLs look like https://$DOMAIN_NAME
if [ "$VIRTUAL_MACHINE" = www ] || [ "$VIRTUAL_MACHINE" = certonly ]; then
DDNS_STRING="@"
else
DDNS_STRING="$DDNS_HOST"
fi
# wait for DNS to get setup. Pass in the IP address of the actual VPS.
MACHINE_IP="$(docker-machine ip "$FQDN")"
if [ "$VPS_HOSTING_TARGET" = aws ]; then
# wire DNS entries using namecheap DDNS API (via HTTPS rather than ddclient)
curl "https://dynamicdns.park-your-domain.com/update?host=$DDNS_STRING&domain=$DOMAIN_NAME&password=$DDNS_PASSWORD&ip=$MACHINE_IP"
#install dependencies.
docker-machine ssh "$FQDN" sudo apt-get -qq install -y wait-for-it git rsync duplicity sshfs
fi
DDNS_SLEEP_SECONDS=60
while true; do
# we test the www CNAME here so we can be assured the underlying record has been corrected.
if [[ "$(getent hosts "$FQDN" | awk '{ print $1 }')" == "$MACHINE_IP" ]]; then
echo ""
echo "SUCCESS: The DNS appears to be configured correctly."
echo "INFO: Waiting $DDNS_SLEEP_SECONDS seconds to allow cached DNS records to expire."
sleep "$DDNS_SLEEP_SECONDS";
break;
fi
printf "." && sleep 2;
done

deployment/show.sh Executable file

@ -0,0 +1,42 @@
#!/bin/bash
set -e
cd "$(dirname "$0")"
. ./deployment_defaults.sh
. ./remote_env.sh
echo "Global Settings:"
incus image list
incus storage list
echo
echo
PROJECT_NAME="$(incus info | grep "project:" | awk '{print $2}')"
export PROJECT_NAME="$PROJECT_NAME"
export PROJECT_PATH="$PROJECTS_PATH/$PROJECT_NAME"
echo
echo
echo "Active project: $PROJECT_NAME"
echo "----------------------------------------------------------"
echo " Networks:"
incus network list
echo
echo " Storage Volumes:"
incus storage volume list ss-base
echo
echo " Profiles:"
incus profile list
echo
echo " Instances (VMs):"
incus list

deployment/stop.sh Executable file

@ -0,0 +1,66 @@
#!/bin/bash
# https://www.sovereign-stack.org/ss-down/
set -eu
cd "$(dirname "$0")"
if incus remote get-default -q | grep -q "local"; then
echo "ERROR: you are on the local incus remote. Nothing to take down"
exit 1
fi
SERVER_TO_STOP=
OTHER_SITES_LIST=
# grab any modifications from the command line.
for i in "$@"; do
case $i in
--server=*)
SERVER_TO_STOP="${i#*=}"
shift
;;
*)
echo "Unexpected option: $1"
exit 1
;;
esac
done
if [ -z "$SERVER_TO_STOP" ]; then
echo "ERROR: you MUST specify a server to stop with '--server=www' for example."
exit 1
fi
. ./deployment_defaults.sh
. ./remote_env.sh
. ./project_env.sh
# let's bring down services on the remote deployment if necessary.
export DOMAIN_NAME="$PRIMARY_DOMAIN"
export SITE_PATH="$SITES_PATH/$PRIMARY_DOMAIN"
source "$SITE_PATH/site.conf"
source ./project/domain_env.sh
source ./domain_list.sh
if [ "$SERVER_TO_STOP" = www ]; then
DOCKER_HOST="ssh://ubuntu@$PRIMARY_WWW_FQDN" ./project/www/stop_docker_stacks.sh
fi
if [ "$SERVER_TO_STOP" = btcpayserver ]; then
if wait-for-it -t 5 "$BTCPAY_SERVER_FQDN":22; then
ssh "$BTCPAY_SERVER_FQDN" "bash -c $BTCPAY_SERVER_APPPATH/btcpay-down.sh"
else
echo "ERROR: the remote BTCPAY Server is not available on ssh."
exit 1
fi
fi
if [ "$SERVER_TO_STOP" = lnplayserver ]; then
DOCKER_HOST="ssh://ubuntu@$LNPLAY_SERVER_FQDN" ./project/lnplay/down.sh
fi
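down.sh above invokes this script once per server; it can also be run directly from the deployment directory, for example:
# stop only the www docker stacks without deleting the VM.
./stop.sh --server=www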


@ -1,265 +0,0 @@
#!/bin/bash
set -eux
LXD_HOSTNAME="$1"
# generate the custom cloud-init file. Cloud init installs and configures sshd
SSH_AUTHORIZED_KEY=$(<"$SSH_HOME/id_rsa.pub")
eval "$(ssh-agent -s)"
ssh-add "$SSH_HOME/id_rsa"
export SSH_AUTHORIZED_KEY="$SSH_AUTHORIZED_KEY"
export FILENAME="$LXD_HOSTNAME.yml"
mkdir -p "$PROJECT_PATH/cloud-init"
YAML_PATH="$PROJECT_PATH/cloud-init/$FILENAME"
# If we are deploying the www, we attach the vm to the underlay via macvlan.
cat > "$YAML_PATH" <<EOF
config:
limits.cpu: "${DEV_CPU_COUNT}"
limits.memory: "${DEV_MEMORY_MB}MB"
EOF
# if VIRTUAL_MACHINE=sovereign-stack then we are building the base image.
if [ "$LXD_HOSTNAME" = "sovereign-stack" ]; then
# this is for the base image only...
cat >> "$YAML_PATH" <<EOF
user.vendor-data: |
#cloud-config
apt_mirror: http://us.archive.ubuntu.com/ubuntu/
package_update: true
package_upgrade: false
package_reboot_if_required: false
preserve_hostname: false
fqdn: sovereign-stack
packages:
- curl
- ssh-askpass
- apt-transport-https
- ca-certificates
- gnupg-agent
- software-properties-common
- lsb-release
- net-tools
- htop
- rsync
- duplicity
- sshfs
- fswatch
- jq
- git
- nano
- wait-for-it
- dnsutils
- wget
groups:
- docker
users:
- name: ubuntu
groups: docker
shell: /bin/bash
lock_passwd: false
sudo: ALL=(ALL) NOPASSWD:ALL
ssh_authorized_keys:
- ${SSH_AUTHORIZED_KEY}
write_files:
- path: /home/ubuntu/docker.asc
content: |
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBFit2ioBEADhWpZ8/wvZ6hUTiXOwQHXMAlaFHcPH9hAtr4F1y2+OYdbtMuth
lqqwp028AqyY+PRfVMtSYMbjuQuu5byyKR01BbqYhuS3jtqQmljZ/bJvXqnmiVXh
38UuLa+z077PxyxQhu5BbqntTPQMfiyqEiU+BKbq2WmANUKQf+1AmZY/IruOXbnq
L4C1+gJ8vfmXQt99npCaxEjaNRVYfOS8QcixNzHUYnb6emjlANyEVlZzeqo7XKl7
UrwV5inawTSzWNvtjEjj4nJL8NsLwscpLPQUhTQ+7BbQXAwAmeHCUTQIvvWXqw0N
cmhh4HgeQscQHYgOJjjDVfoY5MucvglbIgCqfzAHW9jxmRL4qbMZj+b1XoePEtht
ku4bIQN1X5P07fNWzlgaRL5Z4POXDDZTlIQ/El58j9kp4bnWRCJW0lya+f8ocodo
vZZ+Doi+fy4D5ZGrL4XEcIQP/Lv5uFyf+kQtl/94VFYVJOleAv8W92KdgDkhTcTD
G7c0tIkVEKNUq48b3aQ64NOZQW7fVjfoKwEZdOqPE72Pa45jrZzvUFxSpdiNk2tZ
XYukHjlxxEgBdC/J3cMMNRE1F4NCA3ApfV1Y7/hTeOnmDuDYwr9/obA8t016Yljj
q5rdkywPf4JF8mXUW5eCN1vAFHxeg9ZWemhBtQmGxXnw9M+z6hWwc6ahmwARAQAB
tCtEb2NrZXIgUmVsZWFzZSAoQ0UgZGViKSA8ZG9ja2VyQGRvY2tlci5jb20+iQI3
BBMBCgAhBQJYrefAAhsvBQsJCAcDBRUKCQgLBRYCAwEAAh4BAheAAAoJEI2BgDwO
v82IsskP/iQZo68flDQmNvn8X5XTd6RRaUH33kXYXquT6NkHJciS7E2gTJmqvMqd
tI4mNYHCSEYxI5qrcYV5YqX9P6+Ko+vozo4nseUQLPH/ATQ4qL0Zok+1jkag3Lgk
jonyUf9bwtWxFp05HC3GMHPhhcUSexCxQLQvnFWXD2sWLKivHp2fT8QbRGeZ+d3m
6fqcd5Fu7pxsqm0EUDK5NL+nPIgYhN+auTrhgzhK1CShfGccM/wfRlei9Utz6p9P
XRKIlWnXtT4qNGZNTN0tR+NLG/6Bqd8OYBaFAUcue/w1VW6JQ2VGYZHnZu9S8LMc
FYBa5Ig9PxwGQOgq6RDKDbV+PqTQT5EFMeR1mrjckk4DQJjbxeMZbiNMG5kGECA8
g383P3elhn03WGbEEa4MNc3Z4+7c236QI3xWJfNPdUbXRaAwhy/6rTSFbzwKB0Jm
ebwzQfwjQY6f55MiI/RqDCyuPj3r3jyVRkK86pQKBAJwFHyqj9KaKXMZjfVnowLh
9svIGfNbGHpucATqREvUHuQbNnqkCx8VVhtYkhDb9fEP2xBu5VvHbR+3nfVhMut5
G34Ct5RS7Jt6LIfFdtcn8CaSas/l1HbiGeRgc70X/9aYx/V/CEJv0lIe8gP6uDoW
FPIZ7d6vH+Vro6xuWEGiuMaiznap2KhZmpkgfupyFmplh0s6knymuQINBFit2ioB
EADneL9S9m4vhU3blaRjVUUyJ7b/qTjcSylvCH5XUE6R2k+ckEZjfAMZPLpO+/tF
M2JIJMD4SifKuS3xck9KtZGCufGmcwiLQRzeHF7vJUKrLD5RTkNi23ydvWZgPjtx
Q+DTT1Zcn7BrQFY6FgnRoUVIxwtdw1bMY/89rsFgS5wwuMESd3Q2RYgb7EOFOpnu
w6da7WakWf4IhnF5nsNYGDVaIHzpiqCl+uTbf1epCjrOlIzkZ3Z3Yk5CM/TiFzPk
z2lLz89cpD8U+NtCsfagWWfjd2U3jDapgH+7nQnCEWpROtzaKHG6lA3pXdix5zG8
eRc6/0IbUSWvfjKxLLPfNeCS2pCL3IeEI5nothEEYdQH6szpLog79xB9dVnJyKJb
VfxXnseoYqVrRz2VVbUI5Blwm6B40E3eGVfUQWiux54DspyVMMk41Mx7QJ3iynIa
1N4ZAqVMAEruyXTRTxc9XW0tYhDMA/1GYvz0EmFpm8LzTHA6sFVtPm/ZlNCX6P1X
zJwrv7DSQKD6GGlBQUX+OeEJ8tTkkf8QTJSPUdh8P8YxDFS5EOGAvhhpMBYD42kQ
pqXjEC+XcycTvGI7impgv9PDY1RCC1zkBjKPa120rNhv/hkVk/YhuGoajoHyy4h7
ZQopdcMtpN2dgmhEegny9JCSwxfQmQ0zK0g7m6SHiKMwjwARAQABiQQ+BBgBCAAJ
BQJYrdoqAhsCAikJEI2BgDwOv82IwV0gBBkBCAAGBQJYrdoqAAoJEH6gqcPyc/zY
1WAP/2wJ+R0gE6qsce3rjaIz58PJmc8goKrir5hnElWhPgbq7cYIsW5qiFyLhkdp
YcMmhD9mRiPpQn6Ya2w3e3B8zfIVKipbMBnke/ytZ9M7qHmDCcjoiSmwEXN3wKYI
mD9VHONsl/CG1rU9Isw1jtB5g1YxuBA7M/m36XN6x2u+NtNMDB9P56yc4gfsZVES
KA9v+yY2/l45L8d/WUkUi0YXomn6hyBGI7JrBLq0CX37GEYP6O9rrKipfz73XfO7
JIGzOKZlljb/D9RX/g7nRbCn+3EtH7xnk+TK/50euEKw8SMUg147sJTcpQmv6UzZ
cM4JgL0HbHVCojV4C/plELwMddALOFeYQzTif6sMRPf+3DSj8frbInjChC3yOLy0
6br92KFom17EIj2CAcoeq7UPhi2oouYBwPxh5ytdehJkoo+sN7RIWua6P2WSmon5
U888cSylXC0+ADFdgLX9K2zrDVYUG1vo8CX0vzxFBaHwN6Px26fhIT1/hYUHQR1z
VfNDcyQmXqkOnZvvoMfz/Q0s9BhFJ/zU6AgQbIZE/hm1spsfgvtsD1frZfygXJ9f
irP+MSAI80xHSf91qSRZOj4Pl3ZJNbq4yYxv0b1pkMqeGdjdCYhLU+LZ4wbQmpCk
SVe2prlLureigXtmZfkqevRz7FrIZiu9ky8wnCAPwC7/zmS18rgP/17bOtL4/iIz
QhxAAoAMWVrGyJivSkjhSGx1uCojsWfsTAm11P7jsruIL61ZzMUVE2aM3Pmj5G+W
9AcZ58Em+1WsVnAXdUR//bMmhyr8wL/G1YO1V3JEJTRdxsSxdYa4deGBBY/Adpsw
24jxhOJR+lsJpqIUeb999+R8euDhRHG9eFO7DRu6weatUJ6suupoDTRWtr/4yGqe
dKxV3qQhNLSnaAzqW/1nA3iUB4k7kCaKZxhdhDbClf9P37qaRW467BLCVO/coL3y
Vm50dwdrNtKpMBh3ZpbB1uJvgi9mXtyBOMJ3v8RZeDzFiG8HdCtg9RvIt/AIFoHR
H3S+U79NT6i0KPzLImDfs8T7RlpyuMc4Ufs8ggyg9v3Ae6cN3eQyxcK3w0cbBwsh
/nQNfsA6uu+9H7NhbehBMhYnpNZyrHzCmzyXkauwRAqoCbGCNykTRwsur9gS41TQ
M8ssD1jFheOJf3hODnkKU+HKjvMROl1DK7zdmLdNzA1cvtZH/nCC9KPj1z8QC47S
xx+dTZSx4ONAhwbS/LN3PoKtn8LPjY9NP9uDWI+TWYquS2U+KHDrBDlsgozDbs/O
jCxcpDzNmXpWQHEtHU7649OXHP7UeNST1mCUCH5qdank0V1iejF6/CfTFU4MfcrG
YT90qFF93M3v01BbxP+EIY2/9tiIPbrd
=0YYh
-----END PGP PUBLIC KEY BLOCK-----
- path: /etc/ssh/ssh_config
content: |
Port 22
ListenAddress 0.0.0.0
Protocol 2
ChallengeResponseAuthentication no
PasswordAuthentication no
UsePAM no
LogLevel INFO
- path: /etc/docker/daemon.json
content: |
{
"registry-mirrors": [
"${REGISTRY_URL}"
]
}
runcmd:
- cat /home/ubuntu/docker.asc | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
- sudo rm /home/ubuntu/docker.asc
- echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list
- sudo apt-get update
- sudo apt-get install -y docker-ce docker-ce-cli containerd.io
- echo "alias ll='ls -lah'" >> /home/ubuntu/.bash_profile
- sudo curl -s -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
- sudo chmod +x /usr/local/bin/docker-compose
- sudo apt-get install -y openssh-server
EOF
else
# all other machines.
cat >> "$YAML_PATH" <<EOF
user.vendor-data: |
#cloud-config
apt_mirror: http://us.archive.ubuntu.com/ubuntu/
package_update: false
package_upgrade: false
package_reboot_if_required: false
preserve_hostname: false
fqdn: ${FQDN}
user.network-config: |
version: 2
ethernets:
enp5s0:
dhcp4: true
match:
macaddress: ${MAC_ADDRESS_TO_PROVISION}
set-name: enp5s0
enp6s0:
dhcp4: false
EOF
if [[ "$LXD_HOSTNAME" = $WWW_HOSTNAME-* ]]; then
cat >> "$YAML_PATH" <<EOF
addresses: [10.139.144.5/24]
nameservers:
addresses: [10.139.144.1]
EOF
fi
if [[ "$LXD_HOSTNAME" = $BTCPAY_HOSTNAME-* ]]; then
cat >> "$YAML_PATH" <<EOF
addresses: [10.139.144.10/24]
nameservers:
addresses: [10.139.144.1]
EOF
fi
fi
# If we are deploying the www, we attach the vm to the underlay via macvlan.
cat >> "$YAML_PATH" <<EOF
description: Default LXD profile for ${FILENAME}
devices:
root:
path: /
pool: sovereign-stack
type: disk
config:
source: cloud-init:config
type: disk
EOF
# Stub out the network piece for the base image.
if [ "$LXD_HOSTNAME" = sovereign-stack ] ; then
# If we are deploying the www, we attach the vm to the underlay via macvlan.
cat >> "$YAML_PATH" <<EOF
enp5s0:
name: enp5s0
nictype: macvlan
parent: ${DATA_PLANE_MACVLAN_INTERFACE}
type: nic
name: ${FILENAME}
EOF
else
# If we are deploying the www, we attach the vm to the underlay via macvlan.
cat >> "$YAML_PATH" <<EOF
enp5s0:
nictype: macvlan
parent: ${DATA_PLANE_MACVLAN_INTERFACE}
type: nic
enp6s0:
name: enp6s0
network: lxdbrSS
type: nic
name: ${FILENAME}
EOF
fi
# let's create a profile for the BCM TYPE-1 VMs. This is per VM.
if ! lxc profile list --format csv | grep -q "$LXD_HOSTNAME"; then
lxc profile create "$LXD_HOSTNAME"
fi
# configure the profile with our generated cloud-init.yml file.
cat "$YAML_PATH" | lxc profile edit "$LXD_HOSTNAME"

deployment/stub_profile.sh Executable file

@ -0,0 +1,338 @@
#!/bin/bash
set -exu
cd "$(dirname "$0")"
VIRTUAL_MACHINE=base
INCUS_HOSTNAME=
SSDATA_VOLUME_NAME=
BACKUP_VOLUME_NAME=
# grab any modifications from the command line.
for i in "$@"; do
case $i in
--incus-hostname=*)
INCUS_HOSTNAME="${i#*=}"
shift
;;
--vm=*)
VIRTUAL_MACHINE="${i#*=}"
shift
;;
--ss-volume-name=*)
SSDATA_VOLUME_NAME="${i#*=}"
shift
;;
--backup-volume-name=*)
BACKUP_VOLUME_NAME="${i#*=}"
shift
;;
*)
echo "Unexpected option: $1"
exit 1
;;
esac
done
# generate the custom cloud-init file. Cloud init installs and configures sshd
SSH_AUTHORIZED_KEY=$(<"$SSH_PUBKEY_PATH")
eval "$(ssh-agent -s)" > /dev/null
ssh-add "$SSH_HOME/id_rsa" > /dev/null
export SSH_AUTHORIZED_KEY="$SSH_AUTHORIZED_KEY"
export FILENAME="$INCUS_HOSTNAME.yml"
mkdir -p "$PROJECT_PATH/cloud-init"
YAML_PATH="$PROJECT_PATH/cloud-init/$FILENAME"
# If we are deploying the www, we attach the vm to the underlay via macvlan.
cat > "$YAML_PATH" <<EOF
config:
EOF
if [ "$VIRTUAL_MACHINE" = base ]; then
cat >> "$YAML_PATH" <<EOF
limits.cpu: 4
limits.memory: 4096MB
EOF
fi
if [ "$VIRTUAL_MACHINE" = www ]; then
cat >> "$YAML_PATH" <<EOF
limits.cpu: "${WWW_SERVER_CPU_COUNT}"
limits.memory: "${WWW_SERVER_MEMORY_MB}MB"
EOF
fi
if [ "$VIRTUAL_MACHINE" = btcpayserver ]; then
cat >> "$YAML_PATH" <<EOF
limits.cpu: "${BTCPAY_SERVER_CPU_COUNT}"
limits.memory: "${BTCPAY_SERVER_MEMORY_MB}MB"
EOF
elif [ "$VIRTUAL_MACHINE" = lnplayserver ]; then
cat >> "$YAML_PATH" <<EOF
limits.cpu: "${LNPLAY_SERVER_CPU_COUNT}"
limits.memory: "${LNPLAY_SERVER_MEMORY_MB}MB"
EOF
fi
# if VIRTUAL_MACHINE=base, then we are building the base image.
if [ "$VIRTUAL_MACHINE" = base ]; then
# this is for the base image only...
cat >> "$YAML_PATH" <<EOF
user.vendor-data: |
#cloud-config
package_update: true
package_upgrade: false
package_reboot_if_required: false
preserve_hostname: false
fqdn: ${BASE_IMAGE_VM_NAME}
packages:
- curl
- ssh-askpass
- apt-transport-https
- ca-certificates
- gnupg-agent
- software-properties-common
- lsb-release
- net-tools
- htop
- rsync
- duplicity
- sshfs
- fswatch
- jq
- git
- nano
- wait-for-it
- dnsutils
- wget
groups:
- docker
users:
- name: ubuntu
groups: docker
shell: /bin/bash
lock_passwd: false
sudo: ALL=(ALL) NOPASSWD:ALL
ssh_authorized_keys:
- ${SSH_AUTHORIZED_KEY}
EOF
if [ "$REGISTRY_URL" != "https://index.docker.io/v1" ]; then
cat >> "$YAML_PATH" <<EOF
write_files:
- path: /etc/docker/daemon.json
permissions: 0644
owner: root
content: |
{
"registry-mirrors": [
"${REGISTRY_URL}"
]
}
EOF
fi
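# For illustration: with REGISTRY_URL=https://registry.domain.tld (a hypothetical mirror),
# the rendered /etc/docker/daemon.json inside the VM would read:
#   { "registry-mirrors": [ "https://registry.domain.tld" ] }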
fi
if [ "$VIRTUAL_MACHINE" = base ]; then
cat >> "$YAML_PATH" <<EOF
runcmd:
- sudo mkdir -m 0755 -p /etc/apt/keyrings
- curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
- echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu jammy stable" | sudo tee /etc/apt/sources.list.d/docker.list
- sudo apt-get update
- sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
- sudo DEBIAN_FRONTEND=noninteractive apt-get install -y openssh-server
EOF
fi
if [ "$VIRTUAL_MACHINE" != base ]; then
# all other machines that are not the base image
cat >> "$YAML_PATH" <<EOF
user.vendor-data: |
#cloud-config
apt_mirror: http://us.archive.ubuntu.com/ubuntu/
package_update: false
package_upgrade: false
package_reboot_if_required: false
preserve_hostname: true
fqdn: ${FQDN}
EOF
fi
if [ "$VIRTUAL_MACHINE" = www ] || [ "$VIRTUAL_MACHINE" = btcpayserver ]; then
# the www and btcpayserver VMs get a dedicated disk (/dev/sdb) mounted at /var/lib/docker.
cat >> "$YAML_PATH" <<EOF
resize_rootfs: false
disk_setup:
/dev/sdb:
table_type: 'gpt'
layout: true
overwrite: false
fs_setup:
- label: docker-data
filesystem: 'ext4'
device: '/dev/sdb1'
overwrite: false
mounts:
- [ sdb, /var/lib/docker ]
mount_default_fields: [ None, None, "auto", "defaults,nofail", "0", "2" ]
EOF
fi
if [ "$VIRTUAL_MACHINE" != base ]; then
cat >> "$YAML_PATH" <<EOF
user.network-config: |
version: 2
ethernets:
enp5s0:
dhcp4: true
dhcp4-overrides:
route-metric: 50
match:
macaddress: ${MAC_ADDRESS_TO_PROVISION}
set-name: enp5s0
EOF
fi
# TODO try to get DHCP working reliably.
if [ "$VIRTUAL_MACHINE" = btcpayserver ]; then
cat >> "$YAML_PATH" <<EOF
enp6s0:
addresses:
- 10.10.10.66/24
EOF
fi
if [ "$VIRTUAL_MACHINE" = www ]; then
cat >> "$YAML_PATH" <<EOF
enp6s0:
addresses:
- 10.10.10.65/24
EOF
fi
# All profiles get a root disk and cloud-init config.
cat >> "$YAML_PATH" <<EOF
description: Default incus profile for ${FILENAME}
devices:
EOF
if [ "$VIRTUAL_MACHINE" = lnplayserver ]; then
# All profiles get a root disk and cloud-init config.
cat >> "$YAML_PATH" <<EOF
root:
path: /
pool: ss-base
type: disk
size: 20GiB
EOF
else
# All profiles get a root disk and cloud-init config.
cat >> "$YAML_PATH" <<EOF
root:
path: /
pool: ss-base
type: disk
EOF
fi
cat >> "$YAML_PATH" <<EOF
config:
source: cloud-init:config
type: disk
EOF
if [ "$VIRTUAL_MACHINE" = www ] || [ "$VIRTUAL_MACHINE" = btcpayserver ]; then
cat >> "$YAML_PATH" <<EOF
ss-data:
path: ${REMOTE_DATA_PATH}
pool: ss-base
source: ${SSDATA_VOLUME_NAME}
type: disk
ss-backup:
path: ${REMOTE_BACKUP_PATH}
pool: ss-base
source: ${BACKUP_VOLUME_NAME}
type: disk
EOF
fi
# Stub out the network piece for the base image.
if [ "$VIRTUAL_MACHINE" = base ]; then
cat >> "$YAML_PATH" <<EOF
enp6s0:
name: enp6s0
network: incusbr0
type: nic
name: ${FILENAME}
EOF
else
# all other vms attach to the network underlay
cat >> "$YAML_PATH" <<EOF
enp5s0:
nictype: macvlan
parent: ${DATA_PLANE_MACVLAN_INTERFACE}
type: nic
EOF
if [ "$VIRTUAL_MACHINE" = www ] || [ "$VIRTUAL_MACHINE" = btcpayserver ]; then
cat >> "$YAML_PATH" <<EOF
enp6s0:
name: enp6s0
network: ss-ovn
type: nic
EOF
fi
cat >> "$YAML_PATH" <<EOF
name: ${PRIMARY_DOMAIN}
EOF
fi
if [ "$VIRTUAL_MACHINE" = base ]; then
if ! incus profile list --format csv --project default | grep -q "$INCUS_HOSTNAME"; then
incus profile create "$INCUS_HOSTNAME" --project default
fi
# configure the profile with our generated cloud-init.yml file.
incus profile edit "$INCUS_HOSTNAME" --project default < "$YAML_PATH"
else
if ! incus profile list --format csv | grep -q "$INCUS_HOSTNAME"; then
incus profile create "$INCUS_HOSTNAME"
fi
# configure the profile with our generated cloud-init.yml file.
incus profile edit "$INCUS_HOSTNAME" < "$YAML_PATH"
fi

deployment/up.sh Executable file

@ -0,0 +1,478 @@
#!/bin/bash
set -exu
cd "$(dirname "$0")"
# check to ensure dependencies are met.
for cmd in wait-for-it dig rsync sshfs incus; do
if ! command -v "$cmd" >/dev/null 2>&1; then
echo "This script requires \"${cmd}\" to be installed. Please run 'install.sh'."
exit 1
fi
done
# do a spot check; if we are on production warn.
if incus remote get-default | grep -q "production"; then
echo "WARNING: You are running command against a production system!"
echo ""
# check if there are any uncommitted changes. It's dangerous to
# alter production systems when you have commits to make or changes to stash.
if git update-index --refresh | grep -q "needs update"; then
echo "ERROR: You have uncommited changes! You MUST commit or stash all changes to continue."
exit 1
fi
RESPONSE=
read -r -p " Are you sure you want to continue (y) ": RESPONSE
if [ "$RESPONSE" != "y" ]; then
echo "STOPPING."
exit 1
fi
fi
OTHER_SITES_LIST=
PRIMARY_DOMAIN=
RUN_CERT_RENEWAL=true
SKIP_BASE_IMAGE_CREATION=false
RESTORE_WWW=false
RESTORE_CERTS=false
BACKUP_CERTS=true
BACKUP_BTCPAY=true
SKIP_BTCPAY_SERVER=false
SKIP_WWW_SERVER=false
SKIP_LNPLAY_SERVER=false
BACKUP_BTCPAY_ARCHIVE_PATH=
RESTORE_BTCPAY=false
UPDATE_BTCPAY=false
REMOTE_NAME="$(incus remote get-default)"
USER_SAYS_YES=false
WWW_SERVER_MAC_ADDRESS=
BTCPAY_SERVER_MAC_ADDRESS=
LNPLAY_SERVER_MAC_ADDRESS=
LNPLAY_ENV_PATH=
LNPLAY_VM_EXPIRATION_DATE=
LNPLAY_ORDER_ID=
# grab any modifications from the command line.
for i in "$@"; do
case $i in
--restore-certs)
RESTORE_CERTS=true
shift
;;
--restore-wwwserver)
RESTORE_WWW=true
shift
;;
--restore-btcpay)
RESTORE_BTCPAY=true
shift
;;
--skip-btcpayserver)
SKIP_BTCPAY_SERVER=true
shift
;;
--skip-wwwserver)
SKIP_WWW_SERVER=true
shift
;;
--skip-lnplayserver)
SKIP_LNPLAY_SERVER=true
shift
;;
--backup-btcpayserver)
BACKUP_BTCPAY=true
shift
;;
--backup-archive-path=*)
BACKUP_BTCPAY_ARCHIVE_PATH="${i#*=}"
shift
;;
--update-btcpay)
UPDATE_BTCPAY=true
shift
;;
--skip-base-image)
SKIP_BASE_IMAGE_CREATION=true
shift
;;
--no-cert-renew)
RUN_CERT_RENEWAL=false
shift
;;
--lnplay-env-path=*)
LNPLAY_ENV_PATH="${i#*=}"
shift
;;
--vm-expiration-date=*)
LNPLAY_VM_EXPIRATION_DATE="${i#*=}"
shift
;;
--order-id=*)
LNPLAY_ORDER_ID="${i#*=}"
shift
;;
-y)
USER_SAYS_YES=true
shift
;;
*)
echo "Unexpected option: $1"
exit 1
;;
esac
done
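# Example invocations (illustrative):
#   ./up.sh -y                                  # sets USER_SAYS_YES=true for downstream scripts
#   ./up.sh --skip-base-image --no-cert-renew   # reuse the existing base image and skip cert renewal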
if [ "$RESTORE_BTCPAY" = true ] && [ -z "$BACKUP_BTCPAY_ARCHIVE_PATH" ]; then
echo "ERROR: Use the '--backup-archive-path=/path/to/btcpay/archive.tar.gz' option when restoring btcpay server."
exit 1
fi
if [ "$RESTORE_BTCPAY" = true ] && [ ! -f "$BACKUP_BTCPAY_ARCHIVE_PATH" ]; then
echo "ERROR: The backup archive path you specified DOES NOT exist!"
exit 1
fi
. ./remote_env.sh
export REGISTRY_DOCKER_IMAGE="registry:2"
export BACKUP_CERTS="$BACKUP_CERTS"
export RESTORE_BTCPAY="$RESTORE_BTCPAY"
export RESTORE_WWW="$RESTORE_WWW"
export BACKUP_BTCPAY="$BACKUP_BTCPAY"
export RUN_CERT_RENEWAL="$RUN_CERT_RENEWAL"
export REMOTE_NAME="$REMOTE_NAME"
export REMOTE_PATH="$REMOTES_PATH/$REMOTE_NAME"
export USER_SAYS_YES="$USER_SAYS_YES"
export BACKUP_BTCPAY_ARCHIVE_PATH="$BACKUP_BTCPAY_ARCHIVE_PATH"
export RESTORE_CERTS="$RESTORE_CERTS"
# todo convert this to Trezor-T
SSH_PUBKEY_PATH="$SSH_HOME/id_rsa.pub"
export SSH_PUBKEY_PATH="$SSH_PUBKEY_PATH"
# ensure our remote path is created.
mkdir -p "$REMOTE_PATH"
REMOTE_DEFINITION="$REMOTE_PATH/remote.conf"
if [ ! -f "$REMOTE_DEFINITION" ]; then
echo "ERROR: The remote definition could not be found. You may need to re-run 'ss-remote'."
exit 1
fi
export REMOTE_DEFINITION="$REMOTE_DEFINITION"
source "$REMOTE_DEFINITION"
# this is our password generation mechanism. Relying on GPG for secure password generation
# TODO see if this is a secure way to do it.
function new_pass {
gpg --gen-random --armor 1 25
}
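# new_pass emits 25 random bytes from gpg as ASCII-armored (base64) text; it is used
# below when stubbing site.conf, e.g. DUPLICITY_BACKUP_PASSPHRASE="$(new_pass)"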
function stub_site_definition {
mkdir -p "$SITE_PATH" "$PROJECT_PATH/sites"
# create a symlink from the PROJECT_PATH/sites/DOMAIN_NAME to the ss-sites/domain name
DOMAIN_SYMLINK_PATH="$PROJECT_PATH/sites/$DOMAIN_NAME"
if [ ! -L "$DOMAIN_SYMLINK_PATH" ]; then
ln -r -s "$SITE_PATH" "$DOMAIN_SYMLINK_PATH"
fi
if [ ! -f "$SITE_PATH/site.conf" ]; then
# check to see if the site.conf file exists; stub it out and exit if not.
SITE_DEFINITION_PATH="$SITE_PATH/site.conf"
if [ ! -f "$SITE_DEFINITION_PATH" ]; then
# stub out a site.conf with new passwords.
cat >"$SITE_DEFINITION_PATH" <<EOL
# https://www.sovereign-stack.org/ss-up/#siteconf
DOMAIN_NAME="${DOMAIN_NAME}"
# BTCPAY_ALT_NAMES="tip,store,pay,send"
SITE_LANGUAGE_CODES="en"
DUPLICITY_BACKUP_PASSPHRASE="$(new_pass)"
DEPLOY_GHOST=true
DEPLOY_NEXTCLOUD=false
DEPLOY_NOSTR=false
NOSTR_ACCOUNT_PUBKEY=
DEPLOY_GITEA=false
GHOST_MYSQL_PASSWORD="$(new_pass)"
GHOST_MYSQL_ROOT_PASSWORD="$(new_pass)"
NEXTCLOUD_MYSQL_PASSWORD="$(new_pass)"
NEXTCLOUD_MYSQL_ROOT_PASSWORD="$(new_pass)"
GITEA_MYSQL_PASSWORD="$(new_pass)"
GITEA_MYSQL_ROOT_PASSWORD="$(new_pass)"
#GHOST_DEPLOY_SMTP=true
#MAILGUN_FROM_ADDRESS=false
#MAILGUN_SMTP_USERNAME=
#MAILGUN_SMTP_PASSWORD=
EOL
chmod 0744 "$SITE_DEFINITION_PATH"
echo "INFO: we stubbed a new site.conf for you at '$SITE_DEFINITION_PATH'. Go update it!"
exit 1
fi
fi
}
PROJECT_NAME="$(incus info | grep "project:" | awk '{print $2}')"
export PROJECT_NAME="$PROJECT_NAME"
export PROJECT_PATH="$PROJECTS_PATH/$PROJECT_NAME"
export SKIP_BTCPAY_SERVER="$SKIP_BTCPAY_SERVER"
export SKIP_WWW_SERVER="$SKIP_WWW_SERVER"
export SKIP_LNPLAY_SERVER="$SKIP_LNPLAY_SERVER"
mkdir -p "$PROJECT_PATH" "$REMOTE_PATH/projects"
# create a symlink from ./remotepath/projects/project
PROJECT_SYMLINK="$REMOTE_PATH/projects/$PROJECT_NAME"
if [ ! -L "$PROJECT_SYMLINK" ]; then
ln -r -s "$PROJECT_PATH" "$PROJECT_SYMLINK"
fi
# check to see if the project.conf file exists; stub it out and exit if not.
PROJECT_DEFINITION_PATH="$PROJECT_PATH/project.conf"
if [ ! -f "$PROJECT_DEFINITION_PATH" ]; then
# stub out a project.conf
cat >"$PROJECT_DEFINITION_PATH" <<EOL
# see https://www.sovereign-stack.org/ss-up/#projectconf for more info.
PRIMARY_DOMAIN="domain0.tld"
# OTHER_SITES_LIST="domain1.tld,domain2.tld,domain3.tld"
WWW_SERVER_MAC_ADDRESS=
# WWW_SSDATA_DISK_SIZE_GB=100
# WWW_SERVER_CPU_COUNT="6"
# WWW_SERVER_MEMORY_MB="4096"
BTCPAY_SERVER_MAC_ADDRESS=
# BTCPAY_SERVER_CPU_COUNT="4"
# BTCPAY_SERVER_MEMORY_MB="4096"
LNPLAY_SERVER_MAC_ADDRESS=
# LNPLAY_SERVER_CPU_COUNT="4"
# LNPLAY_SERVER_MEMORY_MB="4096"
# BITCOIN_CHAIN=mainnet
EOL
chmod 0744 "$PROJECT_DEFINITION_PATH"
echo "INFO: we stubbed a new project.conf for you at '$PROJECT_DEFINITION_PATH'. Go update it!"
echo "INFO: Learn more at https://www.sovereign-stack.org/ss-up/"
exit 1
fi
. ./project_env.sh
if [ -z "$PRIMARY_DOMAIN" ]; then
echo "ERROR: The PRIMARY_DOMAIN is not specified. Check your project.conf."
exit 1
fi
source ./domain_list.sh
# let's provision our primary domain first.
export DOMAIN_NAME="$PRIMARY_DOMAIN"
export PRIMARY_DOMAIN="$PRIMARY_DOMAIN"
export BITCOIN_CHAIN="$BITCOIN_CHAIN"
export SITE_PATH="$SITES_PATH/$DOMAIN_NAME"
export PRIMARY_SITE_PATH="$SITES_PATH/$PRIMARY_DOMAIN"
stub_site_definition
# bring the VMs up under the primary domain name.
export UPDATE_BTCPAY="$UPDATE_BTCPAY"
# iterate over all our server endpoints and provision them if needed.
# www
VPS_HOSTNAME=
. ./base.sh
if ! incus image list --format csv | grep -q "$DOCKER_BASE_IMAGE_NAME"; then
# create the incus base image.
if [ "$SKIP_BASE_IMAGE_CREATION" = false ]; then
./create_base.sh
fi
fi
VMS_TO_PROVISION=""
if [ -n "$WWW_SERVER_MAC_ADDRESS" ] && [ "$SKIP_WWW_SERVER" = false ]; then
VMS_TO_PROVISION="www"
fi
if [ -n "$BTCPAY_SERVER_MAC_ADDRESS" ] && [ "$SKIP_BTCPAY_SERVER" = false ]; then
VMS_TO_PROVISION="$VMS_TO_PROVISION btcpayserver"
fi
if [ -n "$LNPLAY_SERVER_MAC_ADDRESS" ] || [ "$SKIP_LNPLAY_SERVER" = false ]; then
VMS_TO_PROVISION="$VMS_TO_PROVISION lnplayserver"
fi
for VIRTUAL_MACHINE in $VMS_TO_PROVISION; do
export VIRTUAL_MACHINE="$VIRTUAL_MACHINE"
FQDN=
export SITE_PATH="$SITES_PATH/$DOMAIN_NAME"
source "$SITE_PATH/site.conf"
source ./project/domain_env.sh
# VALIDATE THE INPUT from the ENVFILE
if [ -z "$DOMAIN_NAME" ]; then
echo "ERROR: DOMAIN_NAME not specified in your site.conf."
exit 1
fi
# Goal is to get the macvlan interface.
INCUS_SS_CONFIG_LINE=
if incus network list --format csv --project default | grep incusbr0 | grep -q "ss-config"; then
INCUS_SS_CONFIG_LINE="$(incus network list --format csv --project default | grep incusbr0 | grep ss-config)"
fi
if [ -z "$INCUS_SS_CONFIG_LINE" ]; then
echo "ERROR: the MACVLAN interface has not been specified. You may need to run 'ss-remote' again."
exit 1
fi
CONFIG_ITEMS="$(echo "$INCUS_SS_CONFIG_LINE" | awk -F'"' '{print $2}')"
DATA_PLANE_MACVLAN_INTERFACE="$(echo "$CONFIG_ITEMS" | cut -d ',' -f2)"
export DATA_PLANE_MACVLAN_INTERFACE="$DATA_PLANE_MACVLAN_INTERFACE"
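# e.g. a row whose quoted user config reads "ss-config,enp2s0" (interface name here is
# hypothetical) would yield DATA_PLANE_MACVLAN_INTERFACE=enp2s0 -- this is the assumed CSV layout.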
# Now let's switch to the new project to ensure new resources are created under the project scope.
if ! incus info | grep "project:" | grep -q "$PROJECT_NAME"; then
incus project switch "$PROJECT_NAME"
fi
# check if the OVN network exists in this project.
if ! incus network list | grep -q "ss-ovn"; then
incus network create ss-ovn --type=ovn network=incusbr1 ipv6.address=none
fi
export MAC_ADDRESS_TO_PROVISION=
export VPS_HOSTNAME="$VPS_HOSTNAME"
export FQDN="$VPS_HOSTNAME.$DOMAIN_NAME"
if [ "$VIRTUAL_MACHINE" = www ] && [ -n "$WWW_SERVER_MAC_ADDRESS" ]; then
FQDN="$WWW_HOSTNAME.$DOMAIN_NAME"
VPS_HOSTNAME="$WWW_HOSTNAME"
MAC_ADDRESS_TO_PROVISION="$WWW_SERVER_MAC_ADDRESS"
elif [ "$VIRTUAL_MACHINE" = btcpayserver ] && [ -n "$BTCPAY_SERVER_MAC_ADDRESS" ]; then
FQDN="$BTCPAY_SERVER_HOSTNAME.$DOMAIN_NAME"
VPS_HOSTNAME="$BTCPAY_SERVER_HOSTNAME"
MAC_ADDRESS_TO_PROVISION="$BTCPAY_SERVER_MAC_ADDRESS"
elif [ "$VIRTUAL_MACHINE" = lnplayserver ] && [ -n "$LNPLAY_SERVER_MAC_ADDRESS" ]; then
FQDN="$LNPLAY_SERVER_HOSTNAME.$DOMAIN_NAME"
VPS_HOSTNAME="$LNPLAY_SERVER_HOSTNAME"
MAC_ADDRESS_TO_PROVISION="$LNPLAY_SERVER_MAC_ADDRESS"
elif [ "$VIRTUAL_MACHINE" = "$BASE_IMAGE_VM_NAME" ]; then
FQDN="$BASE_IMAGE_VM_NAME"
fi
export FQDN="$FQDN"
export INCUS_VM_NAME="${FQDN//./-}"
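# e.g. an FQDN of www.domain.tld yields the instance name www-domain-tld.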
export MAC_ADDRESS_TO_PROVISION="$MAC_ADDRESS_TO_PROVISION"
export PROJECT_PATH="$PROJECT_PATH"
./deploy_vm.sh
done
# let's stub out the rest of our site definitions, if any.
for DOMAIN_NAME in ${OTHER_SITES_LIST//,/ }; do
export DOMAIN_NAME="$DOMAIN_NAME"
export SITE_PATH="$SITES_PATH/$DOMAIN_NAME"
# stub out the site definition if it doesn't exist.
stub_site_definition
done
if [ "$SKIP_BTCPAY_SERVER" = false ]; then
if [ -n "$BTCPAY_SERVER_MAC_ADDRESS" ]; then
export DOCKER_HOST="ssh://ubuntu@$BTCPAY_SERVER_FQDN"
./project/btcpayserver/go.sh
fi
fi
if [ "$SKIP_WWW_SERVER" = false ]; then
# now let's run the www-specific provisioning scripts.
if [ -n "$WWW_SERVER_MAC_ADDRESS" ]; then
export DOCKER_HOST="ssh://ubuntu@$WWW_FQDN"
# enable docker swarm mode so we can support docker stacks.
if docker info | grep -q "Swarm: inactive"; then
docker swarm init --advertise-addr enp6s0
fi
./project/www/go.sh
fi
fi
# don't run lnplay stuff if user specifies --skip-lnplayserver
if [ "$SKIP_LNPLAY_SERVER" = false ]; then
# now let's run the lnplay-specific provisioning scripts.
if [ -n "$LNPLAY_SERVER_MAC_ADDRESS" ]; then
export DOCKER_HOST="ssh://ubuntu@$LNPLAY_SERVER_FQDN"
LNPLAY_ENV_FILE="$PRIMARY_SITE_PATH/$LNPLAY_SERVER_FQDN/lnplay.conf"
if [ ! -f "$LNPLAY_ENV_FILE" ]; then
# and we have to set our environment file as well.
cat > "$LNPLAY_ENV_FILE" <<EOL
DOCKER_HOST=ssh://ubuntu@${LNPLAY_SERVER_FQDN}
BACKEND_FQDN=lnplay.${PRIMARY_DOMAIN}
FRONTEND_FQDN=remote.${PRIMARY_DOMAIN}
ENABLE_TLS=true
BTC_CHAIN=${BITCOIN_CHAIN}
CHANNEL_SETUP=none
LNPLAY_SERVER_PATH=${SITES_PATH}/${PRIMARY_DOMAIN}/lnplayserver
DEPLOY_PRISM_PLUGIN=true
EOL
fi
INCUS_LNPLAYSERVER_IMAGE_NAME="lnplayserver-$DOMAIN_NAME"
if ! incus image list -q --format csv | grep -q "$INCUS_LNPLAYSERVER_IMAGE_NAME"; then
# do all the docker image creation steps, but don't run services.
bash -c "./project/lnplay/up.sh -y --no-services --lnplay-conf-path=$LNPLAY_ENV_FILE"
# stop the instance so we can publish an image from it.
INCUS_VM_NAME="${LNPLAY_SERVER_FQDN//./-}"
incus stop "$INCUS_VM_NAME"
# create the incus image.
incus publish -q --public "$INCUS_VM_NAME" --alias="$INCUS_LNPLAYSERVER_IMAGE_NAME" --compression none
incus start "$INCUS_VM_NAME"
sleep 10
bash -c "./wait_for_ip.sh --incus-name=$INCUS_VM_NAME"
sleep 3
fi
# bring up lnplay services.
bash -c "./project/lnplay/up.sh -y --lnplay-conf-path=$LNPLAY_ENV_FILE"
fi
fi

deployment/wait_for_ip.sh Executable file

@ -0,0 +1,51 @@
#!/bin/bash
set -e
INCUS_INSTANCE_NAME=
# grab any modifications from the command line.
for i in "$@"; do
case $i in
--incus-name=*)
INCUS_INSTANCE_NAME="${i#*=}"
shift
;;
*)
echo "Unexpected option: $1"
exit 1
;;
esac
done
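# Example: ./wait_for_ip.sh --incus-name=www-domain-tld   (instance name is illustrative)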
# if the invoker did not set the instance name, throw an error.
if [ -z "$INCUS_INSTANCE_NAME" ]; then
echo "ERROR: The instance name was not specified. Use '--incus-name' when calling wait_for_ip.sh."
exit 1
fi
if ! incus list --format csv | grep -q "$INCUS_INSTANCE_NAME"; then
echo "ERROR: the instance '$INCUS_INSTANCE_NAME' does not exist."
exit 1
fi
IP_V4_ADDRESS=
while true; do
IP_V4_ADDRESS="$(incus list "$INCUS_INSTANCE_NAME" --format csv --columns=4 | grep enp5s0 | grep -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}')" || true
export IP_V4_ADDRESS="$IP_V4_ADDRESS"
if [ -n "$IP_V4_ADDRESS" ]; then
# give the machine extra time to spin up.
wait-for-it -t 300 "$IP_V4_ADDRESS:22"
break
else
sleep 1
printf '.'
fi
done
# wait for cloud-init to complete before returning.
while incus exec "$INCUS_INSTANCE_NAME" -- [ ! -f /var/lib/cloud/instance/boot-finished ]; do
sleep 1
done
sleep 1


@ -1,28 +0,0 @@
#!/bin/bash
set -ex
LXC_INSTANCE_NAME="$1"
IP_V4_ADDRESS=
while true; do
IP_V4_ADDRESS="$(lxc list "$LXC_INSTANCE_NAME" --format csv --columns=4 | grep enp5s0 | grep -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}')" || true
export IP_V4_ADDRESS="$IP_V4_ADDRESS"
if [ -n "$IP_V4_ADDRESS" ]; then
# give the machine extra time to spin up.
wait-for-it -t 300 "$IP_V4_ADDRESS:22"
echo ""
break
else
sleep 1
printf '.'
fi
done
# Let's remove any entry in our known_hosts, then add it back.
# we are using IP address here so we don't have to rely on external DNS
# configuration for the base image preparation.
ssh-keygen -R "$IP_V4_ADDRESS"
ssh-keyscan -H -t ecdsa "$IP_V4_ADDRESS" >> "$SSH_HOME/known_hosts"
ssh "ubuntu@$IP_V4_ADDRESS" sudo chown -R ubuntu:ubuntu /home/ubuntu


@ -1,36 +0,0 @@
#!/bin/bash
set -eux
cd "$(dirname "$0")"
# TODO: We are using extra space on the remote VPS at the moment for the duplicity backup files.
# we could eliminate that and simply save duplicity backups to the management machine running the script
# this could be done by using a local path and mounting it on the remote VPS.
# maybe something like https://superuser.com/questions/616182/how-to-mount-local-directory-to-remote-like-sshfs
# step 1: run duplicity on the remote system to backup all files to the remote system.
# --allow-source-mismatch
# if the source files to backup don't exist on the remote host, we return.
if ! ssh "$PRIMARY_WWW_FQDN" "[ -d $REMOTE_SOURCE_BACKUP_PATH ]"; then
echo "INFO: The path to backup does not exist. There's nothing to backup! That's ok, execution will continue."
exit 0
fi
ssh "$PRIMARY_WWW_FQDN" sudo PASSPHRASE="$DUPLICITY_BACKUP_PASSPHRASE" duplicity "$REMOTE_SOURCE_BACKUP_PATH" "file://$REMOTE_BACKUP_PATH"
ssh "$PRIMARY_WWW_FQDN" sudo chown -R ubuntu:ubuntu "$REMOTE_BACKUP_PATH"
SSHFS_PATH="/tmp/sshfs_temp"
mkdir -p "$SSHFS_PATH"
# now let's pull down the latest files from the backup directory.
# create a temp directory to serve as the mountpoint for the remote machine backups directory
sshfs "$PRIMARY_WWW_FQDN:$REMOTE_BACKUP_PATH" "$SSHFS_PATH"
# rsync the files from the remote server to our local backup path.
rsync -av "$SSHFS_PATH/" "$LOCAL_BACKUP_PATH/"
# step 4: unmount the SSHFS filesystem and cleanup.
umount "$SSHFS_PATH"
rm -rf "$SSHFS_PATH"


@ -1,59 +0,0 @@
#!/bin/bash
set -ex
# let's do a refresh of the certificates. Let's Encrypt will not run if it's not time.
docker pull certbot/certbot:latest
# when deploying to AWS, www exists on a separate IP address from btcpay, etc.
# thus, we structure the certificate accordingly.
if [ "$VPS_HOSTING_TARGET" = aws ]; then
docker run -it --rm \
--name certbot \
-p 80:80 \
-p 443:443 \
-v "$REMOTE_HOME/letsencrypt":/etc/letsencrypt \
-v /var/lib/letsencrypt:/var/lib/letsencrypt \
-v "$REMOTE_HOME/letsencrypt_logs":/var/log/letsencrypt \
certbot/certbot certonly -v --noninteractive --agree-tos --key-type ecdsa --standalone --expand -d "$DOMAIN_NAME" -d "$FQDN" -d "$NEXTCLOUD_FQDN" -d "$GITEA_FQDN" --email "$CERTIFICATE_EMAIL_ADDRESS"
elif [ "$VPS_HOSTING_TARGET" = lxd ]; then
# with the lxd side, we are trying to expose ALL OUR services from one IP address, which terminates
# at a caching reverse proxy that runs nginx.
# docker run -it --rm \
# --name certbot \
# -p 80:80 \
# -p 443:443 \
# -v "$REMOTE_HOME/letsencrypt":/etc/letsencrypt \
# -v /var/lib/letsencrypt:/var/lib/letsencrypt \
# -v "$REMOTE_HOME/letsencrypt_logs":/var/log/letsencrypt \
# certbot/certbot certonly -v --noninteractive --agree-tos --key-type ecdsa --standalone --expand -d "$DOMAIN_NAME" -d "$PRIMARY_WWW_FQDN" -d "$BTCPAY_USER_FQDN" -d "$NEXTCLOUD_FQDN" -d "$GITEA_FQDN" -d "$NOSTR_FQDN" --email "$CERTIFICATE_EMAIL_ADDRESS"
for DOMAIN_NAME in ${DOMAIN_LIST//,/ }; do
export DOMAIN_NAME="$DOMAIN_NAME"
export SITE_PATH="$SITES_PATH/$DOMAIN_NAME"
# source the site path so we know what features it has.
source ../../reset_env.sh
source "$SITE_PATH/site_definition"
source ../../domain_env.sh
# with the lxd side, we are trying to expose ALL OUR services from one IP address, which terminates
# at a caching reverse proxy that runs nginx.
ssh "$PRIMARY_WWW_FQDN" sudo mkdir -p "$REMOTE_HOME/letsencrypt/$DOMAIN_NAME/_logs"
docker run -it --rm \
--name certbot \
-p 80:80 \
-p 443:443 \
-v "$REMOTE_HOME/letsencrypt/$DOMAIN_NAME":/etc/letsencrypt \
-v /var/lib/letsencrypt:/var/lib/letsencrypt \
-v "$REMOTE_HOME/letsencrypt/$DOMAIN_NAME/_logs":/var/log/letsencrypt \
certbot/certbot certonly -v --noninteractive --agree-tos --key-type ecdsa --standalone --expand -d "$DOMAIN_NAME" -d "$WWW_FQDN" -d "$BTCPAY_USER_FQDN" -d "$NEXTCLOUD_FQDN" -d "$GITEA_FQDN" -d "$NOSTR_FQDN" --email "$CERTIFICATE_EMAIL_ADDRESS"
sleep 3
done
fi


@ -1,142 +0,0 @@
#!/bin/bash
set -exu
cd "$(dirname "$0")"
# Create the nginx config file which covers all domains.
bash -c ./stub/nginx_config.sh
# redirect all docker commands to the remote host.
export DOCKER_HOST="ssh://ubuntu@$PRIMARY_WWW_FQDN"
for DOMAIN_NAME in ${DOMAIN_LIST//,/ }; do
export DOMAIN_NAME="$DOMAIN_NAME"
export SITE_PATH="$SITES_PATH/$DOMAIN_NAME"
# source the site path so we know what features it has.
source ../../reset_env.sh
source "$SITE_PATH/site_definition"
source ../../domain_env.sh
### Let's check to ensure all the required settings are set.
if [ "$DEPLOY_GHOST" = true ]; then
if [ -z "$GHOST_MYSQL_PASSWORD" ]; then
echo "ERROR: Ensure GHOST_MYSQL_PASSWORD is configured in your site_definition."
exit 1
fi
if [ -z "$GHOST_MYSQL_ROOT_PASSWORD" ]; then
echo "ERROR: Ensure GHOST_MYSQL_ROOT_PASSWORD is configured in your site_definition."
exit 1
fi
fi
if [ "$DEPLOY_GITEA" = true ]; then
if [ -z "$GITEA_MYSQL_PASSWORD" ]; then
echo "ERROR: Ensure GITEA_MYSQL_PASSWORD is configured in your site_definition."
exit 1
fi
if [ -z "$GITEA_MYSQL_ROOT_PASSWORD" ]; then
echo "ERROR: Ensure GITEA_MYSQL_ROOT_PASSWORD is configured in your site_definition."
exit 1
fi
fi
if [ "$DEPLOY_NEXTCLOUD" = true ]; then
if [ -z "$NEXTCLOUD_MYSQL_ROOT_PASSWORD" ]; then
echo "ERROR: Ensure NEXTCLOUD_MYSQL_ROOT_PASSWORD is configured in your site_definition."
exit 1
fi
if [ -z "$NEXTCLOUD_MYSQL_PASSWORD" ]; then
echo "ERROR: Ensure NEXTCLOUD_MYSQL_PASSWORD is configured in your site_definition."
exit 1
fi
fi
if [ "$DEPLOY_NOSTR_RELAY" = true ]; then
if [ -z "$NOSTR_ACCOUNT_PUBKEY" ]; then
echo "ERROR: Ensure NOSTR_ACCOUNT_PUBKEY is configured in your site_definition."
exit 1
fi
if [ -z "$NOSTR_ACCOUNT_PUBKEY" ]; then
echo "ERROR: Ensure NOSTR_ACCOUNT_PUBKEY is configured in your site_definition."
exit 1
fi
fi
if [ -z "$DUPLICITY_BACKUP_PASSPHRASE" ]; then
echo "ERROR: Ensure DUPLICITY_BACKUP_PASSPHRASE is configured in your site_definition."
exit 1
fi
if [ -z "$DOMAIN_NAME" ]; then
echo "ERROR: Ensure DOMAIN_NAME is configured in your site_definition."
exit 1
fi
if [ -z "$NOSTR_ACCOUNT_PUBKEY" ]; then
echo "ERROR: You MUST specify a Nostr public key. This is how you get all your social features."
echo "INFO: Go to your site_definition file and set the NOSTR_ACCOUNT_PUBKEY variable."
exit 1
fi
TOR_CONFIG_PATH=
done
./stop_docker_stacks.sh
if [ "$DEPLOY_ONION_SITE" = true ]; then
# ensure the tor image is built
docker build -t tor:latest ./tor
# if the tor folder doesn't exist, we provision a new one. Otherwise you need to restore.
# this is how we generate a new torv3 endpoint.
if ! ssh "$PRIMARY_WWW_FQDN" "[ -d $REMOTE_HOME/tor/www ]"; then
ssh "$PRIMARY_WWW_FQDN" "mkdir -p $REMOTE_HOME/tor"
TOR_CONFIG_PATH="$(pwd)/tor/torrc-init"
export TOR_CONFIG_PATH="$TOR_CONFIG_PATH"
docker stack deploy -c ./tor.yml torstack
sleep 20
docker stack rm torstack
sleep 20
fi
ONION_ADDRESS="$(ssh "$PRIMARY_WWW_FQDN" sudo cat "${REMOTE_HOME}"/tor/www/hostname)"
export ONION_ADDRESS="$ONION_ADDRESS"
# # Since we run a separate ghost process, we create a new directory and symlink it to the original
# if ! ssh "$PRIMARY_WWW_FQDN" "[ -L $REMOTE_HOME/tor_ghost ]"; then
# ssh "$PRIMARY_WWW_FQDN" ln -s "$REMOTE_HOME/ghost_site/themes $REMOTE_HOME/tor_ghost/themes"
# fi
fi
./stub/nginx_yml.sh
./stub/ghost_yml.sh
./stub/nextcloud_yml.sh
./stub/gitea_yml.sh
# # start a browser session; point it to port 80 to ensure HTTPS redirect.
# wait-for-it -t 320 "$PRIMARY_WWW_FQDN:80"
# wait-for-it -t 320 "$PRIMARY_WWW_FQDN:443"
# # open bowser tabs.
# if [ "$DEPLOY_GHOST" = true ]; then
# xdg-open "http://$PRIMARY_WWW_FQDN" > /dev/null 2>&1
# fi
# if [ "$DEPLOY_NEXTCLOUD" = true ]; then
# xdg-open "http://$NEXTCLOUD_FQDN" > /dev/null 2>&1
# fi
# if [ "$DEPLOY_GITEA" = true ]; then
# xdg-open "http://$GITEA_FQDN" > /dev/null 2>&1
# fi
# #fi


@ -1,31 +0,0 @@
#!/bin/bash
set -eux
cd "$(dirname "$0")"
FILE_COUNT="$(find "$LOCAL_BACKUP_PATH" -type f | wc -l)"
if [ "$FILE_COUNT" = 0 ]; then
echo "ERROR: there are no files in the local backup path '$LOCAL_BACKUP_PATH'."
echo "We're going to continue with execution."
exit 0
fi
RESPONSE=
read -r -p "Are you sure you want to restore the local path '$LOCAL_BACKUP_PATH' to the remote server at '$PRIMARY_WWW_FQDN' (y/n)": RESPONSE
if [ "$RESPONSE" != y ]; then
echo "STOPPING."
exit 0
fi
# delete the target backup path so we can push restoration files from the management machine.
ssh "$PRIMARY_WWW_FQDN" sudo rm -rf "$REMOTE_SOURCE_BACKUP_PATH"
# scp our local backup directory to the remote machine
ssh "$PRIMARY_WWW_FQDN" sudo mkdir -p "$REMOTE_BACKUP_PATH"
ssh "$PRIMARY_WWW_FQDN" sudo chown ubuntu:ubuntu "$REMOTE_BACKUP_PATH"
scp -r "$LOCAL_BACKUP_PATH" "$PRIMARY_WWW_FQDN:$REMOTE_BACKUP_PATH"
# now we run duplicity to restore the archive.
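# BACKUP_TIMESTAMP has the form YYYY-MM (see domain_env.sh), so the archive is read from
# e.g. file://$REMOTE_BACKUP_PATH/2024-02 (month shown for illustration only).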
ssh "$PRIMARY_WWW_FQDN" sudo PASSPHRASE="$DUPLICITY_BACKUP_PASSPHRASE" duplicity --force restore "file://$REMOTE_BACKUP_PATH/$BACKUP_TIMESTAMP" "$REMOTE_SOURCE_BACKUP_PATH/"


@ -1,101 +0,0 @@
#!/bin/bash
set -exu
cd "$(dirname "$0")"
# bring down ghost instances.
for DOMAIN_NAME in ${DOMAIN_LIST//,/ }; do
export DOMAIN_NAME="$DOMAIN_NAME"
export SITE_PATH="$SITES_PATH/$DOMAIN_NAME"
# source the site path so we know what features it has.
source ../../reset_env.sh
source "$SITE_PATH/site_definition"
source ../../domain_env.sh
### Stop all services.
for APP in ghost nextcloud gitea; do
# backup each language for each app.
for LANGUAGE_CODE in ${SITE_LANGUAGE_CODES//,/ }; do
STACK_NAME="$DOMAIN_IDENTIFIER-$APP-$LANGUAGE_CODE"
if docker stack list --format "{{.Name}}" | grep -q "$STACK_NAME"; then
docker stack rm "$STACK_NAME"
sleep 2
fi
# these variables are used by both backup/restore scripts.
export APP="$APP"
export REMOTE_BACKUP_PATH="$REMOTE_HOME/backups/www/$APP/$DOMAIN_IDENTIFIER-$LANGUAGE_CODE"
export REMOTE_SOURCE_BACKUP_PATH="$REMOTE_HOME/$APP/$DOMAIN_NAME"
# ensure our local backup path exists so we can pull down the duplicity archive to the management machine.
export LOCAL_BACKUP_PATH="$SITE_PATH/backups/www/$APP/$BACKUP_TIMESTAMP"
# ensure our local backup path exists.
if [ ! -d "$LOCAL_BACKUP_PATH" ]; then
mkdir -p "$LOCAL_BACKUP_PATH"
fi
if [ "$RESTORE_WWW" = true ]; then
./restore_path.sh
#ssh "$PRIMARY_WWW_FQDN" sudo chown ubuntu:ubuntu "$REMOTE_HOME/$APP"
elif [ "$BACKUP_APPS" = true ]; then
# if we're not restoring, then we may or may not back up.
./backup_path.sh
fi
done
done
done
if docker stack list --format "{{.Name}}" | grep -q reverse-proxy; then
docker stack rm reverse-proxy
if [ "$STOP_SERVICES" = true ]; then
echo "STOPPING as indicated by the --stop flag."
exit 1
fi
# wait for all docker containers to stop.
# TODO see if there's a way to check for this.
sleep 10
docker system prune -f
fi
# generate the certs and grab a backup
if [ "$RUN_CERT_RENEWAL" = true ]; then
./generate_certs.sh
fi
# Back up each domain's certificates under /home/ubuntu/letsencrypt/domain
for DOMAIN_NAME in ${DOMAIN_LIST//,/ }; do
export DOMAIN_NAME="$DOMAIN_NAME"
export SITE_PATH="$SITES_PATH/$DOMAIN_NAME"
# source the site path so we know what features it has.
source ../../reset_env.sh
source "$SITE_PATH/site_definition"
source ../../domain_env.sh
# these variables are used by both backup/restore scripts.
export APP="letsencrypt"
export REMOTE_BACKUP_PATH="$REMOTE_HOME/backups/www/$APP/$DOMAIN_IDENTIFIER"
export REMOTE_SOURCE_BACKUP_PATH="$REMOTE_HOME/$APP/$DOMAIN_NAME"
# ensure our local backup path exists so we can pull down the duplicity archive to the management machine.
export LOCAL_BACKUP_PATH="$SITE_PATH/backups/www/$APP/$BACKUP_TIMESTAMP"
mkdir -p "$LOCAL_BACKUP_PATH"
if [ "$RESTORE_WWW" = true ]; then
sleep 5
echo "STARTING restore_path.sh for letsencrypt."
./restore_path.sh
#ssh "$PRIMARY_WWW_FQDN" sudo chown ubuntu:ubuntu "$REMOTE_HOME/$APP"
elif [ "$BACKUP_APPS" = true ]; then
# if we're not restoring, then we may or may not back up.
./backup_path.sh
fi
done


@ -1,112 +0,0 @@
#!/bin/bash
set -exu
cd "$(dirname "$0")"
for DOMAIN_NAME in ${DOMAIN_LIST//,/ }; do
export DOMAIN_NAME="$DOMAIN_NAME"
export SITE_PATH="$SITES_PATH/$DOMAIN_NAME"
# source the site path so we know what features it has.
source ../../../reset_env.sh
source "$SITE_PATH/site_definition"
source ../../../domain_env.sh
# for each language specified in the site_definition, we spawn a separate ghost container
# at https://www.domain.com/$LANGUAGE_CODE
for LANGUAGE_CODE in ${SITE_LANGUAGE_CODES//,/ }; do
STACK_NAME="$DOMAIN_IDENTIFIER-$LANGUAGE_CODE"
# ensure directories on remote host exist so we can mount them into the containers.
ssh "$PRIMARY_WWW_FQDN" mkdir -p "$REMOTE_HOME/ghost/$DOMAIN_NAME"
ssh "$PRIMARY_WWW_FQDN" mkdir -p "$REMOTE_HOME/ghost/$DOMAIN_NAME/$LANGUAGE_CODE/ghost" "$REMOTE_HOME/ghost/$DOMAIN_NAME/$LANGUAGE_CODE/db"
export GHOST_STACK_TAG="ghost-$STACK_NAME"
export GHOST_DB_STACK_TAG="ghostdb-$STACK_NAME"
# todo append domain number or port number.
WEBSTACK_PATH="$SITE_PATH/webstack"
mkdir -p "$WEBSTACK_PATH"
export DOCKER_YAML_PATH="$WEBSTACK_PATH/ghost-$LANGUAGE_CODE.yml"
# here's the docker stack definition for ghost and its database.
cat > "$DOCKER_YAML_PATH" <<EOL
version: "3.8"
services:
EOL
# This is the ghost for HTTPS (not over Tor)
cat >>"$DOCKER_YAML_PATH" <<EOL
${GHOST_STACK_TAG}:
image: ${GHOST_IMAGE}
networks:
- ghostnet-${DOMAIN_IDENTIFIER}-${LANGUAGE_CODE}
- ghostdbnet-${DOMAIN_IDENTIFIER}-${LANGUAGE_CODE}
volumes:
- ${REMOTE_HOME}/ghost/${DOMAIN_NAME}/${LANGUAGE_CODE}/ghost:/var/lib/ghost/content
environment:
EOL
if [ "$LANGUAGE_CODE" = "en" ]; then
cat >>"$DOCKER_YAML_PATH" <<EOL
- url=https://${WWW_FQDN}
EOL
else
cat >>"$DOCKER_YAML_PATH" <<EOL
- url=https://${WWW_FQDN}/${LANGUAGE_CODE}
EOL
fi
cat >>"$DOCKER_YAML_PATH" <<EOL
- database__client=mysql
- database__connection__host=${GHOST_DB_STACK_TAG}
- database__connection__user=ghost
- database__connection__password=\${GHOST_MYSQL_PASSWORD}
- database__connection__database=ghost
- database__pool__min=0
- privacy__useStructuredData=true
deploy:
restart_policy:
condition: on-failure
${GHOST_DB_STACK_TAG}:
image: ${GHOST_DB_IMAGE}
networks:
- ghostdbnet-${DOMAIN_IDENTIFIER}-${LANGUAGE_CODE}
volumes:
- ${REMOTE_HOME}/ghost/${DOMAIN_NAME}/${LANGUAGE_CODE}/db:/var/lib/mysql
environment:
- MYSQL_ROOT_PASSWORD=\${GHOST_MYSQL_ROOT_PASSWORD}
- MYSQL_DATABASE=ghost
- MYSQL_USER=ghost
- MYSQL_PASSWORD=\${GHOST_MYSQL_PASSWORD}
deploy:
restart_policy:
condition: on-failure
EOL
cat >>"$DOCKER_YAML_PATH" <<EOL
networks:
EOL
if [ "$DEPLOY_GHOST" = true ]; then
GHOSTNET_NAME="ghostnet-$DOMAIN_IDENTIFIER-$LANGUAGE_CODE"
GHOSTDBNET_NAME="ghostdbnet-$DOMAIN_IDENTIFIER-$LANGUAGE_CODE"
cat >>"$DOCKER_YAML_PATH" <<EOL
${GHOSTNET_NAME}:
name: "reverse-proxy_ghostnet-$DOMAIN_IDENTIFIER-$LANGUAGE_CODE"
external: true
${GHOSTDBNET_NAME}:
EOL
fi
docker stack deploy -c "$DOCKER_YAML_PATH" "$DOMAIN_IDENTIFIER-ghost-$LANGUAGE_CODE"
sleep 2
done # language code
done # domain list


@ -1,88 +0,0 @@
#!/bin/bash
set -exu
cd "$(dirname "$0")"
for DOMAIN_NAME in ${DOMAIN_LIST//,/ }; do
export DOMAIN_NAME="$DOMAIN_NAME"
export SITE_PATH="$SITES_PATH/$DOMAIN_NAME"
# source the site path so we know what features it has.
source ../../../reset_env.sh
source "$SITE_PATH/site_definition"
source ../../../domain_env.sh
if [ "$DEPLOY_GITEA" = true ]; then
GITEA_PATH="$REMOTE_GITEA_PATH/$DOMAIN_NAME/${LANGUAGE_CODE}"
ssh "$PRIMARY_WWW_FQDN" mkdir -p "$GITEA_PATH/data" "$GITEA_PATH/db"
STACK_NAME="$DOMAIN_IDENTIFIER-$LANGUAGE_CODE"
export STACK_TAG="gitea-$STACK_NAME"
export DB_STACK_TAG="giteadb-$STACK_NAME"
export DOCKER_YAML_PATH="$SITE_PATH/webstack/gitea-en.yml"
NET_NAME="giteanet-$DOMAIN_IDENTIFIER"
DBNET_NAME="giteadbnet-$DOMAIN_IDENTIFIER"
# here's the docker stack definition for gitea and its database.
echo "" > "$DOCKER_YAML_PATH"
cat >>"$DOCKER_YAML_PATH" <<EOL
version: "3.8"
services:
${STACK_TAG}:
image: ${GITEA_IMAGE}
volumes:
- ${GITEA_PATH}/data:/data
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
environment:
- USER_UID=1000
- USER_GID=1000
- ROOT_URL=https://${GITEA_FQDN}
- GITEA__database__DB_TYPE=mysql
- GITEA__database__HOST=${DB_STACK_TAG}:3306
- GITEA__database__NAME=gitea
- GITEA__database__USER=gitea
- GITEA__PASSWD=\${GITEA_MYSQL_PASSWORD}
networks:
- ${NET_NAME}
- ${DBNET_NAME}
deploy:
restart_policy:
condition: on-failure
${DB_STACK_TAG}:
image: ${GITEA_DB_IMAGE}
networks:
- ${DBNET_NAME}
volumes:
- ${GITEA_PATH}/db:/var/lib/mysql
environment:
- MYSQL_ROOT_PASSWORD=\${GITEA_MYSQL_ROOT_PASSWORD}
- MYSQL_PASSWORD=\${GITEA_MYSQL_PASSWORD}
- MYSQL_DATABASE=gitea
- MYSQL_USER=gitea
deploy:
restart_policy:
condition: on-failure
networks:
EOL
cat >>"$DOCKER_YAML_PATH" <<EOL
${NET_NAME}:
name: "reverse-proxy_${NET_NAME}-${LANGUAGE_CODE}"
external: true
${DBNET_NAME}:
EOL
docker stack deploy -c "$DOCKER_YAML_PATH" "$DOMAIN_IDENTIFIER-gitea-$LANGUAGE_CODE"
sleep 1
fi
done


@ -1,82 +0,0 @@
#!/bin/bash
set -exu
cd "$(dirname "$0")"
for DOMAIN_NAME in ${DOMAIN_LIST//,/ }; do
export DOMAIN_NAME="$DOMAIN_NAME"
export SITE_PATH="$SITES_PATH/$DOMAIN_NAME"
# source the site path so we know what features it has.
source ../../../reset_env.sh
source "$SITE_PATH/site_definition"
source ../../../domain_env.sh
# ensure remote directories exist
if [ "$DEPLOY_NEXTCLOUD" = true ]; then
ssh "$PRIMARY_WWW_FQDN" "mkdir -p $REMOTE_NEXTCLOUD_PATH/$DOMAIN_NAME/en/db"
ssh "$PRIMARY_WWW_FQDN" "mkdir -p $REMOTE_NEXTCLOUD_PATH/$DOMAIN_NAME/en/html"
sleep 2
WEBSTACK_PATH="$SITE_PATH/webstack"
mkdir -p "$WEBSTACK_PATH"
export DOCKER_YAML_PATH="$WEBSTACK_PATH/nextcloud-en.yml"
# here's the docker stack definition for nextcloud and its database.
cat > "$DOCKER_YAML_PATH" <<EOL
version: "3.8"
services:
${NEXTCLOUD_STACK_TAG}:
image: ${NEXTCLOUD_IMAGE}
networks:
- nextcloud-${DOMAIN_IDENTIFIER}-en
- nextclouddb-${DOMAIN_IDENTIFIER}-en
volumes:
- ${REMOTE_HOME}/nextcloud/${DOMAIN_NAME}/en/html:/var/www/html
environment:
- MYSQL_PASSWORD=\${NEXTCLOUD_MYSQL_PASSWORD}
- MYSQL_DATABASE=nextcloud
- MYSQL_USER=nextcloud
- MYSQL_HOST=${NEXTCLOUD_DB_STACK_TAG}
- NEXTCLOUD_TRUSTED_DOMAINS=${DOMAIN_NAME}
- OVERWRITEHOST=${NEXTCLOUD_FQDN}
- OVERWRITEPROTOCOL=https
- SERVERNAME=${NEXTCLOUD_FQDN}
deploy:
restart_policy:
condition: on-failure
${NEXTCLOUD_DB_STACK_TAG}:
image: ${NEXTCLOUD_DB_IMAGE}
command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW --innodb_read_only_compressed=OFF
networks:
- nextclouddb-${DOMAIN_IDENTIFIER}-en
volumes:
- ${REMOTE_HOME}/nextcloud/${DOMAIN_NAME}/en/db:/var/lib/mysql
environment:
- MARIADB_ROOT_PASSWORD=\${NEXTCLOUD_MYSQL_ROOT_PASSWORD}
- MYSQL_PASSWORD=\${NEXTCLOUD_MYSQL_PASSWORD}
- MYSQL_DATABASE=nextcloud
- MYSQL_USER=nextcloud
deploy:
restart_policy:
condition: on-failure
networks:
nextcloud-${DOMAIN_IDENTIFIER}-en:
name: "reverse-proxy_nextcloudnet-$DOMAIN_IDENTIFIER-$LANGUAGE_CODE"
external: true
nextclouddb-${DOMAIN_IDENTIFIER}-en:
EOL
docker stack deploy -c "$DOCKER_YAML_PATH" "$DOMAIN_IDENTIFIER-nextcloud-en"
fi
done


@ -1,487 +0,0 @@
#!/bin/bash
set -eu
cd "$(dirname "$0")"
# here's the NGINX config. We support ghost and nextcloud.
NGINX_CONF_PATH="$PROJECT_PATH/nginx.conf"
# clear the existing nginx config.
echo "" > "$NGINX_CONF_PATH"
# iterate over all our domains and create the nginx config file.
iteration=0
echo "DOMAIN_LIST: $DOMAIN_LIST"
for DOMAIN_NAME in ${DOMAIN_LIST//,/ }; do
export DOMAIN_NAME="$DOMAIN_NAME"
export SITE_PATH="$SITES_PATH/$DOMAIN_NAME"
export CONTAINER_TLS_PATH="/etc/letsencrypt/${DOMAIN_NAME}/live/${DOMAIN_NAME}"
# source the site path so we know what features it has.
source ../../../reset_env.sh
source "$SITE_PATH/site_definition"
source ../../../domain_env.sh
echo "Doing DOMAIN_NAME: $DOMAIN_NAME"
if [ $iteration = 0 ]; then
cat >>"$NGINX_CONF_PATH" <<EOL
events {
worker_connections 1024;
}
http {
client_max_body_size 100m;
server_tokens off;
# the next two sets of directives and the connection_upgrade map come from https://docs.btcpayserver.org/FAQ/Deployment/#can-i-use-an-existing-nginx-server-as-a-reverse-proxy-with-ssl-termination
# Needed to allow very long URLs to prevent issues while signing PSBTs
server_names_hash_bucket_size 128;
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
client_header_buffer_size 500k;
large_client_header_buffers 4 500k;
# Needed websocket support (used by Ledger hardware wallets)
map \$http_upgrade \$connection_upgrade {
default upgrade;
'' close;
}
# default HTTP server: redirect all unmatched hostnames to https://${WWW_FQDN}
server {
listen 80 default_server;
return 301 https://${WWW_FQDN}\$request_uri;
}
EOL
fi
# ghost http to https redirects.
cat >>"$NGINX_CONF_PATH" <<EOL
# http://${DOMAIN_NAME} redirect to https://${WWW_FQDN}
server {
listen 80;
listen [::]:80;
server_name ${DOMAIN_NAME};
location / {
# request MAY get another redirect at https://domain.tld for www.
return 301 https://${DOMAIN_NAME}\$request_uri;
}
}
EOL
cat >>"$NGINX_CONF_PATH" <<EOL
# http://${WWW_FQDN} redirect to https://${WWW_FQDN}
server {
listen 80;
listen [::]:80;
server_name ${WWW_FQDN};
return 301 https://${WWW_FQDN}\$request_uri;
}
EOL
# nextcloud http-to-https redirect
if [ "$DEPLOY_NEXTCLOUD" = true ]; then
cat >>"$NGINX_CONF_PATH" <<EOL
# http://${NEXTCLOUD_FQDN} redirect to https://${NEXTCLOUD_FQDN}
server {
listen 80;
listen [::]:80;
server_name ${NEXTCLOUD_FQDN};
return 301 https://${NEXTCLOUD_FQDN}\$request_uri;
}
EOL
fi
# gitea http to https redirect.
if [ "$DEPLOY_GITEA" = true ]; then
cat >>"$NGINX_CONF_PATH" <<EOL
# http://${GITEA_FQDN} redirect to https://${GITEA_FQDN}
server {
listen 80;
listen [::]:80;
server_name ${GITEA_FQDN};
return 301 https://${GITEA_FQDN}\$request_uri;
}
EOL
fi
# REDIRECT FOR BTCPAY_USER_FQDN
if [ "$VPS_HOSTING_TARGET" = lxd ]; then
# btcpay http to https redirect.
if [ "$DEPLOY_BTCPAY_SERVER" = true ]; then
cat >>"$NGINX_CONF_PATH" <<EOL
# http://${BTCPAY_USER_FQDN} redirect to https://${BTCPAY_USER_FQDN}
server {
listen 80;
listen [::]:80;
server_name ${BTCPAY_USER_FQDN};
return 301 https://${BTCPAY_USER_FQDN}\$request_uri;
}
EOL
fi
fi
if [ "$iteration" = 0 ]; then
# TLS config for ghost.
cat >>"$NGINX_CONF_PATH" <<EOL
# global TLS settings
ssl_prefer_server_ciphers on;
ssl_protocols TLSv1.3;
ssl_session_timeout 1d;
ssl_session_cache shared:MozSSL:10m; # about 40000 sessions
ssl_session_tickets off;
add_header Strict-Transport-Security "max-age=63072000" always;
ssl_stapling on;
ssl_stapling_verify on;
resolver 198.54.117.10;
# TODO change resolver to local DNS resolver, or inherit from system.
# default server if hostname not specified.
server {
listen 443 default_server;
ssl_certificate $CONTAINER_TLS_PATH/fullchain.pem;
ssl_certificate_key $CONTAINER_TLS_PATH/privkey.pem;
ssl_trusted_certificate $CONTAINER_TLS_PATH/fullchain.pem;
return 403;
}
# maybe helps with Twitter cards.
#map \$http_user_agent \$og_prefix {
# ~*(googlebot|twitterbot)/ /open-graph;
#}
# this map allows us to route the client's request to the correct Ghost instance
# based on the client's browser language setting.
map \$http_accept_language \$lang {
default "";
~es es;
}
EOL
fi
cat >>"$NGINX_CONF_PATH" <<EOL
# https://${DOMAIN_NAME} redirect to https://${WWW_FQDN}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
ssl_certificate $CONTAINER_TLS_PATH/fullchain.pem;
ssl_certificate_key $CONTAINER_TLS_PATH/privkey.pem;
ssl_trusted_certificate $CONTAINER_TLS_PATH/fullchain.pem;
server_name ${DOMAIN_NAME};
# catch all; send request to ${WWW_FQDN}
location / {
return 301 https://${WWW_FQDN}\$request_uri;
}
EOL
if [ "$DEPLOY_NOSTR_RELAY" = true ]; then
cat >>"$NGINX_CONF_PATH" <<EOL
# We return a JSON object with name/pubkey mapping per NIP05.
# https://www.reddit.com/r/nostr/comments/rrzk76/nip05_mapping_usernames_to_dns_domains_by_fiatjaf/sssss
# TODO I'm not sure about the security of this Access-Control-Allow-Origin. Read up and restrict it if possible.
location = /.well-known/nostr.json {
add_header Content-Type application/json;
add_header Access-Control-Allow-Origin *;
return 200 '{ "names": { "_": "${NOSTR_ACCOUNT_PUBKEY}" } }';
}
EOL
fi
cat >>"$NGINX_CONF_PATH" <<EOL
}
#access_log /var/log/nginx/ghost-access.log;
#error_log /var/log/nginx/ghost-error.log;
EOL
# SERVER block for BTCPAY Server
if [ "$VPS_HOSTING_TARGET" = lxd ]; then
if [ "$DEPLOY_BTCPAY_SERVER" = true ]; then
cat >>"$NGINX_CONF_PATH" <<EOL
# https://${BTCPAY_USER_FQDN}: reverse proxy to the BTCPay Server backend
server {
listen 443 ssl http2;
ssl_certificate $CONTAINER_TLS_PATH/fullchain.pem;
ssl_certificate_key $CONTAINER_TLS_PATH/privkey.pem;
ssl_trusted_certificate $CONTAINER_TLS_PATH/fullchain.pem;
server_name ${BTCPAY_USER_FQDN};
# Route everything to the real BTCPay server
location / {
# URL of BTCPay Server
proxy_pass http://10.139.144.10:80;
proxy_set_header Host \$http_host;
proxy_set_header X-Forwarded-Proto \$scheme;
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
# For websockets (used by Ledger hardware wallets)
proxy_set_header Upgrade \$http_upgrade;
}
}
EOL
fi
fi
echo " # set up cache paths for nginx caching" >>"$NGINX_CONF_PATH"
for LANGUAGE_CODE in ${SITE_LANGUAGE_CODES//,/ }; do
STACK_NAME="$DOMAIN_IDENTIFIER-$LANGUAGE_CODE"
cat >>"$NGINX_CONF_PATH" <<EOL
proxy_cache_path /tmp/${STACK_NAME} levels=1:2 keys_zone=${STACK_NAME}:600m max_size=100m inactive=24h;
EOL
done
# the opening server block for the HTTPS listener for ghost
cat >>"$NGINX_CONF_PATH" <<EOL
# Main HTTPS listener for https://${WWW_FQDN}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
ssl_certificate $CONTAINER_TLS_PATH/fullchain.pem;
ssl_certificate_key $CONTAINER_TLS_PATH/privkey.pem;
ssl_trusted_certificate $CONTAINER_TLS_PATH/fullchain.pem;
server_name ${WWW_FQDN};
# Set the crawler policy.
location = /robots.txt {
add_header Content-Type text/plain;
return 200 "User-Agent: *\\nAllow: /\\n";
}
EOL
# # add the Onion-Location header if specified.
# if [ "$DEPLOY_ONION_SITE" = true ]; then
# cat >>"$NGINX_CONF_PATH" <<EOL
# add_header Onion-Location https://${ONION_ADDRESS}\$request_uri;
# EOL
# fi
for LANGUAGE_CODE in ${SITE_LANGUAGE_CODES//,/ }; do
STACK_NAME="$DOMAIN_IDENTIFIER-$LANGUAGE_CODE"
if [ "$LANGUAGE_CODE" = en ]; then
cat >>"$NGINX_CONF_PATH" <<EOL
location ~ ^/(ghost/|p/|private/) {
EOL
else
cat >>"$NGINX_CONF_PATH" <<EOL
location ~ ^/${LANGUAGE_CODE}/(ghost/|p/|private/) {
EOL
fi
cat >>"$NGINX_CONF_PATH" <<EOL
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header Host \$http_host;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto \$scheme;
proxy_intercept_errors on;
proxy_pass http://ghost-${STACK_NAME}:2368;
}
EOL
done
ROOT_SITE_LANGUAGE_CODES="$SITE_LANGUAGE_CODES"
for LANGUAGE_CODE in ${ROOT_SITE_LANGUAGE_CODES//,/ }; do
cat >>"$NGINX_CONF_PATH" <<EOL
# Location block to serve https://${WWW_FQDN}/${LANGUAGE_CODE}, or https://${WWW_FQDN}/ if English.
EOL
if [ "$LANGUAGE_CODE" = en ]; then
cat >>"$NGINX_CONF_PATH" <<EOL
location / {
EOL
if (( "$LANGUAGE_CODE_COUNT" > 1 )); then
# we only need this clause if we know there is more than one language being rendered.
cat >>"$NGINX_CONF_PATH" <<EOL
# Redirect the user to the correct language using the map above.
if ( \$http_accept_language !~* '^en(.*)\$' ) {
#rewrite (.*) \$1/\$lang;
return 302 https://${WWW_FQDN}/\$lang;
}
EOL
fi
else
cat >>"$NGINX_CONF_PATH" <<EOL
location /${LANGUAGE_CODE} {
EOL
fi
cat >>"$NGINX_CONF_PATH" <<EOL
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header Host \$http_host;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto \$scheme;
proxy_intercept_errors on;
proxy_pass http://ghost-${DOMAIN_IDENTIFIER}-${LANGUAGE_CODE}:2368;
# https://stanislas.blog/2019/08/ghost-nginx-cache/ for nginx caching instructions
# Remove cookies which are useless for anonymous visitor and prevent caching
proxy_ignore_headers Set-Cookie Cache-Control;
proxy_hide_header Set-Cookie;
# Add header for cache status (miss or hit)
add_header X-Cache-Status \$upstream_cache_status;
proxy_cache ${DOMAIN_IDENTIFIER}-${LANGUAGE_CODE};
# Default TTL: 1 day
proxy_cache_valid 5s;
# Cache 404 pages for 1h
proxy_cache_valid 404 1h;
# use conditional GET requests to refresh the content from origin servers
proxy_cache_revalidate on;
proxy_buffering on;
# Allows starting a background subrequest to update an expired cache item,
# while a stale cached response is returned to the client.
proxy_cache_background_update on;
# Bypass cache for errors
proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;
}
EOL
done
# this is the closing server block for the ghost HTTPS segment
cat >>"$NGINX_CONF_PATH" <<EOL
}
EOL
if [ "$DEPLOY_NEXTCLOUD" = true ]; then
cat >>"$NGINX_CONF_PATH" <<EOL
# TLS listener for ${NEXTCLOUD_FQDN}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
ssl_certificate $CONTAINER_TLS_PATH/fullchain.pem;
ssl_certificate_key $CONTAINER_TLS_PATH/privkey.pem;
ssl_trusted_certificate $CONTAINER_TLS_PATH/fullchain.pem;
server_name ${NEXTCLOUD_FQDN};
location / {
proxy_headers_hash_max_size 512;
proxy_headers_hash_bucket_size 64;
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header Host \$host;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto \$scheme;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://${NEXTCLOUD_STACK_TAG}:80;
}
# https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/reverse_proxy_configuration.html
location /.well-known/carddav {
return 301 \$scheme://\$host/remote.php/dav;
}
location /.well-known/caldav {
return 301 \$scheme://\$host/remote.php/dav;
}
}
EOL
fi
# TODO this MIGHT be part of the solution for Twitter Cards.
# location /contents {
# resolver 127.0.0.11 ipv6=off valid=5m;
# proxy_set_header X-Real-IP \$remote_addr;
# proxy_set_header Host \$http_host;
# proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
# proxy_set_header X-Forwarded-Proto \$scheme;
# proxy_intercept_errors on;
# proxy_pass http://ghost-${DOMAIN_IDENTIFIER}-${SITE_LANGUAGE_CODES}::2368\$og_prefix\$request_uri;
# }
# this piece is for GITEA.
if [ "$DEPLOY_GITEA" = true ]; then
cat >>"$NGINX_CONF_PATH" <<EOL
# TLS listener for ${GITEA_FQDN}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
ssl_certificate $CONTAINER_TLS_PATH/fullchain.pem;
ssl_certificate_key $CONTAINER_TLS_PATH/privkey.pem;
ssl_trusted_certificate $CONTAINER_TLS_PATH/fullchain.pem;
server_name ${GITEA_FQDN};
location / {
proxy_headers_hash_max_size 512;
proxy_headers_hash_bucket_size 64;
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header Host \$host;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto \$scheme;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://gitea-${DOMAIN_IDENTIFIER}-en:3000;
}
}
EOL
fi
iteration=$((iteration+1))
done
# add the closing brace.
cat >>"$NGINX_CONF_PATH" <<EOL
}
EOL


@ -1,121 +0,0 @@
#!/bin/bash
set -euox
cd "$(dirname "$0")"
#https://github.com/fiatjaf/expensive-relay
# NOSTR RELAY WHICH REQUIRES PAYMENTS.
DOCKER_YAML_PATH="$PROJECT_PATH/nginx.yml"
cat > "$DOCKER_YAML_PATH" <<EOL
version: "3.8"
services:
nginx:
image: ${NGINX_IMAGE}
ports:
- 0.0.0.0:443:443
- 0.0.0.0:80:80
networks:
EOL
for DOMAIN_NAME in ${DOMAIN_LIST//,/ }; do
export DOMAIN_NAME="$DOMAIN_NAME"
export SITE_PATH="$SITES_PATH/$DOMAIN_NAME"
# source the site path so we know what features it has.
source ../../../reset_env.sh
source "$SITE_PATH/site_definition"
source ../../../domain_env.sh
for LANGUAGE_CODE in ${SITE_LANGUAGE_CODES//,/ }; do
# attach the nginx service to the ghost network for this domain/language.
cat >> "$DOCKER_YAML_PATH" <<EOL
- ghostnet-$DOMAIN_IDENTIFIER-$LANGUAGE_CODE
EOL
if [ "$LANGUAGE_CODE" = en ]; then
if [ "$DEPLOY_GITEA" = "true" ]; then
cat >> "$DOCKER_YAML_PATH" <<EOL
- giteanet-$DOMAIN_IDENTIFIER-en
EOL
fi
if [ "$DEPLOY_NEXTCLOUD" = "true" ]; then
cat >> "$DOCKER_YAML_PATH" <<EOL
- nextcloudnet-$DOMAIN_IDENTIFIER-en
EOL
fi
fi
done
done
cat >> "$DOCKER_YAML_PATH" <<EOL
volumes:
- ${REMOTE_HOME}/letsencrypt:/etc/letsencrypt:ro
configs:
- source: nginx-config
target: /etc/nginx/nginx.conf
deploy:
restart_policy:
condition: on-failure
configs:
nginx-config:
file: ${PROJECT_PATH}/nginx.conf
EOL
################ NETWORKS SECTION
cat >> "$DOCKER_YAML_PATH" <<EOL
networks:
EOL
for DOMAIN_NAME in ${DOMAIN_LIST//,/ }; do
export DOMAIN_NAME="$DOMAIN_NAME"
export SITE_PATH="$SITES_PATH/$DOMAIN_NAME"
# source the site path so we know what features it has.
source ../../../reset_env.sh
source "$SITE_PATH/site_definition"
source ../../../domain_env.sh
# for each language specified in the site_definition, we spawn a separate ghost container
# at https://www.domain.com/$LANGUAGE_CODE
for LANGUAGE_CODE in ${SITE_LANGUAGE_CODES//,/ }; do
cat >> "$DOCKER_YAML_PATH" <<EOL
ghostnet-$DOMAIN_IDENTIFIER-$LANGUAGE_CODE:
attachable: true
EOL
done
if [ "$DEPLOY_GITEA" = true ]; then
cat >> "$DOCKER_YAML_PATH" <<EOL
giteanet-$DOMAIN_IDENTIFIER-en:
attachable: true
EOL
fi
if [ "$DEPLOY_NEXTCLOUD" = "true" ]; then
cat >> "$DOCKER_YAML_PATH" <<EOL
nextcloudnet-$DOMAIN_IDENTIFIER-en:
attachable: true
EOL
fi
done
docker stack deploy -c "$DOCKER_YAML_PATH" "reverse-proxy"
# iterate over all our domains and create the nginx config file.


@ -1,23 +0,0 @@
# # tor config
# if [ "$DEPLOY_ONION_SITE" = true ]; then
# cat >>"$NGINX_CONF_PATH" <<EOL
# # server listener for tor v3 onion endpoint
# server {
# listen 443 ssl http2;
# listen [::]:443 ssl http2;
# server_name ${ONION_ADDRESS};
# #access_log /var/log/nginx/tor-www.log;
# # administration not allowed over tor interface.
# location /ghost { deny all; }
# location / {
# proxy_set_header X-Forwarded-For 1.1.1.1;
# proxy_set_header X-Forwarded-Proto https;
# proxy_set_header X-Real-IP 1.1.1.1;
# proxy_set_header Host \$http_host;
# proxy_pass http://tor-ghost:2368;
# }
# }
# EOL
# fi


@ -1,149 +0,0 @@
# if [ "$DEPLOY_NEXTCLOUD" = true ]; then
# cat >>"$DOCKER_YAML_PATH" <<EOL
# nextcloud-db:
# image: ${NEXTCLOUD_DB_IMAGE}
# command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW --log-bin --innodb_read_only_compressed=OFF
# networks:
# - nextclouddb-net
# volumes:
# - ${REMOTE_HOME}/nextcloud/db/data:/var/lib/mysql
# environment:
# - MARIADB_ROOT_PASSWORD=\${NEXTCLOUD_MYSQL_ROOT_PASSWORD}
# - MYSQL_PASSWORD=\${NEXTCLOUD_MYSQL_PASSWORD}
# - MYSQL_DATABASE=nextcloud
# - MYSQL_USER=nextcloud
# deploy:
# restart_policy:
# condition: on-failure
# nextcloud:
# image: ${NEXTCLOUD_IMAGE}
# networks:
# - nextclouddb-net
# - nextcloud-net
# volumes:
# - ${REMOTE_HOME}/nextcloud/html:/var/www/html
# environment:
# - MYSQL_PASSWORD=\${NEXTCLOUD_MYSQL_PASSWORD}
# - MYSQL_DATABASE=nextcloud
# - MYSQL_USER=nextcloud
# - MYSQL_HOST=nextcloud-db
# - NEXTCLOUD_TRUSTED_DOMAINS=${DOMAIN_NAME}
# - OVERWRITEHOST=${NEXTCLOUD_FQDN}
# - OVERWRITEPROTOCOL=https
# - SERVERNAME=${NEXTCLOUD_FQDN}
# deploy:
# restart_policy:
# condition: on-failure
# EOL
# fi
# if [ "$DEPLOY_ONION_SITE" = true ]; then
# cat >>"$DOCKER_YAML_PATH" <<EOL
# # a hidden service that routes to the nginx container at http://onionurl.onion server block
# tor-onion:
# image: tor:latest
# networks:
# - tor-net
# volumes:
# - ${REMOTE_HOME}/tor:/var/lib/tor
# - tor-logs:/var/log/tor
# configs:
# - source: tor-config
# target: /etc/tor/torrc
# mode: 0644
# deploy:
# mode: replicated
# replicas: 1
# restart_policy:
# condition: on-failure
# tor-ghost:
# image: ${GHOST_IMAGE}
# networks:
# - ghostdb-net
# - ghost-net
# volumes:
# - ${REMOTE_HOME}/tor_ghost:/var/lib/ghost/content
# environment:
# - url=https://${ONION_ADDRESS}
# - database__client=mysql
# - database__connection__host=ghostdb
# - database__connection__user=ghost
# - database__connection__password=\${GHOST_MYSQL_PASSWORD}
# - database__connection__database=ghost
# deploy:
# restart_policy:
# condition: on-failure
# EOL
# fi
# if [ "$DEPLOY_ONION_SITE" = true ]; then
# cat >>"$DOCKER_YAML_PATH" <<EOL
# - torghost-net
# EOL
# fi
# if [ "$DEPLOY_NEXTCLOUD" = true ]; then
# cat >>"$DOCKER_YAML_PATH" <<EOL
# - nextcloud-net
# EOL
# fi
# if [ "$DEPLOY_ONION_SITE" = true ]; then
# cat >>"$DOCKER_YAML_PATH" <<EOL
# - tor-net
# EOL
# fi
# if [ "$DEPLOY_ONION_SITE" = true ]; then
# cat >>"$DOCKER_YAML_PATH" <<EOL
# volumes:
# tor-data:
# tor-logs:
# EOL
# fi
# #-------------------------
# if [ "$DEPLOY_NEXTCLOUD" = true ]; then
# cat >>"$DOCKER_YAML_PATH" <<EOL
# nextclouddb-net:
# nextcloud-net:
# EOL
# fi
# if [ "$DEPLOY_ONION_SITE" = true ]; then
# cat >>"$DOCKER_YAML_PATH" <<EOL
# tor-net:
# torghost-net:
# EOL
# fi
# # -------------------------------
# if [ "$DEPLOY_ONION_SITE" = true ]; then
# cat >>"$DOCKER_YAML_PATH" <<EOL
# tor-config:
# file: $(pwd)/tor/torrc
# EOL
# fi
# # -----------------------------


@ -1,11 +0,0 @@
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y tor
#COPY ./torrc /etc/tor/torrc
#RUN chown root:root /etc/tor/torrc
#RUN chmod 0644 /etc/tor/torrc
#RUN mkdir /data
#VOLUME /data
# RUN chown 1000:1000 -R /data
#USER 1000:1000
CMD tor -f /etc/tor/torrc


@ -1,8 +0,0 @@
# we configure a hidden service that listens at the torv3 onion address and forwards port 443 to nginx:443
SocksPort 0
HiddenServiceDir /var/lib/tor/www
HiddenServiceVersion 3
HiddenServicePort 443 nginx:443
Log info file /var/log/tor/tor.log


@ -1,5 +0,0 @@
HiddenServiceDir /var/lib/tor/www
HiddenServiceVersion 3
HiddenServicePort 443 127.0.0.1:443
Log info file /var/log/tor/tor.log


@ -1,34 +0,0 @@
#!/bin/bash
set -ex
# TODO, ensure VPS_HOSTING_TARGET is in range.
export NEXTCLOUD_FQDN="$NEXTCLOUD_HOSTNAME.$DOMAIN_NAME"
export BTCPAY_FQDN="$BTCPAY_HOSTNAME.$DOMAIN_NAME"
export BTCPAY_USER_FQDN="$BTCPAY_HOSTNAME_IN_CERT.$DOMAIN_NAME"
export WWW_FQDN="$WWW_HOSTNAME.$DOMAIN_NAME"
export GITEA_FQDN="$GITEA_HOSTNAME.$DOMAIN_NAME"
export NOSTR_FQDN="$NOSTR_HOSTNAME.$DOMAIN_NAME"
export ADMIN_ACCOUNT_USERNAME="info"
export CERTIFICATE_EMAIL_ADDRESS="$ADMIN_ACCOUNT_USERNAME@$DOMAIN_NAME"
export REMOTE_NEXTCLOUD_PATH="$REMOTE_HOME/nextcloud"
export REMOTE_GITEA_PATH="$REMOTE_HOME/gitea"
export BTC_CHAIN="$BTC_CHAIN"
export WWW_INSTANCE_TYPE="$WWW_INSTANCE_TYPE"
export BTCPAY_ADDITIONAL_HOSTNAMES="$BTCPAY_ADDITIONAL_HOSTNAMES"
BACKUP_TIMESTAMP="$(date +"%Y-%m")"
UNIX_BACKUP_TIMESTAMP="$(date +%s)"
SHASUM_OF_DOMAIN="$(echo -n "$DOMAIN_NAME" | sha256sum | awk '{print $1;}' )"
export DOMAIN_IDENTIFIER="${SHASUM_OF_DOMAIN: -6}"
echo "$DOMAIN_IDENTIFIER" > "$SITE_PATH/domain_id"
export BACKUP_TIMESTAMP="$BACKUP_TIMESTAMP"
export UNIX_BACKUP_TIMESTAMP="$UNIX_BACKUP_TIMESTAMP"
export LANGUAGE_CODE_COUNT=$(("$(echo "$SITE_LANGUAGE_CODES" | tr -cd , | wc -c)"+1))
STACK_NAME="$DOMAIN_IDENTIFIER-en"
export NEXTCLOUD_STACK_TAG="nextcloud-$STACK_NAME"
export NEXTCLOUD_DB_STACK_TAG="nextclouddb-$STACK_NAME"
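To make the DOMAIN_IDENTIFIER derivation above concrete, a minimal sketch using a hypothetical domain name:
# worked example: the identifier is the last six hex characters of the
# domain name's sha256 digest, giving a short, stable suffix for stack names
DOMAIN_NAME="example.com"
SHASUM_OF_DOMAIN="$(echo -n "$DOMAIN_NAME" | sha256sum | awk '{print $1;}')"
echo "${SHASUM_OF_DOMAIN: -6}"   # six-character suffix used in names like "<id>-en"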

farscapian.gpg Normal file

@ -0,0 +1,14 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
mFIEAAAAABMIKoZIzj0DAQcCAwQ3hQeRT8HLyQEtKJ5C3dKilfWhSpqkPFtfuE0I
i+MNLavAM7tL9gShij7tEcyZe0Iy2hc38TizSlQJciIdgtHUtCJEZXJlayBTbWl0
aCA8ZGVyZWtAZmFyc2NhcGlhbi5jb20+iIAEExMIABwFAgAAAAACCwkCGwMEFQgJ
CgQWAgMBAheAAh4BABYJELRD5TChThyQCxpUUkVaT1ItR1BHXMcA/2k4QtiV0eNQ
299XW4Wvoac1Be6+WTPRIaC/PYnd0pR7AP4hi5ou6uyKtqkfhLtRQHN/9ny3MBEG
whGxb/bCIzOdILhWBAAAAAASCCqGSM49AwEHAgMEI0VBpCTeIpfdH2UcWiSPYGAJ
Z1Rsp0uKf6HzZnpGRAdCTNgCh+pVBibP0Cz0pNdM7IfHSfS+OP4/Lb1B5N9BSAMB
CAeIbQQYEwgACQUCAAAAAAIbDAAWCRC0Q+UwoU4ckAsaVFJFWk9SLUdQRxM4AQCw
m24svH13uNAebQurOloy/1qZgNdXANBQQ05oi1tEyAD/eGFFVdgs5L6Hpg/GJLvo
X8bd1+1sa2d9TldbgfNfRA0=
=vZGY
-----END PGP PUBLIC KEY BLOCK-----
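A hedged sketch of how this public key could be inspected and imported on the management machine; the relative path is an assumption:
# print the key's fingerprint and user ID without touching the keyring
gpg --show-keys ./farscapian.gpg
# import it if you plan to verify signatures made with this key
gpg --import ./farscapian.gpg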


@ -1,74 +1,214 @@
#!/bin/bash
set -ex
set -exu
cd "$(dirname "$0")"
# let's check to ensure the management machine is on the baseline Ubuntu 22.04 LTS
if ! lsb_release -d | grep -q "Ubuntu 22.04 LTS"; then
echo "ERROR: Your machine is not running the Ubuntu 22.04 LTS baseline OS on your management machine."
# https://www.sovereign-stack.org/install/
# this script is not meant to be executed from the SSME; let's check and abort if so.
if [ "$(hostname)" = ss-mgmt ]; then
echo "ERROR: This command is meant to be executed from the bare metal management machine -- not the SSME."
exit 1
fi
if [ ! -f /usr/share/keyrings/docker-archive-keyring.gpg ]; then
cat ./certs/docker.gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# ensure the iptables forward policy is set to ACCEPT so your host can act as a router
# Note this is necessary if docker is running (or has been previously installed) on the
# same host running incus.
sudo iptables -F FORWARD
sudo iptables -P FORWARD ACCEPT
# run the incus install script.
sudo bash -c ./install_incus.sh
# run incus init
cat <<EOF | sudo incus admin init --preseed
config: {}
networks:
- config:
ipv4.address: auto
ipv4.dhcp: true
ipv6.address: none
description: "Default network bridge for ss-mgmt outbound network access."
name: incusbr0
type: bridge
project: default
storage_pools:
- config:
size: 30GiB
description: ""
name: sovereign-stack
driver: zfs
profiles:
- config: {}
description: "Default profile for ss-mgmt."
devices:
enp5s0:
name: enp5s0
network: incusbr0
type: nic
root:
path: /
pool: sovereign-stack
type: disk
name: default
projects: []
cluster: null
EOF
. ./deployment/deployment_defaults.sh
. ./deployment/base.sh
# we need to get the base image. Import it if it's cached, else download it then cache it.
if ! incus image list | grep -q "$UBUNTU_BASE_IMAGE_NAME"; then
# if the image is cached locally, import it from disk; otherwise download it from Ubuntu
IMAGE_PATH="$HOME/ss/cache/ss-ubuntu-jammy"
IMAGE_IDENTIFIER=$(find "$IMAGE_PATH" | grep ".qcow2" | head -n1 | cut -d "." -f1)
METADATA_FILE="$IMAGE_PATH/meta-$IMAGE_IDENTIFIER.tar.xz"
IMAGE_FILE="$IMAGE_PATH/$IMAGE_IDENTIFIER.qcow2"
if [ -d "$IMAGE_PATH" ] && [ -f "$METADATA_FILE" ] && [ -f "$IMAGE_FILE" ]; then
incus image import "$METADATA_FILE" "$IMAGE_FILE" --alias "$UBUNTU_BASE_IMAGE_NAME"
else
incus image copy "images:$BASE_INCUS_IMAGE" local: --alias "$UBUNTU_BASE_IMAGE_NAME" --vm --auto-update
mkdir -p "$IMAGE_PATH"
incus image export "$UBUNTU_BASE_IMAGE_NAME" "$IMAGE_PATH" --vm
fi
fi
sudo apt-get update
# if the ss-mgmt VM doesn't exist, create it.
SSH_PATH="$HOME/.ssh"
SSH_PRIVKEY_PATH="$SSH_PATH/id_rsa"
SSH_PUBKEY_PATH="$SSH_PRIVKEY_PATH.pub"
sudo apt-get install -y wait-for-it dnsutils rsync sshfs curl gnupg \
apt-transport-https ca-certificates lsb-release \
docker-ce-cli docker-ce containerd.io docker-compose-plugin \
python3-pip python3-dev libusb-1.0-0-dev libudev-dev pinentry-curses \
libcanberra-gtk-module
# for trezor installation
pip3 install setuptools wheel
pip3 install trezor_agent
if [ ! -f /etc/udev/rules.d/51-trezor.rules ]; then
sudo cp ./51-trezor.rules /etc/udev/rules.d/51-trezor.rules
if [ ! -f "$SSH_PRIVKEY_PATH" ]; then
ssh-keygen -f "$SSH_PRIVKEY_PATH" -t rsa -b 4096
fi
# install lxd as a snap if it's not installed. We only really use the client part of this package
# on the management machine.
if ! snap list | grep -q lxd; then
sudo snap install lxd
# add SSH_PUBKEY_PATH to authorized_keys
grep -qxF "$(cat "$SSH_PUBKEY_PATH")" "$SSH_PATH/authorized_keys" || cat "$SSH_PUBKEY_PATH" >> "$SSH_PATH/authorized_keys"
FROM_BUILT_IMAGE=false
if ! incus list --format csv | grep -q ss-mgmt; then
# TODO check to see if there's an existing ss-mgmt image to spawn from, otherwise do this.
if incus image list | grep -q ss-mgmt; then
FROM_BUILT_IMAGE=true
incus init ss-mgmt ss-mgmt --vm -c limits.cpu=4 -c limits.memory=4GiB --profile=default
else
incus init "images:$BASE_INCUS_IMAGE" ss-mgmt --vm -c limits.cpu=4 -c limits.memory=4GiB --profile=default
fi
fi
# TODO WORK ON GETTING RID OF THIS DEPENDENCY
if [ ! -f /usr/local/bin/docker-machine ]; then
# let's ensure docker-machine is available. This is only temporary though.
curl -L "https://github.com/docker/machine/releases/download/v0.16.2/docker-machine-$(uname -s)-$(uname -m)" >/tmp/docker-machine &&
chmod +x /tmp/docker-machine &&
sudo cp /tmp/docker-machine /usr/local/bin/docker-machine
# mount the pre-verified sovereign stack git repo into the new vm
if ! incus config device show ss-mgmt | grep -q ss-code; then
incus config device add ss-mgmt ss-code disk source="$(pwd)" path=/home/ubuntu/sovereign-stack
fi
# make ss-deploy available to the user
if ! groups | grep -q docker; then
sudo groupadd docker
# create the ~/ss path and mount it into the vm.
source ./deployment/deployment_defaults.sh
source ./deployment/base.sh
mkdir -p "$SS_ROOT_PATH"
if ! incus config device show ss-mgmt | grep -q ss-root; then
incus config device add ss-mgmt ss-root disk source="$SS_ROOT_PATH" path=/home/ubuntu/ss
fi
sudo usermod -aG docker "$USER"
# if a ~/.bitcoin/testnet3/blocks directory exists, mount it in.
BITCOIN_DIR="$HOME/.bitcoin"
REMOTE_BITCOIN_CACHE_PATH="/home/ubuntu/ss/cache/bitcoin"
BITCOIN_TESTNET_BLOCKS_PATH="$BITCOIN_DIR/testnet3/blocks"
if [ -d "$BITCOIN_TESTNET_BLOCKS_PATH" ]; then
if ! incus config device show ss-mgmt | grep -q ss-testnet-blocks; then
incus config device add ss-mgmt ss-testnet-blocks disk source="$BITCOIN_TESTNET_BLOCKS_PATH" path=$REMOTE_BITCOIN_CACHE_PATH/testnet/blocks
fi
fi
# make the Sovereign Stack commands available to the user.
# we use ~/.bashrc
# if a ~/.bitcoin/testnet3/chainstate directory exists, mount it in.
BITCOIN_TESTNET_CHAINSTATE_PATH="$BITCOIN_DIR/testnet3/chainstate"
if [ -d "$BITCOIN_TESTNET_CHAINSTATE_PATH" ]; then
if ! incus config device show ss-mgmt | grep -q ss-testnet-chainstate; then
incus config device add ss-mgmt ss-testnet-chainstate disk source="$BITCOIN_TESTNET_CHAINSTATE_PATH" path="$REMOTE_BITCOIN_CACHE_PATH/testnet/chainstate"
fi
fi
# if a ~/.bitcoin/blocks dir exists, mount it in.
BITCOIN_MAINNET_BLOCKS_PATH="$BITCOIN_DIR/blocks"
if [ -d "$BITCOIN_MAINNET_BLOCKS_PATH" ]; then
if ! incus config device show ss-mgmt | grep -q ss-mainnet-blocks; then
incus config device add ss-mgmt ss-mainnet-blocks disk source="$BITCOIN_MAINNET_BLOCKS_PATH" path="$REMOTE_BITCOIN_CACHE_PATH/mainnet/blocks"
fi
fi
# if a ~/.bitcoin/chainstate directory exists, mount it in.
BITCOIN_MAINNET_CHAINSTATE_PATH="$BITCOIN_DIR/chainstate"
if [ -d "$BITCOIN_MAINNET_CHAINSTATE_PATH" ]; then
if ! incus config device show ss-mgmt | grep -q ss-mainnet-chainstate; then
incus config device add ss-mgmt ss-mainnet-chainstate disk source="$BITCOIN_MAINNET_CHAINSTATE_PATH" path="$REMOTE_BITCOIN_CACHE_PATH/mainnet/chainstate"
fi
fi
# mount the ssh directory in there.
if [ -f "$SSH_PUBKEY_PATH" ]; then
if ! incus config device show ss-mgmt | grep -q ss-ssh; then
incus config device add ss-mgmt ss-ssh disk source="$HOME/.ssh" path=/home/ubuntu/.ssh
fi
fi
# start the vm if it's not already running
if incus list --format csv | grep -q "ss-mgmt,STOPPED"; then
incus start ss-mgmt
sleep 10
fi
# wait for the vm to have an IP address
. ./management/wait_for_ip.sh
# do some other preparations for user experience
incus file push ./management/bash_aliases ss-mgmt/home/ubuntu/.bash_aliases
incus file push ./management/bash_profile ss-mgmt/home/ubuntu/.bash_profile
incus file push ./management/bashrc ss-mgmt/home/ubuntu/.bashrc
incus file push ./management/motd ss-mgmt/etc/update-motd.d/sovereign-stack
# install SSH
incus exec ss-mgmt apt-get update
incus exec ss-mgmt -- apt-get install -y openssh-server
incus file push ./management/sshd_config ss-mgmt/etc/ssh/sshd_config
incus exec ss-mgmt -- sudo systemctl restart sshd
# add 'ss-manage' to the bare metal ~/.bashrc
ADDED_COMMAND=false
if ! < "$HOME/.bashrc" grep -q "ss-deploy"; then
echo "alias ss-deploy='/home/$USER/sovereign-stack/deploy.sh \$@'" >> "$HOME/.bashrc"
if ! < "$HOME/.bashrc" grep -q "ss-manage"; then
echo "alias ss-manage='$(pwd)/manage.sh \$@'" >> "$HOME/.bashrc"
ADDED_COMMAND=true
fi
if ! < "$HOME/.bashrc" grep -q "ss-cluster"; then
echo "alias ss-cluster='/home/$USER/sovereign-stack/cluster.sh \$@'" >> "$HOME/.bashrc"
ADDED_COMMAND=true
fi
# Let's remove any entry in our known_hosts, then add it back.
# we are using IP address here so we don't have to rely on external DNS
# configuration for the base image preparation.
ssh-keygen -R "$IP_V4_ADDRESS"
if ! < "$HOME/.bashrc" grep -q "ss-projects"; then
echo "alias ss-projects='/home/$USER/sovereign-stack/projects.sh \$@'" >> "$HOME/.bashrc"
ADDED_COMMAND=true
ssh-keyscan -H "$IP_V4_ADDRESS" >> "$SSH_PATH/known_hosts"
ssh "ubuntu@$IP_V4_ADDRESS" sudo chown -R ubuntu:ubuntu /home/ubuntu
if [ "$FROM_BUILT_IMAGE" = false ]; then
ssh "ubuntu@$IP_V4_ADDRESS" /home/ubuntu/sovereign-stack/management/provision.sh
incus stop ss-mgmt
if ! incus image list | grep -q "ss-mgmt"; then
echo "Publishing image. Please wait, this may take a while..."
incus publish ss-mgmt --alias=ss-mgmt
fi
incus start ss-mgmt
fi
if [ "$ADDED_COMMAND" = true ]; then
echo "WARNING! You need to run 'source ~/.bashrc' before continuing."
fi
echo "NOTICE! You need to run 'source ~/.bashrc' before continuing. After that, type 'ss-manage' to enter your management environment."
fi
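Assuming the preseeded 'incus admin init' above succeeded, a quick post-install sanity check might look like this sketch; the object names come from the preseed document:
incus network show incusbr0        # the bridge created for ss-mgmt outbound access
incus storage show sovereign-stack # the 30GiB zfs pool
incus profile show default         # should list the enp5s0 nic and root disk devices
incus list ss-mgmt                 # the management VM and its state/IP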

install_incus.sh Executable file

@ -0,0 +1,75 @@
#!/bin/bash
set -exu
cd "$(dirname "$0")"
if [ $UID -ne 0 ]; then
echo "ERROR: run with sudo."
exit 1
fi
# put the zabbly key in there.
mkdir -p /etc/apt/keyrings/
cat <<EOF > /etc/apt/keyrings/zabbly.asc
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQGNBGTlYcIBDACYQoVXVyQ6Y3Of14GwEaiv/RstQ8jWnH441OtvDbD/VVT8yF0P
pUfypWjQS8aq0g32Qgb9H9+b8UAAKojA2W0szjJFlmmSq19YDMMmNC4AnfeZlKYM
61Zonna7fPaXmlsTlSiUeo/PGvmAXrkFURC9S8FbhZdWEcUpf9vcKAoEzV8qGA4J
xbKlj8EOjSkdq3OQ1hHjP8gynbbzMhZQwjbnWqoiPj35ed9EMn+0QcX+GmynGq6T
hBXdRdeQjZC6rmXzNF2opCyxqx3BJ0C7hUtpHegmeoH34wnJHCqGYkEKFAjlRLoW
tOzHY9J7OFvB6U7ENtnquj7lg2VQK+hti3uiHW+oide06QgjVw2irucCblQzphgo
iX5QJs7tgFFDsA9Ee0DZP6cu83hNFdDcXEZBc9MT5Iu0Ijvj7Oeym3DJpkCuIWgk
SeP56sp7333zrg73Ua7YZsZHRayAe/4YdNUua+90P4GD12TpTtJa4iRWRd7bis6m
tSkKRj7kxyTsxpEAEQEAAbQmWmFiYmx5IEtlcm5lbCBCdWlsZHMgPGluZm9AemFi
Ymx5LmNvbT6JAdQEEwEKAD4WIQRO/FkGlssVuHxzo62CzIeXyDjc/QUCZOVhwgIb
AwUJA8JnAAULCQgHAgYVCgkICwIEFgIDAQIeAQIXgAAKCRCCzIeXyDjc/W05C/4n
lGRTlyOETF2K8oWbjtan9wlttQ+pwymJCnP8T+JJDycGL8dPsGdG1ldHdorVZpFi
1P+Bem9bbiW73TpbX+WuCfP1g3WN7AVa2mYRfSVhsLNeBAMRgWgNW9JYsmg99lmY
aPsRYZdGu/PB+ffMIyWhjL3CKCbYS6lV5N5Mi4Lobyz/I1Euxpk2vJhhUqh786nJ
pQpDnvEl1CRANS6JD9bIvEdfatlAhFlrz1TTf6R7SlppyYI7tme4I/G3dnnHWYSG
cGRaLwpwobTq0UNSO71g7+at9eY8dh5nn2lZUvvxZvlbXoOoPxKUoeGVXqoq5F7S
QcMVAogYtyNlnLnsUfSPw6YFRaQ5o00h30bR3hk+YmJ47AJCRY9GIc/IEdSnd/Z5
Ea7CrP2Bo4zxPgcl8fe311FQRTRoWr19l5PXZgGjzy6siXTrYQi6GjLtqVB5SjJf
rrIIy1vZRyDL96WPu6fS+XQMpjsSygj+DBFk8OAvHhQhMCXHgT4BMyg4D5GE0665
AY0EZOVhwgEMAMIztf6WlRsweysb0tzktYE5E/GxIK1lwcD10Jzq3ovJJPa2Tg2t
J6ZBmMQfwU4OYO8lJxlgm7t6MYh41ZZaRhySCtbJiAXqK08LP9Gc1iWLRvKuMzli
NFSiFDFGT1D6kwucVfL/THxvZlQ559kK+LB4iXEKXz37r+MCX1K9uiv0wn63Vm0K
gD3HDgfXWYJcNyXXfJBe3/T5AhuSBOQcpa7Ow5n8zJ+OYg3FFKWHDBTSSZHpbJFr
ArMIGARz5/f+EVj9XGY4W/+ZJlxNh8FzrTLeRArmCWqKLPRG/KF36dTY7MDpOzlw
vu7frv+cgiXHZ2NfPrkH8oOl4L+ufze5KBGcN0QwFDcuwCkv/7Ft9Ta7gVaIBsK7
12oHInUJ6EkBovxpuaLlHlP8IfmZLZbbHzR2gR0e6IhLtrzd7urB+gXUtp6+wCL+
kWD14TTJhSQ+SFU8ajvUah7/1m2bxdjZNp9pzOPGkr/jEjCM0CpZiCY62SeIJqVc
4/ID9NYLAGmSIwARAQABiQG8BBgBCgAmFiEETvxZBpbLFbh8c6OtgsyHl8g43P0F
AmTlYcICGwwFCQPCZwAACgkQgsyHl8g43P0wEgv+LuknyXHpYpiUcJOl9Q5yLokd
o7tJwJ+9Fu7EDAfM7mPgyBj7Ad/v9RRP+JKWHqIYEjyrRnz9lmzciU+LT/CeoQu/
MgpU8wRI4gVtLkX2238amrTKKlVjQUUNHf7cITivUs/8e5W21JfwvcSzu5z4Mxyw
L6vMlBUAixtzZSXD6O7MO9uggHUZMt5gDSPXG2RcIgWm0Bd1yTHL7jZt67xBgZ4d
hUoelMN2XIDLv4SY78jbHAqVN6CLLtWrz0f5YdaeYj8OT6Ohr/iJQdlfVaiY4ikp
DzagLi0LvG9/GuB9eO6yLuojg45JEH8DC7NW5VbdUITxQe9NQ/j5kaRKTEq0fyZ+
qsrryTyvXghxK8oMUcI10l8d41qXDDPCA40kruuspCZSAle3zdqpYqiu6bglrgWr
Zr2Nm9ecm/kkqMIcyJ8e2mlkuufq5kVem0Oez+GIDegvwnK3HAqWQ9lzdWKvnLiE
gNkvg3bqIwZ/WoHBnSwOwwAzwarJl/gn8OG6CIeP
=8Uc6
-----END PGP PUBLIC KEY BLOCK-----
EOF
sh -c 'cat <<EOF > /etc/apt/sources.list.d/zabbly-incus-stable.sources
Enabled: yes
Types: deb
URIs: https://pkgs.zabbly.com/incus/stable
Suites: $(. /etc/os-release && echo ${VERSION_CODENAME})
Components: main
Architectures: $(dpkg --print-architecture)
Signed-By: /etc/apt/keyrings/zabbly.asc
EOF'
apt-get update
# NOTE: the incus install has been known to fail at this step.
# TODO see if this can be fixed by installing JUST the incus client;
# none of the systemd/daemon stuff is strictly needed.
apt-get install incus -y
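Once the Zabbly sources file is in place, the following sketch confirms which repository the incus package resolves from and that the client is usable:
apt-cache policy incus   # the candidate version should come from pkgs.zabbly.com/incus/stable
incus --version          # confirms the client is installed and on the PATH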

manage.sh Executable file

@ -0,0 +1,39 @@
#!/bin/bash
# https://www.sovereign-stack.org/ss-manage/
set -eu
cd "$(dirname "$0")"
# check to ensure dependencies are met.
if ! command -v incus >/dev/null 2>&1; then
echo "This script requires incus to be installed. Please run 'install.sh'."
exit 1
fi
if ! incus remote get-default | grep -q "local"; then
incus remote switch "local"
fi
if ! incus list -q --format csv | grep -q ss-mgmt; then
echo "ERROR: the 'ss-mgmt' VM does not exist. You may need to run install.sh"
exit 1
fi
# if the mgmt machine doesn't exist, then warn the user to perform ./install.sh
if ! incus list --format csv | grep -q "ss-mgmt"; then
echo "ERROR: the management machine VM does not exist. You probably need to run './install.sh'."
echo "INFO: check out https://www.sovereign-stack.org/tag/code-lifecycle-management/ for more information."
fi
# if the machine does exist, let's make sure it's RUNNING.
if incus list --format csv | grep -q "ss-mgmt,STOPPED"; then
echo "INFO: The SSME was in a STOPPED state. Starting the environment. Please wait."
incus start ss-mgmt
sleep 30
fi
. ./management/wait_for_ip.sh
# let's ensure ~/.ssh/ssh_config is using the correct IP address for ss-mgmt.
ssh ubuntu@"$IP_V4_ADDRESS"
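If you want plain 'ssh ss-mgmt' to work between runs, one approach is a host block in ~/.ssh/config; a sketch, assuming IP_V4_ADDRESS was exported by the sourced wait_for_ip.sh above:
# append a host alias for the management VM using the discovered address
cat <<EOF >> "$HOME/.ssh/config"
Host ss-mgmt
    HostName $IP_V4_ADDRESS
    User ubuntu
EOF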


@ -7,10 +7,6 @@
# put this into /usr/lib/udev/rules.d or /lib/udev/rules.d
# depending on your distribution
# Trezor
SUBSYSTEM=="usb", ATTR{idVendor}=="534c", ATTR{idProduct}=="0001", MODE="0660", GROUP="plugdev", TAG+="uaccess", TAG+="udev-acl", SYMLINK+="trezor%n"
KERNEL=="hidraw*", ATTRS{idVendor}=="534c", ATTRS{idProduct}=="0001", MODE="0660", GROUP="plugdev", TAG+="uaccess", TAG+="udev-acl"
# Trezor v2
SUBSYSTEM=="usb", ATTR{idVendor}=="1209", ATTR{idProduct}=="53c0", MODE="0660", GROUP="plugdev", TAG+="uaccess", TAG+="udev-acl", SYMLINK+="trezor%n"
SUBSYSTEM=="usb", ATTR{idVendor}=="1209", ATTR{idProduct}=="53c1", MODE="0660", GROUP="plugdev", TAG+="uaccess", TAG+="udev-acl", SYMLINK+="trezor%n"

management/bash_aliases Normal file

@ -0,0 +1,12 @@
#!/bin/bash
alias ss-help='cat /home/ubuntu/sovereign-stack/deployment/help.txt'
alias ss-show='/home/ubuntu/sovereign-stack/deployment/show.sh $@'
alias ss-remote='/home/ubuntu/sovereign-stack/deployment/remote.sh $@'
alias ss-up='/home/ubuntu/sovereign-stack/deployment/up.sh $@'
alias ss-down='/home/ubuntu/sovereign-stack/deployment/down.sh $@'
alias ss-reset='/home/ubuntu/sovereign-stack/deployment/reset.sh $@'
alias ss-stop='/home/ubuntu/sovereign-stack/deployment/stop.sh $@'
alias ss-start='/home/ubuntu/sovereign-stack/deployment/start.sh $@'
alias ss-restore='/home/ubuntu/sovereign-stack/deployment/restore.sh $@'
alias ll='ls -lah'

management/bash_profile Normal file

@ -0,0 +1,6 @@
#!/bin/bash
# this wires up the aliases for remote ssh sessions.
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi

management/bashrc Normal file

@ -0,0 +1,117 @@
# ~/.bashrc: executed by bash(1) for non-login shells.
# see /usr/share/doc/bash/examples/startup-files (in the package bash-doc)
# for examples
# If not running interactively, don't do anything
case $- in
*i*) ;;
*) return;;
esac
# don't put duplicate lines or lines starting with space in the history.
# See bash(1) for more options
HISTCONTROL=ignoreboth
# append to the history file, don't overwrite it
shopt -s histappend
# for setting history length see HISTSIZE and HISTFILESIZE in bash(1)
HISTSIZE=1000
HISTFILESIZE=2000
# check the window size after each command and, if necessary,
# update the values of LINES and COLUMNS.
shopt -s checkwinsize
# If set, the pattern "**" used in a pathname expansion context will
# match all files and zero or more directories and subdirectories.
#shopt -s globstar
# make less more friendly for non-text input files, see lesspipe(1)
[ -x /usr/bin/lesspipe ] && eval "$(SHELL=/bin/sh lesspipe)"
# set variable identifying the chroot you work in (used in the prompt below)
if [ -z "${debian_chroot:-}" ] && [ -r /etc/debian_chroot ]; then
debian_chroot=$(cat /etc/debian_chroot)
fi
# set a fancy prompt (non-color, unless we know we "want" color)
case "$TERM" in
xterm-color|*-256color) color_prompt=yes;;
esac
# uncomment for a colored prompt, if the terminal has the capability; turned
# off by default to not distract the user: the focus in a terminal window
# should be on the output of commands, not on the prompt
#force_color_prompt=yes
if [ -n "$force_color_prompt" ]; then
if [ -x /usr/bin/tput ] && tput setaf 1 >&/dev/null; then
# We have color support; assume it's compliant with Ecma-48
# (ISO/IEC-6429). (Lack of such support is extremely rare, and such
# a case would tend to support setf rather than setaf.)
color_prompt=yes
else
color_prompt=
fi
fi
if [ "$color_prompt" = yes ]; then
PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '
else
PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
fi
unset color_prompt force_color_prompt
# If this is an xterm set the title to user@host:dir
case "$TERM" in
xterm*|rxvt*)
PS1="\[\e]0;${debian_chroot:+($debian_chroot)}\u@\h: \w\a\]$PS1"
;;
*)
;;
esac
# enable color support of ls and also add handy aliases
if [ -x /usr/bin/dircolors ]; then
test -r ~/.dircolors && eval "$(dircolors -b ~/.dircolors)" || eval "$(dircolors -b)"
alias ls='ls --color=auto'
#alias dir='dir --color=auto'
#alias vdir='vdir --color=auto'
alias grep='grep --color=auto'
alias fgrep='fgrep --color=auto'
alias egrep='egrep --color=auto'
fi
# colored GCC warnings and errors
export GCC_COLORS='error=01;31:warning=01;35:note=01;36:caret=01;32:locus=01:quote=01'
# some more ls aliases
alias ll='ls -alF'
alias la='ls -A'
alias l='ls -CF'
# Add an "alert" alias for long running commands. Use like so:
# sleep 10; alert
alias alert='notify-send --urgency=low -i "$([ $? = 0 ] && echo terminal || echo error)" "$(history|tail -n1|sed -e '\''s/^\s*[0-9]\+\s*//;s/[;&|]\s*alert$//'\'')"'
# Alias definitions.
# You may want to put all your additions into a separate file like
# ~/.bash_aliases, instead of adding them here directly.
# See /usr/share/doc/bash-doc/examples in the bash-doc package.
if [ -f ~/.bash_aliases ]; then
. ~/.bash_aliases
fi
# enable programmable completion features (you don't need to enable
# this, if it's already enabled in /etc/bash.bashrc and /etc/profile
# sources /etc/bash.bashrc).
if ! shopt -oq posix; then
if [ -f /usr/share/bash-completion/bash_completion ]; then
. /usr/share/bash-completion/bash_completion
elif [ -f /etc/bash_completion ]; then
. /etc/bash_completion
fi
fi

management/motd Normal file

@ -0,0 +1,4 @@
#!/bin/bash
echo "Welcome to the management environment. Run 'ss-help' to get started."

management/provision.sh Executable file

@ -0,0 +1,42 @@
#!/bin/bash
set -e
cd "$(dirname "$0")"
# NOTE: this script uses sudo throughout, so run it as a user with sudo privileges (install.sh invokes it over SSH as the ubuntu user).
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg lsb-release jq bc
sudo mkdir -m 0755 -p /etc/apt/keyrings
# add the docker gpg key to keyring for docker-ce-cli
if [ ! -f /etc/apt/keyrings/docker.gpg ]; then
cat /home/ubuntu/sovereign-stack/certs/docker.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null 2>&1
fi
# TODO REVIEW mgmt software requirements
sudo apt-get update
sudo apt-get install -y wait-for-it dnsutils rsync sshfs apt-transport-https docker-ce-cli libcanberra-gtk-module nano git gridsite-clients
sudo bash -c "$HOME/sovereign-stack/install_incus.sh"
sudo incus admin init --minimal
# add docker group
if ! grep -q "^docker:" /etc/group; then
sudo groupadd docker
fi
# add incus-admin group
if ! grep -q "^incus-admin:" /etc/group; then
sudo groupadd incus-admin
fi
if ! groups ubuntu | grep -q "\bdocker\b"; then
sudo usermod -aG docker ubuntu
fi
if ! groups ubuntu | grep -q "\bincus-admin\b"; then
sudo usermod -aG incus-admin ubuntu
fi
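A small follow-up sketch to confirm the group changes above took effect; membership only applies to new login sessions:
getent group docker incus-admin   # both groups should exist
id -nG ubuntu                     # should include docker and incus-admin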


@ -1,54 +0,0 @@
version: '3.8'
services:
registry-mirror:
image: ${REGISTRY_DOCKER_IMAGE}
ports:
- 5000:5000
volumes:
- ${CACHES_DIR}/registry_images:/data
environment:
REGISTRY_HTTP_ADDR: :5000
REGISTRY_HTTP_HOST: http://0.0.0.0:5000
REGISTRY_HTTP_SECRET: ${REGISTRY_HTTP_SECRET:-CHANGEME}
REGISTRY_PROXY_REMOTEURL: ${REGISTRY_PROXY_REMOTEURL:-https://registry-1.docker.io}
REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /data
configs:
- source: registry-config
target: /etc/docker/registry/config.yml
deploy:
mode: replicated
replicas: 1
restart_policy:
condition: on-failure
# secrets:
# - source: regmirror_tls_cert
# target: /var/lib/registry/regmirror:5000.cert
# uid: '0'
# gid: '0'
# mode: 0400
# - source: regmirror_tls_key
# target: /var/lib/registry/regmirror:5000.key
# uid: '0'
# gid: '0'
# mode: 0400
# - source: regmirror_ca_crt
# target: /var/lib/registry/ca.crt
# uid: '0'
# gid: '0'
# mode: 0400
# secrets:
# regmirror_tls_cert:
# file: regmirror.cert
# regmirror_tls_key:
# file: regmirror.key
# regmirror_ca_crt:
# file: ca.crt
configs:
registry-config:
file: ${CLUSTER_PATH}/registry.yml
volumes:
registry_data:
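For context, the registry above acted as a pull-through cache for registry-1.docker.io; a Docker engine would typically be pointed at it with a daemon.json entry along these lines (a sketch; the mirror URL is an assumption):
# point the local docker daemon at the pull-through cache, then restart it
sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
{
  "registry-mirrors": ["http://localhost:5000"]
}
EOF
sudo systemctl restart docker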

management/sshd_config Normal file

@ -0,0 +1,116 @@
# This is the sshd server system-wide configuration file. See
# sshd_config(5) for more information.
# This sshd was compiled with PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games
# The strategy used for options in the default sshd_config shipped with
# OpenSSH is to specify options with their default value where
# possible, but leave them commented. Uncommented options override the
# default value.
Include /etc/ssh/sshd_config.d/*.conf
Port 22
#AddressFamily any
ListenAddress 0.0.0.0
#ListenAddress ::
#HostKey /etc/ssh/ssh_host_rsa_key
#HostKey /etc/ssh/ssh_host_ecdsa_key
#HostKey /etc/ssh/ssh_host_ed25519_key
# Ciphers and keying
#RekeyLimit default none
# Logging
#SyslogFacility AUTH
#LogLevel INFO
# Authentication:
#LoginGraceTime 2m
#PermitRootLogin prohibit-password
#StrictModes yes
#MaxAuthTries 6
#MaxSessions 10
#PubkeyAuthentication yes
# Expect .ssh/authorized_keys2 to be disregarded by default in future.
#AuthorizedKeysFile .ssh/authorized_keys .ssh/authorized_keys2
#AuthorizedPrincipalsFile none
#AuthorizedKeysCommand none
#AuthorizedKeysCommandUser nobody
# For this to work you will also need host keys in /etc/ssh/ssh_known_hosts
#HostbasedAuthentication no
# Change to yes if you don't trust ~/.ssh/known_hosts for
# HostbasedAuthentication
#IgnoreUserKnownHosts no
# Don't read the user's ~/.rhosts and ~/.shosts files
#IgnoreRhosts yes
# To disable tunneled clear text passwords, change to no here!
#PasswordAuthentication yes
#PermitEmptyPasswords no
# Change to yes to enable challenge-response passwords (beware issues with
# some PAM modules and threads)
KbdInteractiveAuthentication no
# Kerberos options
#KerberosAuthentication no
#KerberosOrLocalPasswd yes
#KerberosTicketCleanup yes
#KerberosGetAFSToken no
# GSSAPI options
#GSSAPIAuthentication no
#GSSAPICleanupCredentials yes
#GSSAPIStrictAcceptorCheck yes
#GSSAPIKeyExchange no
# Set this to 'yes' to enable PAM authentication, account processing,
# and session processing. If this is enabled, PAM authentication will
# be allowed through the KbdInteractiveAuthentication and
# PasswordAuthentication. Depending on your PAM configuration,
# PAM authentication via KbdInteractiveAuthentication may bypass
# the setting of "PermitRootLogin without-password".
# If you just want the PAM account and session checks to run without
# PAM authentication, then enable this but set PasswordAuthentication
# and KbdInteractiveAuthentication to 'no'.
UsePAM yes
#AllowAgentForwarding yes
#AllowTcpForwarding yes
#GatewayPorts no
X11Forwarding yes
#X11DisplayOffset 10
#X11UseLocalhost yes
#PermitTTY yes
#PrintMotd no
#PrintLastLog yes
#TCPKeepAlive yes
#PermitUserEnvironment no
#Compression delayed
#ClientAliveInterval 0
#ClientAliveCountMax 3
#UseDNS no
#PidFile /run/sshd.pid
#MaxStartups 10:30:100
#PermitTunnel no
#ChrootDirectory none
#VersionAddendum none
# no default banner path
#Banner none
# Allow client to pass locale environment variables
AcceptEnv LANG LC_*
# override default of no subsystems
Subsystem sftp /usr/lib/openssh/sftp-server
PrintMotd yes
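Before this file is pushed into the VM and sshd is restarted, it can be syntax-checked offline; a sketch, assuming openssh-server and its host keys are present on the machine running the check:
sudo sshd -t -f ./management/sshd_config   # exits non-zero and prints errors if the config is invalid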

management/wait_for_ip.sh Executable file

@ -0,0 +1,32 @@
#!/bin/bash
set -e
IP_V4_ADDRESS=
while true; do
# wait for the ss-mgmt VM's enp5s0 interface to show up in the incus list output
if incus list ss-mgmt | grep -q enp5s0; then
break;
else
sleep 1
fi
done
while true; do
IP_V4_ADDRESS=$(incus list ss-mgmt --format csv --columns=4 | grep enp5s0 | grep -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}')
if [ -n "$IP_V4_ADDRESS" ]; then
# we have an IP address; stop waiting.
break;
else
sleep 1
printf '.'
fi
done
export IP_V4_ADDRESS="$IP_V4_ADDRESS"
# wait for the VM to complete its default cloud-init.
while incus exec ss-mgmt -- [ ! -f /var/lib/cloud/instance/boot-finished ]; do
sleep 1
done
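This script is meant to be sourced rather than executed, so that the exported IP_V4_ADDRESS lands in the caller's environment (install.sh and manage.sh both do this); a usage sketch:
. ./management/wait_for_ip.sh
echo "ss-mgmt is reachable at $IP_V4_ADDRESS"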


@ -1,39 +0,0 @@
#!/bin/bash
set -x
SSH_ENDPOINT_HOSTNAME="atlantis"
SSH_ENDPOINT_DOMAIN_NAME="ancapistan.io"
TEST_DOMAIN="ancapistan.casa"
CLUSTER_NAME="development"
export LXD_VM_NAME="${TEST_DOMAIN//./-}"
if [ -n "$TEST_DOMAIN" ]; then
lxc delete --force www-"$LXD_VM_NAME"
lxc delete --force btcpay-"$LXD_VM_NAME"
lxc delete --force sovereign-stack
lxc delete --force sovereign-stack-base
lxc profile delete www-"$LXD_VM_NAME"
lxc profile delete btcpay-"$LXD_VM_NAME"
fi
lxc profile delete sovereign-stack
lxc image rm sovereign-stack-base
lxc image rm ubuntu-base
lxc network delete lxdbrSS
lxc storage delete sovereign-stack
lxc remote switch "local"
lxc remote remove "$CLUSTER_NAME"
source "$HOME/.bashrc"
./cluster.sh create "$CLUSTER_NAME" "$SSH_ENDPOINT_HOSTNAME.$SSH_ENDPOINT_DOMAIN_NAME"
#--data-plane-interface=enp89s0
#./deploy.sh


@ -1,24 +0,0 @@
#!/bin/bash
set -eu
export DOMAIN_NAME=
export DUPLICITY_BACKUP_PASSPHRASE=
export BTCPAY_HOSTNAME_IN_CERT=
export DEPLOY_GHOST=true
export DEPLOY_NEXTCLOUD=false
export DEPLOY_NOSTR_RELAY=true
export NOSTR_ACCOUNT_PUBKEY=
export DEPLOY_GITEA=false
export DEPLOY_ONION_SITE=false
export GHOST_MYSQL_PASSWORD=
export GHOST_MYSQL_ROOT_PASSWORD=
export NEXTCLOUD_MYSQL_PASSWORD=
export NEXTCLOUD_MYSQL_ROOT_PASSWORD=
export GITEA_MYSQL_PASSWORD=
export GITEA_MYSQL_ROOT_PASSWORD=
export SITE_LANGUAGE_CODES="en"
export LANGUAGE_CODE="en"
SCRIPT_DIR="$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )"
source "$SCRIPT_DIR/defaults.sh"


@ -1,7 +0,0 @@
#!/bin/bash
lxc list
lxc network list
lxc profile list
lxc storage list
lxc image list


@ -1,17 +0,0 @@
FROM ubuntu:22.04
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y wait-for-it dnsutils rsync sshfs snapd lxd-client
RUN mkdir /sovereign-stack
COPY ./deployment /sovereign-stack
WORKDIR /sovereign-stack
RUN mkdir /domain
VOLUME /domain
ENV SITE_PATH=/domain
COPY ./entrypoint.sh /entrypoint.sh
RUN chmod 0744 /entrypoint.sh
CMD /entrypoint.sh


@ -1,8 +0,0 @@
#!/bin/bash
if [ -z "$DOMAIN_NAME" ]; then
echo "ERROR: DOMAIN_NAME not defined.".
exit 1
fi
/sovereign-stack/deploy.sh --domain="$DOMAIN_NAME" --hosting-provider=lxd


@ -1,32 +0,0 @@
version: "3.8"
services:
# a hidden service that routes to the nginx container at http://onionurl.onion server block
tor-onion:
image: tor:latest
networks:
- tor-net
volumes:
- ${REMOTE_HOME}/tor:/var/lib/tor
- tor-logs:/var/log/tor
configs:
- source: tor-config
target: /etc/tor/torrc
mode: 0644
deploy:
mode: replicated
replicas: 1
restart_policy:
condition: on-failure
volumes:
tor-data:
tor-logs:
networks:
tor-net:
attachable: true
configs:
tor-config:
file: ${TOR_CONFIG_PATH}

uninstall.sh Executable file

@ -0,0 +1,93 @@
#!/bin/bash
set -exu
# this script uninstalls incus from the MANAGEMENT MACHINE
# if you want to remove incus from remote cluster hosts, run ss-reset.
PURGE_INCUS=false
# grab any modifications from the command line.
for i in "$@"; do
case $i in
--purge)
PURGE_INCUS=true
shift
;;
*)
echo "Unexpected option: $1"
exit 1
;;
esac
done
# this script undoes install.sh
if ! command -v incus >/dev/null 2>&1; then
echo "This script requires incus to be installed. Please run 'install.sh'."
exit 1
fi
if ! incus remote get-default | grep -q "local"; then
echo "ERROR: You MUST be on the local remote when uninstalling the SSME."
echo "INFO: You can use 'incus remote switch local' to do this."
exit 1
fi
if ! incus project list | grep -q "default (current)"; then
echo "ERROR: You MUST be on the default project when uninstalling the SSME."
echo "INFO: You can use 'incus project switch default' to do this."
exit 1
fi
if incus list --format csv | grep -q "ss-mgmt"; then
if incus list --format csv -q | grep -q "ss-mgmt,RUNNING"; then
incus stop ss-mgmt
fi
if incus config device list ss-mgmt -q | grep -q "ss-code"; then
incus config device remove ss-mgmt ss-code
fi
if incus config device list ss-mgmt -q | grep -q "ss-root"; then
incus config device remove ss-mgmt ss-root
fi
if incus config device list ss-mgmt -q | grep -q "ss-ssh"; then
incus config device remove ss-mgmt ss-ssh
fi
incus delete ss-mgmt
fi
if [ "$PURGE_INCUS" = true ]; then
if incus profile device list default | grep -q root; then
incus profile device remove default root
fi
if incus profile device list default | grep -q enp5s0; then
incus profile device remove default enp5s0
fi
if incus network list --project default | grep -q incusbr0; then
incus network delete incusbr0
fi
# this file contains the UBUNTU_BASE_IMAGE_NAME
. ./deployment/base.sh
if incus image list | grep -q "$UBUNTU_BASE_IMAGE_NAME"; then
incus image delete "$UBUNTU_BASE_IMAGE_NAME"
fi
if incus storage list --format csv | grep -q sovereign-stack; then
incus storage delete sovereign-stack
fi
if dpkg -l | grep -q incus; then
sudo apt purge incus -y
fi
fi
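Typical invocations, as a sketch: the bare form removes only the ss-mgmt VM and its devices, while --purge additionally tears down the bridge, cached base image, storage pool, and the incus package itself.
./uninstall.sh           # delete the ss-mgmt VM but keep incus and its objects
./uninstall.sh --purge   # also remove incusbr0, the base image, the sovereign-stack pool, and incus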