Use qvm.anon-whonix, deprecate securedrop-handle-upgrade for sys-whonix #1227
base: main
Conversation
Force-pushed b428544 to a53d4f3
Force-pushed a53d4f3 to 0def809
Force-pushed 0def809 to 6d4149d
Thanks for the PR; however, I am not sure this is the right approach. I agree with the principle of simplifying logic and avoiding duplicated references (which I'm all for). However, if we do this, we have to go all the way in, for fear that we'll otherwise have out-of-sync versions. Currently there are two "whonix version" variables (ours and Qubes'), which could lead to breakage in our logic when a new whonix version is rolled out (something like this). I'm happy to go into a bit more detail if it sounds unclear.
I'm planning to write something up in #1225 describing my thoughts on system template upgrades.
I'm wondering what you think is a bad approach about applying the upstream salt states. If we don't apply whonix's salt states, we are saying that we will be responsible for all whonix vm configuration besides just the template download, including RPC policies, and I think there's a risk that our config and theirs will (continue to) diverge. I would say that as long as we are using whonix VMs, my preference would be to use them as closely as possible to how they are tested upstream, or at least, to build on top of an accepted upstream configuration.

(Edit: a last thing I should note is that we were relying on the qvm.anon-whonix state up until Qubes 4.1 support. There isn't a lot of information in that PR on why it was dropped, but my guess is it was to be confident about supporting one whonix version across different Qubes OS versions. However, I'm not sure enough attention was paid to the side effects of losing those salt states.)

Re template upgrades/version number: It's not ideal that we also maintain a hardcoded version number corresponding to our intended whonix version, because it is duplicative. But what it means is that as soon as a new whonix version is supported, users will have it downloaded onto their machines. Then, when we have tested the latest whonix templates, we set our version to apply them both to our components (sd-whonix) and to sys- and anon-whonix; the changes are not applied to our VMs or to preexisting sys- and anon-whonix until we explicitly bump the supported whonix version. Maybe I'm missing something, and I'm certainly open to hearing that, but I'm not sure that I see the downsides to this approach? (It does mean that we would have to be timely about whonix version bumps, but we have to do so anyway, because iirc whonix templates go EOL fairly quickly after a new version is available.)
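To make the pinning mechanics concrete, here is a minimal Salt/Jinja sketch of the approach described above. The variable, state, and template names are illustrative assumptions, not the repo's actual ones:

```sls
{# Hypothetical sketch: a version we bump explicitly after testing,
   independent of what upstream has already shipped. #}
{% set sd_supported_whonix_version = '17' %}

sd-whonix-template:
  cmd.run:
    # Newer templates may already be downloaded, but sd-whonix only
    # moves to them once this pin is bumped (the VM must be halted
    # for the template change to apply).
    - name: qvm-prefs sd-whonix template whonix-gateway-{{ sd_supported_whonix_version }}
```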
Force-pushed 6d4149d to 1fa7a86
(rebased + force-pushed to try to get the CI runner to do its thing; it looks like it was stalled out for a while)
I see your point, even though I was originally more concerned about the original whonix formulas not accounting for in-place changes and being more aimed at running on install (and we do know that there are quite a few challenges with the mix of salt and state). But upon review, I still have some questions about the specific implementation of using salt instead of the template-upgrade script, especially given the unnecessary overhead of shutting down templates in formulas regardless of the need to switch templates. But doing it in salt is not a regression, so I think it's fine to have it and get a final answer on that later.
It's an even better story than that - we don't change and then revert. The reason that we include these lines is that, even when we were relying on
I think this is worth discussing for sure and happy to hear your thoughts on #1225. In the case of sys-whonix in particular, all that that script does is kill the sys-whonix vm if its base template is wrong, and it's still being invoked via Salt, so I don't know that it's any better to do Salt
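For reference, that check could be expressed directly as a Salt state along these lines. This is a sketch only, assuming the standard Qubes dom0 CLI tools; the state ID and target template name are illustrative:

```sls
kill-sys-whonix-on-wrong-template:
  cmd.run:
    # Shut down sys-whonix only when its current template doesn't
    # match the intended one -- the same check the shell script does.
    - name: qvm-shutdown --wait sys-whonix
    - onlyif: test "$(qvm-prefs sys-whonix template)" != "whonix-gateway-17"
```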
I agree that this looks OK, but if you take a look at how the backporting of Whonix 17 to Qubes 4.1 was done, it was done via
I was testing out a scenario from Whonix 16, before following the test plan. But I am getting a recursive requisite:
No need to investigate. I may be doing something wrong. Just wanted to post a progress update. I'll follow up tomorrow.
Ah, interesting. I see that that was ~2 days before the anticipated Whonix 16 EOL; we can track upgrades to make sure that we are paying attention, but my thinking is that whonix version bumps don't affect us much, and we should be able to keep up and release a version bump without much fuss to coincide with upstream. I don't know if I fully understand what you mean about testing out the scenario from Whonix 16 - can you explain what steps you took or what configuration you were using when you hit the recursive requisite? (Edit: and/or also post more of the console output?)
Sorry, I should have clarified. I have hit the recursive requisite when upgrading from main to this branch. Basically following your scenario.
After failing on
Force-pushed 1fa7a86 to ef87138
I'm pulling this back into draft for a bit - it's an interesting/hilarious/unfortunate issue that, as far as I can tell, comes down to a naming "feature" in Salt where states can be referenced by either their ID or their name, so a requisite can end up resolving to a state you didn't intend and form a cycle. There are workarounds (for example, require the
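To make that failure mode concrete, here is a hypothetical minimal reproduction using test.nop states with made-up IDs:

```sls
provision-vm:
  test.nop:
    # This state can be required either by its ID ("provision-vm")
    # or by its name ("sd-example-vm").
    - name: sd-example-vm
    - require:
      - test: prepare-vm

prepare-vm:
  test.nop:
    - require:
      # Intended to reference some other state, but Salt resolves
      # "sd-example-vm" to provision-vm's *name*, closing the loop:
      # provision-vm -> prepare-vm -> provision-vm.
      - test: sd-example-vm
```

Applying this fails with a recursive requisite error rather than producing the intended ordering.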
Force-pushed 8301aec to 4528473
CI failure is on an unrelated section (sd-log shutdown) - will rebase/push and hopefully get a successful CI run.
…enforce updated template status. Use salt to power off sys whonix VMs instead of securedrop-handle-upgrade script.
Force-pushed 4528473 to 518ea6b
As currently written, this PR would deprecate sd-whonix-version and rely on the version set in the qvm.whonix pillar, meaning that as soon as the version is bumped upstream, SDW users would be upgraded.

After talking to @zenmonkeykstop, this approach seems preferable for simplicity, and also because it avoids a case where there is time between an upstream version bump and our version bump, during which users performing a clean install would be downgraded to a lower whonix version.

This does mean that we should test the SD whonix config package against whonix testing, which we can do on dev machines using the builtin qvm.whonix-testing pillar, and/or on CI.
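For illustration, a sketch of deriving the template from the upstream-maintained pillar instead of a hardcoded version. The exact key layout under the qvm.whonix pillar is an assumption here, not confirmed against Qubes' formulas:

```sls
{# Assumed pillar layout for illustration; the real key names in
   Qubes' qvm.whonix pillar may differ. #}
{% set whonix_version = salt['pillar.get']('qvm:whonix:version', '17') %}

sd-whonix-template:
  cmd.run:
    # sd-whonix tracks whatever gateway version upstream ships;
    # no separate version bump is needed on our side.
    - name: qvm-prefs sd-whonix template whonix-gateway-{{ whonix_version }}
```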
Status
Ready for Review
Description of Changes
Refs #1060
Refs #1225
Refs (possible fix) freedomofpress/securedrop-workstation-ci#68
Changes proposed in this pull request:
- Extend the upstream Whonix Salt states (qvm.anon-whonix) to enforce template upgrades rather than using securedrop-handle-upgrade.
- Salt states are also used instead of securedrop-handle-upgrade to shut down sys-whonix VMs in order to apply changes.

Testing
Manual upgrade testing (existing SDW install)

- Switch sys-whonix and anon-whonix to any other template, or make changes to the existing template, such as disabling apparmor. (Don't run/use these VMs while you are making a mess, of course. This is basically to simulate a template upgrade/template change.)
- Run the apply step.
- Verify that sd-whonix has the anon-gateway tag.

Deployment
Any special considerations for deployment? Consider both:

- Fresh installs: n/a.
- Upgrading: ~~todo: on existing installs, sd-whonix will be missing the tags that it should have inherited from whonix-gateway-17.~~ Since sd-whonix is based on the sys-whonix template, and not cloned, it should have the anon-gateway tag after a successful Salt run.

Checklist
If you have made changes to the provisioning logic

- All tests (make test) pass in dom0

If you have added or removed files

- I have updated MANIFEST.in and rpm-build/SPECS/securedrop-workstation-dom0-config.spec

If documentation is required