Much better article than yesterday's (weirdly flagged/dead?) post on the topic, with very real tips about what options to try. While I really enjoyed that one, it lacked substance; I was in the comments trying to provide a more useful basis with some real examples. This is an exemplary list of awesome ways systemd can quickly and easily provide a massive boost to isolation & security. Great write up!
Thanks for sharing this. It looks like you can also use "systemd-analyze" with the "--user" flag to inspect systemd user units as well ("systemd-analyze --user security"). I've started using systemd more now that I've transitioned my containers to Podman, and this will be a helpful utility to improve the security of my systemd unit/container services.
Why don't distros flip more of these switches? Are there cons of being more aggressive with these settings? It's really a lot for many people to tinker with.
> Are there cons of being more aggressive with these settings?
well, the con is you might unknowingly break some setups. take NetworkManager: after tightening it down, did you check both IPv4 and IPv6 connectivity? did you check that both the `dns=systemd-resolved` and `dns=default` modes of operation (i.e. who manages /etc/resolv.conf) work? did you check its ModemManager integrations, that it can still manage cellular connections? did you check that the openvpn and cisco anyconnect plugins still work? what about the NetworkManager-dispatcher hooks?
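to make that concrete: a hardening drop-in like the following (an illustrative sketch, not a vetted profile; the directives are real ones from systemd.exec(5), the path is hypothetical) is exactly the kind of thing that can silently break the `dns=default` mode, because `ProtectSystem=strict` makes /etc read-only and NetworkManager can no longer rewrite /etc/resolv.conf:

```ini
# /etc/systemd/system/NetworkManager.service.d/hardening.conf (hypothetical)
[Service]
# mount the entire file system read-only for this service,
# except /dev, /proc and /sys
ProtectSystem=strict
# hide /home, /root and /run/user from the service
ProtectHome=true
# give the service its own private /tmp
PrivateTmp=true
```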
> Why don't distros flip more of these switches?
besides the bit of "how many distro maintainers actually understand the thing they're maintaining well enough to know which switches can be flipped without breaking more than 0.01% of user setups", there's the bit of "should these flags be owned by the distro, or by the upstream package?" if the distro manages these, they'll get more regressions on the next upstream release. if the upstream sets these, they can't be as aggressive without breaking one or two of their downstreams.
This question is sorta similar to "Why don't distros enable restrictive MAC policies by default"
Maintainers _could_ take the time to lock down sshd and limit the damage it can do if exploited, but there are costs associated with that:
1. Upfront development cost
2. Maintenance cost from handling bug reports (lots of edge cases for users)
3. Maintenance cost from keeping this aligned with upstream changes
You could extend this argument and say that distros shouldn't bother with _any_ security features, but part of the job of a distro maintainer is to strike a balance here, and similar to SELinux / AppArmor / whatever, most mainstream desktop distro maintainers probably don't think the juice is worth the squeeze.
> Which distro has the best out-of-the-box output for?:
systemd-analyze security
desbma/shh generates SystemCallFilter= and other systemd unit rules from strace output, similar to how audit2allow generates SELinux policies by grepping for AVC denials in permissive mode (given kernel parameters `enforcing=0 selinux=1`). But should strace be installed in production?
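For reference, the kind of rule such a tool aims to produce looks roughly like this (a sketch using the predefined syscall groups from systemd.exec(5); shh's actual generated output will be more specific than this):

```ini
[Service]
# allow the syscalls a typical long-running service needs...
SystemCallFilter=@system-service
# ...then subtract the dangerous groups
SystemCallFilter=~@privileged @resources
```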
I don't know but I would imagine the settings are sometimes new. Not everyone is a systemd nerd, and even if they were you may actually break stuff for old versions of systemd if you try to turn them on.
We could also ask why nobody seems to use SELinux or AppArmor, or any other random security feature. Most distros have these things available but most developers and users are not familiar, don't truly need it, etc.
You can of course achieve all these things in your init scripts, which are unique in their own way and not uniform at all, just to give credit where credit is due. But systemd makes it practical to use our beloved kernel and its features in a uniform and standard way... :)
I started my Linux journey so late I can't imagine living without systemd, the few systems I've encountered without systemd are such a major PITA to use.
I recently discovered "unshare" which I could use to remount entire /nix RW for some hardlinking shenanigans without affecting other processes.
systemd is so good, warty UX when interacting with it but the alternative is honestly Windows in my case.
systemd is very complex software. The alternative is very simple software with complex scripting which will reimplement parts of systemd in a buggy way (and that's not necessarily a bad thing). systemd was probably inspired by Windows and other service managers, while old sysv init is just a tiny launcher for a script tree.
One example of a systemd limitation is that systemd does not support musl, so if you want to build a tiny embedded sysroot, you already have some limitations.
> One example of a systemd limitation is that systemd does not support musl, so if you want to build a tiny embedded sysroot, you already have some limitations.
OpenEmbedded has carried a patchset to build systemd against musl for use in Yocto for a long time.
postmarketOS already got approval from Poettering to make a musl-linked systemd more officially supported.
So you might say that... any sufficiently complicated init system contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of systemd?
I don't know about half, but some part of it - definitely.
I'd also add that there are some non-trivial requirements for good server daemon programs: fork, detach from the terminal, maybe fork again, set the umask, chdir, maybe close some descriptors, maintain a PID file, output to syslog, drop privileges, and so on. A lot of those things are implemented in systemd, so you can basically just write a very dumb server which will work properly under systemd. So some part of systemd has to be implemented in every server daemon program.
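To illustrate: most of that daemonization checklist collapses into a few declarative lines (a sketch with a hypothetical `mydaemon`; the directive names are real, from systemd.service(5)/systemd.exec(5)):

```ini
[Service]
# the program just runs in the foreground; no fork/detach/PID file needed
Type=simple
ExecStart=/usr/sbin/mydaemon --foreground
User=mydaemon                        # privilege dropping
WorkingDirectory=/var/lib/mydaemon   # chdir
UMask=0027                           # umask
StandardOutput=journal               # logging without talking to syslog yourself
StandardError=journal
```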
Not really; AIUI Poettering just thought launchd's socket activation/inetd-like functionality was neat: https://0pointer.de/blog/projects/systemd.html. Upstart is more of a direct ancestor:
> Why didn't you just add this to Upstart, why did you invent something new?
> Well, the point of the part about Upstart above was to show that the core design of Upstart is flawed, in our opinion. Starting completely from scratch suggests itself if the existing solution appears flawed in its core. However, note that we took a lot of inspiration from Upstart's code-base otherwise.
> If you love Apple launchd so much, why not adopt that?
> launchd is a great invention, but I am not convinced that it would fit well into Linux, nor that it is suitable for a system like Linux with its immense scalability and flexibility to numerous purposes and uses.
launchd is horrible though, the folks complaining about systemd would be up in arms if they had to write poorly typed XML key/value files
Aside: I'm quite surprised that systemd won out, given all the hate for so many years. Nearly every forum and HN thread that even tangentially mentioned systemd got flooded with people proclaiming systemd to be the spawn of the devil, "forced down their throat". At the same time I noticed that a lot of regular Linux users (like developers who just want to deploy stuff) seemed to like systemd unit files. I guess all those complainers, however vocal, were neither the majority, nor the people who actually maintained distributions.
This feeling is particularly striking for me, because I once worked on a Linux project with the aim of improving packaging and software distribution. We also got a lot of hate, mainly for not being .deb or .rpm, and it looked to me as if the hate was a large reason for the failure of the project.
Systemd won because, for all the deserved hate, it solves real problems that nobody else was really trying to solve (or their attempts were worse). There are some really good points to systemd; you just have to take the bad with it.
My feeling, without evidence, is that a lot of the hate is vocal people who don't like this thing that does stuff differently to what they're used to. I get the feeling, I still understand /etc/init.d and runlevel symlinks and /var/log better than systemd, but that's because I have many more years of experience interacting with it, breaking it, fixing it. Whenever I have to do stuff with systemd it's a bit annoying to have to go learn new stuff, but when I do I find it reasonably straightforward. Just different, and most likely better.
A lot of the hate was because the main developer's interactions with just about everyone were horrible, the code was opaque at best and constantly changing, there was scope creep, and there was the feeling that it was a massive power grab by Red Hat.
It can be argued that it didn't solve very many problems and added a huge amount of complexity.
The problem with the complaints is that they're all made up. Well, except maybe developer interactions - assholes can be assholes.
systemd isn't opaque, it's open-source. systemd is objectively less opaque than init scripts, because it's very well documented. Init scripts are not.
Sure, you can read them. But then you'd realize that glued together init scripts just re-implement systemd but buggier and slower, at which point you might as well just read the systemd source. Or, better yet, the documentation.
systemd ALSO does not constantly change. The init system has been virtually untouched in a decade, save for bug fixes and a few new features. Your unit files will "just work", across many years and many distros. Yes, systemd is more portable than init scripts.
systemd ALSO does not have any scope creep. Here, people get confused between systemd-init, and systemd the project.
systemd-init is just an init system. Nothing more, nothing less, for a long time now, and forever. There is no scope creep, the unix principle is safe, yadda yadda yadda.
systemd coincidentally is also the name of a project which includes many binaries. All of those binaries are optional. They aren't statically linked, they're not even dynamically linked - they communicate over IPC like every other program on your computer.
systemd is also not complex, it's remarkably simple. systemd unit files are flat, declarative config files. Bash is a turing-complete scripting language with more footguns than C++. Which sounds more complex?
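For comparison, a complete service definition for a hypothetical daemon is just this (a minimal sketch, not any particular distro's file):

```ini
[Unit]
Description=Hypothetical example daemon

[Service]
ExecStart=/usr/bin/mydaemon --foreground

[Install]
WantedBy=multi-user.target
```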
> systemd[-init] ALSO does not constantly change. The init system has been virtually untouched in a decade...
Sure, I'll bite. It'll be more interesting than watching some stupid Twitch streamer.
Gentoo Linux was using OpenRC back in 2002. Looking at the copyright notice in relevant source files, it looks like OpenRC is a project that has been under development since 1999, so I'd expect it was in use back in 1999. However, I will use 2002 as the start date for this discussion because that's when I started using it.
The simple OpenRC service file I mention in footnote 3 in [0] is syntactically identical to the syslogd service file added in to the OpenRC repo back in this commit in late 2007 [1]. The commit that appears to add support for 'command_args' and friends is earlier that day.
So, four years before SystemD's experimental release, the minimal OpenRC service file (that I talk about in [0]) was no more complicated than what would become the minimal SystemD service file no less than four years later. What's more, the more-verbose syntax for service files written in 2002 was supported by 2007's OpenRC, and continues to be supported by 2025's OpenRC.
23 years is quite a bit longer than 15.
> systemd is also not complex, it's remarkably simple. systemd unit files are flat, declarative config files.
See above (and below).
> Here, people get confused between systemd-init, and systemd the project.
In that case, it doesn't do you credit to use "systemd-init" and "systemd" interchangeably in your commentary. SystemD absolutely has scope creep. systemd-init... well, I think I remember when it wasn't possible to have it re-execute itself for a no-reboot upgrade of PID 1. And does it still have a dependency on DBus, or did they see sense and get rid of that?
systemd, the project, isn't a binary. It's dozens of binaries. It can't have scope creep because that's impossible; they're literally dozens of separate things.
We call that the unix principle, lol.
Saying systemd has scope creep is like saying GNU has scope creep because they have a compiler and a text editor. Makes no fucking sense.
I also don't consider a dependency on dbus "scope creep". It has to communicate over IPC - okay, don't reinvent the wheel, just use dbus. Every program ever supports dbus if it has a public API over IPC. Sorry if that bothers you.
And sure, maybe OpenRC is just as simple as systemd, but the reality is every distro chose systemd and that's that, and for MOST of them they switched from primarily scripts to unit files.
yes, especially the scope and power creep, which is antithetical to what unix was all about: doing one thing, and doing it well.
What started as a neat way to start servers in parallel, as systemd handles the sockets, now can control your home directory. Like what?
One of my earliest memories of using systemd involved logs being corrupted. journalctl would just say the file is corrupted but wouldn't even print the name of the file -- I had to resort to strace. That left a real bad taste in my mouth.
My systemd grumpiness was mostly due to only just (nearly) finishing an upstart migration. The thought of another one so soon after wasn't fun, even if I liked some of the new features. Those days are over though, and I'm glad there's a mostly unified approach.
> I guess all those complainers, however vocal, were neither the majority, nor the people who actually maintained distributions.
This matched my experience: there were a few vocal haters who were very loud but tended not to be professional sysadmins or people shipping binaries to others, and they didn't have a realistic alternative. If you distributed or managed software, you had a single, robust solution for keeping daemons running with service accounts, restarts, dependencies, etc. on Windows NT circa 1993 and on macOS in 2005, so Linux not having something comparable was just this ongoing source of paper cuts. It caused some Linux shops to have unexpected, highly visible downtime (e.g. multiple times I saw data center outages where all of the Windows stuff and the properly configured Upstart/SystemD stuff came back up after retrying, but high-profile apps using SysV init stayed down for hours because the admins had to clean it up by hand).
Anyone who packaged software was also happy to stop supporting different combinations of buggy shell scripts and utilities, too – every RPM I built went from hundreds of lines of .sh to a couple dozen lines of systemd units. systemd certainly isn't perfect, but if you had an actual job to do you were going to look at systemd as the best path to reduce that overhead.
Having the backing of RedHat certainly didn't hurt... I don't care either way, although I still think OpenRC style scripts are much easier to debug and sometimes have more elegant solutions (templates vs symbolic links).
I can't say I know what OpenRC really does, but at its core, isn't systemd just a glorified script runner as well? You have ExecCondition, ExecStartPre, ExecStart, ExecStartPost, ExecReload, ExecStop, and ExecStopPost, which are all quite self-explanatory, alongside a billion optional parameters which I wouldn't know how to implement in scripts without endless hours of dread.
I remember an old Debian machine with /etc/init.d/something [start|stop|reload|restart] but I can't recall being able to automatically restart services or monitor status easily. (I didn't speak $shell well back then either)
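Putting those hooks together, a lifecycle-heavy service might look like this (a sketch with hypothetical paths; the directives themselves are documented in systemd.service(5), and `Restart=` covers the automatic-restart case I couldn't get from init.d):

```ini
[Service]
ExecCondition=/usr/bin/test -e /etc/myapp/config  # skip (not fail) the start if this fails
ExecStartPre=/usr/local/bin/myapp-migrate         # runs before the main process
ExecStart=/usr/bin/myapp --foreground
ExecReload=/bin/kill -HUP $MAINPID
ExecStopPost=/usr/local/bin/myapp-cleanup         # runs even if the service crashed
Restart=on-failure                                # automatic restarts
```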
systemd tries to avoid scripts as much as possible.
/etc/init.d/whatever were all shell scripts, and they all had to implement all the features themselves. So `/etc/init.d/foo restart` wasn't guaranteed to work for every script, because "restart" is something that each script handled individually. And maybe this one just didn't bother implementing it.
There's no good status monitoring in sysV because it's all convenience wrappers, not a coherent system.
I've been reading through how many NixOS modules use systemd units, and there are a lot of scripts being executed; the final line execs the service if there is one (NixOS uses systemd for maintenance tasks, refreshing certificates, and many more things). While NixOS doesn't speak for the broader community, what I'm trying to say is that systemd can execute anything; whether that's a script or a daemon doesn't matter, as long as it works for you.
Thanks for the sysV explanation, it sounds worse to me.
> /etc/init.d/whatever were all shell scripts, and they all had to implement all the features themselves.
A minimal SystemD service file and a minimal OpenRC service file are equally complex.
Here's the OpenRC service file for ntpd (which is not a minimal service file, but is pretty close):
  #!/sbin/openrc-run
  description="ntpd - the network time protocol daemon"
  pidfile="/var/run/ntpd.pid"
  command="/usr/sbin/ntpd"
  command_args="${NTPD_OPTS}"
  command_args_background="-p ${pidfile}"
  command_args_foreground="-n"

  depend() {
      use net dns logger
      after ntp-client
  }

  start_pre() {
      if [ ! -f /etc/ntp.conf ] ; then
          eerror "Please create /etc/ntp.conf"
          return 1
      fi
      return 0
  }
'depend' handles service dependency declaration and start/stop ordering (obviously).
'start_pre' is a sanity check that could be removed, or reduced to calling an external script (just like -IIRC- systemd forces you to do). There are _pre and _post hooks for both start and stop.
For a service that has no dependencies on other services, backgrounds itself, and creates a pidfile automatically, the smallest OpenRC service file is four non-blank lines: the '#!/sbin/openrc-run' shebang followed by lines declaring 'pidfile', 'command', and 'command_args'. A program that runs only in the foreground adds one more line, which tells OpenRC to handle daemonizing the thing and writing its pidfile: 'command_background="true"'. See [3] for an example of one such service file.
If you want service supervision, it's as simple as adding 'supervisor=supervise-daemon', and ensuring that your program starts in the foreground. If it doesn't foreground itself automatically, then adding 'command_args_foreground=<Program Foregrounding Args>' will do the trick.
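So the minimal file described above would be something like (a sketch for a hypothetical 'mydaemon'; syntax per openrc-run(8)):

```sh
#!/sbin/openrc-run
# minimal service file for a daemon that backgrounds itself
pidfile="/var/run/mydaemon.pid"
command="/usr/sbin/mydaemon"
command_args="--config /etc/mydaemon.conf"

# supervised variant: drop the pidfile line and add
#   supervisor=supervise-daemon
#   command_args_foreground="--foreground"
```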
If you're interested in more information about OpenRC service file syntax, check out the guide for them at [0], and for a lot more information, the manual for openrc-run at [1]. For supervision, check out the supervision guide at [2].
So on one hand, yes, that's a vast improvement over SysV.
On the other hand, no sir, I still don't like it. It looks very much like Bash. I'm not very fond of Bash to start with and it might not even be actual Bash? Can't tell from the manpage.
But scrolling down to the bottom of the manpage I see a pretty long sample script, and that's exactly what I want to see completely gone. I don't want to look at a 3-way merge of a service during an upgrade ever again and try to figure out what all that jank is doing. IMO if any of that shell scripting has any reason to be in a service file, it's a bug to be fixed.
My ideal is the simple systemd services: description, dependencies, one command to start, done. No jank with cleaning up temp files, or signals, or pid files (can they please die already), or any of that.
And one of the nice things about systemd services not being a script is that overrides are straightforward and there's never any diffs involved.
> ...at it's core systemd is just a glorified script runner as well?
Yep. And it has a ton of accidental complexity in it. [0] At $DAYJOB, we ran into a production-down incident related to inscrutable SystemD failures once a year. It was always the case that the documentation indicated that our configuration and usage was A-OK. If there ever was a bug report filed, the SystemD maintainers either said words to the effect of "Despite the fact that the docs say that should work, that's an unsupported use case." or "Wow. Weird. Yeah, I guess that behavior is wrong, and it's true that the docs don't warn you about that.", and then went on to do nothing.
SystemD is -IME- like (again, IME) PulseAudio and NetworkManager... it's really great until you hit a show-stopping bug, and then you're just turbofucked because the folks who built and maintain it want to treat it like it's a black box that works perfectly.
[0] NOTE: I am absolutely not opposed to complex things. I'm opposed to needlessly complex things, and very much opposed to things whose accidental complexity causes production issues, and the system's maintainers' reply to the bug report and minimal repro is "Wow, that's weird. I don't want to fix that. Maybe we should document that that doesn't work." and then go on to do absolutely nothing.
I use runit for some non-system-level stuff. It's extremely simple, possibly too simple. It doesn't manage load order - if a dependency hasn't loaded yet, you just exit the script and the service manager tries again in 2 seconds. Service scripts are just shell scripts.
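For the curious, a run script under that model is about this small (a sketch for a hypothetical 'myapp' that depends on postgresql; `sv check` is runit's status query):

```sh
#!/bin/sh
# runit run script: if the dependency isn't up yet, just exit;
# runsv re-runs this script shortly afterwards
sv check postgresql >/dev/null || exit 1
exec 2>&1
exec myapp --foreground
```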
There are two ways to design a system: so simple that it has obviously no bugs, and so complex that it has no obvious bugs.
That's because systemd knew who the target users of it were: people making distributions, and professional users with little desire to be woken up at 3 AM to troubleshoot a stuck PID file.
Most of the complainers weren't really relevant. They weren't making the decisions on what goes in a distro, and an init system is overall a fairly minor component most users don't use all that often anyway.
> This feeling is particularly striking for me, because I once worked on a Linux project with the aim of improving packaging and software distribution. We also got a lot of hate, mainly for not being .deb or .rpm, and it looked to me as if the hate was a large reason for the failure of the project.
I think that's a good deal trickier because packaging is something a Linux user does get involved with quite often, and packaging systems don't mix well. A RPM based distro with some additional packager grafted on top is a recipe for disaster.
Still, I think it's also a case of the same thing: sell it to the right people. Find people making new distros suffering problems with DEB and RPM and tell them your tool can save them a lot of pain. The users can come in later.
> Still, I think it's also a case of the same thing: sell it to the right people. Find people making new distros suffering problems with DEB and RPM and tell them your tool can save them a lot of pain.
To quote one of my favorite Clone Wars episodes: Fifty tried, fifty died [1].
There have been so, so many attempts at solving the "how to ship binary builds for Linux" question... both deb and rpm have their good and their bad, and on top of that you got `alien`, flatpak, Docker images, the sledgehammer aka shipping everything as a fully static binary (e.g. UT2004 did this) or outright banning prebuilt binaries (the Gentoo and buildroot way). But that's not the actual problem that needs solving.
The actual problem is dependency hell. You might be lucky to be able to transplant a Debian deb into an Ubuntu installation and vice versa, or a SLES rpm to RHEL, but only if the host-side shared libraries that the package depends on are compatible enough on a binary level with what the package expects.
That suddenly drives up the complexity requirements for shipping software even for a single Linux distribution massively. In contrast to Windows, where Microsoft still invests significant financial resources into API-side backwards compatibility, this is not a thing in any Linux distribution. Even if you're focusing just on Debian and Ubuntu, you have to compile your software at least four different times (one each for Debian Stable, Debian Testing, Ubuntu <current rolling release> and Ubuntu <current LTS>), simply because of different versions of dependencies. Oh and in the worst case you might need different codepaths to account for API changes between these different dependency versions.
And even if you had some sort of DSL that generated the respective package manager control files to build packages for the most common combinations of package manager, distributions and actively supported releases of these, there's so, so much work involved in setting up and maintaining the repositories. Add in actually submitting your packages to upstream (which is only possible for reasonably-ish open source packages in the first place), and the process becomes even more of a nightmare.
And that's all before digging into the topics of autotools, vendoring (hello nodejs/php/python ecosystems), digital signature keyrings, desktop manager ecosystems and god knows what else. Oh, and distribution bureaucracy is even more of a nightmare... because you now have to deal with quirks in other people's software too, and in the worst case with a time span of many years of your own releases plus the distribution release cadence!
Shipping software that's not fully OSS on Linux sucks, and shipping closed source software for Linux sucks even more. Windows has had that sort of developer experience figured out from day one. Even if you didn't want to pirate or pay up for InstallShield, it was and is trivial to just write a program, compile it, and have the executable run everywhere.
IMO, binary compatibility on Linux isn't really solvable. There's just a thousand tiny projects that make up the Linux base that aren't on the same page, and that's not about to change.
I do think packaging can be improved. I hate almost everything about how dpkg works; it's amazing. So I'm squarely in the RPM camp because I find the tooling a lot more tolerable, but surely further improvements can still be made.
Anyway, the ecosystem stays healthy because of code contributions. So what’s the point of binary compatibility (from the point of view of the people actually making Linux work: Open Source developers and repo maintainers)?
> So what’s the point of binary compatibility (from the point of view of the people actually making Linux work: Open Source developers and repo maintainers)?
Want to see Linux on the desktop actually happen? Then allow a hassle free way for commercial software that is not "pray that WINE works good enough" aka use win32 as an ABI layer.
Of course we can stay on our high horses and demand that everything be open source and that life for closed source developers be made as difficult as possible (the Linux kernel is particularly and egregiously bad in that perspective), but then we don't get to whine about why Linux on the desktop hasn't fucking happened yet.
I don’t really know what the point of this “Linux on the desktop” event would be, or even what it is. (Clearly it isn’t just Linux on desktops, because that’s been working fine forever).
The whole point of my comment was to keep in mind the incentives of different sub-groups. If “Linux on the desktop” doesn’t benefit the people that make Linux work, I don’t see what the big deal is.
> I don’t really know what the point of this “Linux on the desktop” event would be, or even what it is.
Getting Linux adopted in F500 companies as the default desktop OS. That is the actual litmus test, because (large) companies need an OS that can be centrally managed with ease, doesn't generate a flood of DPU (Dumbest Possible User) support demand and can run the proprietary software that's vital to the company's needs in addition to the various spyware required by cybersecurity insurances and auditors these days.
At the moment, Linux just Is Not There. Windows has GPOs and AD (which, in addition, tie into Office 365 perfectly fine), Mac has JAMF and a few other MDM solutions. A lot of corporate software doesn't even run properly under WINE (not surprising; the focus of Proton and, through it, WINE is gaming), there's a myriad of ways of doing central management, and good luck trying to re-educate employees who have been at the company so long they grew roots into their chairs.
I only started packaging relatively recently. Using OBS definitely made things easier, but it's crazy how much nicer RPM is than dpkg. So much better to have more-or-less everything inside a spec file with macros, versus dpgk's mess of static, purpose-specific files.
> ...or outright banning prebuilt binaries (the Gentoo and buildroot way).
You, uh, haven't used Gentoo in like twenty years, have you? You've been able to host your own prebuilt binaries (or use the prebuilts of others who bothered sharing them) for as long as I can remember (FWIW, I started using Gentoo in 2002 or 2004). The Gentoo folks decided to set up official binary package servers at the end of 2023 (look at the Dec 29, 2023 news item on the Gentoo home page for more info).
Systemd had the momentum to unify nearly all of the distributions' init processes, and it did, but without aligning with the right Unix approach; the evolved versions confirm that, absorbing more and more of the underlying Linux distribution.
A broad view across the BSD ecosystem suggests this wasn't a good way. I still want to see a good alternative from that point of view…
> You got nearly every forum and HN thread that even tangently mention systemd, flooded with people proclaiming systemd to be the spawn of the devil "forced down their throat".
There's a large cohort of Linux users whose entire personality is "I'M A CoNtRaRiAn!" and who argued against systemd because Red Hat was pushing it. Reddit was filled with a loud minority of anti-systemd trolls. When pushed for the reasons behind their disdain, you'd get nonsensical or baseless replies. The best ones were known bugs that had been closed for months.
It would be cool to have a repo with suggested hardening for common services, since there's so many different hardening options. One of the things you might notice from lots of users using common suggestions, is that the permissions often need to be opened up more than you'd think, to support edge cases.
> It would be cool to have a repo with suggested hardening for common services
From packaging stuff for nixpkgs, a distro that often is without upstream support, it is usually very useful to look at how mainstream distro package services.
Those hardening steps also tend to be well tested, even if sometimes a bit lax. If you want to find out how, e.g., postgresql can be hardened, consider looking at the Debian, Ubuntu and/or RHEL packages as a starting point.
Distros don't usually do security hardening, unless the distro is security-specific. They slap something generic on like AppArmor or SELinux and call it a day. (This article is the proof of that... all the default services are not hardened). Usually this is a good thing, as it prioritizes usability, and lets the user harden as they wish.
Another great security feature systemd provides is credential management[1], which allows you to expose credentials to an application in a more secure way than, say, an environment variable or a file in the filesystem.
When Vault is not available, if I’m working on a side project, for example, that’s what I always go for. Even wrote a small Go package[2] to get said credentials when your application is running inside a service with that feature.
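For anyone unfamiliar, the unit-file side looks like this (a sketch with hypothetical names; `LoadCredential=`/`LoadCredentialEncrypted=` are the real directives from systemd.exec(5)). The service then reads the secret from `$CREDENTIALS_DIRECTORY/db-password`, which is only visible to that service:

```ini
[Service]
LoadCredential=db-password:/etc/myapp/db-password
# encrypted-at-rest variant, prepared with `systemd-creds encrypt`:
# LoadCredentialEncrypted=db-password:/etc/myapp/db-password.cred
```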
I thought the go culture was that dependencies are bad, and abstractions (i.e. any function calls) are confusing and bad, so it's better to just inline stuff like this and write it fresh each time you need it.
I was writing the same few lines for every project, so why not make it its own package?
For my projects I can just include the dependency, as I wrote it and don’t mind using it. Other people can copy it instead, since the proverb goes “a little copying is better than a little dependency”.
Use rust or typescript or something where it's socially acceptable to make small packages.
That'll also let you avoid writing the `if err != nil { return fmt.Errorf("context string something: %w", err) }` boilerplate again and again too (since you can just write '.context("context")?' each time).
If you're using Go, you're not supposed to build abstractions, small packages, or any sorta clever or good code. And be really careful using generics.
If you want to write abstractions, you're supposed to use a different language. Those are the rules.
For what it's worth this will not work properly, you have different environment variables for user and system units. Proper error handling and graceful fallback for these cases are probably worth a module here (though it could be a 10-20 liner instead too).
systemd's method does not require the service process to have access to the original creds file, or even the filesystem that contains it. In fact the original credentials file might even be an encrypted file and the key (and/or hardware) to decrypt it does not need to be accessible to the service process.
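For reference, a minimal sketch of the credentials feature the parent comments describe (unit path and credential names are made up):

```ini
[Service]
ExecStart=/usr/local/bin/myapp
# Plain file: systemd copies it into a private, per-service location and
# the process finds it at $CREDENTIALS_DIRECTORY/db-password.
LoadCredential=db-password:/etc/myapp/db-password.secret
# Encrypted variant: the on-disk file is sealed (optionally to the TPM)
# with `systemd-creds encrypt`; only the service ever sees the plaintext.
LoadCredentialEncrypted=db-password-enc:/etc/myapp/db-password.cred
```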
Great article. I really appreciate the list of properties and the "check the man pages, good luck" advice. systemd is really a great piece of software I would enjoy deploying on my hosts!
Nitpick and title correction: The proper spelling of systemd is systemd, not SystemD. According to their brand page:
Yes, it is written systemd, not system D or System D, or even SystemD. And it isn't system d either. Why? Because it's a system daemon, and under Unix/Linux those are in lower case, and get suffixed with a lower case d. And since systemd manages the system, it's called systemd.
Many of the more juvenile systemd haters used to deliberately spell it that way.
As an insult, it was rather less successful than the "Micro$oft" / "Slowaris" / "HP-SUX" mockery from the 1990s - but it did manage to sow enough confusion that it still pops up regularly today, even in contexts that are otherwise neutral or positive about it.
That’s funny, I’m not even sure what the D is supposed to expand to, in that insult. What a silly and lame thing to use as an insult.
I’ve been using it because having some random letter capitalized seems totally unsurprising for this sort of plumbing software. (And by plumbing, I mean: very useful and helpful boring stuff that deals with messy problems that I’m happy not to care about, just to be clear that I mean it positively, haha).
I'd hate to restrict Docker like that - depending on what you run inside of Docker, it would be very hard to narrow it down to the right security tuning settings. In that case, it's actually safer and more predictable to run it in systemd (arguably).
What would be so hard about it? Also, this is not docker--it's podman. Which has a much simpler execution model than Docker. With it, it shouldn't be any harder to narrow down what the problem is, compared to running a non-containerized service.
These hardening variables have been discussed some years back[1].
this will not take off I'm afraid, because locking these unitfiles down is offloaded to the end-user (I've yet to see maintainers embrace shipping locked down files). Maybe they will? But this same approach hasn't worked with apparmor so why should it work with systemd? Who will do the job?
Consider that apparmor maintainers in many cases provide skeleton templates that will make the parser stop complaining. ("look I have a profile so apparmor shuts up, but don't take too close a look OK")
Then there is firejail, which some argue[2] is snake-oil considering the high level of administrative glue compared to its massive attack-surface (also it's a setuid binary).
I didn't mention SELinux since I don't know a single person who has had the joy (or pain, depending on perspective) of working with it. But again, it seems the expectation to implement security with it is shifted to the user.
> this will not take off I'm afraid, because locking these unitfiles down is offloaded to the end-user (I've yet to see maintainers embrace shipping locked down files).
I vaguely recall looking at the slides from a talk on OpenBSD's approach to this topic, which came down to (paraphrasing from hazy memory) "if it can be disabled, people will disable it; if it needs to be configured, people won't configure it".
> this will not take off I'm afraid, because locking these unitfiles down is offloaded to the end-user
Maybe your point is that this isn't done by the vendor in practice. And I'm sure there's room for lots of improvement. However, one of the great things about how systemd units can be provided by the vendor and seamlessly tweaked by the administrator is that the vendor (i.e. packager and/or distro) can set these up easily.
There definitely are packages that ship with locked-down files. Tor and powerdns (pdns) are two off the top of my head.
→ Overall exposure level for pdns.service: 1.9 OK
→ Overall exposure level for tor.service: 7.1 MEDIUM
I think it should be done by the maintainer of the software, not by the distro. My concern is that these features have been available for at least five years and have not yet caught on (regardless of what this blog article recommends).
It would be great to see it implemented but for now at least on Debian/sid the situation is as follows:
UNIT EXPOSURE PREDICATE
ModemManager.service 6.3 MEDIUM
NetworkManager.service 7.8 EXPOSED
alsa-state.service 9.6 UNSAFE
anacron.service 9.6 UNSAFE
atop.service 9.6 UNSAFE
atopacct.service 9.6 UNSAFE
avahi-daemon.service 9.6 UNSAFE
blueman-mechanism.service 9.6 UNSAFE
bluetooth.service 6.0 MEDIUM
cron.service 9.6 UNSAFE
dbus.service 9.3 UNSAFE
dictd.service 9.6 UNSAFE
dm-event.service 9.5 UNSAFE
dnscrypt-proxy.service 8.1 EXPOSED
emergency.service 9.5 UNSAFE
exim4.service 6.9 MEDIUM
getty@tty1.service 9.6 UNSAFE
irqbalance.service 1.2 OK
lvm2-lvmpolld.service 9.5 UNSAFE
polkit.service 1.2 OK
rc-local.service 9.6 UNSAFE
rescue.service 9.5 UNSAFE
rtkit-daemon.service 7.2 MEDIUM
smartmontools.service 9.6 UNSAFE
systemd-ask-password-console.service 9.4 UNSAFE
systemd-ask-password-wall.service 9.4 UNSAFE
systemd-bsod.service 9.5 UNSAFE
systemd-hostnamed.service 1.7 OK
systemd-journald.service 4.9 OK
systemd-logind.service 2.8 OK
systemd-networkd.service 2.9 OK
systemd-timesyncd.service 2.1 OK
systemd-udevd.service 7.1 MEDIUM
tor@default.service 6.6 MEDIUM
udisks2.service 9.6 UNSAFE
upower.service 2.4 OK
user@1000.service 9.4 UNSAFE
wpa_supplicant.service 9.6 UNSAFE
> I think it should be done by the maintainer of the software not by the distro
Why would you say that? I would agree that the developer likely has better insight into what the software needs. But the security boundary exists at the interface of the application and the system, so I think that both application devs and system devs (i.e. distros) have something to contribute here.
And because systemd allows for composition of these settings, it doesn't have to be a one-or-the-other situation--a distro can do some basic locking down (e.g. limiting SUID, DynamicUser, etc.), and then the application dev can do syscall filtering.
In any case, I agree that I'd like to see things get even more locked down. But it's worth remembering that, before systemd, there was basically no easy-to-use least-privilege stuff available beyond Unix users and filesystem permissions. The closest you had (afaik) was apparmor and selinux. In both of those cases, the distro basically had to do all the work to create the security policy.
Also, n.b., that pdns.service I noted is provided by PowerDNS themselves.
It would be nice if it were possible to harden services via allowlisting instead, e.g. AllowNothing=true, and then start adding what is allowed to make the service function.
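There's no AllowNothing= today, but you can get fairly close by starting from an empty root and binding paths back in; a sketch with made-up paths (on merged-usr systems the binary's loader lives under /usr, hence binding all of it):

```ini
[Service]
# Mount an empty read-only tmpfs over / for this service only...
TemporaryFileSystem=/:ro
# ...then allow-list only the pieces the service actually needs.
BindReadOnlyPaths=/usr /etc/mydaemon
BindPaths=/var/lib/mydaemon
# Deny-by-default in other dimensions, too:
CapabilityBoundingSet=
IPAddressDeny=any
IPAddressAllow=localhost
SystemCallFilter=@system-service
```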
The unreasonable effectiveness of writing a security scanner. People will do anything it takes to make the scanner give a perfect score, regardless of whether it makes sense.
At some point, if you have to write articles about the proper spelling of the name, maybe you should just accept the alternative names as well. Also looking at you Datadog and Cloudflare. (The employees of the second one are especially allergic to CloudFlare for some reason)
We tried very hard to convert FastMail to Fastmail and... it's been about 90% successful but there's definitely a bunch of things out there spelled the old way. We just joke about BIG M occasionally.
Could you please expand on the "very low quality" claim? What's missing for you here? How would you raise a bar on this piece to make it more of a high quality, security focused content?
Genuine question.
My response was a joke to a low-effort comment, but in general - systemd is complex because it solves the complex problem of booting up a system, complete with error handling, logging, etc. Many of the alternatives simply ignore part of the problem space, making the simple case simpler, but the complex case impossible.
Automatic systemd service hardening guided by strace profiling
https://github.com/desbma/shh
A nice thing I found (which I see they did not do in the examples) is that you can essentially include just the binary and the paths you want available. ProtectSystem= is currently not compatible with this behavior. EDIT: More info here: https://github.com/systemd/systemd/issues/33688
Seems that might be an issue for something that wants to e.g. send an e-mail when an error occurs?
Yesterday's, just in case: https://us.jlcarveth.dev/post/hardening-systemd.md https://news.ycombinator.com/item?id=44928504
Maybe fix the certificate issue on the site. Some browsers don't even let one go forward with a bad cert.
This question is sorta similar to "Why don't distros enable restrictive MAC policies by default"
Maintainers _could_ take the time to lock down sshd and limit the damage it can do if exploited, but there are costs associated with that:
You could extend this argument and say that distros shouldn't bother with _any_ security features, but part of the job of a distro maintainer is to strike a balance here, and similar to SELinux / AppArmor / whatever, most mainstream desktop distro maintainers probably don't think the juice is worth the squeeze.

Because they/we don't have sufficient integration tests to verify that the core system services are working after tightening down each parameter.
From https://news.ycombinator.com/item?id=29995566 :
> Which distro has the best out-of-the-box output for?:
desbma/shh generates SyscallFilter and other systemd unit rules from straces, similar to how audit2allow generates SELinux policies by grepping for AVC denials in permissive mode (given kernel parameters `enforcing=0 selinux=1`) - but should strace be installed in production?

desbma/shh: https://github.com/desbma/shh
I don't know but I would imagine the settings are sometimes new. Not everyone is a systemd nerd, and even if they were you may actually break stuff for old versions of systemd if you try to turn them on.
We could also ask why nobody seems to use SELinux or AppArmor, or any other random security feature. Most distros have these things available but most developers and users are not familiar, don't truly need it, etc.
And that's something that's impossible to do with old init scripts, that are all unique in their way and not uniform at all.
You can of course achieve all these things in your init scripts, which are unique in their way and not uniform at all, just to give credit where credit is due. But systemd makes it practical to use our beloved kernel and its features in a uniform and standard way... :)
I started my Linux journey so late I can't imagine living without systemd, the few systems I've encountered without systemd are such a major PITA to use.
I recently discovered "unshare" which I could use to remount entire /nix RW for some hardlinking shenanigans without affecting other processes.
systemd is so good, warty UX when interacting with it but the alternative is honestly Windows in my case.
systemd is very complex software. Alternative is very simple software with complex scripting which will reimplement parts of systemd in a buggy way (and that's not necessarily a bad thing). systemd probably is inspired by Windows and other service managers, while old sysv init is just a tiny launcher for script tree.
Just one example of a systemd limitation is that systemd does not support musl, so if you want to build a tiny embedded sysroot, you already have some limitations.
>Just an example of systemd limitation is that systemd does not support musl, so if you want to build a tiny embedded sysroot, you already have some limitations.
OpenEmbedded has carried a patchset to build systemd against musl for use in Yocto for a long time.
postmarketOS already got approval from Poettering to make a musl-linked systemd more officially supported.
So you might say that... any sufficiently complicated init system contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of systemd?
I don't know about half, but some part of it - definitely.
I'd also add, that there are some non-trivial requirements for good server daemon programs, like fork, detach from terminal, may be fork again, umask, chdir, may be close some descriptors, maintain PID file, output to syslog, drop privileges and so on. And a lot of those things are implemented in systemd, so basically you can just write very dumb server which will work properly under systemd. So some part of systemd have to be implemented in every server daemon program.
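As a sketch of that point: under systemd a "very dumb server" can skip all the classic daemonization chores entirely (names and paths below are invented):

```ini
[Service]
Type=simple                  # stay in the foreground; no double-fork, no PID file
ExecStart=/usr/local/bin/dumbserver
User=dumbserver              # privilege dropping handled by systemd, not in the daemon
UMask=0077
WorkingDirectory=/var/lib/dumbserver
StandardOutput=journal       # replaces hand-rolled syslog/detach-from-terminal code
Restart=on-failure
```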
systemd is heavily inspired by macOS' launchd.
More information at https://news.ycombinator.com/item?id=2565780
Not really, AIUI Poettering just thought launchd's socket activation/inetd-like functionality was neat: https://0pointer.de/blog/projects/systemd.html. Upstart is more of a direct ancestor:
> Why didn't you just add this to Upstart, why did you invent something new?
> Well, the point of the part about Upstart above was to show that the core design of Upstart is flawed, in our opinion. Starting completely from scratch suggests itself if the existing solution appears flawed in its core. However, note that we took a lot of inspiration from Upstart's code-base otherwise.
> If you love Apple launchd so much, why not adopt that?
> launchd is a great invention, but I am not convinced that it would fit well into Linux, nor that it is suitable for a system like Linux with its immense scalability and flexibility to numerous purposes and uses.
launchd is horrible though, the folks complaining about systemd would be up in arms if they had to write poorly typed XML key/value files
And Solaris SMF. There basically seem to be ~three generations of unix init:
1. Agglutination of shell scripts
2. "Oh wow, this is getting annoying"-phase: Wrappers for scripts (SRC, SMC, openrc, etc. pp.)
3. Service supervision daemons (SMF, launchd, systemd)
Probably. I don't think systemd is a mere "Service supervision daemon", but I'm not in the mood for a can of worms today.
Yeah, I'd probably call systemd something like "an event- and graph-based orchestrator."
Aside, I'm quite surprised that systemd won out, given all the hate for so many years. Nearly every forum and HN thread that even tangentially mentioned systemd got flooded with people proclaiming systemd to be the spawn of the devil "forced down their throats". At the same time I noticed that a lot of regular Linux users (like developers who just want to deploy stuff) seemed to like systemd unit files. I guess all those complainers, however vocal, were neither the majority, nor the people who actually maintained distributions.
This feeling is particularly striking for me, because I once worked on a Linux project with the aim of improving packaging and software distribution. We also got a lot of hate, mainly for not being .deb or .rpm, and it looked to me as if the hate was a large reason for the failure of the project.
Systemd won because, for all the deserved hate, it solves real problems that nobody else was really trying to solve (or their attempts were worse). There are some really good points to systemd - you just have to take the bad with it.
Systemd won because RedHat pushed it and made it a dependency of Gnome
My feeling, without evidence, is that a lot of the hate is vocal people who don't like this thing that does stuff differently to what they're used to. I get the feeling, I still understand /etc/init.d and runlevel symlinks and /var/log better than systemd, but that's because I have many more years of experience interacting with it, breaking it, fixing it. Whenever I have to do stuff with systemd it's a bit annoying to have to go learn new stuff, but when I do I find it reasonably straightforward. Just different, and most likely better.
A lot of the hate was because the main developer's interactions with just about everyone were horrible, the code was opaque at best most of the time and constantly changed, there was scope creep, and a feeling that it was a massive power grab by Red Hat.
It can be argued that it didn't solve very many problems and added a huge amount of complexity.
The problem with the complaints is that they're all made up. Well, except maybe developer interactions - assholes can be assholes.
systemd isn't opaque, it's open-source. systemd is objectively less opaque than init scripts, because it's very well documented. Init scripts are not.
Sure, you can read them. But then you'd realize that glued together init scripts just re-implement systemd but buggier and slower, at which point you might as well just read the systemd source. Or, better yet, the documentation.
systemd ALSO does not constantly change. The init system has been virtually untouched in a decade, save for bug fixes and a few new features. Your unit files will "just work", across many years and many distros. Yes, systemd is more portable than init scripts.
systemd ALSO does not have any scope creep. Here, people get confused between systemd-init, and systemd the project.
systemd-init is just an init system. Nothing more, nothing less, for a long time now, and forever. There is no scope creep, the unix principle is safe, yadda yadda yadda.
systemd coincidentally is also the name of a project which includes many binaries. All of those binaries are optional. They aren't statically linked, they're not even dynamically linked - they communicate over IPC like every other program on your computer.
systemd is also not complex, it's remarkably simple. systemd unit files are flat, declarative config files. Bash is a turing-complete scripting language with more footguns than C++. Which sounds more complex?
> systemd[-init] ALSO does not constantly change. The init system has been virtually untouched in a decade...
Sure, I'll bite. It'll be more interesting than watching some stupid Twitch streamer.
Gentoo Linux was using OpenRC back in 2002. Looking at the copyright notice in relevant source files, it looks like OpenRC is a project that has been under development since 1999, so I'd expect it was in use back in 1999. However, I will use 2002 as the start date for this discussion because that's when I started using it.
The simple OpenRC service file I mention in footnote 3 in [0] is syntactically identical to the syslogd service file added in to the OpenRC repo back in this commit in late 2007 [1]. The commit that appears to add support for 'command_args' and friends is earlier that day.
So, four years before SystemD's experimental release, the minimal OpenRC service file (that I talk about in [0]) was no more complicated than what would become the minimal SystemD service file no less than four years later. What's more, the more-verbose syntax for service files written in 2002 was supported by 2007's OpenRC, and continues to be supported by 2025's OpenRC.
23 years is quite a bit longer than 15.
> systemd is also not complex, it's remarkably simple. systemd unit files are flat, declarative config files.
See above (and below).
> Here, people get confused between systemd-init, and systemd the project.
In that case, it doesn't do you credit to use "systemd-init" and "systemd" interchangably in your commentary. SystemD absolutely has scope creep. systemd-init... well, I think I remember when it wasn't possible to have it rexecute itself for a no-reboot upgrade of PID 1. And does it still have a dependency on DBus, or did they see sense and get rid of that?
[0] <https://news.ycombinator.com/item?id=44945789>
[1] <https://github.com/OpenRC/openrc/commit/3ec2cc50261f37b76e0e...>
systemd, the project, isn't a binary. It's dozens of binaries. It can't have scope creep because that's impossible - they're literally dozens of separate things.
We call that the unix principle, lol.
Saying systemd has scope creep is like saying GNU has scope creep because they have a compiler and a text editor. Makes no fucking sense.
I also don't consider a dependency on dbus "scope creep". It has to communicate over IPC - okay, don't reinvent the wheel, just use dbus. Every program ever supports dbus if it has a public API over IPC. Sorry if that bothers you.
And sure, maybe OpenRC is just as simple as systemd, but the reality is every distro chose systemd and that's that, and for MOST of them they switched from primarily scripts to unit files.
That is a HUGE reduction in complexity. HUGE.
yes, especially the scope and power creep, which is antithetical to what unix was all about. Which was doing one thing, and doing it well. What started as a neat way to start servers in parallel, as systemd handles the sockets, now can control your home directory. Like what?
One of my earliest memories of using systemd involved logs being corrupted. journalctl would just say the file is corrupted but wouldn't even print the name of the file -- I had to resort to strace. That left a real bad taste in my mouth.
My systemd grumpiness was mostly due to only just (nearly) finishing an upstart migration. The thought of another one so soon after wasn't fun, even if I liked some of the new features. Those days are over though, and I'm glad there's a mostly unified approach.
> I guess all those complainers, however vocal, were neither the majority, nor the people who actually maintained distributions.
This matched my experience: there were a few vocal haters who were very loud but tended not to be professional sysadmins or shipping binaries to other people, and they didn’t have a realistic alternative. If you distributed or managed software, you had a single, robust solution for keeping daemons running with service accounts, restarts, dependencies, etc. for Windows NT circa 1993 and macOS in 2005 so Linux not having something comparable was just this ongoing source of paper cuts which caused some Linux shops to have unexpected, highly visible downtime (e.g. multiple times I saw data center outages where all of the Windows stuff and the properly configured Upstart/SystemD stuff come up after retrying but high-profile apps using SysV init stayed down for hours because the admins had to clean it up by hand).
Anyone who packaged software was also happy to stop supporting different combinations of buggy shell scripts and utilities, too – every RPM I built went from hundreds of lines of .sh to a couple dozen lines of much better systemd config. systemd certainly isn't perfect but if you had an actual job to do you were going to look at systemd as the best path to reduce that overhead.
Having the backing of RedHat certainly didn't hurt... I don't care either way, although I still think OpenRC style scripts are much easier to debug and sometimes have more elegant solutions (templates vs symbolic links).
I can't say I know what OpenRC really does, but at its core systemd is just a glorified script runner as well? You have ExecCondition, ExecStartPre, ExecStart, ExecStartPost, ExecReload, ExecStop, ExecStopPost which are all quite self-explanatory, alongside a billion optional parameters which I wouldn't know how to implement in scripts without endless hours of dread.
I remember an old Debian machine with /etc/init.d/something [start|stop|reload|restart] but I can't recall being able to automatically restart services or monitor status easily. (I didn't speak $shell well back then either)
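For what it's worth, those hooks look like this in a unit file (all paths and flags below are invented):

```ini
[Service]
ExecCondition=/usr/bin/test -e /etc/myapp/enabled   # fails => unit is skipped, not "failed"
ExecStartPre=/usr/bin/myapp --check-config
ExecStart=/usr/bin/myapp --foreground
ExecReload=/bin/kill -HUP $MAINPID
ExecStopPost=/usr/bin/myapp --cleanup
```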
systemd tries to avoid scripts as much as possible.
/etc/init.d/whatever were all shell scripts, and they all had to implement all the features themselves. So `/etc/init.d/foo restart` wasn't guaranteed to work for every script, because "restart" is something that each script handled individually. And maybe this one just didn't bother implementing it.
There's no good status monitoring in sysV because it's all convenience wrappers, not a coherent system.
I've been reading through how many NixOS modules use systemd units and there are a lot of scripts being executed; the final line execs the service (if there is one; NixOS uses systemd for maintenance tasks, refreshing certificates and many more things). While NixOS doesn't speak for the broader community, what I'm trying to say is that it can execute anything; whether that's a script or a daemon doesn't matter as long as it works for you.
Thanks for the sysV explanation, it sounds worse to me.
> /etc/init.d/whatever were all shell scripts, and they all had to implement all the features themselves.
A minimal SystemD service file and a minimal OpenRC service file are equally complex.
Here's the OpenRC service file for ntpd (which is not a minimal service file, but is pretty close):
'depend' handles service dependency declaration and start/stop ordering (obviously). 'start_pre' is a sanity check that could be removed, or reduced to calling an external script (just like -IIRC- systemd forces you to do). There are _pre and _post hooks for both start and stop.
For a service that has no dependencies on other services, backgrounds itself, and creates a pidfile automatically, the smallest OpenRC service file is four non-blank lines: the '#!/sbin/openrc-run' shebang followed by lines declaring 'pidfile', 'command', and 'command_args'. A program that runs only in the foreground adds one more line, which tells OpenRC to handle daemonizing the thing and writing its pidfile: 'command_background="true"'. See [3] for an example of one such service file.
If you want service supervision, it's as simple as adding 'supervisor=supervise-daemon', and ensuring that your program starts in the foreground. If it doesn't foreground itself automatically, then adding 'command_args_foreground=<Program Foregrounding Args>' will do the trick.
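Putting that description together, a minimal sketch (binary name and paths invented) might look like:

```sh
#!/sbin/openrc-run
# Foreground-only program; OpenRC backgrounds it and writes the pidfile.
command="/usr/local/bin/mydaemon"
command_args="--config /etc/mydaemon.conf"
command_background="true"
pidfile="/run/mydaemon.pid"
# For supervision instead, per the comment above:
# supervisor=supervise-daemon
```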
If you're interested in more information about OpenRC service file syntax, check out the guide for them at [0], and for a lot more information, the manual for openrc-run at [1]. For supervision, check out the supervision guide at [2].
[0] <https://github.com/OpenRC/openrc/blob/master/service-script-...>
[1] <https://man.uex.se/8/openrc-run>
[2] <https://github.com/OpenRC/openrc/blob/master/supervise-daemo...>
[3] The OpenRC service file for the 'cups-browsed' service (which is a program that does not daemonize itself) is this:
So on one hand, yes, that's a vast improvement over SysV.
On the other hand, no sir, I still don't like it. It looks very much like Bash. I'm not very fond of Bash to start with and it might not even be actual Bash? Can't tell from the manpage.
But scrolling down to the bottom of the manpage I see a pretty long sample script, and that's exactly what I want to see completely gone. I don't want to look at a 3-way merge of a service during an upgrade ever again and try to figure out what all that jank is doing. IMO if any of that shell scripting has any reason to be in a service file, it's a bug to be fixed.
My ideal is the simple systemd services: description, dependencies, one command to start, done. No jank with cleaning up temp files, or signals, or pid files (can they please die already), or any of that.
And one of the nice things about systemd services not being a script is that overrides are straightforward and there's never any diffs involved.
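For example, an administrator's drop-in only needs to state the deltas (unit name and values invented):

```ini
# /etc/systemd/system/foo.service.d/override.conf
# Created with `systemctl edit foo.service`; only the keys set here
# override the vendor unit - no three-way merge, no diff.
[Service]
Restart=always
MemoryMax=512M
```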
> ...at it's core systemd is just a glorified script runner as well?
Yep. And it has a ton of accidental complexity in it. [0] At $DAYJOB, we ran into a production-down incident related to inscrutable SystemD failures once a year. It was always the case that the documentation indicated that our configuration and usage was A-OK. If there ever was a bug report filed, it was always the case that the SystemD maintainers either said words to the effect of "Despite the fact that the docs say that should work, that's an unsupported use case." or "Wow. Weird. Yeah, I guess that behavior is wrong, and it's true that the docs don't warn you about that.", and then go on to do nothing.
SystemD is -IME- like (again, IME) PulseAudio and NetworkManager... it's really great until you hit a show-stopping bug, and then you're just turbofucked because the folks who built and maintain it want to treat it like it's a black box that works perfectly.
[0] NOTE: I am absolutely not opposed to complex things. I'm opposed to needlessly complex things, and very much opposed to things whose accidental complexity causes production issues, and the system's maintainers' reply to the bug report and minimal repro is "Wow, that's weird. I don't want to fix that. Maybe we should document that that doesn't work." and then go on to do absolutely nothing.
I use runit for some non-system-level stuff. It's extremely simple, possibly too simple. It doesn't manage load order - if a dependency hasn't loaded yet, you just exit the script and the service manager tries again in 2 seconds. Service scripts are just shell scripts.
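A sketch of that pattern (service names invented): the run script just bails out if a dependency isn't ready, and runit retries:

```sh
#!/bin/sh
# runit ./run script: if the dependency isn't up yet, exit;
# runsv will simply try again in a couple of seconds.
sv check postgres || exit 1
exec /usr/local/bin/myapp 2>&1
```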
There are two ways to design a system: so simple that it has obviously no bugs, and so complex that it has no obvious bugs.
That's because systemd knew who the target users of it were: people making distributions, and professional users with little desire to be woken up at 3 AM to troubleshoot a stuck PID file.
Most of the complainers weren't really relevant. They weren't making the decisions on what goes in a distro, and an init system is overall a fairly minor component most users don't use all that often anyway.
> This feeling is particularly striking for me, because I once worked on a Linux project with the aim of improving packaging and software distribution. We also got a lot of hate, mainly for not being .deb or .rpm, and it looked to me as if the hate was a large reason for the failure of the project.
I think that's a good deal trickier because packaging is something a Linux user does get involved with quite often, and packaging systems don't mix well. A RPM based distro with some additional packager grafted on top is a recipe for disaster.
Still, I think it's also a case of the same thing: sell it to the right people. Find people making new distros suffering problems with DEB and RPM and tell them your tool can save them a lot of pain. The users can come in later.
> Still, I think it's also a case of the same thing: sell it to the right people. Find people making new distros suffering problems with DEB and RPM and tell them your tool can save them a lot of pain.
To quote one of my favorite Clone Wars episodes: Fifty tried, fifty died [1].
There have been so, so many attempts at solving the "how to ship binary builds for Linux" question... both deb and rpm have their good and their bad, and on top of that you got `alien`, flatpak, Docker images, the sledgehammer aka shipping everything as a fully static binary (e.g. UT2004 did this) or outright banning prebuilt binaries (the Gentoo and buildroot way). But that's not the actual problem that needs solving.
The actual problem is dependency hell. You might be lucky to be able to transplant a Debian deb into an Ubuntu installation and vice versa, or a SLES rpm to RHEL, but only if the host-side shared libraries that the package depends on are compatible enough on a binary level with what the package expects.
That suddenly drives up the complexity requirements for shipping software even for a single Linux distribution massively. In contrast to Windows, where Microsoft still invests significant financial resources into API-side backwards compatibility, this is not a thing in any Linux distribution. Even if you're focusing just on Debian and Ubuntu, you have to compile your software at least four different times (one each for Debian Stable, Debian Testing, Ubuntu <current rolling release> and Ubuntu <current LTS>), simply because of different versions of dependencies. Oh and in the worst case you might need different codepaths to account for API changes between these different dependency versions.
And even if you had some sort of DSL that generated the respective package manager control files to build packages for the most common combinations of package manager, distributions and actively supported releases of these, there's so, so much work involved in setting up and maintaining the repositories. Add in actually submitting your packages to upstream (which is only possible for reasonably-ish open source packages in the first place), and the process becomes even more of a nightmare.
And that's all before digging into the topics of autotools, vendoring (hello nodejs/php/python ecosystems), digital signature keyrings, desktop manager ecosystems and god knows what else. Oh, and distribution bureaucracy is even more of a nightmare... because you now have to deal with quirks in other people's software too, and in the worst case with a time span of many years of your own releases plus the distribution release cadence!
Shipping software that's not fully OSS on Linux sucks, shipping closed source software for Linux sucks even more. Windows has had that sort of developer experience figured out from day one. Even if you didn't want to pirate or pay up for InstallShield, it was and is trivial to just write an executable, compile it and it will run everywhere.
[1] https://starwars.fandom.com/wiki/Mystery_of_a_Thousand_Moons
IMO, binary compatibility on Linux isn't really solvable. There's just a thousand tiny projects that make up the Linux base that aren't on the same page, and that's not about to change.
I do think packaging can be improved. I hate almost everything about how dpkg works, it's amazing. So I'm squarely in the RPM camp because I find the tooling a lot more tolerable, but still surely further improvements can be made.
Anyway, the ecosystem stays healthy because of code contributions. So what’s the point of binary compatibility (from the point of view of the people actually making Linux work: Open Source developers and repo maintainers)?
> So what’s the point of binary compatibility (from the point of view of the people actually making Linux work: Open Source developers and repo maintainers)?
Want to see Linux on the desktop actually happen? Then allow a hassle-free way for commercial software that is not "pray that WINE works well enough" aka using win32 as an ABI layer.
Of course we can stay on our high horses and demand that everything be open source and that life for closed source developers be made as difficult as possible (the Linux kernel is particularly and egregiously bad in that perspective), but then we don't get to whine about why Linux on the desktop hasn't fucking happened yet.
I don’t really know what the point of this “Linux on the desktop” event would be, or even what it is. (Clearly it isn’t just Linux on desktops, because that’s been working fine forever).
The whole point of my comment was to keep in mind the incentives of different sub-groups. If “Linux on the desktop” doesn’t benefit the people that make Linux work, I don’t see what the big deal is.
> I don’t really know what the point of this “Linux on the desktop” event would be, or even what it is.
Getting Linux adopted in F500 companies as the default desktop OS. That is the actual litmus test, because (large) companies need an OS that can be centrally managed with ease, doesn't generate a flood of DPU (Dumbest Possible User) support demand and can run the proprietary software that's vital to the company's needs in addition to the various spyware required by cybersecurity insurances and auditors these days.
At the moment, Linux just Is Not There. Windows has GPOs and AD (that, in addition, ties into Office 365 perfectly fine), Mac has JAMF and a few other MDM solutions. Many a corporate software doesn't even run properly under WINE (not surprising, the focus of Proton and, by it, WINE is gaming), there's a myriad ways of doing central management, and good luck trying to re-educate employees that have been at the company so long they grew roots into their chairs.
I only started packaging relatively recently. Using OBS definitely made things easier, but it's crazy how much nicer RPM is than dpkg. So much better to have more-or-less everything inside a spec file with macros, versus dpgk's mess of static, purpose-specific files.
> ...or outright banning prebuilt binaries (the Gentoo and buildroot way).
You, uh, haven't used Gentoo in like twenty years, have you? You've been able to host your own prebuilt binaries (or use the prebuilts of others who bothered sharing them) for as long as I can remember (FWIW, I started using Gentoo in 2002 or 2004). The Gentoo folks decided to set up official binary package servers at the end of 2023 (look at the Dec 29, 2023 news item on the Gentoo home page for more info).
systemd had the momentum to unify nearly the entire init process across distributions, and it did, but without aligning with the traditional Unix approach; the later versions confirm that, absorbing more and more from the underlying Linux distribution.
A broad look across the BSD ecosystem suggests this wasn’t a good way to go. I still want to see a good alternative from that point of view…
> You got nearly every forum and HN thread that even tangently mention systemd, flooded with people proclaiming systemd to be the spawn of the devil "forced down their throat".
There's a large cohort of Linux users whose entire personality is "I'M A CoNtRaRiAn!" and who argued against systemd because Red Hat was pushing it. Reddit was filled with a loud minority of such anti-systemd trolls. Pressed for reasons for their disdain, you'd get nonsensical or baseless replies. The best ones were known bugs that had been closed for months.
It would be cool to have a repo with suggested hardening for common services, since there's so many different hardening options. One of the things you might notice from lots of users using common suggestions, is that the permissions often need to be opened up more than you'd think, to support edge cases.
> It would be cool to have a repo with suggested hardening for common services
From packaging stuff for nixpkgs, a distro that often is without upstream support, I've found it usually very useful to look at how mainstream distros package services.
Those hardening steps also tend to be well tested, even if sometimes a bit lax. If you want to find out how, e.g., postgresql can be hardened, consider looking at the Debian, Ubuntu and/or RHEL packages as a starting point.
Distros don't usually do security hardening, unless the distro is security-specific. They slap something generic on like AppArmor or SELinux and call it a day. (This article is the proof of that... all the default services are not hardened). Usually this is a good thing, as it prioritizes usability, and lets the user harden as they wish.
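To make the discussion concrete, a hand-written hardening drop-in of the kind being discussed might look like this (the directive names are all real systemd options; the path and the specific values are illustrative and would need testing against the actual service):

```
# /etc/systemd/system/mydaemon.service.d/hardening.conf  (hypothetical path)
[Service]
NoNewPrivileges=yes
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes
PrivateDevices=yes
ProtectKernelTunables=yes
ProtectControlGroups=yes
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
SystemCallFilter=@system-service
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
```

Because it's a drop-in, the distro's unit stays untouched and survives package upgrades, which is exactly why this is feasible for end users at all.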
Another great security feature systemd provides is credential management[1], which allows you to expose credentials to an application in a more secure way than, say, an environment variable or a file in the filesystem.
When Vault is not available, if I’m working on a side project, for example, that’s what I always go for. Even wrote a small Go package[2] to get said credentials when your application is running inside a service with that feature.
[1]: https://systemd.io/CREDENTIALS/
[2]: https://sr.ht/~jamesponddotco/credential-go/
I see we've decided to take the path of nodejs and npm, where 2-liners deserve their own package.
That's less complexity than left-pad. I thought the Go culture was that dependencies are bad, and abstractions (i.e. any function calls) are confusing and bad, so it's better to just inline stuff like this and write it fresh each time you need it.
I was writing the same few lines for every project, so why not make it its own package?
For my projects I can just include the dependency, as I wrote it and don’t mind using it. Other people can copy it instead, since the proverb goes “a little copying is better than a little dependency”.
Use rust or typescript or something where it's socially acceptable to make small packages.
That'll also let you avoid writing the `if err != nil { return fmt.Errorf("context string something: %w", err) }` boilerplate again and again too (since you can just write '.context("context")?' each time).
If you're using Go, you're not supposed to build abstractions, small packages, or any sorta clever or good code. And be really careful using generics.
If you want to write abstractions, you're supposed to use a different language. Those are the rules.
Editor macros/snippets?
Didn’t think of that ¯\_(ツ)_/¯
For what it's worth this will not work properly, you have different environment variables for user and system units. Proper error handling and graceful fallback for these cases are probably worth a module here (though it could be a 10-20 liner instead too).
> you have different environment variables for user and system units
Really? I thought the point of the environment variable was it was the same, and the directory it pointed to differed depending on the service type.
I'd love a reference since at least for every systemd version I've used, you're wrong.
> Proper error handling and graceful fallback
That's application specific, so it can't really fit in a generic library well.
> I'd love a reference since at least for every systemd version I've used, you're wrong.
I must've had it confused with StateDirectory[0]. Thank you for pointing my mistake out. That does make the library a bit less useful.
[0]: https://www.freedesktop.org/software/systemd/man/latest/syst... Table 2
StateDirectory looks like it's also always the environment variable 'STATE_DIRECTORY', regardless whether it's a user or system unit, right?
works fine with both `--system` and `--user`, so seems like that's the same too.
systemd's method does not require the service process to have access to the original creds file, or even the filesystem that contains it. In fact the original credentials file might even be an encrypted file, and the key (and/or hardware) to decrypt it does not need to be accessible to the service process.
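For context, the unit-file side of that looks roughly like this (`LoadCredential=` and `LoadCredentialEncrypted=` are real directives; the paths and credential IDs are illustrative):

```
[Service]
# Plain file: the service sees it as $CREDENTIALS_DIRECTORY/db-password,
# without needing read access to the original path.
LoadCredential=db-password:/etc/secrets/db-password

# Encrypted blob: decrypted by systemd (optionally bound to the TPM2)
# before being handed to the service.
LoadCredentialEncrypted=api-key:/etc/secrets/api-key.cred

ExecStart=/usr/bin/myapp
```

The decryption key never has to be visible to the service, which is the property being described above.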
You completely misunderstand.
The library the person linked is to deal with systemd credentials in go.
The two lines of code I wrote are the same, and in fact are effectively 100% of the code in the library.
Oh I see.
... how exactly does it prevent inheritance to forked children?
Great article. I really appreciate the list of properties and the "check the man pages, good luck" advice. systemd is really a great piece of software I would enjoy deploying on my hosts!
Nitpick and title correction: The proper spelling of systemd is systemd, not SystemD. According to their brand page:
Yes, it is written systemd, not system D or System D, or even SystemD. And it isn't system d either. Why? Because it's a system daemon, and under Unix/Linux those are in lower case, and get suffixed with a lower case d. And since systemd manages the system, it's called systemd.
Interesting! I’ve mostly seen it written as systemD. I wonder why that seems to be so popular…
Many of the more juvenile systemd haters used to deliberately spell it that way.
As an insult, it was rather less successful than the "Micro$oft" / "Slowaris" / "HP-SUX" mockery from the 1990s - but it did manage to sow enough confusion that it still pops up regularly today, even in contexts that are otherwise neutral or positive about it.
That’s funny, I’m not even sure what the D is supposed to expand to, in that insult. What a silly and lame thing to use as an insult.
I’ve been using it because having some random letter capitalized seems totally unsurprising for this sort of plumbing software. (And by plumbing, I mean: very useful and helpful boring stuff that deals with messy problems that I’m happy not to care about, just to be clear that I mean it positively, haha).
Nice tip on debugging syscall issues!
color me surprised to see an article mostly constructive and positive about systemd using the wrong capitalization of the project name.
Normally the rule is that people mis-capitalizing the name are usually critical of the project.
It's systemd, not SystemD
I'd hate to restrict Docker like that - depending on what you run inside of Docker, it would be very hard to narrow it down to the right security tuning settings. In that case, it's actually safer and more predictable to run it in systemd (arguably).
What would be so hard about it? Also, this is not docker--it's podman. Which has a much simpler execution model than Docker. With it, it shouldn't be any harder to narrow down what the problem is, compared to running a non-containerized service.
So, no more DEFAULT-DENY for SystemD?
these Hardening variables have been discussed some years back[1].
this will not take off I'm afraid, because locking these unitfiles down is offloaded to the end-user (I've yet to see maintainers embrace shipping locked down files). Maybe they will? But this same approach hasn't worked with apparmor so why should it work with systemd? Who will do the job?
Consider that apparmor maintainers in many cases provide skeleton templates that just make the parser stop complaining. ("look, I have a profile so apparmor shuts up, but don't take too close a look, OK")
Then there is firejail, which some argue[2] is snake-oil considering the high level of administrative glue compared to its massive attack-surface (also it's a setuid binary).
I didn't mention SElinux since I don't know a single person who had the joy (or pain depending on perspective) of working with it. But again, seems the expectation to implement security with it is shifted to the user.
[1] https://news.ycombinator.com/item?id=22993304
[2] https://github.com/netblue30/firejail/issues/3046
> this will not take off I'm afraid, because locking these unitfiles down is offloaded to the end-user (I've yet to see maintainers embrace shipping locked down files).
https://fedoraproject.org/wiki/Changes/SystemdSecurityHarden...
It says the change was dropped? I guess at least they tried.
> Change is dropped because of inactivity. Owner is welcome to resubmit if the work is picked up again.
thanks for the link, this is great news.
I vaguely recall looking at the slides from a talk on OpenBSD's approach to this topic, which came down to (paraphrasing from hazy memory) "if it can be disabled, people will disable it; if it needs to be configured, people won't configure it".
> this will not take off I'm afraid, because locking these unitfiles down is offloaded to the end-user
Maybe your point is that this isn't done by the vendor in practice. And I'm sure there's room for lots of improvement. However, one of the great things about how systemd units can be provided by the vendor and seamlessly tweaked by the administrator is that the vendor (i.e. packager and/or distro) can set these up easily.
There definitely are packages that ship with locked-down files. Tor and powerdns (pdns) are two off the top of my head.
I think it should be done by the maintainer of the software not by the distro. My concern is that these features are available since at least 5 years and it has not yet caught on (regardless of what this blog article recommends).
It would be great to see it implemented but for now at least on Debian/sid the situation is as follows:
> I think it should be done by the maintainer of the software not by the distro
Why would you say that? I would agree that the developer likely has better insight into what the software needs. But the security boundary exists at the interface of the application and the system, so I think that both application devs and system devs (i.e. distros) have something to contribute here.
And because systemd allows for composition of these settings, it doesn't have to be a one-or-the other situation--a distro can do some basic locking down (e.g. limiting SUID, DynamicUser, etc.), and then the application dev can do syscall filtering.
In any case, I agree that I'd like to see things get even more locked down. But it's worth remembering that, before systemd, there was basically no easy-to-use least-privilege stuff available beyond Unix users and filesystem permissions. The closest you had (afaik) was apparmor and selinux. In both of those cases, the distro basically had to do all the work to create the security policy.
Also, n.b., that pdns.service I noted is provided by PowerDNS themselves.
imho adding a few lines to the systemd unit file is waaaaaay easier than apparmor
and really, these should be written by the developers, not distro maintainers
poking around on my Ubuntu machine, a few daemons have some hardening, chronyd looks pretty good
It would be nice to be possible to do the hardening of services via allowlisting instead. E.g. AllowNothing=true and then start adding what is allowed to make the service function.
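You can get partway to that allowlist model today by pairing systemd's deny-all directives with explicit allows (these are real directives; the specific values are illustrative):

```
[Service]
# Network: deny everything, then allow only what's needed
IPAddressDeny=any
IPAddressAllow=localhost 10.0.0.0/8

# Syscalls: listing groups turns the filter into an allowlist --
# anything not listed is rejected with the given errno
SystemCallFilter=@system-service
SystemCallErrorNumber=EPERM

# Capabilities: empty bounding set unless explicitly added back
CapabilityBoundingSet=
```

There's no single `AllowNothing=` switch, though, so you still have to know which deny-all knobs exist for each resource class.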
I think that pledge[0] offers that functionality
[0] https://github.com/jart/pledge
The unreasonable effectiveness of writing a security scanner. People will do anything it takes to make the scanner give a perfect score, regardless of whether it makes sense.
Quoting https://brand.systemd.io/#:~:text=Yes,%20it%20is%20written%2...
"Yes, it is written systemd, not system D or System D, or even SystemD. And it isn't system d either. [...]"
At some point, if you have to write articles about the proper spelling of the name, maybe you should just accept the alternative names as well. Also looking at you Datadog and Cloudflare. (The employees of the second one are especially allergic to CloudFlare for some reason)
We tried very hard to convert FastMail to Fastmail and... it's been about 90% successful but there's definitely a bunch of things out there spelled the old way. We just joke about BIG M occasionally.
[TIL - it's not even as old as me!] https://australianfoodtimeline.com.au/1978-launch-of-big-m/
And I tried telling people "news-teller" (or "news teller") is a better word than "herald", and I got told off, so I dunno.
By the way, I love that the OP used the Text Fragment (Scroll-to-Text Fragment) feature. I hope it is going to catch on more, quite helpful / useful.
I guess you could say the same about pronouns.
Anyway, why would someone use the spelling with upper D beats me. It's not proper English.
> I guess you could say the same about pronouns.
Entirely different thing. Software/things are not the same as people.
> why would someone use the spelling with upper D
SystemV, Zyklon B, Vampire hunter D, Plan B, Model T, Type O, ... It's extremely common.
[dead]
this is ... very low quality, and very low density. why did someone feel it was useful to post to HN?
Could you please expand on the "very low quality" claim? What's missing for you here? How would you raise a bar on this piece to make it more of a high quality, security focused content? Genuine question.
The security issues with Linux will only be solved once lennart pulls the entire userland into the systemd repo
Best systemd hardening is switching to OpenRC or runit
Do you have any references for doing similar system hardening under either of those?
[dead]
No, switching to Qubes OS is the real hardening.
also true
An unbootable system is indeed harder to exploit!
/s
Why would OpenRC or runit be any less likely to boot?
My response was a joke to a low-effort comment, but in general - systemd is complex because it solves the complex problem of booting up a system, complete with error handling, logging, etc. Many of the alternatives simply ignore part of the problem space, making the simple case simpler, but the complex case impossible.