>there's far less emphasis on creating native distribution packages for third-party software in 2025. Flatpaks, Snaps, and AppImage packages seem more popular with desktop-application developers these days. A lot of server-side software is now expected to be deployed as a container—or a group of containers run in Kubernetes—rather than installed as a package.
Are CLI tools or low-level, privileged software (e.g. anything that requires root) also distributed using flatpak or snap these days?
Ubuntu distributes some system daemons and even the kernel image as Snaps. The Ubuntu Server interactive installer nags you to look at a list of server software (such as nginx) to install using Snap.
Flatpak hasn't really taken the same path; it doesn't have much utility for anything other than desktop apps.
There is toolbox, see https://github.com/containers/toolbox.
Toolbox and Distrobox are not based on Flatpak, though. They're more just nicely packaged docker-like container tools, targeted for development use cases.
There's an awful lot of back and forth in the comments there over whether it should be a specification that defines its requirements in terms of whatever systemd programs happen to do, or whether it should be a specification with its own concrete basis that systemd is held to like everything else.
It should be neither. It should be a set of rules that most people can agree on. If some of that is what systemd does, that's fine. If there are things that systemd does that most people don't agree on, something else should end up in the standard, and systemd should conform to it.
The problem is that systemd evokes some pretty visceral negative reactions in people. I still have mixed feelings about it, but, by and large, I encounter minimal real-world issues with it. Just because systemd has decided to do something that violates the older FHS3/4 standards, doesn't automatically make it a bad thing. Maybe what they're doing is a better way. Maybe not.
The irony is that systemd doesn't really follow what it prescribes in file-hierarchy(7), and expects some files in the "wrong" place. Other software has (obviously) followed suit, so now we're in a world where software follows the conventions that systemd _implements_ to maintain compatibility, rather than what it _documents_.
The most obvious example that comes to mind is /usr/lib/os-release, which file-hierarchy(7) indicates should actually be in /usr/share/os-release.
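This is easy to verify on a running system. A minimal check (the symlink target varies by distribution, so the exact output is not guaranteed):

```shell
# On many distributions /etc/os-release is a symlink into /usr/lib,
# not /usr/share as a literal reading of file-hierarchy(7) would suggest:
ls -l /etc/os-release
readlink -f /etc/os-release   # resolves the symlink, if it is one
```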
A little surprised that the linked systemd file-hierarchy(7) manpage makes no mention of /opt
You won't find it in the hier(7) manual pages on BSDs, either.
* https://man.openbsd.org/hier
* https://man.netbsd.org/hier.7
* https://man.freebsd.org/cgi/man.cgi?query=hier&sektion=7
* https://man.dragonflybsd.org/?command=hier&section=7
There was a long time when the Linux world held /opt in disfavour, because officially it required either a stock market ticker name or some other corporate identity to make a subdirectory legitimate. You can still see traces of this in the Solaris descendant operating systems, where pkginfo(5) talks about package names using corporate stock ticker names.
* https://illumos.org/man/5/pkginfo
/opt/SUNW* used to be a very familiar thing to a lot of people.
Maybe enough time has passed for anti-corporate memory to fade. Maybe there's enough corporate backing in the Linux world now to resurrect the idea regardless.
Maybe /opt/RHT* is the shape of things to come. (-:
I've never over the years seen the systemd people advocate for /opt, though.
/opt/ is just a dumping ground for crap nobody can find a better location for
Where else would ./install-3rdparty-software.sh write to? Should it spray files all over /usr?
It shouldn't do anything until the user has told it where the files should end up. It's an unpackaged program, there is no sane place to put it that doesn't have a high chance of conflicting with something else.
That's only due to a lack of standardization. I think a default install to a vendor-specific directory under /opt is a sane place to put it, and there's a very low chance that would conflict with something else.
But sure, absolutely, an installer should prompt the user for an install location, and I think a default under /opt is probably among the best defaults possible, if we consider installing outside $HOME to be reasonable.
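That prompt-with-a-default is a few lines of portable sh. A minimal sketch, where "examplecorp" and "mytool" are made-up vendor/package names for illustration:

```shell
#!/bin/sh
# Hypothetical installer sketch: ask for an install prefix, defaulting
# to a vendor-specific directory under /opt ("examplecorp"/"mytool" are
# placeholder names).
default_prefix="/opt/examplecorp/mytool"
printf 'Install prefix [%s]: ' "$default_prefix"
read -r prefix || true                   # EOF on stdin -> empty answer
prefix="${prefix:-$default_prefix}"      # empty answer -> take the default
echo "would install into $prefix/bin and $prefix/share"
```

Nothing here writes outside the chosen prefix, which is most of what "don't spray files all over /usr" asks for.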
Honestly, there should be no install-bs.sh at all; you just bind everything into the file tree as needed. At least that is how it works on Plan 9, which simplifies a lot of things, like the path, which is just '/bin'.
/opt is used for manually-installed software. Packages should never place files there, so it falls out-of-scope for file-hierarchy(7) or hier(7).
Honestly? We need a successor not to the FHS but to filesystems themselves, which are intimately tied to package managers and installers. ZFS timidly started the change, with IPS (the Image Packaging System) and BEs (Boot Environments, as ZFS clones), and we need to go far beyond that instead of wasting resources keeping an '80s model alive, as projects from btrfs to Stratis do.
We need:
- query-able storage, because search-and-narrow is now the default way of accessing information, and collecting/transcluding data is the way to go;
- easy storage management; the "rampant layer violation" of ZFS is something we really need;
- integration of such storage into the software stack, from the OS down to individual packages; it's nonsense to have to "spread" archives across a taxonomy to deploy them, or to download and unpack archives for updates, when we have sendable filesystems (zfs send of snapshots) and binary diffs (from one snapshot-tagged version of a fs-package to another, sent over the internet).
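The last point maps onto stock ZFS commands today. A sketch of a filesystem-level package update, with made-up pool/dataset names ("tank/pkg/nginx", host "mirror"); this is an admin recipe that obviously needs a real ZFS pool to run:

```shell
# One ZFS dataset per package, one snapshot per version tag.
zfs snapshot tank/pkg/nginx@1.26
zfs snapshot tank/pkg/nginx@1.27
# Ship only the binary delta between the two versions over the network:
zfs send -i tank/pkg/nginx@1.26 tank/pkg/nginx@1.27 | \
    ssh mirror zfs receive pool/pkg/nginx
```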
Unfortunately, we need operations people working together with devs, and nowadays operations has nearly disappeared. Devs alone can't understand what we need; they can't see beyond their desktops in numbers large enough to drive a positive evolution.
Who is "we?" I certainly don't need those things. If you need to add a bunch of complexity for your use case then feel free, but for most of us it's unnecessary.
> - easy storage management, the "rampant layer violation" of zfs we really need;
Except that with ZFS you have to think about whether you really want that device in that pool or that vdev. I use btrfs, slow and kinda unsafe, specifically because you just specify raid1c2/raid1c3/raid1c4 and it more or less survives c-1 dead disks (until you run out of disk space and everything goes up in flames).
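For comparison, the btrfs redundancy profile is just a flag, with no up-front vdev planning; device names below are placeholders, and the commands need real block devices:

```shell
# Three-copy redundancy for both data and metadata at mkfs time:
mkfs.btrfs -d raid1c3 -m raid1c3 /dev/sdb /dev/sdc /dev/sdd
# Or convert an existing, mounted filesystem to the same profile online:
btrfs balance start -dconvert=raid1c3 -mconvert=raid1c3 /mnt
```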
> - integration of such storage to the software stack, from the OS to single packages, it's a nonsense having to "spread" archives in a taxonomy to deploy them or downloading archives to be unpacked as well for updates when we have send-able filesystems (zfs send of snapshots) and binary diff (from a snapshot "tagged version" of a fs-package to another, sent over internet).
We (kinda, for some very generous definitions of "have") have that in composefs? But I suspect that even then, you still want some semblance of sanity in your individual layers.
> Last year, postmarketOS core developer Pablo Correa Gomez and a few others started an effort to move the FHS work under the freedesktop.org banner and create 4.0 of the standard
No. freedesktop.org is the place where standards go to die and CADT development takes place.