kev009 3 days ago

I use Linux, FreeBSD, NetBSD, and OpenBSD, all for fun, learning, and (for the first two) profit.

At the very least, it is nice to get acquainted with at least one BSD, because it will probably expand your knowledge of Linux in ways you won't be able to anticipate.

For example, FreeBSD got me into kernel development, full system debugging, network stack development, driver development, and understanding how the whole kit fits together. Those skills transferred back and forth to Linux with reasonable fidelity, and for me, jumping into Linux development cold would have been too big a leap, especially in confidence and in developing a mental model.

For my personal infrastructure, I tend to use FreeBSD because in many ways it is simpler and less surprising, especially when accounting for the passage of time. ifconfig is still ifconfig, and it works great. rc.d is all I need for my own stuff. I like the systematic effects of things like tunables and sysctl for managing hardware and kernel configuration. The man pages are forever useful to new and old users. The kernel APIs and userland APIs are extremely stable akin to commercial operating systems and unlike Linux.
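
For instance (the specific knobs and values here are just illustrative, not recommendations):

    # runtime kernel settings go through sysctl(8) and persist in /etc/sysctl.conf
    sysctl kern.ipc.somaxconn=1024
    echo 'kern.ipc.somaxconn=1024' >> /etc/sysctl.conf

    # boot-time tunables live in /boot/loader.conf, e.g. capping the ZFS ARC at 4 GiB
    echo 'vfs.zfs.arc_max="4294967296"' >> /boot/loader.conf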

There are warts. There are community frictions. The desktop story and some developer experiences will be perpetually behind Linux due to the size of the contributor base and user base. The job market for BSD is very limited compared to Linux. But I don't think it's an all or nothing affair, and ideally in a high stakes operation you would dual stack for availability and zero-day mitigation (Verisign once gave a great talk on this).

  • graemep 3 days ago

    > tend to use FreeBSD because in many ways it is simpler and less surprising, especially when accounting for the passage of time. ifconfig is still ifconfig, and it works great. rc.d is all I need for my own stuff.

    That sounds very appealing to me. I have to keep a small number of servers running, but its not my main focus and I would like to spend as little time on it as possible.

    I have started using Alpine Linux for servers (not for my desktop, yet) because it is light and simple. Maybe BSD will be the next step.

    • nucleardog 3 days ago

      The UI is generally quite stable and well documented, which is awesome.

      They also have things like `rpm` in ports that you can install. Why? Because you can enable linux binary compatibility[0] and run linux binaries on it (this implements the linux kernel interface, it's not a VM/emulator). It's also backward compatible with its own binaries back to FreeBSD 4 (circa 2000).
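
      Enabling it is roughly this (from memory, so check the handbook link below; the Linux userland package name varies by release):

          sysrc linux_enable="YES"       # load the Linux compat layer at boot
          service linux start            # load the module(s) and mount linprocfs/linsysfs now
          pkg install linux_base-c7      # a minimal Linux userland from packages (name may differ)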

      Though you may not need that as the ports/packages collection is pretty comprehensive.

      It also comes with some nifty isolation tools built in (similar to, but predating, cgroups/containers) called "jails", and it has a hypervisor built in (bhyve) for virtualization if you do need to run any Linux VMs or anything for any reason.
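
      Day-to-day jail management is pretty terse too; a rough sketch, assuming a jail named "web" is already defined in /etc/jail.conf:

          sysrc jail_enable="YES"   # start configured jails at boot
          service jail start web    # start the "web" jail now
          jls                       # list running jails
          jexec web sh              # get a shell inside it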

      The way I usually sum up the difference to people is that FreeBSD is designed while Linux is grown. FreeBSD feels much more like a cohesive whole than Linux.

      Really, the only reason I'm not running it everywhere is that the industry has kind of settled on linux-style containers for... absolutely everything, and the current solution for that on FreeBSD is basically "run linux in a VM".

      [0] https://docs.freebsd.org/en/books/handbook/linuxemu/

      • ornornor 2 days ago

        > Really, the only reason I'm not running it everywhere is that the industry has kind of settled on linux-style containers for... absolutely everything, and the current solution for that on FreeBSD is basically "run linux in a VM".

        That, and the wifi situation on laptops.

      • graemep 2 days ago

        Thanks. I have been thinking about it for a while, but have never made the leap to using it. Mostly I am running pretty small and simple servers, and do not really need Linux style containers.

        • kev009 2 days ago

          That is the sweet spot for any of the BSDs. FreeBSD has the most of pretty much everything so it's my usual recommendation, but you could probably get along with Net and Open too which have their own charm.

    • barkingcat 2 days ago

      I've run the same FreeBSD system for the last 15 years, just hosting a small site for some of my friends from back in university.

      I've migrated it through system upgrades and security fixes, but nothing else needed to change. Usual uptime is about 3 years between major release updates.

      FreeBSD is an awesome server platform.

  • cientifico 3 days ago

    I completely agree! If you're looking to deeply understand how Linux works under the hood, I highly recommend trying out Linux From Scratch. It gave me invaluable insight into the system, especially when I first explored it 20 years ago. Building everything from the ground up—without relying on prepackaged distros or libc—was a game changer.

    Check it out: https://www.linuxfromscratch.org/

    • hggigg 3 days ago

      I ran through that about 10 years ago. Not difficult for a reasonably seasoned Unixy person to run through but it quite frankly scared the shit out of me on the basis of how much stuff requires patching and hacking. Nothing fits together nicely. It gave me a whole new appreciation of macOS and FreeBSD.

  • torstenvl 3 days ago

    Yeah, you know, a lot of us talk about FreeBSD being great because you aren't surprised by design choices—it mostly just makes sense.

    But I don't think we talk enough about the joy of not being surprised by updates. I'm about to do an upgrade from 13.2 to 14.1 this weekend and I am very confident that I won't have to worry about configuring a new audio subsystem, changing all my startup scripts to work with a new service manager paradigm, or half my programs suddenly being slow because they're some new quasi-container package format.
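
    For reference, that whole major upgrade is basically the usual freebsd-update dance (steps from memory; check the release notes first):

        freebsd-update -r 14.1-RELEASE upgrade   # fetch and merge the new release
        freebsd-update install                   # install the new kernel
        shutdown -r now
        freebsd-update install                   # after the reboot, install the new userland
        pkg upgrade                              # refresh packages against the new ABI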

  • ab71e5 3 days ago

    What sort of projects did you do to get into network stack and driver development on BSD?

    • kev009 2 days ago

      I worked at a large service provider and had to fix bugs and improve performance in TCP/IP and Intel Ethernet drivers. I ended up really enjoying it so it became a hobby and consulting gig as well.

      • ab71e5 2 days ago

        Awesome, you are kind of describing my dream job. Any particular hobby projects you are willing to share? I'm working on something myself trying to use eBPF/XDP offloading for a particular protocol

        • kev009 2 days ago

          Lately I have been enjoying NetBSD on an EdgeRouter-4 (Octeon MIPS). I am trying to port some of the OpenBSD Octeon improvements, like the MMC driver and maybe SMP support. On FreeBSD I have been pushing some changes to the 1, 2.5, and 10 Gbit Intel drivers that will decrease driver overhead by a few percent.

    • tcmart14 2 days ago

      Just commenting to express my interest also in the answer to this question.

  • v1ne 2 days ago

    I also still use FreeBSD on my NAS. But after many years, the desktop experience was pretty sad and made me switch to Windows + Linux for my hardware tinkering. On one hand, the lack of manpower shows in many places, unfortunately: I'm talking modern WiFi, GPU support, or power-save mechanisms. On the other hand, many open source projects only support Linux, and getting them to compile and run on FreeBSD was a pain, too.

    I mean, in addition to what kev009 mentioned, FreeBSD has so many great things to offer: For example, a full-featured "ifconfig" instead of ip + ethtool + iwconfig. Or consistent file-system snapshots since like forever on UFS (and ZFS, of course). I never understood how people in a commercial setup could run filesystem-level backups on a machine without that, like on Linux with ext4. It's just asking for trouble.
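
    For example, a consistent backup on FreeBSD is just this (dataset and paths illustrative):

        # ZFS: snapshot first, then back up from the snapshot's frozen view
        zfs snapshot tank/data@backup-2024-10-01
        tar -cf /backup/data.tar -C /tank/data/.zfs/snapshot/backup-2024-10-01 .

        # UFS: dump -L takes a snapshot of a live filesystem before dumping it
        dump -0 -L -a -f /backup/usr.dump /usr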

    So, I'm happy to see this thread about FreeBSD here! Maybe we can make the Open Source scene a bit more diverse again with regards to operating systems…

  • knowitnone 2 days ago

    I see the reason for dual stack but I would rather focus my efforts into securing one OS. If you buy into that, why not 3 or 4 different OSes?

    • kev009 2 days ago

      It's an engineering tradeoff and market adaptation.

      If you are a typical SaaS provider, the complexity may be beyond your capabilities and budget; the analogy is a local delivery business choosing to build a long-term relationship with a single vehicle manufacturer and dealer.

      If you are a high-stakes service provider, you need to start thinking about how to get out from under a single vendor and market flux and plot your own destiny; the analogy is a national shipping carrier sourcing vehicles from multiple manufacturers and developing long-term relationships with them to refine the platform.

    • hansvm 2 days ago

      3-4 would be great. What would you pick for the rest in this day and age though?

      • gspencley 2 days ago

        I'm still waiting for BeOS to make a come back

        • timeon 2 days ago

          What is the situation with Haiku now?

    • eklavya 2 days ago

      Because tradeoffs and diminishing returns are a thing? For example, having one backup is better than having no backups. Backups spread across Earth, the Moon, and Mars would be better, but probably not worth the cost.

  • MuffinFlavored 2 days ago

    > it will probably expand your knowledge

    It really just fragments my knowledge to be honest.

    Say "I gotta get things done".

    Get me to a terminal. You've got Mac OS command line flags, GNU, BSD. Great.

    Then it's some kind of asinine config to interact with some piece of software, all to achieve "generally the same thing", just a different way/flavor.

    I really don't see the benefits.

    • cientifico 2 days ago

      It’s like driving a car. Most people can drive without needing to understand the internals, and it’ll still get them from point A to B. But just like some people enjoy diving deep into car mechanics, others enjoy understanding the intricacies of software.

      For me, that deeper knowledge is an advantage. It helps me quickly evaluate tradeoffs between databases, debug at the OS level, or dismiss a library still relying on select if I expect heavy load. This insight saves time and increases efficiency.

      • kev009 2 days ago

        Yes, and if you are a powertrain engineer (an analogy for a systems developer) you'd benefit from knowing a variety of technologies in the automotive industry to be able to adapt to market conditions. If you are a heavy equipment mechanic (an analogy for an operator), knowing Cummins, Cat, and Detroit engines would be expected. For some reason computing professionals tend to be a lot more tribal and gravitate toward monoculture.

      • fragmede 2 days ago

        And in the car analogy, it's more like a diesel engine vs. a gasoline one. The fundamentals are the same but there are some core differences, and it's just nice to know more stuff about everything.

      • MuffinFlavored 14 hours ago

        I politely disagree. I love the driving a car analogy.

        You've got 2 kinds of cars overall really. Automatic transmission or manual. Get into any of them and everything is the same. Gas pedal, brake pedal, etc.

        The objective, the thing you need to get done? Drive from A to B.

        It is impossible to drive from A to B with FreeBSD if all you know is Linux commands/syntax. That's my whole argument: it isn't as mild a jump as driving car A versus car B. You are blocked (and you need to throw out what you know on Linux because it will collide with BSD).

        Are the differences mild? Sure, but enough to make you ineffective.

thoroughburro 3 days ago

> The largest failure was with btrfs — after a reboot, a 50 TB filesystem (in mirror, for backups) simply stopped working. No more mounting possible. Data was lost, but I had further backups. The client was informed and understood the situation. Within a few days, the server was rebuilt from scratch on FreeBSD with ZFS — since then, I haven’t lost a single bit.

As someone who admins a lot of btrfs, it seems very unlikely that this was unrecoverable. btrfs gets itself into scary situations, but also gets itself out again with a little effort.

In this instance “I solve problems” meant “I blow away the problem and start fresh”. Always easier! Glad the client was so understanding.

  • lproven 3 days ago

    > As someone who admins a lot of btrfs, it seems very unlikely that this was unrecoverable.

    As someone who used it all day every day in my day job for 4 years, I find it 100% believable.

    I am not saying you're wrong: I'm saying experiences differ widely, and your patterns of use may not be universal.

    It's the single most unreliable untrustworthy filesystem I've used in the 21st century.

    • tiberious726 2 days ago

      As someone who has used it in my day job since 2014, I find it around 5% believable. I've had nasty performance issues on old kernels, but never a single instance of unrecoverable data loss, and I've run it in plenty of pathological cases. Experiences differ

    • gosub100 2 days ago

      The first time I tried it out, about 4 years ago, I bricked it within a few days!! It was on a new (to me) Linux distro, or maybe an existing one, but I had heard it was cool and the snapshots sounded neat.

      I stayed away for a while but have it again on a Garuda install. I never completely give up on a technology, I hope they get it together.

    • AnonC 2 days ago

      > It's the single most unreliable untrustworthy filesystem I've used in the 21st century.

      I think the “experiences differ widely” point makes sense with this comment too. Synology uses btrfs on the NAS systems it sells (there’s probably some option to choose another filesystem, but this is the default, AFAIK). If it were to be “the most unreliable untrustworthy filesystem” for many others too, Synology would’ve (or should’ve) chosen something else.

      • curt15 2 days ago

        Synology only uses btrfs in single-disk mode and implements RAID-1 functionality using its own patched version of mdadm to side-step the gotchas of native btrfs raid1.
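
          i.e. roughly this layering (device names illustrative), so md handles the mirroring and btrfs only does checksums/snapshots on a single logical device:

              mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
              mkfs.btrfs -d single -m dup /dev/md0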

        • electricant 2 days ago

          What 'gotchas' exactly?

          • tiberious726 2 days ago

            None for raid 1. They do it for raid 5/6 if you're a crazy person and want to run parity raid in 2024.

  • herzzolf 2 days ago

    FWIW, this wasn't always the case. I recall that BTRFS reliability was much different, say, 10–15 years ago. The post touched on those ancient times as well, so that isn't much of a stretch.

    Around that time, SLES made btrfs their default filesystem. It caused so many problems for users that they reversed that decision almost immediately.

  • curt15 2 days ago

    If btrfs knows the data is intact, shouldn't btrfs recover automatically?

  • sidewndr46 3 days ago

    Why do people use btrfs and similar filesystems for production use? They are by no means dumpster fires. But the internet is littered with stories of "X happened, then I realized Y & that I wasn't getting my data back"

    • badsectoracula 2 days ago

      Btrfs has some nice features - e.g. compression and snapshots, which I didn't know I'd even like before using them. Not only have they saved me a few times from bad updates ("saved" in the sense that I was able to pretty much instantly revert; it saved time, and I wouldn't have lost anything even without btrfs), but they also help with things like "I'm going to run this script to process 29837894293 files - and the script might have some bugs in it, so I want to be sure I won't lose anything" (i.e. make a snapshot, run the script, check the results, compare the snapshot with the current state to ensure nothing is lost, delete the snapshot). Snapshots are also useful for diffing FS state, e.g. before and after installing some program.
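
      That snapshot-around-a-risky-script workflow is literally just this (paths and the script name are made up, and /data has to be a btrfs subvolume):

          btrfs subvolume snapshot -r /data /data/.before-run   # read-only snapshot
          ./process-files.sh                                    # the possibly-buggy script
          # diff/compare against /data/.before-run, then:
          btrfs subvolume delete /data/.before-run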

      As for the stories, AFAICT the reason is often that the user didn't know they could get their data back - or they are stories from many years ago when btrfs was buggy, but AFAIK those issues have long been solved (I think some specific case with some RAID setup still has issues, but this is hearsay, and AFAICT from the same hearsay that setup isn't really necessary with btrfs in the first place).

      Using btrfs is more complicated than using ext or something similar, especially since most tools that deal with files/filesystems are made only with ext-like features in mind - to the point where sometimes I wonder what the point is and I consider switching to ext3 or ext4, but then I remember snapshots and I'm like, nah :-P.

      • tiberious726 2 days ago

        Nearly all of the stories of unrecoverable data loss I've read involve someone discovering they have a problem, then trying the traditional ext/xfs recovery techniques before reading the docs, and thereby destroying their fs.

    • SubjectToChange 2 days ago

      Facebook (well, Meta, I guess) is famously a big user and developer of btrfs. It seems to work just fine for them.

      • traceroute66 2 days ago

        > Facebook (well, Meta, I guess) is famously a big user and developer of btrfs. It seems to work just fine for them

        I really, really, really wish people would STOP with the whole "it works for $SiliconValleyCorp so it must work for me" or "$SiliconValleyCorp does it, so I must".

        It only leads to disappointment in the case of the former and wholly unnecessary over-engineering in the case of the latter.

            (a) You do not know *how* or *where* Facebook use BTRFS
            (b) Even in the unlikely event they use it "everywhere", they have far more redundancy on every layer than you will ever have.  So they don't care if a random BTRFS instance borks itself.
            (c) Facebook probably employ the guy who invented BTRFS and an army of kernel developers on top of that .... how much in-house support do you have for BTRFS ?
        
        As far as I am concerned, the fact that they STILL have not fixed RAID5 in BTRFS says everything you need to know.

        • homebrewer 2 days ago

          > You do not know how or where Facebook use BTRFS

          (S)he does, their employees explained it many times. They're very public about it.

          > So they don't care if a random BTRFS instance borks itself.

          They do; according to Chris Mason (IIRC) they investigate every instance of btrfs corruption, regardless of how unimportant the machine and data were. And corruptions are not any more frequent than with any other filesystem.

          > Facebook probably employ the guy who invented BTRFS and an army of kernel developers on top of that

          Not an "army" (only a few developers), but you're correct here.

          > they STILL have not fixed RAID5 in BTRFS

          Why would they? It's a niche technology that's only interesting to a few home users. I am a home user and have no use for it (or any of the alternatives like raidz).

        • tiberious726 2 days ago

          They haven't fixed raid 5 because no one serious uses parity raid in this decade. Try recruiting an Open source dev to write something that useless...

        • SubjectToChange 2 days ago

          Look, I simply highlighted a major user of btrfs. Sorry if you have some complex emotions about them. But for some reason, I doubt you'd say the same thing when someone mentions Netflix using FreeBSD.

          >(a) You do not know how or where Facebook use BTRFS

          Their engineering team has posted a few of their use cases.

          >(c) Facebook probably employ the guy who invented BTRFS and an army of kernel developers on top of that .... how much in-house support do you have for BTRFS ?

          Uh, about as much as any other file system? Those changes and improvements are upstreamed to the kernel anyway. It's not like Facebook has some sort of special version of btrfs they are using.

          >As far as I am concerned, the fact that they STILL have not fixed RAID5 in BTRFS says everything you need to know.

          As far as I know, the issue with RAID5 in btrfs is highly complex and it would take quite a bit of dedicated effort to make it work. I suppose it's an architectural shortcoming of btrfs. But then again, it's RAID5, a/k/a something only shoestring hobbyists really care about. Hence why no one is bothering to make it work in btrfs.

          At the end of the day, btrfs is perfectly fine for home users and workstations. ZFS beats it out on servers; that's fine. Traditional filesystems are not the end-all be-all of storage anymore. No one has made a better ZFS because the industry has moved on to things like Ceph, vSAN, AzureHCI, etc.

          • traceroute66 2 days ago

            > But for some reason, I doubt you'd say the same thing when someone mentions Netflix using FreeBSD.

            Actually, I would. Not because I'm a BSD hater; we actually use a lot of BSD at $work.

            But rather because I reckon I could safely win a bet with you that Netflix do not use the vanilla version of FreeBSD.

            Most people I know would agree with me that the secret sauce will forever stay secret.

            Sure, without a doubt Netflix contribute stuff back to FreeBSD. But I betcha it's not ALL the stuff. :)

            The same goes for other famous FreeBSD users, e.g. Juniper Networks.

            I'm happy to recommend FreeBSD to people, but if they're looking for Netflix- or Juniper-level network performance, they'll need to know they'll have to do the donkey work themselves, because there's not a cat's chance in hell they'll magically get it "out of the box".

            • kev009 2 days ago

              Bet accepted.

              Netflix runs -CURRENT and they have been very vocal about their approach to both developing and running a vanilla FreeBSD tree, i.e. https://freebsdfoundation.org/netflix-case-study/

              In my personal experience, if you reach out to them, they will likely help you where it relates to their work and expertise, including collaborating on works in progress that are not ready for main. You do have to configure and use the appropriate software to get the same numbers as they do, i.e. sendfile and some sysctls, but any local changes they have would not be material to posting similar performance numbers.

              How do I collect my winnings :)

    • 2OEH8eoCRo0 2 days ago

      The internet is littered with imbeciles who omit their own blunders.

  • guilhas a day ago

    I was pleased with my home lab btrfs: I had a 12TB RAID1, and the PSU rail connected to the backplane would sometimes go down under load. Many scary errors, but I never lost anything. It took me 2 months to debug and replace the PSU.

viraptor 3 days ago

I like the idea and would like to learn more, but it looks like "migrated stuff without testing ahead of time and it turned out faster for some reason". Was it the memory allocations? Was it the disk latency? The hypervisor? Could it be replicated by other means? It was a fun read, but the reasoning/understanding was missing. I hope people investigate deeper before making changes like that.

If you look for benchmarks comparing databases on Linux/BSDs you'll find lots of nuance in practice and results going both ways depending on configuration and what's being tested.

  • draga79 3 days ago

    No, after 20 years of use and comparative testing of similar setups. Frankly, I have always placed little importance on benchmarks as I consider them extremely specific. I am interested in real-world use cases.

    The goal of the talk and the article is not to urge people to migrate all their setups, but simply to share my experience and the results achieved. To encourage the use of BSDs for their own purposes as well. It’s not to say that they are the best solution; there is no universal solution to all problems, but having a range of choices can only be positive.

vfclists 3 days ago

> “If nothing is working, what am I paying you for? If everything’s working, what am I paying you for?”

Bloke is not acquainted with Keynesian economics.

https://www.youtube.com/watch?v=9OhIdDNtSv0

https://www.youtube.com/watch?v=NO_tTnpof_o

All a man needs is food in his stomach and a place to rest at the end of the day. Everything else is vanity

What proportion of global GDP is dedicated to fulfilling our basic material needs?

It is mostly unnecessary. In spite of the huge productivity gains made since the seventies, the current generation of young Americans is poorer than their parents and grandparents were at their age.

So what does all the IT optimization bring? Just more wealth for the owners and redundancies for their employees, including Joe Bloggs here.

It is time people in IT got to understand this. In the long term, their activities are not going to improve their wealth. They are one of the few professions whose job is to optimize themselves out of a living, unless they own the means of production they are optimizing, which they don't.

It is their employers that do.

  • grisBeik 2 days ago

    > So what does all the IT optimization bring? Just more wealth for the owners [...] It is time people in IT got to understand this

    I understand it all right, but I'm trapped. Closer to 50 than to 40, I've got a family to run. I could be interested in another profession, but our daily lives & savings would tank if I stopped working to learn another profession. Also, there's no other profession that I could realistically learn that would let me take nearly the same amount of money home every month. If someone lives alone, they could adjust their standard of living (downwards, of course); how do you do that for a family?

    Furthermore, there is no switchover between "soulless software job for $$$" and "inspiring software job for $". There are only soulless jobs, only the $ varies. Work sucks absolutely everywhere; the only variable is compensation -- at best we can get one that "sucks less".

    When I was a teenager, I could have never dreamt that programming would devolve into such a cruel daily grind for me. Mid-life crisis does change how we look at things, doesn't it. We want more meaning to our work (society has extremely decoupled livelihood from meaning), but there's just no way out. Responsibilities, real or imaginary, keep us trapped. I'd love to reboot my professional life, but the risks are extreme.

    FWIW, I still appreciate interesting tasks at work; diving into the details lets me forget, at least for a while, how meaningless it all is.

  • consp 3 days ago

    The problem with the IT crowd is they think they are ahead of the optimization curve, and since everyone does, nobody is.

  • kortilla 3 days ago

    The current generation of Americans are absolutely rich enough to just get food and have shelter. The ones that struggle are the ones that want to live in popular cities precisely because of the available improvements beyond shelter and food.

    The houses of the 50s were shit tier and spread around the entire US. You can go buy them today for cheap in the 98% of locations people don’t want to live in.

  • jdbernard 3 days ago

    > I would work for myself, following my own philosophy.

    Sounds like he understood it just fine. He owns the means of production.

    • vfclists 3 days ago

      He only works for himself if he gets a steady income stream so long as the systems he manages stay up, which I'm sure he doesn't.

      There is a reason for the "you will own nothing and you will be happy" ideology being promoted by "PTB", i.e. subscriptions for ink cartridges, heated seats, and advanced suspensions in newer cars.

      Corporations now want a continuous income stream from the services provided by physical products they have "sold", but they don't want their employees and subcontractors earning some of that income stream.

      Some IT administrators have been known to schedule regular "downtimes" on perfectly performing systems just to ensure their users and bosses don't take their service for granted.

      • Tor3 3 days ago

        I have kind of scratched my head a bit as to why our local VM provider has all these scheduled downtimes, while the servers we run in-house have those maybe once or twice a year, at most. Hm.

        • quesera 3 days ago

          > why our local VM provider has all these scheduled downtimes

          Generally (but not exclusively) this is because they have more complex infrastructure, a larger number of customers, and a contractual commitment to give advance notice of outages when possible.

          It therefore makes sense to schedule downtime even if you don't need it (or even use it), so that customers can arrange their own expectations.

      • quesera 3 days ago

        > There is a reason for the "you will own nothing and you will be happy" ideology being promoted by "PTB",

        No one, and certainly not the "PTB", is promoting this idea.

        It was a thought experiment in a book/TED talk that deeply offends a certain percentage of the populace, and is ignored by everyone else. It is nothing more than that.

        It does not exist in any discussions outside the circles of those who are offended for profit, and those that listen to them.

        Spare yourself (and us), and stop listening to them.

linuxandrew 3 days ago

I've recently discovered systemd-nspawn, which is an alternative to LXC, built in and integrated into systemd. It's much lighter than full VMs and quite similar to Solaris Zones and FreeBSD jails. One way to use it is to extract an OCI (Docker) image to a path; that way you can reuse the container tooling provided by Docker, Podman, et al.
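
Something like this, roughly (image and paths are just an example):

    mkdir -p /var/lib/machines/alpine
    docker export "$(docker create alpine)" | tar -x -C /var/lib/machines/alpine
    systemd-nspawn -D /var/lib/machines/alpine /bin/sh   # throwaway shell in the extracted rootfs
    # or boot an image that ships an init: systemd-nspawn -b -D /var/lib/machines/debian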

I've barely touched the BSDs and it's been a few years since I last used Solaris so I can't make much of a comparison as a user myself.

  • rtp4me 3 days ago

    Thanks for this! I have been using LXC/LXD for a long time and never knew about systemd-nspawn. Time to go learn something new!

jbverschoor 3 days ago

I switched from FreeBSD to Linux, mainly because of the bad Java support and the simple fact that Linux became way more popular, which added to the difference in software availability.

  • jsiepkes 3 days ago

    FreeBSD has pretty good Java support? Sure, if there is a new LTS release it takes a couple of months before it is ported to FreeBSD, but that's about it?

    • kopirgan 3 days ago

      For a long time they didn't have any. I recall from the mid 90s, version 4+? They were still trying to port it... or was it the early 2000s? lol, either way it's ancient history.

      • jbverschoor 3 days ago

        Yeah I started around 4.something until 6 I believe. After that it was all Debian.

        My lesson was to just use what most people use. But then again, I always used PostgreSQL even though most were on MySQL.

        • dspillett 3 days ago

          > I always used PostgreSQL even though most were on MySQL

          That far back, mysql and postgres were not even close to being a like-for-like comparison. One was a proper database, with things like referential integrity and a type system that didn't consider the 31st of February to be a valid date; the other was a glorified ndbm with some structure and a SQL interface, and it was very, very fast at running simple single-table SELECT statements.

          • threeseed 3 days ago

            Yet for most users none of that mattered.

            Because they would just use a SQL library or ORM which hid all of these details.

            • dspillett 2 days ago

              Correct. I said it was not a valid comparison if talking about proper databases, not that mysql was irredeemably bad and no use to anyone back then.

              There were, though, a lot of people who probably should have used a better DB but used mysql, either through knowing no better or because nothing else was available on cheap shared hosts. Many got lucky and got by, but more than a few ended up running into problems or spending time implementing things (complete with bugs for later joy) in their BLL that really belong in the data layer. Similarly, using ORMs away from their areas of core advantage is asking for problems later (though one of their core advantages _is_ to help with a quick turnaround on an MVP or other PoC, especially if you aren't very knowledgeable about DB design considerations at the time, so I can't criticise that much).

        • wkat4242 3 days ago

          > My lesson was to just use what most people use.

          Funny, my lesson was nearly the opposite: don't just use something because it's popular, but use what meets my needs the best.

          I used Mac for a long time but I got allergic to opinionated software. Tried Ubuntu for a while but hated the "not invented here" attitude and the commercial motives (e.g. even distributing small console stuff as snaps). I'm now very happy with FreeBSD on my primary desktop.

          But of course my needs and things I care about are different.

        • shiroiushi 3 days ago

          You don't have to use what "most people use" if what you're using is fully compatible, or at least compatible enough that it does everything you need it to do. PostgreSQL and MySQL are both SQL databases, so depending on what exactly you're doing, there's likely not that much difference in use. (Plus, these days, PostgreSQL seems to have become more popular.)

          So if you're just editing text, it doesn't matter if most people are using emacs, because you can use vim too. But I guess for running an OS, FreeBSD has had too many pain points for many people compared to just going with the Linux crowd.

          • jbverschoor 3 days ago

            You would think so, but you'd be surprised how people (mis)use stuff and how they enjoy using vendor-specific features/extensions. This is also, imo, what kind of killed the adoption of J2EE and even J2ME. I'm not talking about J2EE/JEE + Spring*, or using an embedded Tomcat. I'm talking about basically what everybody's been trying to create for the last 20 years in terms of deployment etc. Sun had it pretty much figured out back then.

            For editing text, I only use vi. It's available everywhere and I know my way around it.

            I'm glad postgres became popular :). The reasoning behind using what many people use is that software is generally more available.

            Maybe I'll give FreeBSD another swing in the future, but I'm not sure what the state of containers is. I used jails back in the day, but I'll have to do some research on that.

    • jbverschoor 3 days ago

      There was Java, but only green threads were supported. I was running Java/J2EE apps (on resin).

      I also recall a problem with mmap() but I’m not sure if it was related to Java or something else

rbc 3 days ago

I've become a fairly loyal OpenBSD user in the last 3-4 years. The base OpenBSD install includes a substantial amount of network capability and is cleanly implemented. It's almost too cleanly implemented, to the point of making me feel sort of guilty when I start to clutter up an install with a bunch of packages...

If my needs for storage were more complicated, I would probably use FreeBSD ZFS, but UFS suffices for my rather modest needs.

I use OpenBSD for desktop, web and mail services. There are some limitations, but none that are serious enough to warrant dealing with running another BSD, or Linux distribution.

tracker1 3 days ago

I would differ on a small point. For SOHO usage, I think that docker compose is perfectly viable and often simplifies backup, migration, and moving to a new server. Just my own take on this. A lot of apps really only need one instance with a good backup strategy, not hot-failover instances, and can handle an hour of downtime every year or two as needed, which I rarely experience anyway.
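
In practice the backup/migration story is not much more than this, assuming bind-mounted volumes sitting next to the compose file (paths illustrative):

    docker compose down                              # quiesce the app
    tar -czf app-backup.tgz compose.yaml ./volumes   # config + data in one archive
    # on the new server:
    tar -xzf app-backup.tgz
    docker compose up -d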

As mentioned in the article, it also serves as a decent set of instructions, assuming the actual Dockerfile(s) for the services and dependencies are broadly available. You can swap out the compose instance of PostgreSQL for your dedicated server with a new account/db relatively easily. Similar for other non-app-centered services (Redis, RabbitMQ, etc.). You can go all in or partly in, and in any case it serves as self-documentation to a large degree.

Borg3 3 days ago

I wish he would write up a bit about the XFS failure he had. I've been using it for many, many years and have had no issues at all.

  • Tor3 3 days ago

    I'm interested too. I'm using XFS only, and have been for many years: on my own boxes, but my company also uses XFS for all the data on customer computers. We did extensive testing many years back, and XFS was the only filesystem at the time which gave linear, consistently very high performance when writing and reading huge amounts of data (real-time data, where dips in performance are a 100% no-no), and which also didn't degrade with huge numbers of files. We've never had a customer lose data due to XFS problems, and at this point I can't imagine how much data that would be, except that it's astronomical.

    That said, we had routine XFS losses on SGI boxes. It was a very well-known scenario: write constantly to a one-page text file, say every few seconds, then power-cycle the machine. The file would be empty afterwards. This doesn't happen on Linux; I vaguely recall discussing this with someone some years ago (maybe here on HN), and something was changed at some point, maybe when SGI migrated XFS to Linux, or shortly after.

  • lotharcable 3 days ago

    It's hard to know the timeline of his data loss, but I am assuming it was a long time ago.

    XFS is originally from SGI Irix and was designed to run on higher end hardware. SGI donated it to Linux in 1999 and it carried a lot of its assumptions over.

    For example, on SGI boxes you had "hardware raid" with cache, which essentially is a sort of embedded computer with its own memory. That cache had a battery backup so that if the machine had a crash or sudden power loss the hardware raid would live on long enough to finish its writes. SGI had tight control over the type of hardware you could use and it was usually good quality stuff.

    In the land of commodity PC-based servers this isn't often how it worked. Instead you just had regular IDE or SATA hard drives. And those drives lied.

    On cheap hardware the firmware would report back that it had finished writes when in fact it hadn't, because it made the drive seem faster in benchmarks. And consumers/enterprise types looking to save money with Linux mostly bought whatever was the cheapest and fastest-looking on benchmarks.

    So if there was a hardware failure or sudden power loss, there could be several megs of writes still in flight when the file system thought they were safely written to disk.

    That meant there was a distinct chance of data loss when it came to using Linux and XFS early on.

    I experienced problems like that in early 2000s era Linux XFS.

    This was always a big benefit of sticking with Ext4. It is kinda dumb luck that Ext4 is as fast as it is when it comes to hosting databases, but the real reason to use it is that it had a lot of robust recovery tools. It was designed from the ground up with the assumption that you were using the cheapest, crappiest hardware you can buy (personal PCs).

    However modern XFS is a completely different beast. It has been rewritten extensively and improved massively over what was originally ported over from SGI.

    It is different enough that a guy's experience with it from 2005 or 2010 isn't really meaningful.

    I have zero real technical knowledge of file systems except as an end user, but from what I understand, FreeBSD uses UFS, which uses a "WAL" or "write-ahead log" where it records the writes it is going to do before it does them. I think this is a simpler but more robust solution than the sort of journalling that XFS or Ext4 uses. The trade-off is lower performance.

    As far as ZFS vs Btrfs... I really like to avoid Btrfs as much as possible. A number of distros use it by default (openSUSE, Fedora, etc.), but I just format everything as a single Ext4 or XFS partition on personal stuff. I use it on my personal file server, but that's a really simple setup with a UPS. I don't use ZFS, but I strongly suspect that btrfs simply failed to rise to its level.

    One of the reasons Linux persists despite not having something up to the level of ZFS is that most of ZFS features are redundant to larger enterprise customers.

    They typically use expensive SANs or more advanced NAS with proprietary storage solutions that provided ZFS-like features long before ZFS was a thing. So throwing something as complicated as ZFS on top of that really provides no benefit.

    Or they use one of Linux clustered file system solutions, of which there is a wide selection.

    • _huayra_ 3 days ago

      Facebook runs their entire stack using Btrfs [0]. I would encourage anyone who is stuck in the "oh btrfs is so buggy and loses data" mindset (not helped by articles like this [1] that play off btrfs as some half-baked contraption, when it's really btrfs raid that needs a LOT more time to bake) to look into things and realize that large companies (openSUSE, Red Hat, Facebook) have poured a lot of time into getting it to work well.

      I don't know about its multi-disk story (I do use ZFS for that personally), but for single-disk use it is great. You get so many of the ZFS benefits (snapshots, rollback, easily creating and deleting volumes, etc.) with MUCH lower memory usage (at least in my own experiments to try this out).

      [0] https://lwn.net/Articles/824855/ [1] https://arstechnica.com/gadgets/2021/09/examining-btrfs-linu...

      • __turbobrew__ 3 days ago

        Facebook has stacks of thousands of spare nodes ready at any moment to replace a failed node. All essential data will be replicated across many different boxes so if a box fails you just replace it with a fresh node and replicate the data there.

        This is much different from the consumer use case where computers are pets and not cattle. A failed filesystem the night before you need to turn in your thesis may have a much larger impact on your life.

        Another thing to consider is that Facebook runs btrfs on enterprise hardware (including SSDs with battery backup), which is going to be much more reliable than some Chromebook that lives in the bottom of the backpack you bring on transit every day.

        Finally, I will say that the copy-on-write features of btrfs can result in some wildly different behaviour based upon how you use it. You can get into some very bad pathological cases with write amplification, and if you run btrfs on top of LUKS it can be nearly impossible to figure out why your disk is being pegged despite very little throughput at the VFS layer.

        • nolist_policy 2 days ago

          The ChromeOS Linux dev VM uses btrfs by the way.

        • homebrewer 2 days ago

          So much FUD in this discussion. Chris Mason has said publicly that they use the cheapest SSDs they can find (even worse than what he would be willing to put in his laptop), and that they investigate every instance of btrfs corruption. You're saying the exact opposite of the main btrfs guy at Facebook. I wonder who is right...

          • __turbobrew__ a day ago

            Who is right: one guy whose reputation relies on something not breaking, or a bunch of end users who report that the thing broke for them?

            I experienced issues with write amplification within the past few months on Ubuntu 22, so it isn't like all the issues are gone. I do agree that there are fewer issues now than there were before, but I will still say that btrfs breaks or behaves unexpectedly much more often than ext4 or xfs.

      • hi-v-rocknroll 3 days ago

        Meta does a lot of things that don't scale for reliable/trustworthy systems and aren't suitable for all use-cases. (I also used to work there too.)

        ZFS is only reliable where it was battle-tested: on Solaris. ZoL has been so endlessly tinkered with and smashed up that it's nothing like running a Thumper as a NAS.

        XFS + mdadm on Linux is, without a doubt, far more reliable than ZoL. Ask me how I know. I have the scars to prove it.

        • ewwhite 3 days ago

          ZFS on Linux is absolutely fine in high-performance and critical computing applications.

          I also owned a Thumper and Thor running Solaris in 2009. Much prefer Linux and the hardware solutions today.

        • Gud 3 days ago

          ZFS has been plenty battle tested on FreeBSD.

          • hi-v-rocknroll 3 days ago

            Not hardly, and not in the way you think. They replaced their arguably purer ZFS port with ZoL. As such, it's nowhere near as tested and proven as existing solutions like ext4 and xfs, which Red Hat has deployed to millions of machines for decades. ZFS has too many religious fanboys who hype it without considering that boring and reliable are less risky than betting on code that hasn't had nearly the same scale of enterprise experience.

            • Gud 3 days ago

              I am well aware of that change.

              What specific problems are there with the ZFS implementation on FreeBSD? You claim it is not battle tested, I find it to be rock solid.

              • kelnos 3 days ago

                > You claim it is not battle tested, I find it to be rock solid

                Those two things are not mutually exclusive.

                • Gud 3 days ago

                  Never said they were

        • bsdooby 3 days ago

          Did you or other XFS users try out stratis?

        • Borg3 2 days ago

          Yeah, my setup too: XFS + mdadm (+ eventually LVM2). Rock solid. It might not have HW raid performance, but in terms of stability, flexibility, and recovery it's absolutely unbeatable!

      • yjftsjthsd-h 3 days ago

        I am stuck in the btrfs-is-buggy mindset precisely because it managed to lose my root partition on a single disk machine. It might also have raid problems, but not exclusively.

        • lproven 3 days ago

          Me too. Repeatedly, at least once a year, on 3 different machines.

          The cause? Filling up the filesystem. Why? Because of OS snapshots.

          (Aside: why can they fill it? Because it doesn't give a straight answer to `df -h`. Why not? Because of snapshots.)
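
          For what it's worth, the btrfs-native tools do give a straight answer; you just have to know to reach for them instead of df:

              btrfs filesystem usage /   # allocated vs. used, including the global reserve
              btrfs filesystem df /      # per-profile breakdown of data/metadata/system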

          • homebrewer 3 days ago

            That happened recently? A few years ago they added a reserved area used for emergency purposes that should solve situations like that. Can't say I've run into these problems, although I don't tend to run btrfs very heavily because performance becomes unacceptable long before that due to CoW.

            https://btrfs.readthedocs.io/en/latest/btrfs-filesystem.html

            (Look for "GlobalReserve")

      • bravetraveler 3 days ago

        > I don't know about its multi-disk story (I do use ZFS for that personally), but for single-disk use it is great.

        I can reliably, across vendors and drives, break RAID10 on BTRFS where MD+LVM are totally fine. Simply pull power. Discovered this when building out my latest workstation.

        I haven't tried other configurations; after finding this pattern I decided to leave BTRFS for single-disk configurations where I want CoW

      • jeroenhd 3 days ago

        I use btrfs a lot but I'm not sure if I'd use it for production servers. The I/O bandwidth is just a lot lower and I get weird latency problems on desktop Linux when BTRFS is very busy that I don't get on other file systems. Then again, I probably wouldn't use ZFS for anything but a NAS setup either.

        • homebrewer 3 days ago

          The default is to coalesce trim requests into large batches and issue them once per minute or so. Most other filesystems don't use online trim. This can cause latency spikes. If you ever decide to try it out again, try disabling online trim.

          https://btrfs.readthedocs.io/en/latest/Trim.html
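
          Concretely, that means dropping the discard mount option and letting a periodic fstrim handle it instead (illustrative):

              mount -o remount,nodiscard /mountpoint
              systemctl enable --now fstrim.timer   # periodic batched trim instead of online discard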

      • toast0 3 days ago

        > Facebook runs their entire stack using Btrfs

        Yeah, and when I was there, machines would run out of disk space at 50% usage, and it took months to figure out why. In the meantime, they'd just reimage the machine and hope. I don't recall any issues with data loss, but it didn't have the air of reliability.

        But my team was weird at FB, our uptimes of 45 days were way above the average, and we ran into all sorts of things because we operated outside the norm.

      • homebrewer 3 days ago

        Last time they talked about it (that I know of -- when Fedora was contemplating using btrfs and asked Chris Mason et al for their opinion), FB were running databases on xfs and were looking for ways to place them on raw disks for maximum performance. So not the entire stack.

      • gaadd33 3 days ago

        Is the structure of 800gb btrfs containers mentioned in [0] how user data is stored? Just sharded across billions of containers?

      • djbusby 3 days ago

        Why is this grey/down? Is there something factually incorrect?

        Edit: it's less grey now.

    • jdboyd 3 days ago

      > For example on SGI boxes you had "hardware raid" with cache, which essentially is a sort of embedded computer with it's own memory. That cache had a battery backup so that if the machine had a crash or sudden power loss the hardware raid would live on long enough to finish its writes. SGI had tight control over the type of hardware you could use and it was usually good quality stuff.

      Most of the SGI machines I've used, of various sizes, did not have hardware raid. In my experience, you were more likely to run into hardware raid on a PC than on traditional SGI or Sun servers (I don't have much experience with AIX or HP-UX), unless the Unix server was in a SAN environment.

      • Tor3 3 days ago

        Yep. The Octanes and the Challenge servers we used at work didn't have hardware raid, and, contrary to the grandparent, we did have regular issues on SGI with XFS (loss of data after power cycling, always), while we never had that on Linux, which surprised me. After all, it was so easily reproduced (on SGI): write regularly to a file, power cycle, file empty afterwards. Did that on Linux, everything fine. Every time. Never ever had losses on Linux. NB: I did not test this immediately after XFS was ported to Linux; it's very likely that things were improved shortly after, before we started testing XFS at work.

        (As for hardware raid - I didn't start to see hardware raid regularly until HP started shipping rackmount servers with Compaq raid hardware, way back. Linux boxes.)

        • lotharcable 2 days ago

          Sounds fair enough. I never used SGI personally. I was just repeating what I read while dealing with XFS issues back in the day.

          Bad old days.

    • shrubble 3 days ago

      "One of the reasons Linux persists despite not having something up to the level of ZFS is that most of ZFS features are redundant to larger enterprise customers."

      ZFS is used heavily on Linux and runs well, though there are some limitations which are being addressed over time in the OpenZFS project. It is used across all areas that Linux serves, whether laptop, desktop, home server all the way to enterprise. https://openzfs.org/wiki/Main_Page

      • jsiepkes 3 days ago

        ZFS on Linux is not really usable for most users because every kernel update can break your ZFS compatibility.

        Meaning that unless you want to put in the time to manually test every kernel update and ensure your kernel version stays in sync with OpenZFS, you can very likely end up with an unbootable system.

        • aljarry 3 days ago

          Ubuntu supports ZFS, so if you can track Ubuntu's kernel, you get ZFS without risking an unbootable system.

        • binkHN 3 days ago

          This is one of the main reasons Void Linux is "stuck" on kernel 6.6.

          • E39M5S62 2 days ago

            The 'linux' package on Void is just a meta package. Install whatever kernel series you want. I'm running 6.10.11, with ZFS 2.2.6 on my Void workstation.

    • toast0 3 days ago

      > I understand FreeBSD uses UFS, which uses a "WAL" or "write-ahead log" where it records the writes it is going to do before it does them.

      I think you're describing UFS soft updates? I think that's more or less for metadata updates, not the data itself. It's been a while since I reviewed it, but it gets you nice things like snapshots and background fsck, so after an unclean restart your system can get back to work immediately and clean up behind the scenes. There is some sort of journalling that's fairly new, but my experience from 10 years ago was that soft updates and background fsck just worked; and if you wanted better, ZFS was probably what you wanted, if you could afford copy-on-write.
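
      On FreeBSD you can check and toggle all that per filesystem with tunefs (device name illustrative; the filesystem has to be unmounted or mounted read-only):

          tunefs -p /dev/ada0p2          # print current UFS flags
          tunefs -n enable /dev/ada0p2   # soft updates
          tunefs -j enable /dev/ada0p2   # soft updates journaling (SU+J), the newer bit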

  • hggigg 3 days ago

    I had one a few years back where we ran out of inodes on a Jenkins machine on CentOS 7, and it crashed and couldn't remount the filesystem. I had to restore a backup, which was time-consuming on a 4TB volume with a crazy number of files.

  • blipvert 3 days ago

    I've used it since the late 90s on IRIX; I think there were a few issues early on with the endian swap, but no issues for the best part of twenty years for me!

binkHN 3 days ago

Glad to see all the major BSDs used here; I use OpenBSD whenever it makes sense.

  • systems_glitch 3 days ago

    Indeed, most of our public-facing services are hosted on OpenBSD, and all of our routers and firewalls run it. We started managing everything with Ansible to make it easier to ignore what the host OS is for a deployment, and that has worked well both in moving things to [Open,Net]BSD for experiments, and also standing up tests on various Linux distros just to make sure we're not running into a BSD vs. GNU issue, or even a "problem only on this specific distro" issue.

    #1 reason we chose Ansible over other tools was support for the BSDs.

  • SoftTalker 3 days ago

    It's what I use at home and at work.

ksec 2 days ago

I don't think the article is even about BSDs, but about generally bad things in the tech sector and one's philosophy toward tech.

>my priority is solving my clients’ specific problems, not selling a predefined solution.

>It’s better to pay for everything to work than to pay to fix problems.

>computing should solve problems and provide opportunities to those who use it.

>The trend is to rush, to simplify deployments as much as possible, sweeping structural problems under the rug. The goal is to “innovate”, not necessarily improve — just as long as it’s “new” or “how everyone does it, nowadays.”

>Some people are used to thinking that the ideal solution is X — and believe that X is the only solution for their problems. Often, X is the hype of the moment

>When I ask, “Okay, but why? Who will manage it? Where will your data really be, and who will safeguard it?”, I get blank faces. They hadn’t considered these questions. No one had even mentioned them.

>We’ve lost control of the data. For many, it’s unnecessary to complicate things. And with every additional layer, we’re creating more problems.

Hopefully someday more people will wake up.

iluvcommunism 2 days ago

FreeBSD is great as a server. WiFi performance still sucks. The author praised bhyve, but bhyve is not all it's cracked up to be; both Xen and Linux virtualization perform better, VMware as well. I like FreeBSD, but the other day I found it still uses sendmail. rc.conf is simple to use, and the ports system is great… I just feel the author was pushing his "X" solution. Hardware support is important as well. I've used BSD for a SAN and a FW. Would I use it for a virtualization host? No.

DeathArrow 3 days ago

That's all fine and dandy, and BSDs can solve many use cases. Unfortunately, the solution we are working on implies many microservices, so we need Kubernetes, and no BSD equivalent to Kubernetes exists.

  • kev009 3 days ago

    I've helped build two top 10 service provider networks (10s of Tbps). One on FreeBSD, and one on Linux with Kubernetes.

    I don't really see Kubernetes as being a game changer. The biggest pro is that it makes it easier to onboard both development and operations personnel by having a quasi-standard for how a lot of things like scheduling and application networking work.

    But it also seems to come with a great deal of accidental and ornamental complexity. I would say the same about microservices versus, say, figuring out your repository, language, and deployment pipelines to provide a smooth developer and operator experience. Too much of this industry is fashion and navel gazing instead of thinking about the core problems and standing behind a methodology that works for the business. If Google ever moves its own infrastructure to Kubernetes, then maybe there's something to be had that couldn't reasonably be done otherwise :)

    • hggigg 3 days ago

      Agree with this on all points.

      We went from a virtualized server model to managed Kubernetes and costs have escalated considerably. The additional complexity and maintenance overheads of Kubernetes are not trivial and required additional staff hires just to keep things ticking. I think the cost so far from moving from two cages in separate datacentres running blades to AWS is approximately a 6x multiplier including staff. This was all driven on the back of "we must have microservices to scale", something we have failed entirely to do. It's a complete own goal.

      • wkat4242 3 days ago

        I think it mainly works for startups which can build it as a greenfield and are extremely sensitive to scaling (either they are exploding within 2 years or they're dead within 2 years). So they plan to scale from 0 to millions quickly. For that kinda business it makes sense because once the exploding growth comes it's super easy to scale up and even do that automatically. That's the power of Kubernetes.

        For an existing business not in a hyper-growth phase with lots of restrictions and policies and processes and legacy stuff it's just extra work with no gain. It's just a round peg in a square hole. Wrong tool for the job.

      • threeseed 3 days ago

        That's because you've moved to AWS which is an expensive cloud.

        You could have easily run Kubernetes on your virtualised servers.

        • baq 3 days ago

          IME nothing easy about maintaining your own storage cluster!

          • threeseed 2 days ago

            Nothing easy in general about running a distributed platform for highly-available applications.

            • baq 2 days ago

              True but compute nodes are basically stateless in comparison to anything storage.

    • threeseed 3 days ago

      Kubernetes has unquestionably been a game changer.

      If you look at most enterprises today you will see it deployed everywhere.

      And most of the complexity has been abstracted away by the cloud providers so all you're left with is a system that can handle all manner of different applications and deployment scenarios in a consistent way.

      • krageon 3 days ago

        > Kubernetes has unquestionably been a game changer.

        > If you look at most enterprises today you will see it deployed everywhere.

        This doesn't mean it's a game changer. It just means it has a big cargo-cult and people keep using it regardless of whether they need it.

        > most of the complexity has been abstracted away by the cloud providers

        The abstractions that people add make things more complex, not less. Unless of course you don't care to understand what you're running, which is precisely the problem with the culture around most folks that use kubernetes (and more broadly the US cloud providers as an entity).

        • lotharcable 2 days ago

          Kubernetes is one of those things that is only as complicated as you want it to be.

          The problem with Kubernetes and complexity is that, because it simplifies a lot of things that are a huge PITA to accomplish on a "homemade" server/container setup, there is a huge number of products and things you can run on Kubernetes to "do stuff".

          And it is hard for a lot of people and organizations to resist the "oh shiny" aspect. Stuff like "Oh, look, I can do network policies and service meshes!" or "Let's create this really complicated and big thing so we can configure all our AWS infrastructure with Kubernetes objects! Who cares that a bad commit can destroy the infrastructure the cluster depends on, along with our ability to manage any of it!" or "Look, we can have lots of namespaces for all our internal orgs and departments, let's make a gigantic centrally managed Kubernetes cluster that will be managed by IT and that everybody will be forced to use at the same time! Putting all our eggs in one basket is an awesome idea!".

          K8s sort of removes the barrier to entry that would normally stop people from implementing those sorts of bad ideas.

          Otherwise, core vanilla Kubernetes isn't really that complicated compared to most DIY solutions that try to manage large numbers of apps on clusters of systems. Most of the time it ends up a lot more robust and simpler in the right hands.

        • threeseed 2 days ago

          > The abstractions that people add make things more complex, not less

          Everything in our industry is built on abstractions.

          By your logic we shouldn't be using operating systems, compilers, libraries etc and writing our own custom software to manually move applications between different hosts if they go down.

        • martinbaun 3 days ago

          You're probably going to be downvoted to oblivion but I 100% agree with you. It seems like they tried to remove all the complexity, and just made new complexity.

          • stackskipton 2 days ago

            Modern systems are complex generally. If it's not Kubernetes with YAML files, it's a bunch of servers in VMware, which is its own ball of wax, followed by an extensive Ansible/Puppet/Chef setup and SREs who keep the entire architecture in their heads because they don't have time to write it down.

            Obviously, there is the exact opposite with stuff like fly.io, but that can be extremely constraining.

            • martinbaun 2 days ago

              Yeah, I really liked Heroku and I used it for so many years, but it also kind of killed itself. It could never reduce prices, because all machines "were the same" and thus a price reduction meant a loss of revenue, and now they're just super expensive comparatively.

              Now I just drop Caddy onto bare-metal servers and install things directly. I avoid all dependencies like the plague.

    • vilunov a day ago

      Kubernetes is absolutely not accidental complexity. Microservices are not accidental complexity either, and they are not a replacement for a proper repository.

      Kubernetes solves administration of a cluster of Linux machines, as opposed to administering a single Linux machine. It abstracts away the concept of a machine because it automates application scheduling, scaling across different machines, rolling updates of applications, and adding/removing machines to/from the cluster, all at the same time. There are no other instruments like that for applications; the closest are things like Spark and Hadoop for data engineering tasks (not general applications).

      Microservices are also used to solve a very specific problem: independent deployment of parts of the system. You can dance with your repository and your code directories as much as you want, but if you're not on a very specific runtime (e.g. the BEAM VM), you will not achieve independent deployment of parts of your service. The ability to "scale independently" (which tbh is mostly bullshit) is an accidental consequence of using HTTP RPC for microservice communication, which is also not the only way, but it allows reuse of the HTTP ecosystem.
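
      As a concrete illustration of "abstracting away the machine" (a sketch only; the names and image are invented, not taken from any real cluster), a minimal Deployment manifest asks for replicas and rolling updates without ever naming a host:

          apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: example-api              # hypothetical service name
          spec:
            replicas: 3                    # the scheduler places these on whatever nodes fit
            strategy:
              type: RollingUpdate          # pods are replaced gradually on upgrade
            selector:
              matchLabels:
                app: example-api
            template:
              metadata:
                labels:
                  app: example-api
              spec:
                containers:
                  - name: api
                    image: registry.example.com/api:1.2.3   # made-up image tag
                    ports:
                      - containerPort: 8080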

    • kunley 3 days ago

      Apart from the sober, common-sense thinking in your comment, I like the term "ornamental complexity" :)

  • roydivision 3 days ago

    Same here, otherwise I'd be considering the BSDs.

    • draga79 3 days ago

      Things are improving...stay tuned :-)

  • INTPenis 3 days ago

    Can't you just run k8s on bsd? You might have to build and maintain your own release of it, but I'm sure someone has done it already.

    • blueflow 3 days ago

      The BSDs don't have the namespaces/unshare syscall functionality that Linux does.

      • yjftsjthsd-h 3 days ago

        FreeBSD has jails, which AFAIK are fully a match in terms of functionality, but you would need to finish implementing containerd support (maybe via runj?) and then get k8s to run on top of that. I seriously doubt there are any actual technical blockers; it's just that nobody has finished implementing all the pieces and putting them together.

      • arianvanp 3 days ago

        Use microvms instead of containers perhaps?

    • dbolgheroni 3 days ago

      No, FreeBSD has jails but k8s uses different types of runtimes for OCI containers (containerd, CRI-O, Docker Engine, etc).

theamk 3 days ago

> [users ..] began explicitly requesting “jails” instead of Docker hosts. They started using BastilleBSD to clone “template” jails and deploy them.

huh, were they running persistent Docker containers and modifying them in place? If that's the case, they were missing the best part of Docker: the Dockerfile and "containers are cattle". The power of Docker is that no ad-hoc system customization is possible; it's all in a Dockerfile, which is source-controlled and versioned, and the artifacts (like built images) are read-only.

To go from this to the all-manual "use bastille edit TARGET fstab to manually update the jail mounts from 13.1 to 13.2 release path." [0] seems like a real step back. I can understand why one might want to go to BSD if they prefer this kind of workflow, but for all my projects, I am now convinced that a functional-like approach (or at least an IaC-like one) is much more powerful than manually editing individual files on live hosts.

[0] https://bastille.readthedocs.io/en/latest/chapters/upgrading...
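
To make the contrast concrete (just a sketch; the base image, package, and config file are placeholders, not taken from the article), the entire definition of such a container can live in a short Dockerfile kept in git next to the app, and rebuilding it always produces the same read-only image:

    # Declarative, versioned definition of the container; nothing is edited in place.
    FROM debian:12-slim
    RUN apt-get update \
        && apt-get install -y --no-install-recommends nginx \
        && rm -rf /var/lib/apt/lists/*
    COPY nginx.conf /etc/nginx/nginx.conf
    EXPOSE 80
    CMD ["nginx", "-g", "daemon off;"]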

rstuart4133 3 days ago

All I can say is experiences differ. I'm a long time Debian user, and now use FreeBSD for work. Both are far better than the proprietary competition, but I'd take Debian/Linux over FreeBSD when building a random server.

To give but one example, I recently reported a bug where FreeBSD didn't boot after an upgrade from 13 to 14. Worse, the disk format was somehow altered, so when the reboot tried to fall back to 13 via the ZFS bootonce flag (supposedly a failsafe), it refused to boot for the same reason. I believe it's due to a race condition in geom/cam. The same symptoms were reported 6 years ago, but the bug report has seen no activity. Making your system irrecoverable without a rescue image and console access strikes me as pretty serious. He waxes lyrical about ZFS, but it's slower and more resource hungry than its simpler competition, and it's not difficult to find numerous serious ZFS bug reports over the years. (But not slower than FreeBSD's UFS, oddly. It's impressively slow.) Another thing that sticks in my mind is a core ZFS contributor saying its encryption support should never have been merged.

This sounds too disparaging because the simplicity and size of FreeBSD has its own charms, but the "it's all sunshine and roses" picture he paints doesn't ring true to me. While it's probably fair to say stable versions of FreeBSD are better than the Linux kernels from kernel.org, and possibly Fedora and Ubuntu, they definitely trail behind the standard Debian stable releases.

Comparing FreeBSD to Debian throws up some interesting tradeoffs. On the one hand, FreeBSD's init system is a delight compared to systemd. Sure, systemd can do far, far more. But that added complexity makes for a steep learning curve and a lot of weird and difficult-to-debug problems, and as FreeBSD's drop-dead-simple /etc/rc.conf system proves, most of the time the complexity isn't needed to get the job done. FreeBSD's jails just make more intuitive sense to me than Linux's equivalent, which is built on control groups. FreeBSD's source is a joy to read compared to most I've seen elsewhere. I don't know who's responsible for that - but they deserve a medal.
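
To show what "drop dead simple" means here (a representative sketch, not taken from any particular machine), service and network configuration is mostly flat key="value" lines in one file:

    # /etc/rc.conf -- plain assignments read by the rc.d scripts at boot
    hostname="web01.example.org"     # hypothetical host name
    ifconfig_em0="DHCP"              # configure the em0 NIC via DHCP
    sshd_enable="YES"                # start sshd at boot
    ntpd_enable="YES"
    nginx_enable="YES"               # assumes the nginx package is installed

With the _enable flag set, `service nginx start` and `service nginx restart` just work; there is no unit-file layering to reason about.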

On the downside - what were they thinking when they came up with the naming scheme under /dev for block devices? (Who thought withering device names was a good idea, so that /dev no longer reflects the state of attached hardware?) And a piece of free advice - just copy how Linux does its booting. Loading a kernel + initramfs is both simpler and far more flexible than the FreeBSD loader scheme. Hell, it's so flexible you can replace a BIOS with it.

The combination of the best parts of Linux and the BSDs would make for a wonderful world. But having a healthy selection of choices is probably more important, and yes - I agree with him that if you are building an appliance with an OS embedded in it, the simplicity of FreeBSD does give it an edge.

  • mijoharas 3 days ago

    > Who thought withering device names was a good idea, so that /dev no longer reflects the state of attached hardware?

    Sorry, could you clarify what this means? I'm not super familiar with freebsd and don't understand what withering means here.

    • rstuart4133 2 days ago

      Withering is the name they give to removing block devices under /dev that have file systems mounted on them. Contrast this with Linux, where /dev reflects something simpler - the hardware detected by the kernel.

      The /dev naming complaint is about how FreeBSD handles block device aliases. Like Linux, FreeBSD creates a number of aliases based on the block devices' labels and UUIDs. My favourite Linux alias is missing on FreeBSD - the bus path (which is how you unambiguously get to the device you just connected with a cable). On Linux these aliases are just symlinks to the real device, which means all it takes is "ls -l" to see the relationship between devices and aliases. Simple, elegant, and it means all devices have one true name, so in error logs and so on you always know what device it's talking about.
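
      For instance, on a typical Linux box the udev-managed alias directories make the mapping obvious at a glance (the paths below are the standard ones; the exact targets depend on your hardware):

          # each alias is just a symlink back to the one real node
          ls -l /dev/disk/by-uuid/    # entries point at e.g. ../../sda1
          ls -l /dev/disk/by-path/    # bus-path aliases, e.g. pci-...-ata-1 -> ../../sda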

      Under FreeBSD these aliases are device nodes in their own right, so there is no single true name. The real device an alias maps to is not at all obvious. Worse, it doesn't share the same device major or minor, and worse still the aliases behave differently. So, for example, if the OS mounts a CD-ROM using its label alias (which would be /dev/iso9660/label on FreeBSD), you can't eject it, because the alias device doesn't understand the eject ioctl. But then you may not be able to get at the real device at all, because it's been withered away.

      Complicating the issue still further is ZFS. It wants to take over the roles of /etc/fstab and /sbin/mount. This gets particularly interesting when you boot off ZFS: there is no /dev yet, so there are no aliases, so it has no obvious way of figuring out what the path names you gave to zpool meant. They kludged their way around that somehow, but it doesn't always work - which I think is the trigger behind the boot failures I mentioned earlier. It took me days to figure out a workaround, which was to turn off some of the aliasing.

  • binkHN 3 days ago

    There is more than one Linux distribution that's designed to work without systemd.

    • globular-toast 3 days ago

      Notably Gentoo. Any others?

      • Tor3 3 days ago

        Devuan, which is basically Debian without systemd.

      • Rediscover 3 days ago

        Slackware. It has always had a bit of *BSD flavoring in it, too.

      • tmtvl 3 days ago

        Guix uses its own custom 'Shepherd' init system.

      • nosioptar 2 days ago

        PCLinuxOS (a Mandrake/Mandriva fork) uses SysV.

sangnoir 2 days ago

I love it when the answer to a question posed in a headline is provided in the second sentence of the article.

> I’m the founder and Barista of the BSD Cafe, a community of *BSD enthusiasts

Did the original article change its title (currently "I Solve Problems"), or did the submitter editorialize it?

knowitnone 2 days ago

Running 3 different BSDs is not my idea of solving problems.

  • KronisLV 2 days ago

    They might be really good for specific tasks, but someone else will also need to maintain the setups, which makes things harder: most of the folks in the job market have experience with the various Linux distros, while finding BSD experience might be a tad more difficult.

    I am personally on board with using the various BSDs when it makes sense (though maybe just pick FreeBSD and stick with it, as opposed to fragmenting the install base, the same way I've settled on Ubuntu LTS wherever possible; it's not ideal but it works). The thing is that most job ads and such call for Linux experience in particular, same with tooling like Kubernetes and OCI/Docker containers. Ergo, that's where most of my time goes: I want to remain employable and also produce solutions that will be understandable to most people, instead of getting my DMs pinged whenever someone is confused by what they're seeing.

nikisweeting 2 days ago

I'm surprised no one is talking about FreeNAS / TrueNAS and their interesting history in this area in the comments.

There's probably more collective writing about the various tradeoffs between Debian and FreeBSD in their forums and communities than anywhere else on the internet.

Personally I love ZFS and ZFS on root so much I can never go back to not having it. It's a shame more cloud providers like DigitalOcean/AWS/etc. don't offer it natively.

pjmlp 3 days ago

Fidonet, it has been a while since I saw that.

  • 8fingerlouie 3 days ago

    I still remember discovering FidoNet sometime in the 90s.

    It was a time when sending regular mail to different countries could take weeks and cross-country phone calls would cost between $2 and $20 per minute, and here was FidoNet, which promised to allow communication across the globe with only a 1-4 day delay, and basically for free.

    My 15-18 year old self was instantly sold. I spent countless hours reading through the "forums" on there. So much knowledge just at the tip of my fingers.

    Of course some time later it was more or less replaced (for me) by email, usenet and IRC, but the memory still remains.

kopirgan 3 days ago

One of the website providers I've used for 20+ years (Pair) used to be exclusively FreeBSD. I believe they use a lot of Linux now. Not sure why.

  • pilingual 3 days ago

    Ah, pair. Reminds me of this gem:

    http://www.archub.org/ycoutage.txt

    • mmooss 3 days ago

      Seems reasonable to me:

      ... This site and this directory were driving up server load averages, which causes instability for all users on this server. Our block on this site appears to have blocked over 150,000 connections in about half an hour. This is simply more than is appropriate for a shared server.

      You may want to review your domain's traffic logs to see what kind of traffic this site is getting. The logs for today will be distributed at midnight EST tonight. If the traffic to this site is legitimate, you should look into a dedicated or a virtual private server. If the traffic is not legitimate, you should block it. I can provide assistance with that if you need it.

      This domain and this directory will need to be disabled until traffic dies down, traffic is blocked, or your account is upgraded from shared hosting. If you need access to the directory so that you can make any necessary changes, please let me know.

      • inopinatus 3 days ago

        Sure, it's reasonable, if your desired outcome is to lose a customer by

        a) demonstrating a lack of burst or headroom capacity and/or graceful degradation/priority scheduling capability, and

        b) being a dick about it.

        Alternatively one may intervene with a positive approach and framing viz. "we have temporarily [provisioned additional capacity / relocated your storage to another spindle / mirrored your content to a less congested host / etc] in order to handle your traffic, can we call you tomorrow to move you to a service plan that can handle your growth". Even just throttling the account in some appropriate fashion would've been better than the blunt "we dropped you like a hot potato" that was actually communicated. This was in 2010, by which time any ISP that wasn't operating as amateur hour had the tools for all these things.

        • oarsinsync 3 days ago

          > This was in 2010, by which time any ISP that wasn't operating as amateur hour had the tools for all these things.

          Agreed. Also consider that shared hosting packages are very much providers scraping the bottom of the barrel for some marginal extra revenue. Shared hosting customers are not the highest priority of customers, unless your entire business is based around shared hosting (in which case, you are likely operating as amateur hour anyway)

        • taskforcegemini 3 days ago

          Depends on what product/service plan they had. A $5-per-month plan is not worth jumping through hoops for, because usually an individual solution is needed, and that doesn't scale. If the site is important to the owner, they need a more fitting service plan with more headroom. That being said, a good provider at least tries to filter out what could be a (D)DoS.

          • stackskipton 3 days ago

            In 2010, those tools were not easily available outside massive enterprise solutions like Akamai. Cloudflare had just launched in 2009, and on-premises hardware was generally cost-prohibitive.

            • inopinatus 2 days ago

              I was building ISPs from the 1990s. The idea that after decades of innovation and field experience, tools for managing bursty/high-growth customers were somehow "not easily available", or that hardware was some kind of unicorn resource, just smells to me like rank incompetence. In this case coupled with an equally questionable customer retention failure.

        • ajb 3 days ago

          In many industries it's standard practice to cause pain to one customer to avoid causing pain to all of them. If you've ever had work done on your house, a small guy who overruns will often bail out on your job (and reschedule) rather than slip all the other customers he has lined up. So I can see why this guy decided to disable a directory rather than let all the other customers on the server get slow service.

          The issue is that the economics are different: a roofer doesn't earn more by doing one big job than four small ones in the same time, but a big user is a better economic prospect for a service provider than the small ones. That's why this was clearly a dumb move.

      • kelnos 3 days ago

        That's not even remotely reasonable.

        They should have been monitoring load and notified the customer before it got to the point that they were worried about stability.

        Shutting down a customer's site without warning when you could have notified them of a problem in the near future... well, that's a great way to lose a customer. I really hope YC promptly moved to another provider after that incident.

    • kopirgan 3 days ago

      If this was their first and only mail about it, then yeah, it's not the best way to handle things. But then my sites don't generate anywhere near the traffic of YC, even the YC of 2010.

  • hi-v-rocknroll 3 days ago

    I used them for shared personal website hosting until the Obama era when they were mostly/all FreeBSD. I moved off around the time AWS came about.

sylware 3 days ago

I really wanted to give FreeBSD a try... because I thought it was Linux from "better" times... and then I saw it is tied to tons of the same gcc extensions... and then I said to myself "why bother, since it has the same major compiler dependency issue" - better to try to fix the Linux code base, or start from there.

  • ComputerGuru 2 days ago

    It doesn’t, though? Gcc doesn’t even ship oob, the project itself uses clang, and almost all packages are also compiled with the system c/c++ toolchain.

    • sylware 2 days ago

      llvm is no less bad than gcc... come on...

jwildeboer 3 days ago

"As an experiment, I decided to migrate two hosts (each with about 10 VMs) of a client — where I had full control—without telling them, over a weekend." And that's where I draw the line. Abusing the trust of your customers is an absolute no-no in my book.

  • draga79 3 days ago

    Not an abuse at all. I've a contract with those clients, and I can move the VMs, change the services, etc. freely as long as it doesn't cost more than the amount we've previously set.

    Otherwise, I'd never dare to do something like that.

    • viraptor 3 days ago

      It's still something that's weird to do without notifying the customers. What if things were slower? What if bsd introduced some slight change in behaviour that messed up their data but they didn't know when/why things changed? Full control doesn't mean unexpected YOLO changes are welcome.

      • draga79 3 days ago

        As specified in several parts, the tests were conducted while maintaining and using BSD-based infrastructures for over 20 years. In some cases, Linux was used for various reasons (commercial, ideological, because they were inherited infrastructures managed by others, etc.), but the results were anticipated. I did not expect a performance degradation, and in any case, having set up the systems in a mirrored environment, there was always the possibility to revert in a few minutes.

        • viraptor 3 days ago

          Maybe it was all done properly. I hope so. But the post really doesn't show that, which I guess is what a few people here noticed. You can't claim something was tested for today's change just because you've been doing similar work for over 20 years. Things change - if it wasn't tested for that specific migration for that specific customer, then it wasn't really tested. I see people doing YOLO changes that way and thought it was worth mentioning explicitly.

    • blueflow 3 days ago

      If it is infrastructure that is critical to your company, you do not want your hoster to run experiments on it.

      It's also a legal nightmare for the hoster if something goes wrong.

  • appendix-rock 3 days ago

    What!? Changing implementation details is not “abusing trust”. Where would you even draw the line with this attitude!? Should I be informing my customers whenever I update the version of left-pad I have installed!?

    • draga79 3 days ago

      They're paying for a service. For example, if their Nextcloud is working and is stable, they don't care if it's running on Linux, FreeBSD, OmniOS, etc. It's in the contract we have - and they're fine with it.

    • Vegenoid 3 days ago

      It’s generally about the probability of issues occurring and the expected magnitude of those potential issues. For most people and setups, moving the infrastructure to a new operating system would score about as highly as possible in both of those metrics.

      As has been said, it varies case-by-case, and the OP believes they have a relationship with their client such that they didn’t need to provide notice for this, and they’re probably right. But most people doing this would send out a “maintenance is occurring on this date and some downtime may occur” email.

  • blenderob 3 days ago

    > Abusing the trust of your customers is an absolute no-no in my book.

    How do people on the Internet come to such random conclusions when there is no way you could have known the full terms of the contract between the author and their client?

  • Neil44 3 days ago

    Abusing trust is a bit strong, customers pay for a service and beyond a certain level of abstraction these obscure technical details (from their perspective) are not their concern. They're paying to have that abstracted.

  • rcbdev 3 days ago

    > Abusing the trust of your customers

    Yes. I also always let my customers sign off when I change the libraries I use. Completely sane approach.

  • bigfatkitten 3 days ago

    How is it an abuse? As long as the customer continues to receive the service they paid for, who cares?

    The major providers such as GCP, AWS etc share very few details about their underlying infrastructure with their customers. They change all sorts of things all the time.

  • lazyant 3 days ago

    I wouldn't call it an abuse of trust, but it's a bad idea to do a migration or any operation that can fail and cause downtime without warning the clients. Come Monday, if no servers are online, what do you say - "oops, I tried to change something and it didn't work"? That's fine only if they knew there was a migration over the weekend. On my end this situation would be a fireable offense, or close to it.

bzmrgonz 2 days ago

Awesome writeup, thanks for that - it really puts the BSDs in the perspective of today's tech industry. What's the BSD version of k8s? You mention the BSDs instead of k8s in the article.

nottorp 3 days ago

If I want to run a BSD as just a file server - so I guess zfs + samba + bonjour or whatever the discovery protocol is these days - which BSD should I try?

  • nanolith 3 days ago

    I'd recommend using FreeBSD as your first BSD. It has a more recent version of ZFS integrated in the kernel than NetBSD. OpenBSD does not have ZFS support; it's a direction they chose not to take for security and simplicity reasons.
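
    If it helps to see the shape of it (a sketch only - device names, the pool layout, and the exact Samba package name will differ on your system), a basic ZFS + Samba file server on FreeBSD is roughly:

        # create a mirrored pool and a dataset to share (ada0/ada1 are placeholders)
        zpool create tank mirror ada0 ada1
        zfs create -o compression=lz4 tank/share

        # install Samba from packages (the package name tracks the Samba version)
        pkg install samba419

        # define the share in /usr/local/etc/smb4.conf, then enable and start it
        sysrc samba_server_enable="YES"
        service samba_server start

    For discovery, avahi from ports can advertise the share over mDNS/Bonjour, but I'd treat that part as optional.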

phendrenad2 2 days ago

Is BSD really significantly more efficient than Linux? The anecdotes here seem almost unbelievable.

froh 3 days ago

the title of the talk is "Why (and how) we’re migrating many of our servers from Linux to the BSDs"

and that should be the title of this post too.

I like that the blog post shares the slides, not just the video.

  • dang 3 days ago

    Ok, done. Thanks!

riiii 3 days ago

[flagged]

  • dang 3 days ago

    Please don't do this here. Not that we don't appreciate a good old-fashioned flame, but that the long-term costs outweigh the benefits, and we want this site to survive long-term.

    • riiii 2 days ago

      That was sarcasm, not flame. But it's always a 60/60 chance that sarcasm will just blow up.

cyberax 3 days ago

> As an experiment, I decided to migrate two hosts (each with about 10 VMs) of a client — where I had full control—without telling them, over a weekend.

Yeah. That guy should not be allowed anywhere near the production workloads. "I solve problems", my ass.

  • draga79 3 days ago

    I've a contract with those clients, and I can move the VMs, change the services, etc. freely as long as it doesn't cost more than the amount we've previously set.

    Otherwise, I'd never dare to do something like that.

    And I'm not so crazy as to do such an operation without the appropriate tests and foundations. Of course, when I started, I had all the conditions to be able to do it, and I had already conducted all possible tests. :-)

  • codezero 3 days ago

    The client is paying for the VM. The underlying system is an abstraction. As long as service agreements weren’t interrupted I don’t see the problem. It sounds shady to say “without telling them,” because saying so implies they should have. I do a lot of optimizations for my customers without telling them, it’s not usually worth mentioning. I assume what they intended to convey was that this change caused no interruption of service so there was no need to contact or warn the customer.

    • kiwijamo 3 days ago

      This is similar to AWS S3 object storage -- AWS has over the years changed how they store their S3 data -- however as long as the API responds the same way every time it's all good. Personally I would probably do some A/B testing -- migrate half the workload and compare A to B to see if the new system is performing better before migrating the other half.

      • cyberax 3 days ago

        No, it's not. S3 has a very well defined API with easily measurable performance parameters, so AWS updates can make sure they don't make things worse.

        This is not possible with a client's workload unless you can actually test it. That's why AWS will warn you multiple times if they need to migrate your EC2 instance onto a different hardware node. Even if it is technically "better".

        Of course, the fact that clients trust their workloads to this guy probably means that there was nothing important there.

        • hedora 3 days ago

          The author also converted some of these VMs to jails, so I assume they have root on the VMs (and the customers want them to admin the host). That means they should be able to see the application level performance metrics.

          • cyberax 3 days ago

            Yeah, so he got surprised when his customer mentioned how their workloads became faster.

    • cyberax 3 days ago

      > The client is paying for the VM. The underlying system is an abstraction.

      The VM change was significant enough to alter the runtime of a task by several times. This is NOT a small, inconsequential change.

      You _have_ to warn your clients when you do stuff like this.

      • hedora 3 days ago

        The workload got several times faster. The customer’s only concern was that they might be accidentally running on a more expensive instance.

        In every system I’ve worked on, the agreement is in terms of an SLO. We never gave our customers any sort of expectation (or guarantee) that we wouldn’t suddenly wildly beat our SLO targets (and, in fact, we often did, due to routine upgrades).

        Having said that, certain customers dictate production freezes during launches, or only want to run stuff that’s been baked in production elsewhere for 3-6 months. Upgrading those customers behind their backs would be unacceptable, especially because they pay extra for a crappier but more stable setup.

        • cyberax 3 days ago

          > The workload got several times faster.

          And he found it out only by talking to the customer, indicating approximately zero testing from his side.

          It might have gone the other way easily.

        • tbrownaw 3 days ago

          If your customer panics at you over something you did, you might have done goofed.

erros 3 days ago

Ladies and gentlemen, this person solves problems. Let it be known.

  • knowitnone 2 days ago

    he gave himself a pat on the back