There are a lot of comments in here about desktops, but IMO why even discuss Linux on the desktop… 99.9999% of Linux deployments are not Arch installs on old Thinkpads. Immutable distros *are* becoming a de facto standard for server deployments, IoT devices, etc. They improve security, enable easy rollbacks, and give systems/hardware developers a single non-moving target to validate against…
There’s also been a ton of very advanced development in the space. You can now take bootable containers and use them to reimage machines and perform upgrades. Extend your operating system using a Dockerfile as you would your app images:
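A minimal sketch of that workflow, in the bootc/ostree "native container" style (the base image reference and package choice here are illustrative):

```dockerfile
# Derive a bootable OS image the same way you would an app image.
# Base image and package are illustrative examples.
FROM quay.io/fedora/fedora-bootc:41

# Layer extra packages into the OS image itself.
RUN dnf -y install tailscale && dnf clean all

# Enable services at build time; the result is one versioned,
# bootable, rollback-able unit.
RUN systemctl enable tailscaled
```

Machines then rebase onto the new image and reboot into it atomically.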
Another opinion: immutability is required to guarantee software integrity, but there is no need to make the whole system or "apps" the immutable units. NixOS also consists of "immutable units", but its granularity is similar to the packages of traditional Linux distros, each unit (Nix store item) representing a single program, library, or config file. This is a better tradeoff: it lets you change the system relatively easily (much more easily than in the immutable distros described here, and in many cases as easily as in a traditional Linux distro) while keeping the advantages of immutability.
You don’t understand what immutable distros are for. Imagine you need to upgrade 500k machines. Your options are either to run an agent that has to make the same changes 500k times and hopefully converges onto the same working state no matter the previous state of the machine it’s running on, or to pull a well-tested image that can be immediately rolled back to the previous image if something goes wrong.
Saying it’s just about integrity is like saying Docker images are just about integrity… they absolutely are not. They give you atomic units of deployment, the ability to run the same thing in prod as you do in dev, and many other benefits.
I think the point they’re getting at is that there are typically a lot of delta states between the pre-upgrade and post-upgrade states when using package managers. With immutable distros, the upgrade becomes an atomic operation rather than the incremental updates a package manager offers.
It also means you can completely leave out the package manager from the target machines, as it’s only used to bootstrap creation of the single deployable unit. Implementing that bootstrapping step is where nix and friends are helpful in this setup.
This sort of atomic change should be something the filesystem provides. I think it’s crazy that databases have had mechanisms for transactions and rollbacks since the 70s and they’re still considered a weird feature on a filesystem.
There’s all sorts of ways a feature like that could provide value. Adding atomicity to system package managers would be a large, obvious win.
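As a toy sketch of that atomicity argument (paths and version names invented for the demo), the primitive ostree-style systems build on is publishing a complete new tree and then switching to it with a single atomic rename(2):

```shell
#!/bin/sh
set -eu
# Two complete, fully-written system trees ("deployments").
mkdir -p /tmp/demo/deploy-a /tmp/demo/deploy-b
echo "v1" > /tmp/demo/deploy-a/release
echo "v2" > /tmp/demo/deploy-b/release

# "current" always points at a complete tree, never a half-written one.
ln -sfn deploy-a /tmp/demo/current

# Upgrading is one atomic rename: readers see either the old tree
# or the new one, never a mix of the two.
ln -sfn deploy-b /tmp/demo/current.tmp
mv -T /tmp/demo/current.tmp /tmp/demo/current

cat /tmp/demo/current/release   # prints "v2"
# Rolling back is the same swap, pointing at deploy-a again.
```

A traditional package upgrade mutates thousands of files in place; here there is exactly one pointer flip, which is why the rollback story is so much simpler.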
I totally see the advantages of immutable distros, particularly in a professional or cloud environment. Even as a hobbyist, I’d feel tempted to use them were it not for:
- Learning. Figuring out how to migrate a setup even to the most mainstream immutable distro (Fedora Silverblue) can take a while, and to niche distros like Talos even longer. However, a k8s-friendly setup with low customization requirements would help speed up the migration (but it requires more powerful machines).
- Long term support. Regular distros like Debian and AlmaLinux offer free 5 and 10 year support cycles which means maintenance can be done every 1 or 2 years. On the other hand, immutable distros would require much more frequent maintenance, once every 6 months. A weekend every 6 months is a sizeable part of my time budget for hobbies.
One aspect in which immutable distros have improved a lot is resource usage. They used to require significantly more disk space and have slightly higher minimum requirements than regular distros, but that doesn't seem to be the case anymore.
Intuitively, this seems backwards, because you could obviously 'mutate' (or mutilate) your Debian system until updates break. Isolating user changes should make updates easier, not harder. Also, macOS uses a 'sealed' system volume and updates are like butter there.
> Also MacOS uses a 'sealed' system volume and updates are like butter there.
Smooth as in "no data loss", sure. Smooth as in "supports the software I buy and use for long periods of time"? Most certainly not, even though half the software for Mac is statically linked. Windows and Linux arguably do better at keeping system functionality across updates even with their fundamental disadvantages.
While true, this isn't even slightly related to the os being "immutable" or not. Immutable-OS upgrades can and do break things - that's the reason it's even a thing. They just give you a reliable rollback.
> Long term support. Regular distros like Debian and AlmaLinux offer free 5 and 10 year support cycles which means maintenance can be done every 1 or 2 years.
What's maintenance in the context of immutable distros? Running "ujust upgrade"? That's done automatically in the background for my Aurora installation.
Yes, system upgrades are the main maintenance task. With some monitoring, security updates can be automated, but after system upgrades I must check manually that everything is working: incompatible configuration files, changes in 3rd-party repos, errors that surface a week after the upgrade, ...
There are also smaller maintenance tasks that are typically ad-hoc solutions to unsolved problems or responses to monitoring alerts. One of these ad-hoc routines was checking that logs don't grow too large, which used to be a problem on my first systemd CentOS, although not anymore.
PS: thanks for the Bluefin read, it made me discover devpod/devcontainers as an interesting alternative to compose files
You’re missing the whole point of an immutable distro. If you have a hobby project on a regular distro, you run apt-get update or whatever, it installs 200 packages and half of them run scripts that do some script specific thing to your machine. If something goes wrong you just bought yourself a week’s worth of debugging to figure out how to roll back the state.
If you update using an immutable distro, you rebase back on to your previous deployment or adjust a pin and you’re done. Immutable distros save you tons of time handling system upgrades, and the best part is you can experimentally change to a beta or even alpha version of your distro without any fear at all.
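On rpm-ostree systems, the rollback and pinning described above amount to a couple of commands. Shown as an illustrative session only, since it requires an rpm-ostree host (the Fedora ref name is an example):

```shell
rpm-ostree rollback            # boot the previous deployment next time
sudo ostree admin pin 0        # pin a known-good deployment (index 0)
rpm-ostree rebase fedora:fedora/41/x86_64/kinoite   # try another stream
```

Because each deployment is a complete image, any of these moves is reversible with another one-line command.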
I don't see how it helps in a cloud environment. With correct permissions users aren't making changes to live servers or even logging in, and if you want to roll out upgrades you can do that with OS images already.
In some aspects, I'd hope that there are potential benefits on the security side of things as well. Since the host FS is generally read only in these type of distros, there is the potential to make some security teams happy.
Exactly, and if it's immutable, you know they aren't. Not through SSH, and not through a vulnerability either. I assume there's also something you can hash to prove that you haven't been hacked.
I found Fedora is terrible at documentation, or at least around rpm-ostree they are. It has made learning more of a struggle than necessary. I think the basics are that there is some sort of container image builder that can work from a manifest, then some way to create a distro out of a container image. All of the content I can find is fragmented across many sites and not complete enough to actually use. Extremely frustrating.
I worked with Silverblue for a while; it is great, but they should use Distrobox instead of Toolbox. In Distrobox you can also encapsulate the home folder, and you can export a link to software running in a box to the outer system. The latter is pleasant, for example, with VS Code, which will only work properly when installed in a box.
I don't use Snap on my Ubuntu Desktop systems because I don't like apps secretly updating without my awareness, and because of the immense amount of additional disk space Snap uses.
Having said that, no, I don't see any usage of immutable Linux in my future.
I was about to ask why openSUSE Aeon when the normal Tumbleweed supports immutable mode where / is mounted read only, when I realized that they actually removed it in https://bugzilla.opensuse.org/show_bug.cgi?id=1221742
But I'll share my experience: I think an immutable / really is the way forward. Just the ability to roll back and boot using an older snapshot is great: I have had an update break the boot, but I have the option of running a single command to roll back while I investigate the issue. At the time the issue happened I was busy with life and I simply rolled back and used that version for three months before I had time to investigate.
Strictly speaking this does not require the current / to be mounted read-only; it merely requires that periodic bootable snapshots be taken and be available for use as a read-only /.
I want full control over my system. Immutability means leaving part of that to the OS developer. Definitely don't want that. Even though it's ostensibly better for security (though it's only really making one step in the kill chain harder, which is establishing persistence).
First, you don’t have full control of your system. Your system is running an unknown amount of code as binary firmware blobs even if you’re using a completely open source kernel. Hopefully you’re compiling every package yourself and not using pre-compiled binaries from your distribution’s repositories.
Second, immutable distros are primarily a distribution and update mechanism that vastly improves on the current model of running X number of package updates and scriptlets on every machine and hoping it works. There’s nothing that stops you from remounting a filesystem as rw, at least on any of the distributions that I know of. There are also plenty of stateful, rw holes for data and configuration on “immutable” distros.
I run Fedora Kinoite full time on my primary machine, and it's great. Obviously a bit of a learning curve, but if your workflow can be achieved using Flatpaks and Toolbox, it's fine. You can (and I do) layer packages but I have only 3 or so I need to layer (asusctl, supergfxctl and asusctl-rog-gui).
My only real gripe is that Firefox still ships as an rpm in the base image. I understand that they want to include a working web browser at all costs, and I don't think they can distribute the Flatpak version with the base image, but it's annoying that I have to mess with the image (removing Firefox) to then re-install the (more up to date) Flatpak.
And if you have an nvidia card and want to use cuda, Bazzite offers the same experience as Kinoite, but with nvidia drivers preinstalled out of the box.
I've been on Manjaro (arch based) for the past four years. It's mostly been fine but I've had to recover it from a botched Grub update once (an update randomly self destructed its configuration), which wasn't fun. But after four years it's in good shape, everything works, I run the latest kernel, etc. I have zero reason to wipe its installation and reinstall it again. Most other Linux distributions never lasted four years until I found a need to reinstall them or install some newer version.
And it's Linux so regardless of the distribution you'll be dealing with some amount of weird shit on regular basis. Has been true since I cycled home with a stack of slackware floppies almost thirty years ago. There's always configuration files to fiddle with, weird shit to install, etc.
But an immutable base OS makes a lot of sense and it's not mutually exclusive with that being updated regularly. Containerization is the norm for a lot of server side stuff. Effectively, I've been using immutable server operating systems for almost a decade. It's fine. All the stuff I care about runs in a container. And that container can run on anything that can run containers. Which is literally almost anything these days. I generally don't care much about the base OS aside from just running my containers hassle free on a server.
Containerization would make sense for a lot of end user software as well. IMHO things like flatpak and snap would be fine if they weren't so anal/flaky about "security". Because they are protecting a mutable OS from the evil foreign software. Running a bit of software that needs a GPU isn't a security problem, it's the main FFing reason I'm using the computer at all. Or own a GPU. This needs to be easy, not hard. And it shouldn't need a lot of manual overrides.
If I run a browser or something like Darktable, I usually have no reason to run them in crippled/unaccelerated mode. Sorry, that's not a thing. It's the main reason I bypass Flatpak on Manjaro for both packages. And I bypass pacman as well, because I trust Firefox to have a good release process. So, I use the tarball and it self-updates without unnecessary delay. Considering a lot of its updates are about security, that is exactly what I want.
Same with development tools. I use vs code and intellij. Both can self update. I have no need for a third party package manager second guessing those updates or dragging their heels getting those updates to me.
Your GNU/Linux distribution and its package manager acts like a shield against unwanted updates. If you rely on auto updates of VS Code or IntelliJ, you open yourself up to immediate damages inflicted by them. No maintainer with any kind of idea or vision stands between you and whatever MS and other tech giants push onto you.
What I like about the notion of an immutable OS is getting package maintainers to do their thing before it reaches my laptop in immutable form. Just put it in the next version of the immutable image and I'll get that when I next reboot. All the stuff that just needs to work should be tested and integrated before it hits my laptop. And it being immutable means no package manager can break it.
For the stuff I care about and use every day I like the direct connection to the developers. Mostly repackaging adds very little value. If somebody finds a bug, they should be reporting it upstream; not providing some workaround. Most mature projects are pretty good about releasing, packaging and testing their software. The only reason linux package managers exist is the gazillion ways there are to package things up for different distributions.
I still use containers for all that stuff that is not yet suitable for Flatpaks (or perhaps never will be), just via distrobox or toolbox, while leaving the host OS untouched.
Been running Kinoite for a good bit (~1 year). I'm a bit over it. Love the idea of immutability, but rebooting every time I get a new system image via rpm-ostree, which is often, is tiresome. Of course, I could update less frequently; alas, habits formed from years of using rolling releases.
I switched to EndeavourOS. Between flatpak and brew and mise, I have relatively well sandboxed applications. This gives me most of the benefits of the immutable OSes, although nowhere near as rigorous, obviously. For a technologist, though, it's fine.
The whole point of ostree is that your system image has a minimal amount of stuff in it, so you’re only rebooting for upgrades when there is a kernel update (which is essentially impossible to avoid rebooting for no matter what OS you’re using; even SerpentOS, which the other commenter linked, can’t do kexec updates).
You use something like distrobox to use a rolling release with regular package updates on the atomic core.
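For example, distrobox can put a rolling Arch userland on top of the atomic base (illustrative session; requires podman or docker on the host):

```shell
distrobox create --name dev --image docker.io/library/archlinux:latest
distrobox enter dev              # mutable rolling-release shell, shared $HOME
distrobox-export --app code      # expose an app in the box to the host menu
```

The base image stays untouched; all the churn of daily package updates happens inside the container.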
I understand the point of it. I’m enthusiastic about it. It was my daily driver at home for more than a year. In the end, the pain didn’t outweigh the benefits for me.
They just hit their first alpha release, but it has been under development for years already. They focus on rust-based tooling, so even their coreutils are the rust versions instead of GNU. I read the alpha announcement yesterday, and might give it a spin later next year.
So far I've been very happy with Kinoite. I upgrade the base system once a week, but everything is installed in my Arch based container, so updates are fast and do not require a reboot.
On my workstation I use the Aurora Linux, a spin of Kinoite with extra tools such as tailscale added to the base image. On that machine I haven't needed to use rpm-ostree at all.
But NixOS is immutable in a very different way from all the mentioned distros, which are focused on containers, isolation, and layers; maybe the author doesn't consider it to be in the same category?
Personally, I've decided that NixOS is not for me. The concept is great, but the actual experience seems to be held back by Nix (the language and the tool) being hard to understand and debug.
It uses SquashFS images and layers them on top of each other. You can choose to save your modifications in a new image, or discard them. E.g. you can run Puppy Linux from a CD-R (write-once) by appending all your changes.
I think that's a great model for immutability, but AFAIK Puppy Linux doesn't have the convenient tools to manage these snapshots, switch between them, roll back and such, and they don't seem to be going in that direction. (I used Puppy Linux as my default system for a while, but I lost touch with them and I don't know how they're doing now.)
I think technically NixOS is considered an atomic distro rather than immutable. You could mount the store rw and modify it, though you really shouldn't except in extreme cases.
Is this how embedded folks make sure that a device starts with exactly the same installation every time a machine is booted?
I wonder why embedded products like Nvidia Jetson do not come with an immutable Linux (and instead are based on Ubuntu which updates itself on every opportunity via apt and snap and whatnot).
It’s common for hardware vendors to provide a working system for demonstration purposes so you can evaluate the hardware without having to learn an immutable OS toolkit. Then when you pick hardware you also do the bring up work to get the kernel compiling from source and integrated with your userspace of choice. At that point you’ll switch to an immutable system.
Hardware vendors in this space can’t be trusted, so you need to make sure the board is actually fit for purpose. Outside of the hobbyist space you have to be really careful. There are often business objectives that rely on the board working a certain way.
This is nice in theory but the amount of vendor-specific libraries can be quite large (e.g. Nvidia's CUDA, libcudnn etc.), which you then have to get working on your new OS.
There are lots of companies using NixOS for this, BalenaOS (Yocto + Docker), or building their own bespoke tooling on top of a minimal Linux setup.
Although many places start with Ubuntu or Debian in my experience it’s common to invest a lot of time and energy in getting out of that unmanaged setup once the company scales.
The hardware usually comes with vendor-specific libraries (e.g. cuda in the case of nvidia) which are based on a specific version of libc, so then you will have to build your entire alternative OS around that version also.
A new breed of distros for sure, but how immutable is it, really? What I'm interested in knowing is the mechanisms and techniques in place for making sure no one can change any core components of the system. It's just like randomness: at first it sounds super secure, but we all know nothing is truly random.
Around 2000 I made a firewall-oriented Linux distro that made use of immutable bits and SELinux and various other security hardening. The bulk of the filesystem was immutable, and the system was then put into multi-user mode, where the kernel enforced that the filesystem couldn't go back to mutable.
During boot time, a directory was checked for update packages, and if the public key signature of the package matched, the updates would be applied before the filesystem went into immutable mode. This update directory was one of the few mutable directories on the system.
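A simplified sketch of that boot-time gate (the real system verified a public-key signature, e.g. with gpg; this stand-in uses a sha256 manifest so the sketch is self-contained, and all paths are invented):

```shell
#!/bin/sh
set -eu
UPDATES=/tmp/fw-updates        # the one mutable "drop box" directory
mkdir -p "$UPDATES"

# An update arrives as a payload plus its expected digest
# (in the real system: a detached public-key signature).
printf 'patch-contents' > "$UPDATES/update.pkg"
sha256sum "$UPDATES/update.pkg" > "$UPDATES/update.pkg.sha256"

# At boot, before the filesystem is sealed immutable:
# apply the update only if verification succeeds.
if sha256sum -c "$UPDATES/update.pkg.sha256" >/dev/null 2>&1; then
    echo "update verified: apply, then seal the filesystem"
else
    echo "verification failed: ignore update" >&2
fi
```

The key property is ordering: verification and application happen in the narrow window before the kernel locks the filesystem down for multi-user mode.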
I've been running OpenWRT on my home router since ca 2017, and I found LuCI both quite intuitive, and immensely powerful. Simple things are simple, complex or difficult things are possible, with just clicking around.
Unfortunately if something can't be done with LuCI, you're pretty much on your own - the documentation for the internals is scarce and expects you to be an expert/developer.
I've been meaning to fully commit to GNU Guix one of these days, now that Plasma has fully landed. I've tried Fedora Kinoite in the past, but I can't handle Plasma without Oxygen. I know that Kinoite has some kind of a way to force packages to be installed into the base system, but it kinda feels like it defeats the purpose.
Configuration as code has come a long way too, along with these immutable OSes. For example, I do not miss messing with preseed or Kickstart files (I preferred working with Kickstart files). I find Ignition/Butane much easier to work with, and it is a core part of configuring the OS.
I like immutable distros. What I do not like is that developers and maintainers don't give admins and power users the ability to build an immutable core themselves. This removes choice and learning experiences for the customer/user/admin.
For CoreOS, you can create immutable images as easily as you can create Docker containers: https://coreos.github.io/rpm-ostree/container/
You can later just point the installer to your OCI image and it will just work
My first immutable distro was Illumos-based SmartOS. Everything the system needs is read from a read-only USB stick and run from RAM. I wish more distros worked that way. A recent submission on here gives me hope: https://news.ycombinator.com/item?id=42428722
I suppose TinyCore Linux in its default configuration also counts.
I’m honestly surprised immutable distros are so controversial. I get why people choose not to use them, but I don’t know why I see so much hate towards them in a lot of Linux communities.
SteamOS is immutable and incredibly successful. macOS (not Linux of course) is also immutable and very successful.
As long as the OSes have a concept of overlays, an immutable system rarely gives up much in the way of flexibility either.
macOS hides its immutability pretty well, with fine grained image immutability and keeping the behavior mostly unchanged.
Immutable Linux distros, especially NixOS, pull all kinds of shenanigans (ELF patching, to begin with) to achieve what they want to achieve, with complete disregard for how they change the behavior and structure of the system. What you get is something that resembles a Linux distro, but with monkey patches everywhere and a thousand paper cuts.
When a Linux distro can become transparently immutable, then we can talk about end user adoption en masse. Other than that, immutable distros are just glorified containers for the cloud and enthusiast applications, from my perspective.
> but I think that’s true of any aspect of a distro.
That's true, but these problems have been worked on for quite a long time, and the core ethos of a Linux distribution is being able to be on both sides of the fence (i.e. as a user and as an administrator who can do anything).
For example, in macOS, you haven't been able to customize the core of the operating system for an eternity, and it's now sent to you as a binary delta image of the OS; you have no way to build or tweak those layers. This improves the ergonomics a ton, because you're not allowed to touch that part to begin with.
However, with Linux, you need to be able to change anything and everything on the immutable part, and this requires a new philosophy and set of tools. Adding esoteric targets like "I really need to be able to install two slightly different compilations of the same library at the exact same version" creates hard problems.
When these needs meet the mentality of "this is old, let's bring it down with sledgehammers; they didn't know anything, they're old and wrinkly people", we get reinvented wheels and returns to tried and working mechanisms (e.g.: oh, dynamic linking is a neat idea, maybe we should try that!).
Immutable systems are valuable, but we need less hype and more sane and down-to-earth development. I believe if someone can sit down and design something immutable with consideration to how a POSIX system works and what is reasonable and what's not, a good immutable system can be built. Yes it won't be able to do that one weird trick, but it'd work for 99% of the scenarios where an immutable system would make sense.
> However, with Linux, you need to be able to change anything and everything on the immutable part, and this requires a new philosophy and set of tools.
Taking macOS as a North Star for a successful immutable OS: most people don’t need to be, and shouldn’t be, touching the immutable parts. I know the assumption is that you need to do that on Linux, but I don’t think a successful distro should require most casual users to go anywhere near that level of access.
If they do for some reason on Linux specifically, why would overlays not be sufficient? They achieve the same results, with minor overhead and significant reliability gains.
> I believe if someone can sit down and design something immutable with consideration to how a POSIX system works and what is reasonable and what's not, a good immutable system can be built.
But someone has done that. macOS is POSIX. SteamOS just works for most people.
I think there are a couple of problems with taking macOS as the so-called North Star of the immutable OSes.
First of all, macOS doesn't have package management for the core OS. You can't update anything brought in as part of the OS, from libraries to utilities like zsh, perl, even cp. .pkg files bring applications into the mutable parts of the operating system and can be removed. However, OS updates are always brought in as images and applied as deltas.
In Linux, you need a way to modify this immutable state atomically, package by package, even file by file. That'd work if you "unseal, update, seal" on an ext4 FS. If you want revisions, you need the heavier Btrfs instead, which was not actually designed for single-disk systems to begin with.
You can also use overlays, but considering the lifetime of a normal Linux installation is closer to a decade (for Debian, it can be even eternal), overlays will consume tons of space in the long run. So you need to be able to flatten the disk at some point.
On the other hand, NixOS and Guix are obsessed with reproducibility (which is not wrong, but they're not the only and true ways to achieve that), and make things much more complicated. I have no experience with RPM-OSTree approach.
So, if you ask me we need another, simpler approach to immutable Linux systems, which doesn't monkey patch tons of things to make things work.
> But someone has done that. macOS is POSIX.
Yes, but it's shipped as a set in stone monolithic item, with binary delta updates which transform it to another set in stone item. You can't customize the core OS. Can you? Even SIP sits below you, and has the capability to stop you with an "Access denied." error even if you're root on that system. It's a vertically integrated silicon to OS level monolithic structure.
> SteamOS just works for most people.
SteamOS is as general purpose as a PlayStation OS or a car entertainment system. It's just an "installable embedded OS". Their requirements are different. Building and maintaining a mutable distro is nightmare fuel already; making it immutable yet user-customizable is a challenge on another level.
It's not impossible, it's valuable, but we're not there yet. We didn't find the solutions, heck even can't agree on requirements and opinions yet.
Even though macOS won’t let you replace the actual binaries that ship with the OS, you can still replace the binaries that get resolved when they’re called via brew/nix/macports etc.
I again disagree with your assertions that you need to replace the OS contents. You just need to be able to reflow them to other ones. That’s the macOS way, the flatpak way etc..
I think the issue is that you are claiming that you MUST be able to do these things on Linux, and I’d push back and say no you do not.
And your comparison of SteamOS to a console or in car entertainment is flat out incorrect. Have you never booted it into desktop mode? For 90% of users, what are you expecting they need to do in that mode that it’s not “general purpose” enough to do?
Yes, it’s not impossible. It’s been done multiple times, successfully.
For a couple of years now, macOS has had a read-only system volume image, and it mounts an RW data volume on top of it, similar to how overlayfs works on Linux. That system volume isn't modifiable. If it is modified, the system won't boot.
So I'd say it's a little bit immutable.
Uh... yeah it is. Have you ever switched it into desktop mode? I haven't pushed my Steam Deck as hard as my daily driver Linux system, but I've done all sorts of fun things on it like run a web server and write new Python scripts directly on the device. You can hook up a keyboard and mouse and monitor and use it like any other desktop Linux environment. It's basically just an Arch distro with KDE and some extra stuff on top to make it easy for people to run games.
/ being mounted ro with /etc and /home being mutable... kind of ruins the point? like you can still mutate whatever, you just have to install it as a user, and if you have overlays then what gain is there?
The advantage is you always have a core system that you can revert into by removing any problematic overlays, and you can always quickly verify that a system is in a verified and unmodified state.
This is how macOS works with SIP, and how it handles rapid response updates for example.
It greatly reduces the ability for user space to compromise the system.
For a long time Windows has dominated PCs, and all the software that run on it come packaged in their own little installers and with their own little updaters to manage versions, leaving users free to not care and focus on just using them. For us old desktop users, Linux's "enforced" centralization of software packaging and distribution is just too divergent of a concept to get immediately used to. Immutable distros take the restrictions even further, and they make one think you might as well just have an Android/Chromebook at that point.
I switched to Ubuntu after witnessing Windows 11 and am seeing there's now yet another confusing delivery channel (snap) added on top of what was already an overcomplicated system (apt). At least it still allows single installers (.deb files) so that works for now.
yeah, it has lots of advantages, and that's why it was the default decades ago for everything (Windows, BSD, etc).
then people had lots of trouble installing different software or updating for security issues. so we invented package managers and took all the time in the world to make the base as small as possible.
its advantages still make sense in some places, like modems with old flash memory. OpenWrt is a static base with overlays. it still carries the same downsides, but because of the particular aspects of the hardware it makes sense there.
it would make sense for tech-illiterate end users (hence Android, iOS, ChromeOS, Wii, macOS to a degree, etc) and containers (which already have infinite ways to convert from packages to a static image). but anywhere else it will literally harm the distro's ability to evolve and adapt to software changes. imagine every change like systemd or a new browser or wm having to be atomic.
now people have forgotten decades of history. and it's so tiring.
I don't think I understand any of your objections.
When was Windows ever immutable in the sense of current immutable Linux distros? I wasn't able to find any reference to this ever being the case.
What do package managers and making the base as small as possible have to do with immutable distros? Package managers still exist, and the base is pretty much the same size as the non-immutable version of the same distro.
Why do immutable distros make more sense on modems with old flash memory?
How does being immutable harm the distro's ability to evolve?
Either I'm not understanding your position at all, or you have a very different understanding of "immutable" than I do (after using Kinoite as my daily driver for a year).
I have been running EndlessOS for a while now and I love it: it's a bit like going back to the home computer days when the OS resided in a ROM and you didn't really have to care.
The term "stability" should not be used outside of the major Linux distributions such as Debian and Fedora. For a distribution to be stable over the long term it needs a large enough community, a stable governance model, and a reasonable build system where one maintainer cannot take unilateral action without it being discovered.
A cute name and a university student somewhere does not constitute stability, no matter how good the intentions. It's not a bad thing, but you have to know what you get yourself into. Most of the distributions listed in the article belong to the latter category.
Immutable systems are great for embedded, network equipment, appliances and industrial applications, and specialized distributions for those applications have largely been immutable for a long time already. Nobody really wants an immutable system for their main desktop, because working is all about mutating state. You may write documents, save bookmarks, install plugins, or try new software. Those are the things truly immutable systems like kiosks want to disallow.
So in order to make for usable software these desktops generally split your system into a mutable user part and an immutable system part. That's basically how unix-like desktops have worked since forever. Stuff in /bin and /sbin is only changed by the package manager. So the fit is quite good, but it also means it really isn't as useful as it's made out to be. That's why most people don't use them.
The use case is mostly for rolling back updates, not really running from readonly filesystems or preventing change in other ways, but most distributions already do that. You can roll back updates with both dnf and apt. It's not perfect and doesn't always work, but mainly from a lack of testing. With snapshots it's pretty much infallible though.
My recommendation if you really want something that "just works" is to install one of the major and time tested distributions. Pick Debian if you don't know what to choose. And then learn how to use it. Anything these tiny experimental distributions offer, such as running off read only filesystems or rebuilding it for your brand of cpu, or testing a new desktop environment, is likely possible in Debian too. With the added benefit of it being around in 20 years. And the core distribution is less likely to break in some way because some maintainer found inspiration for something. As long as you don't run untrusted stuff as root, stay out of the system files, and generally let the package manager do its job, you're going to be fine.
What I would like to see a desktop distribution work on is basically the same things as 20 years ago which still isn't really done outside some exploratory work (probably because it's actually hard):
- Packages on a user level where it is easy to install new stuff without touching the system area. More tricky in practice than in theory because of state changes to configuration files, saved file formats etc. But some should be easier than others.
- Desktop software service accounts, just like we do for server software. Mostly relevant for larger packages such as Firefox, Libre Office, movie players.
- Integration with popular third party package managers from the language ecosystems. Most language packages are anemic. All the powers that a package manager gives, reporting, listing untracked files, listing changes, rolling back updates, should be available for them by integrating directly with them. Package definitions should be able to be imported without manual work.
- Package managers should have at least some knowledge of an application's access patterns to help with application confinement. Still today things like SELinux policies are packaged as separate entities and managed with external tools, which brings a lot of complexity since all possible configurations must be supported there. A package manager knows more about the system and could handle these files. Confining desktop software is a usability problem more than a technical one, but it is clear that desktop environments need something to build on to make it practical.
$ apt search btrfs apt
Sorting... Done
Full Text Search... Done
apt-btrfs-snapshot/noble,noble 3.5.7 all
Automatically create snapshot on apt operations
There’s a lot of comments in here about desktops, but IMO why even discuss Linux on the desktop… 99.9999% of Linux deployments are not Arch installs on old Thinkpads. Immutable distros *are* becoming a de-facto standard for server deployments, IoT devices, etc. They improve security, enable easy rollbacks, validation of a single non-moving target for systems/hardware developers…
There’s also been a ton of very advanced development in the space. You can now take bootable containers and use them to reimage machines and perform upgrades. Extend your operating system using a Dockerfile as you would your app images:
https://github.com/containers/bootc
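For example, a bootc Containerfile (the image name and packages here are illustrative; check the bootc docs for the supported base images) looks just like an application image build:

```dockerfile
# Illustrative only: extend a bootable OS image the same way you
# would an application image.
FROM quay.io/fedora/fedora-bootc:41

# Layer extra packages into the OS image itself
RUN dnf -y install tailscale htop && dnf clean all

# Ship site-specific configuration as part of the image
COPY motd /etc/motd
```

Machines then pull and atomically switch to the new image (e.g. via `bootc upgrade`), with the previous image kept around for rollback.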
Another opinion: immutability is required to guarantee software integrity, but there is no need to make whole system or "apps" immutable units. NixOS also consists of "immutable units", but its "granularity" is similar to packages of traditional Linux distros, each unit (Nix store item) representing single program, library or config file. This provides a better tradeoff, allowing to change system relatively easily (much easier than in immutable distros described here, and in many cases as easy as in traditional Linux distros) while having advantages of immutability.
Immutable distros are a good fit for very mature Infrastructure as Code setups. They make drift from the original config impossible.
You don’t understand what immutable distros are for. Imagine you need to upgrade 500k machines, and your options are to either run an agent that has to make the same changes 500k times and hopefully converges onto the same working state no matter the previous state of the machine it's running on, or pull a well-tested image that can be immediately rolled back to the previous image if something goes wrong.
Saying it’s just about integrity is like saying Docker images are just about integrity… they absolutely are not. They give you atomic units of deployment, the ability to run the same thing in prod as you do in dev, and many other benefits.
Are immutability's benefits not the "integrity" of a system? This seems pedantic.
The comment was reaffirming immutability's benefits, against the previous comment which said traditional packaging provides a better tradeoff.
> and hopefully converges onto the same working state no matter the previous state of the machines its running on
Isn't that exactly the point of NixOS?
I think the point they’re getting at is that there are typically a lot of delta states between the pre-upgrade and post-upgrade states when using package managers. With immutable distros, the upgrade becomes more of an atomic operation than what is offered by more incremental package manager updates.
It also means you can completely leave out the package manager from the target machines, as it’s only used to bootstrap creation of the single deployable unit. Implementing that bootstrapping step is where nix and friends are helpful in this setup.
This sort of atomic change should be something the filesystem provides. I think it’s crazy that databases have had mechanisms for transactions and rollbacks since the 70s and they’re still considered a weird feature on a filesystem.
There’s all sorts of ways a feature like that could provide value. Adding atomicity to system package managers would be a large, obvious win.
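The atomicity being asked for here is, at its core, the old rename(2) trick that image-based systems like ostree and Nix build on: prepare the complete new tree next to the old one, then flip a single symlink. A minimal sketch (assumes GNU coreutils for `mv -T`):

```shell
# Build the new "generation" beside the old one, then swap a symlink
# with a single atomic rename. Readers see either the old tree or the
# new one, never a half-upgraded mix.
dir=$(mktemp -d)
cd "$dir"
mkdir gen-1 gen-2
echo "old" > gen-1/release
echo "new" > gen-2/release

ln -s gen-1 current          # the "booted" system points at generation 1
ln -s gen-2 current.tmp      # stage a pointer to the new generation...
mv -T current.tmp current    # ...and swap it in one rename(2) call

cat current/release          # prints "new"
```

Rolling back is the same operation in reverse: point the symlink back at gen-1.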
nixos updates are completely atomic.
I totally see the advantages of immutable distros, particularly in a professional or cloud environment. Even as a hobbyist, I would feel tempted to use immutable distros if it were not for:
- Learning. Figuring out how to migrate a setup even to the most mainstream-like immutable distro (Fedora Silverblue) can take a while, and to niche distros like Talos even longer. However, a k8s-friendly setup with low customization requirements would help to speed up the migration (but it requires more powerful machines).
- Long term support. Regular distros like Debian and AlmaLinux offer free 5 and 10 year support cycles which means maintenance can be done every 1 or 2 years. On the other hand, immutable distros would require much more frequent maintenance, once every 6 months. A weekend every 6 months is a sizeable part of my time budget for hobbies.
One aspect in which immutable distros have improved a lot is resource usage. They used to require significantly more disk space and have slightly higher minimum requirements than regular distros, but that doesn't seem to be the case anymore.
> Long term support
Intuitively, this seems opposite, because you could obviously 'mutate' (or mutilate) your Debian system until the updates break. Isolating user changes should make updates easier, not harder. Also MacOS uses a 'sealed' system volume and updates are like butter there.
> Also MacOS uses a 'sealed' system volume and updates are like butter there.
Smooth as in "no data loss", sure. Smooth as in "supports the software I buy and use for long periods of time" is most certainly not true, even despite half the software for Mac being statically linked. Windows and Linux arguably do better at keeping system functionality across updates even with their fundamental disadvantages.
While true, this isn't even slightly related to the os being "immutable" or not. Immutable-OS upgrades can and do break things - that's the reason it's even a thing. They just give you a reliable rollback.
> Long term support. Regular distros like Debian and AlmaLinux offer free 5 and 10 year support cycles which means maintenance can be done every 1 or 2 years.
What's maintenance in the context of immutable distros? Running "ujust upgrade"? That's done automatically in the background for my Aurora installation.
Also, they're working on CentOS based LTS versions of Bluefin: https://universal-blue.discourse.group/t/call-for-testing-bl...
Yes, system upgrade is the main maintenance task. With some monitoring, security updates can be automated but after system upgrades I must check manually that everything is working. E.g. incompatible configuration files, changes in 3rd party repos, errors that surface one week after the upgrade, ...
There are also smaller maintenance tasks that are typically ad-hoc solutions to unsolved problems or responses to monitoring alerts. One of these ad-hoc routines was checking that logs do not grow too large, which used to be a problem in my first systemd CentOS, although not anymore.
PS: thanks for the Bluefin read, it made me discover devpod/devcontainer as an interesting alternative to compose files
> the advantages of immutable distros
The high availability of ChromeOS is a good example of these advantages in a business or educational context.
You’re missing the whole point of an immutable distro. If you have a hobby project on a regular distro, you run apt-get update or whatever, it installs 200 packages and half of them run scripts that do some script specific thing to your machine. If something goes wrong you just bought yourself a week’s worth of debugging to figure out how to roll back the state.
If you update using an immutable distro, you rebase back on to your previous deployment or adjust a pin and you’re done. Immutable distros save you tons of time handling system upgrades, and the best part is you can experimentally change to a beta or even alpha version of your distro without any fear at all.
I don't see how it helps in a cloud environment. With correct permissions, users aren't making changes to live servers or even logging in, and if you want to roll out upgrades you can do it with OS images already.
Maybe it would help in a datacenter
In some aspects, I'd hope that there are potential benefits on the security side of things as well. Since the host FS is generally read-only in these types of distros, there is the potential to make some security teams happy.
Immutable distros typically use a declarative configuration that is easier to manage with terraform
Exactly, and if it's immutable, you know they aren't. Not through SSH, and not through a vulnerability either. I assume there's also something you can hash to prove that you haven't been hacked.
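For a rough idea of what such hashing could look like (a toy sketch, not what mechanisms like dm-verity or ostree's content addressing actually do): hash every file in the read-only tree in a deterministic order, hash the list of hashes, and compare against a digest recorded at deploy time.

```shell
# Toy integrity check for a read-only tree: hash every file in a
# deterministic order, then hash the hashes into one digest.
tree_digest() {
  (cd "$1" && find . -type f -print0 | sort -z | xargs -0 sha256sum) \
    | sha256sum | cut -d' ' -f1
}

root=$(mktemp -d)
echo "immutable payload" > "$root/libfoo.so"
baseline=$(tree_digest "$root")    # record this digest at deploy time

echo "tampered" >> "$root/libfoo.so"
[ "$(tree_digest "$root")" = "$baseline" ] || echo "tree was modified"
# prints: tree was modified
```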
I found Fedora is terrible at documentation, or at least around rpm-ostree they are. It has made learning more of a struggle than necessary. I think the basics are that there is some sort of container image builder that can work from a manifest, then some way to create a distro out of a container image. All of the content I can find is fragmented across many sites and not complete enough to actually use. Extremely frustrating.
Yea the docs on the Fedora side are rough. I would help but I don’t know enough because the learning was so hard.
I worked a while with Silverblue; it is great, but they should use Distrobox instead of Toolbox. In Distrobox one can also encapsulate the home folder, and one can export an application running in a box to the outer system. The latter is pleasant, for example, with VS Code, which will only work properly when installed in a box.
I don't use Snap on my Ubuntu Desktop systems because I don't like apps secretly updating without my awareness and also for the immense amount of additional disk space used by Snap.
Having said that, no, I don't see any usage of immutable Linux in my future.
>I don't like apps secretly updating without my awareness
Any particular reason?
> I don't use Snap on my Ubuntu Desktop systems because I don't like apps secretly updating without my awareness
https://snapcraft.io/docs/managing-updates#p-32248-pause-or-...
You can also block the updater's internet access by adding this to your /etc/hosts file:
And for other updates: use at your own risk, of course.
Unfortunately that creates a choice between an app that updates in an aloof manner or allowing it to exist in an insecure, un-updated state.
I was about to ask why openSUSE Aeon when the normal Tumbleweed supports immutable mode where / is mounted read only, when I realized that they actually removed it in https://bugzilla.opensuse.org/show_bug.cgi?id=1221742
But I'll share my experience: I think an immutable / really is the way forward. Just the ability to roll back and boot using an older snapshot is great: I have had an update break the boot, but I have the option of running a single command to roll back while I investigate the issue. At the time the issue happened I was busy with life and I simply rolled back and used that version for three months before I had time to investigate.
Strictly speaking this does not require the current / to be mounted read only, but merely requires periodic bootable snapshots be taken and these are available to be used as a read-only /.
For me: no.
I want full control over my system. Immutability means leaving part of that to the OS developer. Definitely don't want that. Even though it's ostensibly better for security (though it's only really making one step in the kill chain harder, which is establishing persistence).
First, you don’t have full control of your system. Your system is running an unknown amount of code as binary firmware blobs even if you’re using a completely open source kernel. Hopefully you’re compiling every package yourself and not using pre-compiled binaries from your distribution’s repositories.
Second, immutable distros are primarily a distribution and update mechanism that vastly improves on the current model of running X number of package updates and scriptlets on every machine and hoping it works. There’s nothing that stops you from remounting a filesystem as rw, at least on any of the distributions that I know of. There are also plenty of stateful, rw holes for data and configuration on “immutable” distros.
He's talking about the management of his system, not the development of his system.
>it's only really making one step in the kill chain harder, which is establishing persistence
Yep. An attacker can just surreptitiously add a line to your .bashrc instead of modifying the base OS.
I run Fedora Kinoite full time on my primary machine, and it's great. Obviously a bit of a learning curve, but if your workflow can be achieved using Flatpaks and Toolbox, it's fine. You can (and I do) layer packages but I have only 3 or so I need to layer (asusctl, supergfxctl and asusctl-rog-gui).
My only real gripe is that Firefox still ships as an rpm in the base image. I understand that they want to include a working web browser at all costs, and I don't think they can distribute the Flatpak version with the base image, but it's annoying that I have to mess with the image (removing Firefox) to then re-install the (more up to date) Flatpak.
And if you have an nvidia card and want to use cuda, Bazzite offers the same experience as Kinoite, but with nvidia drivers preinstalled out of the box.
A cuda dev environment is a 'toolbox create' away
I've been on Manjaro (arch based) for the past four years. It's mostly been fine but I've had to recover it from a botched Grub update once (an update randomly self destructed its configuration), which wasn't fun. But after four years it's in good shape, everything works, I run the latest kernel, etc. I have zero reason to wipe its installation and reinstall it again. Most other Linux distributions never lasted four years until I found a need to reinstall them or install some newer version.
And it's Linux so regardless of the distribution you'll be dealing with some amount of weird shit on regular basis. Has been true since I cycled home with a stack of slackware floppies almost thirty years ago. There's always configuration files to fiddle with, weird shit to install, etc.
But an immutable base OS makes a lot of sense and it's not mutually exclusive with that being updated regularly. Containerization is the norm for a lot of server side stuff. Effectively, I've been using immutable server operating systems for almost a decade. It's fine. All the stuff I care about runs in a container. And that container can run on anything that can run containers. Which is literally almost anything these days. I generally don't care much about the base OS aside from just running my containers hassle free on a server.
Containerization would make sense for a lot of end user software as well. IMHO things like flatpak and snap would be fine if they weren't so anal/flaky about "security". Because they are protecting a mutable OS from the evil foreign software. Running a bit of software that needs a GPU isn't a security problem, it's the main FFing reason I'm using the computer at all. Or own a GPU. This needs to be easy, not hard. And it shouldn't need a lot of manual overrides.
If I run a browser or things like Darktable, I usually have no reason to run them in crippled/unaccelerated mode. Sorry, that's not a thing. It's the main reason I bypass flatpak on Manjaro for both packages. And I bypass PAC as well because I trust Firefox to have a good release process. So I use the tarball and it self-updates without unnecessary delay. Which, considering a lot of its updates are about security, is exactly what I want.
Same with development tools. I use vs code and intellij. Both can self update. I have no need for a third party package manager second guessing those updates or dragging their heels getting those updates to me.
Your GNU/Linux distribution and its package manager acts like a shield against unwanted updates. If you rely on auto updates of VS Code or IntelliJ, you open yourself up to immediate damages inflicted by them. No maintainer with any kind of idea or vision stands between you and whatever MS and other tech giants push onto you.
What I like about the notion of an immutable OS is getting package maintainers to do their thing before it reaches my laptop in immutable form. Just put it in the next version of the immutable image and I'll get that when I next reboot. All the stuff that just needs to work should be tested and integrated before it hits my laptop. And it being immutable means no package manager can break it.
For the stuff I care about and use every day I like the direct connection to the developers. Mostly repackaging adds very little value. If somebody finds a bug, they should be reporting it upstream; not providing some workaround. Most mature projects are pretty good about releasing, packaging and testing their software. The only reason linux package managers exist is the gazillion ways there are to package things up for different distributions.
I still use containers for all that stuff that is not yet suitable for flatpaks (or perhaps never will be), just via distrobox or toolbox while leaving the host OS untouched
Did you consider Qubes OS? It's the same, except more secure/isolated and better UX than containers.
Been running Kinoite for a good bit (~1 year). I'm a bit over it. Love the idea of immutability, but rebooting every time I get a new system image via rpm-ostree, which is often, is tiresome. Of course, I could update less frequently; alas, habits formed from years of using rolling releases.
I switched to EndeavourOS. Between flatpak and brew and mise, I have relatively well sandboxed applications. This gives me most of the benefits of the immutable OSes, although nowhere near as rigorous, obviously. For a technologist, though, it's fine.
The whole point of ostree is that your system image has a minimal amount of stuff in it, so you're only doing upgrades when there is a kernel update (which is essentially impossible to avoid rebooting for no matter what OS you're using; even Serpent OS, which the other commenter linked, can't do kexec updates).
You use something like distrobox to use a rolling release with regular package updates on the atomic core.
I understand the point of it. I’m enthusiastic about it. It was my home daily drive for more than a year. In the end, the pain didn’t outweigh the benefits for me.
You might be interested in Serpent OS, which offers immutability but without reboots after each upgrade.
https://serpentos.com/
They just hit their first alpha release, but it has been under development for years already. They focus on rust-based tooling, so even their coreutils are the rust versions instead of GNU. I read the alpha announcement yesterday, and might give it a spin later next year.
So far I've been very happy with Kinoite. I upgrade the base system once a week, but everything is installed in my Arch based container, so updates are fast and do not require a reboot.
On my workstation I use the Aurora Linux, a spin of Kinoite with extra tools such as tailscale added to the base image. On that machine I haven't needed to use rpm-ostree at all.
https://getaurora.dev/
Thanks for pointing me to Serpent!
I gave Aurora a quick spin before going back to Endeavour. Didn’t work well for me.
I think we're closer to the time where live updates are more feasible if you aren't changing the kernel although a log in/out might be required.
Isn't NixOS immutable? If so, surprised it wasn't mentioned.
I certainly consider it to be immutable.
But NixOS is immutable in a very different way to all the mentioned distros, which are focused on containers, isolation, and layers; maybe the author doesn't consider it to be in the same category?
Personally, I've decided that NixOS is not for me. The concept is great, but the actual experience seems to be held back by Nix (the language and the tool) being hard to understand and debug.
Tried Guix?
I think openSUSE also calls their rpm+btrfs snapshots solution immutable, but AFAIK it doesn't use containers.
Nor have they mentioned Puppy Linux.
It uses SquashFS images and layers them on top of each other. You can choose to save your modifications in a new image, or discard them. E.g. you can run Puppy Linux from a CD-R (write-once) by appending all your changes.
I think that's a great model for immutability, but AFAIK Puppy Linux doesn't have the convenient tools to manage these snapshots, switch between them, roll back and such, and they don't seem to be going in that direction. (I used Puppy Linux as my default system for a while, but I lost touch with them and I don't know how they're doing now.)
It gets super immutable when the impermanence modules are used.
I think technically NixOS is considered an atomic distro rather than immutable. You could mount the store rw and modify it, though you really shouldn't except in extreme cases.
Same for fedora CoreOS. RPM-ostree is just a bunch of symlinks and hard links just like NixOS is if I recall correctly. Or at least it used to be.
Is this how embedded folks make sure that a device starts with exactly the same installation every time a machine is booted?
I wonder why embedded products like Nvidia Jetson do not come with an immutable Linux (and instead are based on Ubuntu which updates itself on every opportunity via apt and snap and whatnot).
It’s common for hardware vendors to provide a working system for demonstration purposes so you can evaluate the hardware without having to learn an immutable OS toolkit. Then when you pick hardware you also do the bring up work to get the kernel compiling from source and integrated with your userspace of choice. At that point you’ll switch to an immutable system.
Hardware vendors in this space can’t be trusted, so you need to make sure the board is actually fit for purpose. Outside of the hobbyist space you have to be really careful. There are often business objectives that rely on the board working a certain way.
This is nice in theory but the amount of vendor-specific libraries can be quite large (e.g. Nvidia's CUDA, libcudnn etc.), which you then have to get working on your new OS.
There are lots of companies using NixOS for this, BalenaOS (Yocto + Docker), or building their own bespoke tooling on top of a minimal Linux setup.
Although many places start with Ubuntu or Debian in my experience it’s common to invest a lot of time and energy in getting out of that unmanaged setup once the company scales.
The hardware usually comes with vendor-specific libraries (e.g. cuda in the case of nvidia) which are based on a specific version of libc, so then you will have to build your entire alternative OS around that version also.
I went to check and see if proxmox had any immutability proposed for it yet, and I came across this: https://github.com/ashos/ashos#proxmox
I’m not quite sure what’s going on here yet, but seems interesting
A new breed of distros for sure, but how immutable is it, really? What I'm interested in knowing is the mechanisms and techniques in place for making sure no one can change any core components of the system. It's just like randomness: at first it sounds super secure, but we all know nothing is truly random.
Around 2000 I made a firewall-oriented Linux distro that made use of immutable bits and SELinux and various other security hardening. The bulk of the filesystem was immutable, and the system was then put into multi-user mode, where the kernel enforced that the filesystem couldn't go back to mutable.
During boot time, a directory was checked for update packages, and if the public key signature of the package matched, the updates would be applied before the filesystem went into immutable mode. This update directory was one of the few mutable directories on the system.
OpenWRT is pretty much the oldest still running (and popular) with UCI. There's the classic nvram ones, but those are hardly manageable manually.
What is UCI?
https://openwrt.org/docs/guide-user/base-system/uci
Also the web UI counterpart, LuCI: https://openwrt.org/docs/guide-user/luci/luci.essentials
I've been running OpenWRT on my home router since ca 2017, and I found LuCI both quite intuitive, and immensely powerful. Simple things are simple, complex or difficult things are possible, with just clicking around.
Unfortunately if something can't be done with LuCI, you're pretty much on your own - the documentation for the internals is scarce and expects you to be an expert/developer.
> The abbreviation UCI stands for Unified Configuration Interface, and is a system to centralize the configuration of OpenWrt services.
> UCI is the successor to the NVRAM-based configuration found in the White Russian series of OpenWrt.
https://openwrt.org/docs/guide-user/base-system/uci
Yes. I am already running an Aeon desktop base system with GNU Guix for the userland.
It's great.
They are not for me, but I am glad they exist.
This just sounds like a problem solved a long time ago in the embedded space for using squashFS for the bootable Linux image.
I've been meaning to fully commit to GNU Guix one of these days, now that Plasma has fully landed. I've tried Fedora Kinoite in the past, but I can't handle Plasma without Oxygen. I know that Kinoite has some kind of a way to force packages to be installed into the base system, but it kinda feels like it defeats the purpose.
Configuration as code has come a long way too, along with these immutable OSes. For example, I do not miss messing with preseed or kickstart files (I preferred working with kickstart files). Ignition/Butane I find much easier to work with, and it is a core part of configuring the OS.
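As a taste of that, here is a minimal Butane file (Fedora CoreOS variant; the hostname and SSH key are placeholders) that the `butane` tool transpiles into the Ignition JSON the machine consumes at first boot:

```yaml
variant: fcos
version: 1.5.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA... placeholder-key
storage:
  files:
    - path: /etc/hostname
      mode: 0644
      contents:
        inline: node-01
```

Transpile it with `butane --pretty --strict config.bu > config.ign` and hand the result to the installer.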
I like immutable distros. What I do not like is that developers and maintainers do not give admins and power users the means to build an immutable core themselves. This removes choice and learning experiences for the customer/user/admin.
Maybe this will finally change.
For CoreOS, you can create immutable images as easily as you can create Docker containers: https://coreos.github.io/rpm-ostree/container/ You can later just point the installer to your OCI image and it will just work
Could the Universal Blue image builder solve this for you?
https://github.com/ublue-os/image-template
My first immutable distro was Illumos-based SmartOS. Everything the system needs is read from a read-only USB stick and run from RAM. I wish more distros worked that way. A recent submission on here gives me hope: https://news.ycombinator.com/item?id=42428722
I suppose TinyCore Linux in its default configuration also counts.
I’m honestly surprised immutable distros are so controversial. I get why people choose not to use them, but I don’t know why I see so much hate towards them in a lot of Linux communities.
SteamOS is immutable and incredibly successful. macOS (not Linux of course) is also immutable and very successful.
As long as the OS’s have a concept of overlays, an immutable system rarely gives up much in the way of flexibility either.
macOS hides its immutability pretty well, with fine grained image immutability and keeping the behavior mostly unchanged.
Immutable Linux distros, esp. NixOS pull all kinds of shenanigans (ELF patching to begin with) to achieve what they want to achieve with a complete disregard how they change the behavior and structure of the system. What you get is something which resembles a Linux distro, but with monkey patches everywhere and a thousand paper cuts.
When a Linux distro can become transparently immutable, then we can talk about end user adoption en masse. Other than that, immutable distros are just glorified containers for the cloud and enthusiast applications, from my perspective.
That’s fair. I agree that the ergonomics of the immutability matter, but I think that’s true of any aspect of a distro.
I think there’s been well done immutable systems and it’s something that can be achieved with a mainstream Linux distro.
> but I think that’s true of any aspect of a distro.
That's true, but these problems have been worked on for quite a long time, and the core ethos of a Linux distribution is being able to be on both sides of the fence (i.e., as a user and as an administrator who can do anything).
For example, in macOS, you haven't been able to customize the core of the operating system for ages, and it's now sent to you as a binary delta image for the OS; you have no chance to build these layers or tweak them. This improves the ergonomics a ton, because you're not allowed to touch that part to begin with.
However, with Linux, you need to be able to change anything and everything on the immutable part, and this requires a new philosophy and set of tools. Adding esoteric targets like "I really need to be able to install two slightly different compilations of the same library at the exact same version" creates hard problems.
When these needs meet the mentality of "This is old, let's bring it down with sledgehammers. They didn't know anything, they're old and wrinkly people", we get reinvented wheels and eventual returns to tried-and-working mechanisms (e.g., "Oh, dynamic linking is a neat idea. Maybe we should try that!").
Immutable systems are valuable, but we need less hype and more sane and down-to-earth development. I believe if someone can sit down and design something immutable with consideration to how a POSIX system works and what is reasonable and what's not, a good immutable system can be built. Yes it won't be able to do that one weird trick, but it'd work for 99% of the scenarios where an immutable system would make sense.
I very much disagree with this sentence
> However, with Linux, you need to be able to change anything and everything on the immutable part, and this requires a new philosophy and set of tools.
Taking macOS as a North Star for a successful immutable OS: most people don’t need to touch the immutable parts, and shouldn’t be. I know the assumption is that you need to do that on Linux, but I don’t think a successful distro should require most casual users to go anywhere near that level of access.
If they do for some reason on Linux specifically, why would overlays not be sufficient? They achieve the same results, with minor overhead and significant reliability gains.
> I believe if someone can sit down and design something immutable with consideration to how a POSIX system works and what is reasonable and what's not, a good immutable system can be built.
But someone has done that. macOS is POSIX. SteamOS just works for most people.
I think there are a couple of problems with taking macOS as the so-called North Star of the immutable OSes.
First of all, macOS doesn't have package management for the core OS. You can't update anything brought in as part of the OS, from libraries to utilities like zsh, perl, even cp. .pkg files bring applications into the mutable parts of the operating system, and they can be removed. However, OS updates are always brought in as images and applied as deltas.
In Linux, you need a way to modify this immutable state atomically, package by package, file by file. That'd work if you "unseal, update, seal" on an ext4 filesystem. If you want revisions, you need the heavier Btrfs instead, which wasn't really designed for single-disk systems to begin with.
You can also use overlays, but considering the lifetime of a normal Linux installation is closer to a decade (for Debian, it can be even eternal), overlays will consume tons of space in the long run. So you need to be able to flatten the disk at some point.
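The snapshot-based "unseal, update, seal" flow sketched above looks roughly like this on Btrfs (the snapshot paths here are illustrative, and the exact boot integration varies by distro):

```shell
# Take a read-only snapshot of the root subvolume before updating
sudo btrfs subvolume snapshot -r / /.snapshots/pre-update

# ... "unseal", apply the update, "seal" again ...

# If the update goes wrong, find the pre-update subvolume's id
sudo btrfs subvolume list /

# and make it the default subvolume for the next boot
sudo btrfs subvolume set-default <subvol-id> /
sudo reboot
```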
On the other hand, NixOS and Guix are obsessed with reproducibility (which is not wrong, but they're not the only and true ways to achieve that), and make things much more complicated. I have no experience with RPM-OSTree approach.
So, if you ask me we need another, simpler approach to immutable Linux systems, which doesn't monkey patch tons of things to make things work.
> But someone has done that. macOS is POSIX.
Yes, but it's shipped as a set-in-stone monolithic item, with binary delta updates which transform it into another set-in-stone item. You can't customize the core OS, can you? Even SIP sits below you and can stop you with an "Access denied" error even if you're root on that system. It's a vertically integrated, silicon-to-OS monolithic structure.
> SteamOS just works for most people.
SteamOS is about as general purpose as a PlayStation OS or a car entertainment system. It's just an "installable embedded OS". Their requirements are different. Building and maintaining a mutable distro is nightmare fuel already; making it immutable yet user-customizable is a challenge on another level.
It's not impossible, and it's valuable, but we're not there yet. We haven't found the solutions; heck, we can't even agree on requirements and opinions yet.
Even though macOS won’t let you replace the actual binaries that ship with the OS, you can still replace the binaries that get resolved when they’re called via brew/nix/macports etc.
I again disagree with your assertion that you need to replace the OS contents. You just need to be able to reflow resolution to other ones. That’s the macOS way, the Flatpak way, etc.
I think the issue is that you are claiming that you MUST be able to do these things on Linux, and I’d push back and say no you do not.
And your comparison of SteamOS to a console or in-car entertainment is flat-out incorrect. Have you never booted it into desktop mode? For 90% of users, what do you expect they need to do in that mode that it’s not “general purpose” enough for?
Yes, it’s not impossible. It’s been done multiple times, successfully as well.
SteamOS is not a general purpose OS, yet you mention it as if it is one.
macOS is not immutable at all.
For most users, including many developers, SteamOS absolutely can be general purpose.
There are two main "daily driver" usability issues on SteamOS by default if you need to do technical work:
- Limited software availability via the flatpak repositories.
- Not being able to install certain programs as easily without needing containerisation of some kind (if that even solves the problem in some cases).
Distrobox solves a good amount of both issues on SteamOS, for coding work at least. Slap a virtual Ubuntu on and you're off to the races.
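For example, a minimal Distrobox setup on SteamOS (container name and image tag are just examples) looks like:

```shell
# Create a mutable Ubuntu container that shares $HOME with the host
distrobox create --name devbox --image ubuntu:24.04
distrobox enter devbox

# Inside the container, apt works as usual even though the host is immutable
sudo apt update && sudo apt install -y build-essential git

# Optionally export a container binary so it's runnable from the host
distrobox-export --bin /usr/bin/git --export-path ~/.local/bin
```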
How would you define an immutable distro that would exclude macOS with SIP?
And SteamOS is totally a general-purpose OS; it just defaults to a non-general-purpose frontend.
For a couple of years now, macOS has had a read-only system volume image, and it mounts an RW data volume on top of that, similar to how overlayfs works on Linux. That system volume isn't modifiable; if it is modified, the system won't boot. So I'd say it's a little bit immutable.
https://eclecticlight.co/2021/10/29/how-macos-is-more-reliab...
> SteamOS is not a general purpose OS
Uh... yeah it is. Have you ever switched it into desktop mode? I haven't pushed my Steam Deck as hard as my daily driver Linux system, but I've done all sorts of fun things on it like run a web server and write new Python scripts directly on the device. You can hook up a keyboard and mouse and monitor and use it like any other desktop Linux environment. It's basically just an Arch distro with KDE and some extra stuff on top to make it easy for people to run games.
/ being mounted read-only with /etc and /home being mutable... kind of ruins the point? Like, you can still mutate whatever, you just have to install it as a user, and if you have overlays then what's the gain?
The advantage is you always have a core system that you can revert into by removing any problematic overlays, and you can always quickly verify that a system is in a verified and unmodified state.
This is how macOS works with SIP, and how it handles rapid response updates for example.
It greatly reduces the ability for user space to compromise the system.
For a long time Windows has dominated PCs, and all the software that runs on it comes packaged in its own little installers and with its own little updaters to manage versions, leaving users free to not care and focus on just using it. For us old desktop users, Linux's "enforced" centralization of software packaging and distribution is just too divergent a concept to get immediately used to. Immutable distros take the restrictions even further, and they make one think you might as well just have an Android/Chromebook at that point.
I switched to Ubuntu after witnessing Windows 11, and I see there's now yet another confusing delivery channel (snap) added on top of an already overcomplicated system (apt). At least it still allows single installers (.deb files), so that works for now.
I'm curious which part of apt you see as being overcomplicated.
because it's silly. period.
yeah, it has lots of advantages, and that's why it was the default decades ago for everything (windows, bsd, etc).
then people had lots of trouble installing different software or updating for security issues. so we invented package managers and took all the time in the world to make the base as small as possible.
its advantages still make sense in some places, like modems with old flash memory. openwrt is a static base with overlays. but again, it still carries the same downsides; it only makes sense there because of the different aspects of the hardware.
it would make sense for tech-illiterate end users (hence android, ios, chrome os, wii, macos to a degree, etc) and containers (which already have infinite ways to convert from packages to a static image). but anywhere else it will literally harm the distro's ability to evolve and adapt to software changes. imagine every change like systemd or a new browser or wm being atomic.
now people have forgotten decades of history. and it's so tiring.
I don't think I understand any of your objections.
When was Windows ever immutable in the sense of current immutable Linux distros? I wasn't able to find any reference to this ever being the case.
What do package managers and making the base as small as possible have to do with immutable distros? Package managers still exist, and the base is pretty much the same size as the non-immutable version of the same distro.
Why do immutable distros make more sense on modems with old flash memory?
How does being immutable harm the distros ability to evolve?
Either I'm not understanding your position at all, or you have a very different understanding of "immutable" than I do (after using Kinoite as my daily driver for a year).
If you also need determinism and full source bootstrapping (you care about supply chain security) check out https://codeberg.org/stagex/stagex
I have been running EndlessOS for a while now and I love it: it's a bit like going back to the home-computer days, when the OS resided in ROM and you didn't really have to care.
The term "stability" should not be used outside of the major Linux distributions such as Debian and Fedora. For a distribution to be stable over the long term it needs a large enough community, a stable governance model, and a reasonable build system where one maintainer cannot take unilateral action without it being discovered.
A cute name and a university student somewhere does not constitute stability, no matter how good the intentions. It's not a bad thing, but you have to know what you get yourself into. Most of the distributions listed in the article belong to the latter category.
Immutable systems are great for embedded, network equipment, appliances and industrial applications, and specialized distributions for those applications have largely been immutable for a long time already. Nobody really wants an immutable system for their main desktop, because working is all about mutating state. You may write documents, save bookmarks, install plugins, or try new software. Those are the things truly immutable systems like kiosks want to disallow.
So in order to stay usable, these desktops generally split your system into a mutable user part and an immutable system part. That's basically how unix-like desktops have worked since forever: stuff in /bin and /sbin is only changed by the package manager. So the fit is quite good, but it also means it really isn't as useful as it's made out to be. That's why most people don't use them.
The use case is mostly for rolling back updates, not really running from readonly filesystems or preventing change in other ways, but most distributions already do that. You can roll back updates with both dnf and apt. It's not perfect and doesn't always work, but mainly from a lack of testing. With snapshots it's pretty much infallible though.
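For instance, transaction rollback on a traditional Fedora system is a dnf one-liner, and with Btrfs plus snapper you get whole-filesystem rollback (assuming snapper's rollback setup is configured, as on openSUSE):

```shell
# Undo the most recent dnf transaction (best effort: depends on the old
# package versions still being available in the repos)
sudo dnf history list
sudo dnf history undo last

# With snapper + btrfs, roll the whole root filesystem back instead
sudo snapper list
sudo snapper rollback
```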
My recommendation if you really want something that "just works" is to install one of the major and time tested distributions. Pick Debian if you don't know what to choose. And then learn how to use it. Anything these tiny experimental distributions offer, such as running off read only filesystems or rebuilding it for your brand of cpu, or testing a new desktop environment, is likely possible in Debian too. With the added benefit of it being around in 20 years. And the core distribution is less likely to break in some way because some maintainer found inspiration for something. As long as you don't run untrusted stuff as root, stay out of the system files, and generally let the package manager do its job, you're going to be fine.
What I would like to see a desktop distribution work on is basically the same things as 20 years ago which still isn't really done outside some exploratory work (probably because it's actually hard):
- Packages on a user level where it is easy to install new stuff without touching the system area. More tricky in practice than in theory because of state changes to configuration files, saved file formats etc. But some should be easier than others.
- Desktop software service accounts, just like we do for server software. Mostly relevant for larger packages such as Firefox, Libre Office, movie players.
- Integration with popular third-party package managers from the language ecosystems. Most language packages are anemic. All the powers that a package manager gives (reporting, listing untracked files, listing changes, rolling back updates) should be available for them by integrating directly with them. Package definitions should be importable without manual work.
- Package managers should have at least some knowledge of an application's access patterns to help with application confinement. Still today, things like SELinux policies are packaged as separate entities and managed with external tools, which brings a lot of complexity since all possible configurations must be supported there. A package manager knows more about the system and could handle these files. Confining desktop software is a usability problem more than a technical one, but it is clear that desktop environments need something to build on to make it practical.
You don't need a whole new distro