167 comments

  • Arnavion 13 hours ago ago

    Distributions optimize for software in their repos. Software in their repos is compiled against libraries in their repos, so dynamic linking has no downsides and has the upside of reducing disk usage and runtime memory usage (sharing pages).

    Your problem with "libFlac.8.so is missing" happens when using software not from the distro repos. Feel free to statically link it, or run it via AppImage or Flatpak or Podman or whatever you want that provides the environment it *was* compiled for. Whether the rest of the distro is dynamically linked or not makes no difference to your ability to run this software, so there's no reason to make the rest of the distro worse.

    I personally do care about disk usage and memory usage. I also care about using software from distro repos vs Flatpak etc wherever possible, because software in the distro repos is maintained by someone whose values align with me and not the upstream software author. Eg the firefox package from distro repos enables me to load my own extensions without Mozilla's gatekeeping, the Audacity package from distro repos did not have telemetry enabled that Audacity devs added to their own builds, etc.

    • cbmuser 12 hours ago ago

      The main argument for using shared libraries isn’t memory or disk usage, but simply security.

      If you have a thousand packages linking statically against zlib, you will have to update a thousand packages in case of a vulnerability.

      With a shared zlib, you will have to update only one package.

      • pizlonator 10 hours ago ago

        That's also a good argument for shared libraries, but the memory usage argument is a big one. I have 583 processes running on my Linux box right now (I'm posting from FF running on GNOME on Pop!_OS), and it's nice that when they run the same code (like zlib but also lots of things that are bigger than zlib), they load the same shared library, which means they are using the same mapped file and so the physical memory is shared. It means that memory usage due to code scales with the amount of code (linear), not with the amount of processes times the amount of code (quadratic).

        I think that having a sophisticated desktop OS where every process had distinct physical memory for duplicated uses of the same code would be problematic at scale, especially on systems with less RAM. At least that physical memory would still be disk-backed, but that only goes so far.
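
        A rough way to see the sharing on a live system (just a sketch: it assumes zlib is installed as a shared libz and the demo is built with `cc demo.c -lz`) is to dump a process's own mappings. The libz segments are file-backed, so every process that maps them shares the same physical pages for the read-only code:

          /* demo.c -- print this process's memory mappings that belong to libz.
             Run the binary from two shells and compare: both processes map the
             same libz.so file, and its r-xp (code) segment is backed by shared
             physical pages rather than a per-process copy. */
          #include <stdio.h>
          #include <string.h>
          #include <zlib.h>

          int main(void) {
              printf("linked against zlib %s\n", zlibVersion());

              FILE *maps = fopen("/proc/self/maps", "r");
              if (!maps) { perror("fopen"); return 1; }

              char line[512];
              while (fgets(line, sizeof line, maps)) {
                  if (strstr(line, "libz"))        /* file-backed, shareable mappings */
                      fputs(line, stdout);
              }
              fclose(maps);
              return 0;
          }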

        That said, the security argument is also a good argument!

        • MrDrMcCoy 4 hours ago ago

          Kernel samepage merging and zram/zswap are the answers to the memory usage issue. The only actual issue with static linking is that it's harder to ensure that bugs and security vulnerabilities get patched.

          • yjftsjthsd-h 3 hours ago ago

            I disagree:

            - KSM only works on pages that are identical; I am skeptical that it can actually identify 2 instances of a lib after it's been through an optimizing compiler+linker (mostly LTO)

            - KSM has performance overhead

            - zram/zswap only let you reduce the impact of running out of memory at the cost of performance; you're still swapping, no matter how much we improve the details, so it'll always be worse than outright deduplicating libraries

            • MrDrMcCoy 3 hours ago ago

              Fair points overall. I think KSM is still useful and worth the overhead for things like sprawling browser tabs and densely deployed microservices.

              I disagree that swap is only (or even primarily) for situations where you're running out of memory, though. It is useful for reducing I/O contention and improving memory utilization by replacing underutilized pages with file cache when it makes sense to do so. See this for more details: https://chrisdown.name/2018/01/02/in-defence-of-swap.html

      • Arnavion 11 hours ago ago

        That's not a difference of security, only of download size (*). A statically-linking distro would queue up those thousand packages to be rebuilt and you would receive them via OS update, just as you would with a dynamically-linking distro. The difference is just in whether you have to download 1000 updated packages or 1.

        And on a distribution like OpenSUSE TW, the automation is set up such that those thousand packages do get rebuilt anyway, even though they're dynamically linked.

        (*): ... and build time on the distro package builders, of course.

        • ndiddy 10 hours ago ago

          It is a difference of security. As one example, Debian used to be more lenient on allowing packages to vendor and statically link libraries. Around 20 years ago, an important zlib vulnerability was found that could be exploited through malicious compressed data. Rather than just pushing out a fixed zlib package, the package maintainers had to scan the repos for all copies and versions of the zlib source, patch them individually, and test each package. To make matters worse, some packages had modified versions of zlib that needed custom variants of the patch. This is a ton of work to ask of a small group of volunteers, and leaves users vulnerable for longer.

          • Arnavion 10 hours ago ago

            Vendored copies of libraries is a special case that's unrelated to this discussion. This discussion is about a distro whose packages dynamically link to the distro's zlib package vs a distro whose packages statically link to the distro's zlib package.

          • chrsig 10 hours ago ago

            > This is a ton of work to ask of a small group of volunteers, and leaves users vulnerable for longer.

            Perhaps consider purchasing a support contract if people's charity endangers your users.

        • singron 10 hours ago ago

          The build time is pretty significant actually. E.g. NixOS takes this to the extreme and always rebuilds packages if their dependencies change. It can take days to rebuild affected packages, and you typically can't easily update in the meantime. This is worse if the rebuild breaks something since someone will have to fix or revert that before it can succeed.

          For non-security updates, they can do the rebuild in a branch (staging) so it's non-blocking, but you will feel the pain on a security update.

          In practice, users will apply a config mitigation if available or "graft" executables against an updated lib using system.replaceDependencies, which basically search-and-replaces the built artifacts to use a different file path.

          • Arnavion 10 hours ago ago

            True. I'm involved in Alpine, postmarketOS and OpenSUSE packaging and I've seen builders of the first two become noticeably slow on mass rebuilds. OpenSUSE tends to be fine, but it has a lot of builders so that's probably why.

        • throw0101c 10 hours ago ago

          > That's not a difference of security, only of download size (*).

          It is a difference of security, because now when there's an issue with (e.g.) OpenSSL, instead of just the OpenSSL distro package team having to worry about it, the MySQL distro package team has to, the Postgres distro package team, Nginx, Apache, cURL, OpenSSH, etc.

          All of those teams have to coordinate to release at the same time.

          • Arnavion 10 hours ago ago

            We're talking about distro packages, not upstream software releases.

            • throw0101c 9 hours ago ago

              > We're talking about distro packages, not upstream software releases.

              Yes, by "the MySQL package team" I meant the MySQL distro package team. Edited to clarify.

              But it would apply even to the upstream team that releases from (e.g.) repo.mysql.org or repo.postgres.org: they're now worrying about all their dependencies as well as their own code, instead of just specifying "I need libssl3" in the .deb or .rpm file and letting the OS take care of it.

              It would become a huge duplication of effort to keep up on security updates.

              At $JOB we have a general policy of always using the distro-supplied version of a package whenever possible, because if we compile it on our own and put it into /usr/local or /opt, we then have to babysit that code for security and such ourselves instead of 'outsourcing' it to the distro.

              Ain't nobody got time for that.

        • robert_foss 10 hours ago ago

          Security + build time isn't the issue, but security + dev/testing time is.

          Maintaining a secure package of zlib takes linearly more time with more versions of it used.

          All distros are manpower limited.

          • Arnavion 10 hours ago ago

            There's only one version of zlib, the distro-packaged version, in the scenario of this discussion.

        • compsciphd 2 hours ago ago

          My /usr is 24G (restricting it to just the binary dirs, about 18G). If, every single time glibc had an update (perhaps a handful of times a year), I had to update my entire /usr (or, with a bit more intelligence, just the binary packages), I'd be burning at least 100G (if not much more) of SSD life a year.

          In a world of SSDs that wear out, the concept of updating the entire install every single time seems problematic.

        • trelane 7 hours ago ago

          > A statically-linking distro would queue up those thousand packages to be rebuilt and you would receive them via OS update, just as you would with a dynamically-linking distro. The difference is just in whether you have to download 1000 updated packages or 1.

          That is a large difference.

          But more significantly, you're assuming the only software installed is from the distro. This isn't what OP is talking about (unless the distro screwed up their package dependencies to get the "not found" error discussed).

          Outside the distro-provided software, you either have to rebuild them all manually or wait for the proprietary software to do it and ship you a fix.

      • imoverclocked 11 hours ago ago

        > The main argument for using shared libraries isn’t memory or disk usage, but simply security.

        “The” main argument? In a world filled with diverse concerns, there isn’t just one argument that makes a decision. Additionally, security is one of those things where practically everything is a trade off. Eg: by having lots of things link against a single shared library, that library becomes a juicy target.

        > With a shared zlib, you will have to update only one package.

        We are back to efficiency :)

      • sunshowers 11 hours ago ago

        The solution here is to build tooling to track dependencies in statically linked binaries. There is no inherent reason that has to be tightly coupled to the dynamic dispatch model of shared objects. (In other words, the current situation is not an inherent fact about packaging. Rather, it is path-dependent.)

        For instance, many modern languages use techniques that are simply incompatible with dynamic dispatch. Some languages like Swift have focused on dynamic dispatch, but mostly because it was a fundamental requirement placed on their development teams by executives.

        While there is a place for dynamic dispatch in software, there is also no inherent justification for dynamic dispatch boundaries to be exactly at organizational ones. (For example, there is no inherent justification for the dynamic dispatch boundary to be exactly at the places a binary calls into zlib.)

        edit: I guess loading up a .so is more commonly called "dynamic binding". But it is fundamentally dynamic dispatch, ie figuring out what version of a function to call at runtime.

      • nmz 11 hours ago ago

        Does static compilation use the entire library instead of just the parts that are used? If I'm just using a single function from this library, why include everything?

      • t-3 12 hours ago ago

        If a vulnerability in a single library can cause security issues in more than one package, there are much more serious issues to consider with regards to that library than the need to recompile 1000 dependents. The monetary/energy/time savings of being able to update libraries without having to rebuild dependents are of far greater significance than the theoretical improvement in security.

    • anotherhue 12 hours ago ago

      If you're in an adversarial relationship with the OEM software developer, there's not a whole lot the distro maintainers can do; it's probably time to find a fork/alternative. (Forks exist for both of your examples.)

      I say this as a casual maintainer of several apps, and I'm loath to patch manually rather than upstream any fix.

      • Arnavion 12 hours ago ago

        I'm not going to switch to a firefox fork over one line in the configure script invocation. Forks have their own problems with maintenance and security. It's not useful to boil it down to one "adversarial relationship" boolean.

    • skissane 11 hours ago ago

      > I also care about using software from distro repos vs Flatpak etc wherever possible, because software in the distro repos is maintained by someone whose values align with me and not the upstream software author.

      The problem one usually finds with distro repo packages is that they are usually out of date compared to the upstream – especially if you are running a stable distro release as opposed to the latest bleeding edge. You can get into a situation where you are forced to upgrade your whole distro to some unstable version, which may introduce lots of other issues, just because you need a newer version of some specific package. Upstream binary distributions, Flatpak/etc, generally don't have that issue.

      > the firefox package from distro repos enables me to load my own extensions without Mozilla's gatekeeping, the Audacity package from distro repos did not have telemetry enabled that Audacity devs added to their own builds, etc

      This is mainly a problem with "commercial open source", where an open source package is simultaneously a commercial product. "Community open source", where the package is developed in people's spare time as a hobby, or even by commercial developers for whom the package is just some piece of platform infrastructure and not a product in itself, is much less likely to have this kind of problem.

    • o11c 9 hours ago ago

      > Feel free to statically link it

      Actually, please don't. There's a nontrivial chance that you'll ship a copy with bugs that somebody wants to fix.

      Instead, to get ALL the advantages of static linking with none of the downsides, what you should do is:

      * dynamically link, shipping a copy of the library alongside the binary

      This allows the user to find a newer build of the same-version library. It's also much less likely to be illegal (not that anybody cares about laws).

      * use rpath to load the library

      Remember: rpath is NOT a security hole in general. The only time it can be is if the program is both privileged (setuid, setcap, etc.) and its rpath contains a path writable by users other than the owner, neither of which is likely to happen for the kind of software people complain about shipping. (A rough build sketch follows at the end of this comment.)

      * do not use dlopen, please

      `dlopen` works well for plugins, but terribly for actually-needed dependencies.
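
      To make the rpath approach concrete, here is a rough sketch (the libFLAC calls and the ./libs layout are illustrative assumptions, not a prescribed layout): bundle the .so next to the binary and embed an $ORIGIN-relative rpath so the loader finds the bundled copy, while a user can still drop in a newer build of the same-version library.

        /* player.c -- toy example of the "bundle the .so + rpath" approach.
           Build, assuming a copy of libFLAC is shipped in ./libs next to the
           binary:
             cc player.c -o player -L./libs -lFLAC -Wl,-rpath,'$ORIGIN/libs'
           The embedded rpath tells the loader to also search ./libs, so the
           bundled (or a user-updated) libFLAC is found at runtime. */
        #include <FLAC/stream_decoder.h>
        #include <stdio.h>

        int main(void) {
            FLAC__StreamDecoder *dec = FLAC__stream_decoder_new();
            if (!dec) {
                fprintf(stderr, "failed to allocate FLAC decoder\n");
                return 1;
            }
            printf("loaded libFLAC: %s\n", FLAC__VENDOR_STRING);
            FLAC__stream_decoder_delete(dec);
            return 0;
        }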

    • jauntywundrkind 11 hours ago ago

      With Debian, one can also apt-pin multiple releases at once. So you can run testing, for example, but have oldstable, stable, unstable and experimental all pinned on as well (a sketch of what that can look like is at the end of this comment).

      That maximizes your chance of being able to satisfy a particular dependency like libflac.8.so. Sometimes that might not actually be practical to pull in or might involve massively changing a lot of your installed software to satisfy the dependencies, but often it can be a quick easy way to drop in more libraries.

      Sometimes library packages don't have a version number in their name, so it'll keep being libflac even across major versions. That's a problem, because ideally you want to be able to install old version 8 alongside newer version 12. But generally Debian is pretty good about allowing multiple major versions of packages. Here for example is libflac12, on stable and unstable both. https://packages.debian.org/search?keywords=libflac12
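
      For anyone who hasn't set it up before, a sketch of what the pinning can look like (assuming the extra releases are also listed in sources.list; the priorities are only illustrative):

        # /etc/apt/preferences.d/multi-release
        Package: *
        Pin: release a=testing
        Pin-Priority: 900

        Package: *
        Pin: release a=stable
        Pin-Priority: 400

        Package: *
        Pin: release a=unstable
        Pin-Priority: 200

      Then something like `apt install -t unstable libflac12` (or `apt install libflac12/unstable`) pulls just that library from the other release without dragging the rest of the system along.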

    • red016 12 hours ago ago

      Install Gentoo.

  • odo1242 13 hours ago ago

    Something worth noting with shared dependencies is that yes, they save on disk space, but they also save on memory. A 200MB non-shared dependency will take up 600MB across three apps, but a 200MB shared dependency can be loaded once and save 400 megabytes. (Most operating systems manage this at the physical page level, by mapping multiple instances of a shared library to the same physical memory pages.)

    400 megabytes of memory usage is probably worth more than 400 megabytes of storage. It may not be a make-or-break thing on its own, but it's one of the reasons Linux can run on lower-end devices.

    • packetlost 13 hours ago ago

      When you statically compile an application, you generally only store the text (code) of functions you actually use, so unless you're using all 400MB of code you're not going to have a 400MB+ binary. I don't think I've ever seen a dependency that was 400MB of compiled code if you stripped debug information and weren't embedding graphical assets, so I'm not sure how relevant this is in the first place. 400MB of opcodes is... a lot.
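
      As a rough illustration of that dead-code stripping (a sketch, assuming GNU ld and a static libz.a installed):

        /* demo.c -- statically link against zlib but reference only crc32.
           Build:
             cc -Os -ffunction-sections -fdata-sections demo.c -o demo \
                -Wl,--gc-sections -l:libz.a
           The linker pulls in only the archive members needed to resolve the
           referenced symbols (and --gc-sections drops unused functions inside
           them), so `size demo` comes out far smaller than the whole library. */
        #include <stdio.h>
        #include <zlib.h>

        int main(void) {
            const unsigned char data[] = "hello";
            printf("crc32 = %lx\n", crc32(0L, data, sizeof data - 1));
            return 0;
        }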

      • AshamedCaptain 13 hours ago ago

        I have seen a binary that is approximately 1.1GB of text. That is without debug symbols. With debug symbols it would hit GDB address space overflow bugs all the time. You have not seen what 30-year-old engineering software houses can produce. And this is not the largest software by any means.

        Also, sibling comments argue that kernel samepage merging can help avoid the bloat of static linking. But here what you argue will make every copy of the shared libraries oh-so-slightly-different and therefore prevent KSM from working at all. Really, no one is thinking this through very well. Even distributions that do static linking in all but name (such as NixOS) do still technically use dynamic linking for the disk space and memory savings.

        • sunshowers 11 hours ago ago

          Some programs are definitely that complicated, and it's a horrible idea for them to use dynamic binding! The test matrix is absurdly large. (You are testing the software you ship as you ship it, aren't you?)

          • AshamedCaptain 11 hours ago ago

            By the same logic, you also would have to test the software with every display resolution / terminal width in existence. You are testing the software you ship as you ship it, aren't you?

            Abstractions are a thing in computer science. Abstracting at the shared library layer makes as much sense as abstracting at the RPC layer (which your software is most likely going to be obligated to do) or abstracting at the ISA level (which your software IS obligated to do). Your software has as many chances to break from a library change as it does from a display driver change or from a screen resolution change or from a processor upgrade. Why the first would bloat the "testing matrix" but not the latter is beyond me, and it already shows a bias against dynamic linking: you assume library developers are incapable of keeping an ABI but that the CPU designers are. (Anecdotally, as a CPU designer, I would rather trust the library developers...)

            • sunshowers 11 hours ago ago

              In practice, I've seen breakage from shared library updates be much more common than breakage from display resolutions.

              Many modern software development paradigms are simply not compatible with ABIs or dynamic binding. Dynamic binding also likely means you're leaving a bunch of performance on the table, since inlining across libraries isn't an option.

              • AshamedCaptain 11 hours ago ago

                > In practice, I've seen breakage from shared library updates be much more common than breakage from display resolutions.

                You'd be surprised, especially when I'm thinking of 30-year-old software. Again, usually I can patch around it thanks to dynamic linking...

                > Many modern software development paradigms are simply not compatible with ABIs or dynamic binding

                This is nonsense.

                > Dynamic binding also likely means you're leaving a bunch of performance on the table, since inlining across libraries isn't an option.

                Again, why set the goalposts here and not at, say, the ISA level or any other abstraction layer? I could literally make the same argument at any of these levels (e.g. "you are leaving a bunch of performance on the table" by not specializing your ISA to your software). How much are you really leaving? And how much would you pay if you removed the abstraction? What are the actual pros/cons?

                • sunshowers 11 hours ago ago

                  True! That is a completely valid argument. Why not ship your own kernel? Your own processors with your own ISA?

                  The answer to each of these questions is specific to the circumstances. You have to decide based on general principles (what do you value?), the specific facts, and ultimately judgment.

                  I think in some cases (e.g kernel or libc) using dynamic binding generally makes sense, but I happen to think forcing shared library use has many more costs than benefits.

                  You're absolutely right that everyone should ask these questions, though. I work at Oxide where we did ask these questions, and decided that to provide a high-quality cloud-like experience we need much tighter coupling between our components than is generally available to the public. So, for example, we don't use a BIOS or UEFI; we have our own firmware that is geared towards loading exactly the OS we ship.

                  > This is nonsense.

                  Monomorphization like in C++ or Rust doesn't work with dynamic binding. C macros and header-only libraries don't work with dynamic binding either.
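
                  A tiny illustration of the problem (hypothetical libfoo.h, just a sketch):

                    /* libfoo.h (hypothetical): a static inline helper in a
                       library's public header. Its body is compiled into every
                       caller, so shipping a fixed libfoo.so later does nothing
                       for binaries that already inlined the old logic; the same
                       applies to C macros and monomorphized C++/Rust generics. */
                    static inline int foo_clamp(int v, int lo, int hi) {
                        return v < lo ? lo : (v > hi ? hi : v);
                    }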

        • packetlost 12 hours ago ago

          Your poor I$.

          The fact of the matter is outside of GPU drivers and poorly designed GUI applications, that's an extreme statistical outlier.

          • AshamedCaptain 11 hours ago ago

            Sum the GPU driver and the mandatory LLVM requirement that comes with it, and you already have almost half a gigabyte of text per exec. No wonder that your logic forces you to literally dismiss all GUI applications as "extreme statistical outliers" out of hand.

            For the record, the 1.1GB executable I'm thinking about is a popular _terminal-only_ simulation package. No GUI.

            • packetlost 9 hours ago ago

              Most applications do not use CUDA (I assume that's what you're referring to) and the ones that do only need to pull in the runtime (libcudart_static, in this case) which is nowhere near as big. You don't link all of LLVM into a CUDA application lol

            • nightowl_games 10 hours ago ago

              Can't you just tell us what it is?

          • lostmsu 12 hours ago ago

            You mean GPU libraries, right?

            Just watch how these become used in every other app over the next few years.

            And yes, the fact that you won't use those apps does not make it easier for the rest of us.

      • vvanders 13 hours ago ago

        Yes, this is lost in most discussions when it comes to DSOs. Not only do you have the complexity of versioning and vending, but you also can't optimize with LTO and other techniques (which can make a significant difference in final binary size).

        If you've got 10-15+ consumers of a shared library, or want to do plugins/hot-code reloading and have a solid versioning story, by all means vend a DSO. If you don't, however, I would strongly recommend trying to keep all dependencies static and letting LTO/LTCG do its thing.

        • intelVISA 9 hours ago ago

          Indeed. I'm biased, but the arguments in favor of shared objects often rest on fundamental misconceptions that betray a lack of lower-level tinkering, which makes them hard to take seriously.

          • saagarjha 8 hours ago ago

            The same can be said for any position in this debate.

    • Lerc 13 hours ago ago

      In practice I don't think this results in memory savings. By having a shared library and shared memory use, you also have distributed the blame for the size of the application.

      It would be true that this saves memory if applications did not increase their memory requirements over time, but the fact is that they do, and the rate at which they increase their memory use seems to be dictated not by how much memory they intrinsically need but by how much is available to them.

      There are notable exceptions: AI models, image manipulation programs, etc. do actually require enough memory to store the relevant data.

      On the other hand I have used a machine where the volume control sitting in the system tray used almost 2% of the system RAM.

      Static linking enables the cause of memory use to be more clearly identified. That enables people to see who is wasting resources, and when people can see who is wasting resources, there is a higher incentive not to waste them.

      • pessimizer 12 hours ago ago

        This is a law of averages argument. There is no rational argument for bloat in order to protect software from bloat. This is like saying that it doesn't matter that we waste money, because we're going to spend the entire budget anyway.

        • Lerc 12 hours ago ago

          I think it's more like,

          If we pay money to someone to audit our books, we are more likely to achieve more within our budget.

    • bbatha 13 hours ago ago

      Only if you don’t have LTO on. If you have LTO on you’re likely to use a fraction of the shared dependency size even across multiple apps.

      • odo1242 13 hours ago ago

        That is a good point.

    • pradn 13 hours ago ago

      It's possible for the OS to recognize that several pages have the same content (ie: doing a hash) and then de-duplicate them. This can happen across multiple applications. It's easiest for read-only pages, but you can swing it for mutable pages as well. You just have to copy the page on the first write (ie: copy-on-write).

      I don't know which OSs do this, but I know hypervisors certainly do this across multiple VMs.

      • slabity 13 hours ago ago

        Even if the OS could perfectly deduplicate pages based on their contents, static linking doesn't guarantee identical pages across applications. Programs may include different subsets of library functions and the linker can throw out unused ones. Library code isn't necessarily aligned consistently across programs or the pages. And if you're doing any sort of LTO then that can change function behavior, inlining, and code layout.

        It's unlikely for the OS to effectively deduplicate memory pages from statically linked libraries across different applications.

        • pradn 7 hours ago ago

          Ah, good to know! Thank you for explaining.

          I guess much of this is why it's hard to use shared libraries in the first place.

      • ChocolateGod 13 hours ago ago

        Correct me if I'm wrong, but Linux only supports KSM (memory deduping) between processes when doing it between VMs, as QEMU provides information to the kernel to perform it.

        • yjftsjthsd-h 11 hours ago ago

          https://www.kernel.org/doc/html/latest/admin-guide/mm/ksm.ht...

          > KSM was originally developed for use with KVM (where it was known as Kernel Shared Memory), to fit more virtual machines into physical memory, by sharing the data common between them. But it can be useful to any application which generates many instances of the same data

          Although...

          > KSM only operates on those areas of address space which an application has advised to be likely candidates for merging, by using the madvise(2) system call
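
          For the curious, the "advising" looks roughly like this (a minimal sketch; it assumes a kernel built with CONFIG_KSM and /sys/kernel/mm/ksm/run set to 1):

            /* ksm_demo.c -- opt an anonymous mapping into KSM merging.
               KSM will only scan regions advised with MADV_MERGEABLE. */
            #include <stdio.h>
            #include <string.h>
            #include <sys/mman.h>

            int main(void) {
                size_t len = 64 * 1024 * 1024;               /* 64 MiB */
                void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                if (p == MAP_FAILED) { perror("mmap"); return 1; }

                memset(p, 0x42, len);                        /* lots of identical pages */

                if (madvise(p, len, MADV_MERGEABLE) != 0)
                    perror("madvise(MADV_MERGEABLE)");

                /* Watch /sys/kernel/mm/ksm/pages_sharing while this waits. */
                getchar();
                munmap(p, len);
                return 0;
            }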

          • pradn 7 hours ago ago

            I wonder if you could just madvise the entire address space. My hunch is that it's for performance reasons only - fewer pages to scan, hash, and de-duplicate.

            • MrDrMcCoy 4 hours ago ago

              You can. There are custom kernel builds that do this, as well as shim loaders that do that madvise call. However, those are considerably less practical than the recent systemd feature that allows you to do it per unit. If applied to the system scope, it effectively covers the whole system. However, there are caveats:

              1. You need a much newer systemd than your distro likely packages.

              2. KSM dedupe scans aren't free. Your system can spend its time doing that scan or it can spend its time doing work with duplicate pages. Only relatively idle or highly homogeneous systems would be free of the penalty.

              3. For applications, especially statically linked ones, the duplicated code is not super likely to fall along page boundaries, thus the actual detectable duplication will be relatively low.

              That said, it's still great for densely deploying a high-traffic microservice that's more memory-bound than CPU-bound.

    • indigodaddy 13 hours ago ago

      I remember many years ago VMWare developed a technology to take advantage of these shared library savings across VMs as well.

      https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsp...

      • abhinavk 13 hours ago ago

        Linux’s KVM has it too. It’s called KSM.

        • indigodaddy 13 hours ago ago

          ah right, forgot about that as well

    • tdtd 11 hours ago ago

      This is certainly true on Windows, where loaded DLLs share the base addresses across processes even with ASLR enabled, but is it the case on Linux, where ASLR forces randomization of .so base addresses per process, so relocations will make the data in their pages distinct? Or is it the case that on modern architectures with IP-relative addressing (like x64) that relocations are so uncommon that most library pages contain none?

      • saagarjha 10 hours ago ago

        Code pages are intentionally designed to stay clean; relocations are applied elsewhere in pages that are different for each process.

    • whatshisface 13 hours ago ago

      Isn't 200MB a little large for a dependency? The Linux kernel is ~30MB.

      • odo1242 13 hours ago ago

        It's around the size of OpenCV, to be specific. I do see your argument though.

    • intelVISA 9 hours ago ago

      Please don't soapbox under false pretenses - in no world does this "200mb non-shared dependency" comparison exist: that would imply static linkage is just appending a shared object into the executable at compile-time.

    • anotherhue 12 hours ago ago

      Assuming it is worth it with modern memory sizes (I think not), could this present a negative on NUMA systems? Forcing contention when a simple copy would have been sufficient?

      I assume there's something optimising that away but I'm not well versed.

    • 839302 10 hours ago ago

      Hola. No

  • tannhaeuser 12 hours ago ago

    > Sure, we now have PipeWire and Wayland. We enjoy many modern advances and yet, the practical use for me is worse than it was 10 years ago.

    That's what has made me leave Linux for Mac OS, and I'm talking about things like the touchpad (libinput) causing physical pain and crashes every couple of minutes, while the "desktop" wants to appeal to a hypothetical casual tablet user even Microsoft left behind in the Windows 8.1 era (!). Mind, Mac OS is far from perfect and also regressing (hello window focus management, SIP, refactored-into-uselessness Exposé, etc, etc), but Mac OS at least has a wealth of new desktop apps created in this millennium to make up for it, unlike the Linux desktop, which struggles to keep the same old apps running while it's on a refactoring spree to fix self-inflicted problems (like glibc, ld.so) and still isn't attracting new developers. I wish I could say, like the author, that containers are the solution, but the canonical example of browser updates is also a case of unwarranted and rampant complexity piling up without the slightest actual benefit for the user as the web is dying.

    • imiric 11 hours ago ago

      I agree to an extent, but Linux is actually very usable if you stick to common quality hardware, and a collection of carefully curated robust and simple software. I realize this is impractical for most people, but having a limited ecosystem is what essentially allows Apple to deliver the exceptional user experience they're known for.

      The alternative of supporting an insane amount of hardware, and making it work with every combination of software, both legacy and cutting-edge, is much, much harder to achieve. It's a miracle of engineering that Linux works as well as it does, while also being developed in a globally distributed way with no central company driving the effort. Microsoft can also pull this off arguably better, but with a completely different development and business model, and all the resources in the world, so it's hardly comparable.

      The sad part is that there is really no alternative to Linux if you want full control over your devices, while also having a decent user experience. Windows and macOS are walled gardens, and at the opposite end of the spectrum, BSDs and other niche OSs are nowhere near as usable. So the best we can do is pick a good Linux distro, something we're thankfully spoiled for choice on, and customize it to our needs, which unfortunately does take a lot of time and effort. I still prefer this over the alternatives, though.

    • desumeku 11 hours ago ago

      > while the "desktop" wants to appeal to a hypothetical casual tablet user even Microsoft left long behind in the Windows 8.1 era

      This sounds like a GNOME issue, not a Linux one.

    • jwells89 8 hours ago ago

      In my opinion, what desktop Linux needs to attract the sort of developers who create the high-polish third-party software that macOS is known for is a combination DE and UI toolkit that provides the same breadth and depth that's found in AppKit.

      With AppKit, one can build just about any kind of app imaginable by importing a handful of stock frameworks. The need to use third party libraries is the lowest I've seen on any platform, with many apps not needing any at all. That's huge for lowering the amount of activation energy and friction involved in building things, as well as for reducing ongoing maintenance burden (no fear of the developer of libfoo suddenly disappearing, requiring you to chase down the next best fork or to fork it yourself).

      The other thing is not being afraid of opinionated design choices, which is also a quality of AppKit. This is going to chafe some developers, but I see it as a necessary evil because it's what allows frameworks to have "happy paths" for any given task that are well-supported, thoroughly tested, and most importantly work as expected.

      GTK and Qt are probably the closest to this ideal but aren't quite there.

      • MrDrMcCoy 4 hours ago ago

        I've been meaning to get into Qt dev for a while now, as it seems to be a pretty big, portable framework with batteries included, which would be a big step up from the little headless Python tools I've been writing. What would you say is missing from Qt and the related KDE frameworks compared with AppKit?

        • jwells89 3 hours ago ago

          With a disclaimer that I'm not an expert on Qt:

          - Qt Widgets has a selection of controls that falls a bit short of that of AppKit and doesn't receive much attention any more

          - Qt Quick/QML seems more mobile-oriented and web-styled, being barebones if you don't pull in a library of third party widgets (e.g. Kirigami)

          - Neither goes as deep as AppKit in functionality, and both require more third-party library usage

          - Practically speaking, developers are limited to C++ or Python for Widgets and JavaScript for QML

          - Distribution of Qt apps is messy and error-prone

          These aren't really dealbreakers on their own but pile up to pull the overall experience down.

          • MrDrMcCoy 3 hours ago ago

            Do you think there are any better frameworks for open source development of cross-platform GUI apps? I keep seeing mutterings that Godot or some other game engine could become that, but have yet to see anything materialize in that space.

    • shiroiushi 8 hours ago ago

      >while the "desktop" wants to appeal to a hypothetical casual tablet user even Microsoft left long behind in the Windows 8.1 era (!)

      Using GNOME is a personal choice that you made. There's lots of Linux distros that use better desktop environments like KDE and XFCE.

    • nosioptar 10 hours ago ago

      I hate libinput and its lack of configuration so much that I've contemplated going back to Windows. I probably would have if there were a legit way to get Windows LTSC as an individual.

  • BeetleB 13 hours ago ago

    Been running Gentoo for over 20 years. It's as (un)stable now as it was then. It definitely has not regressed in the last N years.

    I don't see this as a GNU/Linux problem, but a distro problem.

    Regarding shared libraries: What do you do when a commonly used library has a security vulnerability? Force every single package that depends on it to be recompiled? Who's going to do that work? If I maintain a package and one of its dependencies is updated, I only need to check that the update is backward compatible and move on. You've now put a huge amount of work on my plate with static compilation.

    Finally: What does shared libraries have to do with GNU/Linux? I don't think it's a fundamental part of either. If I make a distro tomorrow that is all statically compiled, no one will come after me and tell me not to refer to it as GNU/Linux. This is an orthogonal concern.

  • ChocolateGod 13 hours ago ago

    > I am all in for AppImages or something like that. I don't care if these images are 10x bigger. Disk space now is plenty, and they solve the issue with "libFlac.8.so is missing".

    They don't solve the dependency problem, they solve a distribution problem (in a way that's bad for security). What the AppImage provides is up to the author, and once you go out of the "Debian/Ubuntu" sphere, you run into problems with distributions such as Arch and Fedora, which provide newer packages or do things slightly differently. You can have them fail to run if you're missing Qt, or your Qt version does not match the version it was compiled against; same for GTK, Mesa, Curl, etc.

    The moment there's an ABI incompatibility with the host system (not that uncommon), it breaks down. Meanwhile, a Flatpak produced today should run in 20 years time as long as the kernel doesn't break user-space.

    They don't run on my current distribution choice of NixOS. Meanwhile Flatpaks do.

    • AshamedCaptain 13 hours ago ago

      I really doubt an X11 Flatpak from today will run on the Fedora of 10 years from now, much less 20 years from now. They will break XWayland (in the name of "security") well before that. They will break D-Bus well before that.

      In addition, the kernel breaks ABI all the time; sometimes this is partially worked around thanks to dynamic linking (e.g. OSS and solutions like aoss). Other times, not so much.

      I feel that every time someone introduces a "future proof" solution for 20 years, they should make the effort to run 20-year-old binaries on their Linux system of today and extrapolate from it.

      • o11c 8 hours ago ago

        As someone who actually has run at-least-15-year-old binaries (though admittedly not x11 ones for my use case), I can strongly state the following advice:

        * do not use static linking for libc, and probably not for other libraries either

        C-level ABI is not the only interface between parts of a program. There are also other things like e.g. well-known filepaths, and the format of those can change. A statically-linked program has no way to work across a break, whereas a dynamically-linked one (many important libraries have not broken SOVERSION in all that time) will know how to deal with the actual modern system layout.

        During my tests, all the statically-linked programs I tried from that era crashed immediately. All the dynamically-linked ones worked with no trouble.

      • saagarjha 8 hours ago ago

        The kernel breaks ABI? Since when?

  • mfuzzey 13 hours ago ago

    I haven't seen such problems on Debian or Ubuntu; I guess it's par for the course with a bleeding-edge distro.

    The author seems to be focusing on the disk space advantage and claiming it's not enough to justify the downsides today. I can understand that, but I don't think disk space savings are the main advantage of shared dependencies; rather, it's centralized security updates. If every package bundles libfoo, what happens when there's a security vulnerability in libfoo?

    • cbmuser 12 hours ago ago

      > If every package bundles libfoo, what happens when there's a security vulnerability in libfoo?

      That’s actually the key point that many people in this discussion seem to miss.

      • pmontra 11 hours ago ago

        What happens is that libfoo gets fixed, possibly by the maintainers of the distro, and all the apps using it are good to go again.

        With multiple versions bundled to multiple apps, a good number of those apps will never be updated, at least not in a timely manner, and the computer will be left vulnerable.

      • sunshowers 11 hours ago ago

        Then you get an alert that your libfoo has a vulnerability (GitHub does a pretty good job here!) and you roll out a new version with a patched libfoo.

        • kelnos 11 hours ago ago

          As a user, I don't want to assume that every single maintainer of every single app that uses (a statically linked) libfoo is keeping up to date with security issues in their dependencies and has the time and ability to promptly update their software.

          But I feel pretty safe believing that the debian libfoo package maintainer is on top of things and will quickly release an update to libfoo.so that all apps running on my system will be able to take advantage of.

          • sunshowers 10 hours ago ago

            That's fair, but the Debian maintainer could just as well update libfoo.a and kick off builds of all the reverse transitive dependencies of libfoo.a.

            • tremon 10 hours ago ago

              Specifically in the case of Debian, who is going to pay for all the additional infrastructure (build servers) that switching to dependency vendoring/static linking would require?

              • sunshowers 10 hours ago ago

                Good question. I think someone would have to run the numbers here!

  • anon291 13 hours ago ago

    The main issue is mutability, not shared objects. Shared objects are a great memory optimization for little cost. The dependency graph is trivially tracked by sophisticated build systems and execution environments like NixOS. We live in 2024. Computers should be able to track executable dependencies and keep around common shared object libraries. This is a solved technical problem.

    • tremon 10 hours ago ago

      Immutability actually destroys the security benefits that shared objects bring, because with every patch the location of the library changes. So you're back to the exact same situation as without dynamic linking: every dependency will need to be recompiled anyway against the new library location. And that means that even though you may have a shared object that's already patched, every other package on your system that's not yet been recompiled is still vulnerable.

      • saagarjha 8 hours ago ago

        Applications rarely link against versioned shared libraries. It is rare that a security patch would require every application to have its list of dependent libraries edited.

        • whytevuhuni 2 hours ago ago

          And yet, even a compatible change that doesn't break the ABI will cause NixOS to rebuild all packages that depend on that library.

          That's because while tracking dependencies is a solved problem, tracking whether each dependency correctly follows semver (especially in C land), and which packages and which "minor" patches break or don't break ABI, is not a solved problem.

  • anotherhue 13 hours ago ago

    Since I switched to NixOS, all these articles read like people fiddling with struct packing and optimising their application memory layout. The compiler does it well enough now that we don't have to; so it is with Nix and your application filesystem.

    • __MatrixMan__ 13 hours ago ago

      I had a similar feeling during the recent crowdstrike incident. Hearing about how people couldn't operate their lathe or whatever because of an update, my initial reaction was:

      > Just boot yesterday's config and get on with your life

      But then, that's one of those NixOS things that we take for granted.

      • sshine 10 hours ago ago

        Not just NixOS. Ubuntu with ZFS creates a snapshot of the system on every `apt install` command.

        But yeah, it’s pretty great to know that if your system fails, just `git restore --staged` and redeploy.

    • thot_experiment 13 hours ago ago

      Using shared libraries is optimizing for a very different set of constraints than NixOS, which IIRC keeps like 90 versions of the same thing around just so everyone can have the one they want. There are still people who are space-constrained. (I haven't touched Nix in years, so maybe I'm off base on this.)

      > The compiler does it well enough now that we don't have to

      You know, I see people say this, and then I see some code with some nested loops running 2x as fast as code written with list comprehensions, and I remember that it's actually:

      "The compiler does it well enough now that we don't have to as long as you understand the way the compiler works at a low enough level that you don't use patterns that will trip it up and even then you should still be benchmarking your perf because black magic doesn't always work the way you think it works"

      Struct packing too can still lead to speedups/space gains if you were previously badly aligned, which is absolutely something that can happen if you leave everything on auto.

      • matrss 12 hours ago ago

        > Using shared libraries is optimizing for a very different set of constraints than nixos, which iirc keeps like 90 versions of the same thing around just so everyone can have the one they want.

        This isn't really true. One version of nixpkgs (i.e. a specific commit of https://github.com/NixOS/nixpkgs) generally has one version of every package and other packages from the same nixpkgs version depending on it will use the same one as a dependency. Sometimes there are multiple versions (different major versions, different compile time options, etc.) but that is the same with other distros as well.

        In that sense, NixOS is very similar to a more traditional distribution, just that NixOS' functional package management better encapsulates the process of making changes to its package repository compared to the ad-hoc nature of a mutable set of binary packages in traditional distros, and makes it possible to see and rebuild the dependency graph at every point in time, while a more traditional distro doesn't give you, e.g., the option to pretend that it's 10 days or months ago.

        You only really get multiple versions of the same packages if you start mixing different nixpkgs revisions, which is really only a good idea in edge cases. Old ones are also kept around for rollbacks, but those can be garbage collected.

        • cbmuser 12 hours ago ago

          Multiple versions of a shared library is a pure nightmare if you actually care about security.

          • matrss 3 hours ago ago

            Only if you don't have a principled way of rebuilding them all on an update/patch. NixOS doesn't have multiple versions of shared libraries in most cases, and where it does it is usually multiple upstream supported major versions or the same upstream version with different compile time options. The former can be treated as separate packages like in other distros, the latter still only requires updates in one place.

      • anotherhue 13 hours ago ago

        No argument; if you're perf-sensitive and aren't benchmarking every change, then it's a roll of the dice as to whether LLVM will bless your build.

        The usual claim stands, though: on a LoC basis, a vanishingly small amount of code is perf-sensitive (embedded is likely more, TBF).

      • thomastjeffery 12 hours ago ago

        The problem with Nix is that it's a single monolithic package archive. Every conceivable package must go somewhere in the nixpkgs tree, and is expected to be as vanilla as possible.

        On top of that, there is the all-packages.nix global namespace, which implicitly urges everyone to use the same dependency versions; but in practice just results in a mess of redundant names like package_version.1.12_x-feature-enabled...

        The move toward flakes only replaces this problem with intentional fragmentation. Even so, flakes will probably end up being the best option if it ever gets coherent documentation.

    • jeltz 13 hours ago ago

      The C compiler does not optimize struct packing at all. Some languages like Rust allow optimizing struct layouts, but even for Rust, struct layout can matter if you care about cache locality and vector operations.

    • cbmuser 12 hours ago ago

      The compiler takes care of vulnerabilities?

      There was a recent talk that discussed the security nightmare with dozens of different versions of shared libraries in NixOS and how difficult it is for the distribution maintainers to track and update them.

  • AshamedCaptain 13 hours ago ago

    If anything, I'd argue that the "abysmal state of GNU/Linux" is because programs now tend to bundle their own dependencies, and not the opposite.

    • sebastos 12 hours ago ago

      Grrr - I strongly, viscerally disagree!

      All of these new dependency bundling technologies were explicitly created to get out from under the abysmal state of packaging - from Docker (in some ways) and on to snap, flat pack, appimage, etc. This state of affairs was explained in no uncertain terms and widely repeated in the various manifestos associated with those projects. The same verbiage is probably still there if you go look. It seems crazy to act as if this recent memory is obscured by the mists of time, leaving us free to speculate on the direction of causality. We all lived this, and that’s not how it happened! Besides, in your telling, thousands of people and multiple separate organizations poured blood sweat and tears into these various bundling technologies for no good reason. Why’d they do that? I can’t help but suspect your answer is something like “it all worked fine, people just got lazy. Just simply work with the distro maintainer to get your package accepted and then … etc etc etc”. What do people have to do to communicate with the Linux people that this method of distribution is sucky and slow and excruciating? They’ve built gigantic standalone ecosystems to avoid doing it this way, yet the Linux people are still smugly telling themselves that people are just too stupid and lazy to do things The Right Way.

      • zrm 12 hours ago ago

        > thousands of people and multiple separate organizations poured blood sweat and tears into these various bundling technologies for no good reason. Why’d they do that?

        Because stability and rapid change are incompatible, and they wanted rapid change.

        Which turns into a maintenance nightmare because now every app is using a different, incompatible version of the same library and somebody has to backport bug fixes and security updates to each individual version used by each individual package. And since that's a ton of work nobody wants to do, it usually doesn't get done and things packaged that way end up full of old bugs and security vulnerabilities.

        • sunshowers 11 hours ago ago

          I'm a big believer in not getting in the way when people want to build stuff. I get GitHub alerts for vulnerabilities in the dependencies of my Rust programs, and I release new versions whenever there's a relevant vulnerability.

          My programs work, pass all tests on supported platforms, and don't have any active vulns. Forcing dynamic binding on me is probably not a good idea, and certainly not work I want to do.

          • kelnos 10 hours ago ago

            > I get GitHub alerts for vulnerabilities in the dependencies of my Rust programs, and I release new versions whenever there's a relevant vulnerability.

            That's great, but my confidence is very low that most maintainers are like you.

            • sunshowers 10 hours ago ago

              Sure! I'm an optimist at heart and think a lot of people can learn though :)

              And note that static linking doesn't prevent third-party distributors like Linux maintainers from patching software. It's just that a lot of the current tooling can't cope too well with tracking statically linked dependencies. But that's just a technical problem.

              GitHub's vulnerability tracking has been fantastic in this regard.

          • AshamedCaptain 11 hours ago ago

            Do you keep multiple branches of your programs, including one where you do not add new features but only bugfixes and such security updates?

            (And I am skeptical of claims that leaf developers can keep up with the traffic of security updates)

            • sunshowers 11 hours ago ago

              I build my programs to be append-only, such that users can always update to new versions with confidence.

              For example, I'm the primary author and maintainer of cargo-nextest [1], which is a popular alternative test runner for Rust. Through its history it has had just one regression.

              If I did ever release a new major version of nextest, I would definitely keep the old branch going for a while, and make noises about it going out of support within the next X months.

              Security updates aren't that common, at least for Rust. I get maybe 5-6 alerts a year total, and maybe 1-2 that are actually relevant.

              [1] https://nexte.st/

              • AshamedCaptain 11 hours ago ago

                > I build my programs to be append-only, such that users can always update to new versions with confidence.

                And in this wonderful world where developers are competent enough to manage this, and therefore there are no issues when libraries are updated (append-only, right?)... why do you have a problem with shared linking again? Or is this a case where you think of yourself as an "above average" programmer?

                • sunshowers 11 hours ago ago

                  I think the difference is that library interfaces tend to be vastly more complex than application interfaces, partly because processes form fairly natural failure domains (if my application does something wrong it exits with a non-zero code, but if my library does something wrong my entire program is suddenly corrupted.)

                  There are also significant benefits to static linking (such as inlining and LTO) that are not relevant across process boundaries.

                  But yes, in a sense I'm pushing the problem up the stack a bit.

                  > is this a case where you think yourself as an "above average" programmer?

                  I've been very lucky in life to learn from some of the best minds in the industry.

      • AshamedCaptain 11 hours ago ago

        We have such a myriad of "dependency bundling technologies", dating back more than a decade by now, and the situation has only gotten worse.

        It's way too comfortable for _developers_ to bundle dependencies. That already explains why there is pressure to do so. You yourself look at this with developer glasses. I think users couldn't care less, or may even actively avoid dependency bundling. Because my impression, as a user, is that not only do they almost never work right, they actually make compatibility _harder_, not easier. And they decrease desktop environment integration, they increase overhead in every metric, they make patching things harder, etc. etc. Can you find other reasons why all these technologies you mention are not flying at all for desktop Linux users?

        And speaking as a developer, the software I develop is usually packaged by distros (and not myself), so I'm very well aware of the "sweating" involved. And despite that, I will say: it is not as bad as the alternatives presented.

  • shams93 13 hours ago ago

    That's Fedora; it's a bleeding-edge, experimental distro, at least when it comes to the desktop. Ubuntu has become really popular because it is stable and reliable. My friends who use Linux to perform live electronic music all use Ubuntu Studio, and have been for over 15 years now.

    • yesco 13 hours ago ago

      While Ubuntu is a great distro to get things up and running, a lot of their decision making around snap has begun to make me hesitant to recommend it to people. While there is the political/open-source angle regarding how they are handling the servers for snap, my main issue is primarily stability. Sometimes I'll install something with apt and it will not give me an apt package; it will give me a non-standard snap package that becomes difficult to troubleshoot.

      In the case of Firefox, it basically makes it less stable than a nightly build considering how often it crashes, and it subtly breaks screen sharing in weird, non-obvious ways. This experience has me guessing that Canonical probably cares more about server Ubuntu now than it does about desktop Ubuntu, which is a real shame.

      While there are workarounds, I specifically endorsed Ubuntu in the first place to many people because these kinds of workarounds used to not be necessary. It's a real bummer honestly, not sure what else to recommend in this category either.

    • shams93 13 hours ago ago

      However, these days it's much easier to run a Linux desktop on a system that comes with it preinstalled; some of these laptops have driver issues where they can work but then run into things like thermal and display problems due to closed code with very complex, non-standard low-level drivers.

  • neilv 11 hours ago ago

    > Just a normal update that I do many times per week. [...] At this point, using GNU/Linux is more like a second job, and I was so stoked when this was not a case anymore in the past. This is why I feel like the last 10 years were a regression disguised as progress.

    But the author already knows the solution...

    > My best memories were always with Debian. Just pure Debian always proved to be the most stable system. I never had issue or system breaking after an update. I can't say the same for Fedora.

    You've always had the power... tap your heels together three times, and just install Debian Stable.

    • kelnos 11 hours ago ago

      Hell, these days I switch from Debian stable to testing around 6 months after each stable release, and I still have fewer issues than I've had with most other distros in the past.

  • cherryteastain 13 hours ago ago

    AppImage does not always solve these dependency issues. I've had AppImages refuse to run because of e.g. missing Qt libraries (at least on Debian + Gnome). Flatpak and Snap are much better solutions for this problem.

    As for the Nvidia issues, especially the "system refuses to boot" kind, that's on Nvidia.

    • amlib 12 hours ago ago

      As soon as Nvidia was mentioned in the article, a chill went down my spine reminding me of my treacherous experience trying to use Ubuntu with the Nvidia drivers back in 2007. Every second reboot the drivers would break and I would wind up having to reinstall them through a VT. Re-installing the system multiple times didn't matter, following multiple different guides didn't matter, following Nvidia's own instructions to the letter on a fresh system... didn't matter. Those drivers on Arch Linux would also constantly break, but at least it was as a result of the system updating and not just rebooting. It took many years, but I've been Nvidia-free for 6 years now and my system couldn't be more stable.

      I haven't seen Fedora break in the last 2 years I've been using it, aside from a beta release upgrade that I was curious to test and that went wrong. I really think it's silly to put all the blame on shared libraries when there is a 99% chance it's the Nvidia drivers fucking up again.

  • jeltz 13 hours ago ago

    Seems the author should just stop using an experimental, bleeding-edge distro like Fedora and go back to, for example, Debian Stable.

  • kelnos 11 hours ago ago

    I feel like these sorts of articles come up every so often, but I still don't buy it.

    While I agree that disk usage is no longer a driver for shared libraries, memory usage still is, to some extent. If I have 50 processes using the same library (and I do), that shared library's read-only sections (code and data) get loaded into RAM exactly once. That's a good thing.

    But even if that problem weren't an issue, security is still a big one for me. When my distro releases a new version of a library package to fix a security issue, every single package that uses it gets the security fix, without each package's maintainer having to rebuild against the fixed version. (Sure, some distros manage this more centrally and won't have to wait for individual maintainers, but not all are like that.)

    I don't have to wonder what app has been fixed and what hasn't been. I don't have to make sure every single AppImage/Flatpak/Snap on my system that depends on that library (which I may not even know) gets updated, and possibly disable or uninstall those that haven't been, until they have.

    I like shared libraries, even with the problems they sometimes (rarely!) cause.

  • kelnos 10 hours ago ago

    > How come GNU/Linux is worse than it was 10 years ago?

    This hasn't been my experience at all, as someone who's been using it for more than 20 years now. Today, for the most part I don't have to tinker with anything, or think about the inner workings of my distro. Even just a decade ago that wasn't true.

  • cogman10 13 hours ago ago

    Perhaps this is a hard/impossible problem to solve, but I feel like the issue isn't so much the shared libraries themselves; it's the fact that you need different versions of a shared library on a system for it to function. As a result, the interface for communicating "I have version 1, 2, 3" has basically just been linking against filenames.

    But here's the part that feels wasteful that I wish could be solved. From version 1.0 to 2.0, probably 90% of most libraries are completely unchanged. Yet still we duplicate just to solve the problem.

    What if, instead of having a .so per version, we packaged together a composable shared object and had the compiler/linking system incorporate versions when making a link? From there, we could turn the requests for versions into something like "Hey, I need 1.2.3", and the linking system would say "1.2.3 consists of these chunks from the shared repository". That could be recorded in a manifest and cached by the OS.

    For example, imagine fooLib has functions foo, bar, baz in version 1.2.3 and foo, bar, baz, blat in 1.4.3. You could take a small hash of each of the functions in fooLib and store those hashes off in a key-value store. From there you could materialize fooLib 1.2.3 at runtime when a program requests it.

    New versions would essentially be the process of sending down the new function chunks, and there would be no overriding of versions. But it would also give OS maintainers a route to say "Actually, anything that requests version 1.2.3 will get 1.2.3.1-patched because of a CVE". Or you could even hotpatch the same function across all versions in the case of a CVE, giving a more targeted patching system.

    I've often wondered about if we could do a really granular dependency graph like this. Mainly because I like the idea of only shipping out the smaller changes and not 1gb of stuff because of what might break.
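
    Very roughly, the bookkeeping might look something like the Python sketch below; the names (chunk_store, manifests, publish, materialize) are made up purely for illustration, and a real implementation would of course deal in compiled code rather than source strings:

      import hashlib

      chunk_store = {}   # content hash -> function body; identical chunks are stored once
      manifests = {}     # (lib, version) -> {function name: content hash}

      def publish(lib, version, functions):
          """Register a library version as a manifest of content-addressed chunks."""
          manifest = {}
          for name, body in functions.items():
              h = hashlib.sha256(body.encode()).hexdigest()
              chunk_store.setdefault(h, body)   # the shared chunk repository
              manifest[name] = h
          manifests[(lib, version)] = manifest

      def materialize(lib, version):
          """Reassemble the requested version from the shared chunks."""
          return {name: chunk_store[h] for name, h in manifests[(lib, version)].items()}

      # fooLib 1.2.3 and 1.4.3 share foo/bar/baz; only blat adds new data.
      publish("fooLib", "1.2.3", {"foo": "foo-v1", "bar": "bar-v1", "baz": "baz-v1"})
      publish("fooLib", "1.4.3", {"foo": "foo-v1", "bar": "bar-v1", "baz": "baz-v1", "blat": "blat-v1"})
      assert len(chunk_store) == 4   # unchanged functions are stored only once

    A CVE fix would then just mean publishing the patched chunk and repointing the affected manifest entries at it.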

  • dannyobrien 13 hours ago ago

    I wonder what the author thinks of Nix/Guix-type distributions? Seems like that's something that gets the best of both worlds, with a minimum (but not non-zero) amount of futzing around.

  • NotPractical 10 hours ago ago

    > Some interesting talks and videos [...] GNU is Bloated! by Luke Smith

    Strange reference, because that video isn't relevant to the topic at all. It's about the difference between GNU coreutils command-line options and other standards such as BSD and POSIX (the title is mostly a joke). In fact, Luke Smith is vehemently against the idea of AppImages/Flatpaks/Snaps and believes they go against the spirit of GNU/Linux [1].

    [1] https://www.youtube.com/watch?v=JPXLpLwEQ_E

  • zajio1am 11 hours ago ago

    The main argument for shared libraries is not space reduction but uniformity. I do not want ten different versions of GTK, Freetype, Guile or Lua in my OS, each with its own idiosyncrasies or bugs.

  • hi-v-rocknroll 10 hours ago ago

    This is a "Unabomber"-style prescription for the problem. The problem isn't shared libraries, and the solution isn't statically linking everything, because that's wasteful: repetitive binary code gets duplicated N times, consuming disk and RAM pointlessly. The problem is solved by management and cooperative sharing of shared libraries that don't rely on a fixed, singleton location or allow only a single configuration of a shared library, but instead allow side-by-side installations of multiple version series and multiple configurations. Nix mostly solves these problems, but still has teething problems getting things to work right, especially for non-code shared file locations and dlopen program plugins. I think Nix is over-engineered and learning-curve user-hostile, but it is generally a more correct approach. There is/was a similar project from what was Opscode (later Chef) called Habitat that did something similar. Before that, we used Stow and symlink forests.

    • wmf 10 hours ago ago

      > allow side-by-side installations of multiple version series and multiple configurations

      So now every app requires a different library version and you've achieved all the disadvantages of static linking combined with all the disadvantages of dynamic linking.

      • hi-v-rocknroll 9 hours ago ago

        No, you're being unreasonable, because that's not the goal at all. The goal is to minimize variation while still allowing the variations that traditional dnf/RPM distros like Fedora/RHEL disallow without jumping through hoops such as renaming/prefixing or alternate toolsets like SCL.

  • zrm 11 hours ago ago

    The old reason for shared libraries was to share memory/cache and not waste disk space. That's not gone, but maybe it's less important than it used to be.

    The modern reason is maintenance. If 100 apps are using libfoo, in practice nobody is going to maintain a hundred separate versions of libfoo. That means your choices are a) have hundreds of broken versions nobody is maintaining spread all over, or b) maintain a small number of major releases forked at the point where compatibility breaks, so that every version of 1.x.x is compatible with the latest version of 1.x.x and every version of 2.x.x is compatible with the latest version of 2.x.x, and somebody is maintaining a recent version of each series, so you can include it as a shared library that everything else links against.

    But then you need libraries to limit compatibility-breaking changes to once or twice a decade.

  • nixdev 7 hours ago ago

    Many distros avoid the problem with something like

      apt-get install libflac7 libflac8
    
    If you look in Fedora 40 for example, you'll find nodejs, nodejs18, and nodejs22. Gentoo's ebuilds (packages) can often have multiple major versions installed simultaneously. If I really need something, I can unpack another Gentoo userland, or use a container.

    AppImages do exist. What seems to be missing is the "glue": putting a UI in front of users to choose what software they want to run. That problem has not yet been solved gracefully.

    The author says he knows how to use Linux, but has observably never graduated to being a particularly advanced user.

  • ramon156 12 hours ago ago

    > This is why I am a massive proponent of AppImages

    I joined Linux late-game, and only recently discovered that AppImages were in some cases much nicer to work with. The thing I was missing, though, was... a package manager. If there were a distro that built its package manager around AppImages, I would gladly use it.

    • idle_zealot 11 hours ago ago

      What would that even mean? A package manager is for tracking and installing dependencies, and putting things in the right place in the filesystem. AppImages bundle their dependencies and don't expect to unpack into a filesystem, you just put them in an /apps directory. Do you just mean that you want a repository of AppImages to search? Otherwise your "package manager" is 'curl $APPIMAGE_URL > ~/apps/$APP_NAME' and 'rm ~/apps/$APP_NAME'.
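
      For what it's worth, that curl-and-rm workflow really does fit in a few lines. Here's a toy sketch; the ~/apps location and the command names are arbitrary choices, not anything AppImage itself defines:

        #!/usr/bin/env python3
        """Toy AppImage 'package manager': fetch an image into ~/apps and mark it
        executable, or delete it. No metadata, no updates, no dependency tracking."""
        import os, stat, sys, urllib.request

        APP_DIR = os.path.expanduser("~/apps")   # arbitrary install location

        def install(url, name):
            os.makedirs(APP_DIR, exist_ok=True)
            dest = os.path.join(APP_DIR, name)
            urllib.request.urlretrieve(url, dest)                  # curl $APPIMAGE_URL > ~/apps/$APP_NAME
            os.chmod(dest, os.stat(dest).st_mode | stat.S_IXUSR)   # chmod u+x

        def remove(name):
            os.remove(os.path.join(APP_DIR, name))                 # rm ~/apps/$APP_NAME

        if __name__ == "__main__":
            cmd, *args = sys.argv[1:]
            {"install": install, "remove": remove}[cmd](*args)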

    • 3eb7988a1663 8 hours ago ago

      I have no appreciation for the differences between AppImage and Flatpak, but Flatpak does at least have a package manager which will pull in newer versions for you.

  • PaulKeeble 10 hours ago ago

    The analysis I want is an idea of how shared they actually are. Some of them are likely used by an awful lot of processes and represent a lot of memory and disk space savings, while others don't represent much saving at all.
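
    One rough way to get that picture on a running system is to count how many processes currently have each .so mapped, e.g. by walking /proc/<pid>/maps. A quick sketch (it needs permission to read other users' processes and says nothing about how large or how resident each mapping is):

      import glob
      from collections import Counter

      counts = Counter()
      for maps_path in glob.glob("/proc/[0-9]*/maps"):
          try:
              with open(maps_path) as f:
                  # one count per process per library, however many segments it maps
                  libs = {parts[-1] for parts in (line.split() for line in f)
                          if len(parts) >= 6 and parts[-1].startswith("/") and ".so" in parts[-1]}
                  counts.update(libs)
          except OSError:
              continue   # process exited mid-scan, or we lack permission

      for lib, n in counts.most_common(20):
          print(f"{n:5d}  {lib}")

    On a typical desktop you'd expect libc and the big GUI toolkits near the top of that list, with plenty of libraries mapped by only one or two processes.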

  • xg15 10 hours ago ago

    > The main point is that, yes these packages can be big, but if we are honest, what would a couple of additional megabytes that would include shared object libraries actually do?

    A couple hundred, not a couple...

  • cxr 10 hours ago ago

    > It would be an interesting exercise to make a prototype distribution that does not rely on shared objects, but has everything packed in AppImages.

    Frankly, I don't think people are ambitious enough—whether it be the author of this post or those posting here in this thread.

    We're at a good enough place with modern hardware—for personal/business computers, at least—that we should perhaps not even be thinking about object files at all. All software and all distros should focus on shipping software as source code; JIT everything at run time, but in such a way that JIT results aren't ephemeral—they get re-used, so by the second time (or nth time) it's the same as if all your utilities and system components had been AOT compiled. Neither the system operator, administrators, nor package maintainers should care any more about the object file format and the linking model than they care about... I dunno, the on-disk format of their browser cache for Web resources. (Quick, quick: do you know how your browser stores its cache items? Probably not, since it's not something anybody but browser developers think about.)
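
    As a toy illustration of non-ephemeral JIT results, here is a sketch that keys compilation output by a hash of the source, so a second run of identical source skips compilation entirely; the cache directory is a made-up example, and a real system would cache native code rather than Python bytecode:

      import hashlib, marshal, os

      CACHE_DIR = os.path.expanduser("~/.cache/jit-demo")   # made-up location

      def compile_cached(source: str, name: str = "<snippet>"):
          """Compile `source`, reusing a previous compilation if the source is unchanged."""
          os.makedirs(CACHE_DIR, exist_ok=True)
          key = hashlib.sha256(source.encode()).hexdigest()
          path = os.path.join(CACHE_DIR, key + ".bin")
          if os.path.exists(path):
              with open(path, "rb") as f:
                  return marshal.load(f)            # cache hit: no compilation at all
          code = compile(source, name, "exec")      # the "JIT" step
          with open(path, "wb") as f:
              marshal.dump(code, f)
          return code

      exec(compile_cached("print('hello from cached code')"))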

    Fabrice Bellard already demonstrated with TCCBOOT the viability of applying this approach to the Linux kernel with acceptable results, and that was 20 years ago.

    This would also help to ameliorate an unfortunate accident of history, where the fact that traditional software development and deployment has dealt in separate binaries and source code means there's friction for the user if they want to change how something works on their system. (This is true even if they've taken a hardline stance to ensure everything on their system is libre/open source—dealing with the build tooling on a project-to-project basis just isn't fun or tractable.)

    This would also neutralize the entire schism between Flatpak vs AppImage vs Snaps vs <whatever>.

    • cxr 9 hours ago ago

      Some other food for thought: there's an oddity in the way we approach binary optimization, in that we care about it a whole lot, but a program will never be optimized further than whatever optimizations were in place when the binary hit the user's machine prior to its first use (i.e. the optimizations that were put there by the compiler running on the author's machine or the distro build farm). Why should that be the case? Why should the optimization phase stop there?

  • Dwedit 13 hours ago ago

    How about the part where Shared Objects take a performance penalty due to being position-independent code?

    • kelnos 10 hours ago ago

      But they also can reduce memory pressure and cache misses, so maybe that evens out.

      • Dwedit 6 hours ago ago

        I know that on Windows, DLL files get optimized a lot. If a DLL can be loaded without any relocation, then it becomes a memory-mapped file rather than an additional in-memory copy. All the system DLLs have different pre-defined base addresses, so they won't conflict with each other.

        If relocation is required, then the DLL becomes a private copy in the process's address space rather than a memory-mapped file.

  • 12 hours ago ago
    [deleted]
  • DemocracyFTW2 13 hours ago ago

    > the issue with "libFlac.8.so is missing" and I have version 12 installed

    I believe this is one of the core issues here, and it is nicely illustrated by comparing what e.g. one Isaac Schlueter (isaacs of npm fame) thinks about dependencies with the critique of this offered by Rich Hickey (of Clojure fame).

    Basically what Isaac insists on is that Semantic Versioning can Save Us from dependency hell if we just apply it diligently. The advancement that npm offers in this regard is that different transitive dependencies that refer to different versions of the same module can co-exist in the dependency tree, which is great.

    But sometimes, just sometimes folks, you need to have two versions of the same dependency for the same module, and this has taken a lot of effort to get into the system, because of the stubborn insistence that somehow `foo@4.1` and `foo@4.3` should be the 'same only different', and that really it makes no sense to use both `foo@3` and `foo@4` from the same piece of code, because they're just two versions of the 'same'.

    Rich Hickey[1] cuts through this and asserts that, no, if there's a single bit of difference between foo version A and foo version B, then they're—different. In fact, both pieces of software can behave in arbitrarily different ways. In the real world they most of the time don't, it's true, but also in the real world, the one thing I can be really sure of is that if foo version A is not bit-identical to foo version B, then those are different pieces of software, potentially (and likely) with different behaviors. Where those differences lie, and whether they will impact my particular use of that software, remains largely a matter of conjecture.

    Which brings me back to the OP's remark about libFlac.8.so conflicting with libFlac.12.so. I think they shouldn't conflict. I think we have to wean ourselves off the somewhat magical thinking that we just need an agreement on what is a breaking change and what is a 'patch', and then we can go on pretending that we can share libraries system-wide on a 'first-name basis' as it were, i.e. disregarding their version numbers.

    I feel I do not understand Linux deeply enough, but my suspicion has been for years now that we don't have to abolish shared libraries altogether if only we would stop seeing anything in libFlac.8.so that ties it particularly closely to libFlac.12.so. There are, probably, a lot of commonalities between the two, but in principle there need not be any, and therefore the two libraries should be treated like any two wholly independent pieces of software.

    [1] https://youtu.be/oyLBGkS5ICk?list=PLZdCLR02grLrEwKaZv-5QbUzK...

  • EdwardDiego 13 hours ago ago

    Fedora is a "move fast and maybe sometimes break things" distro, e.g., early adoption of Btrfs as the default, Wayland, etc.

    So yeah, these things sometimes happen.

  • pnathan 10 hours ago ago

    I mean I don't disagree. But this is a bazaar problem and a rebundling problem.

    I have used Debianoids for decades and they just work. It's very nice.

    Gentoo works very nicely too. I usually whack systemd and enjoy.

    I do want to get a Fedora system going, but eh. "Just works" is nice.

  • nektro 10 hours ago ago

    Sounds like OP would love NixOS.

  • enriquto 13 hours ago ago

    The world needs a static linux distribution more than ever.

  • einpoklum 13 hours ago ago

    > I am all in for AppImages or something like that

    WTF? On the contrary!

    > And Snaps and Flatpaks tried to solve some of these things,

    Made things worse.

    > I don't care if these images are 10x bigger.

    ... the size is just part of the problem. The duplication is another part. A system depending on 100K different versions of libraries/utils instead of, oh, say, 5K. And there's the memory usage, as others mentioned.

    Also, there really isn't more trouble locating shared libraries today than 10 years ago; if anything, the opposite is true. Not to mention that there are even more searchable "crowd support" resources today than back then, for when you actually do have such issues.

    So...

    > How come GNU/Linux is worse than it was 10 years ago?

    I think it's actually better overall:

    * Fewer gotchas during installation

    * Better apps for users' basic needs (e.g. LibreOffice)

    * Less chance of newer hardware not being supported on Linux (it still happens though)

    but if you asked me what is worse, then:

    1. systemd.

    2. Further deterioration of the GNOME UI. Although TBH a lot of that sucked 10 years ago as well (e.g. the file picker)

    3. Containerization instead of developers/distributors having their act together

    but certainly not what the author is trying to push. (shrug)

    • ChocolateGod 13 hours ago ago

      > * Less chance of newer hardware not being supported on Linux (it still happens though)

      This is one area where I think Linux is really bad compared to Windows. AMD makes a new graphics card, pushes out a driver update for Windows, and it's all golden.

      On Linux? They have to spend months getting it into the kernel release cycle, then wait on distributions to test that update and trickle it down to users, and if you're on some kind of LTS distribution you might as well not bother.

      • trelane 11 hours ago ago

        It's not like they only start developing the driver when the card is released. In either case, they work with the OS vendor for months or years prior to develop the driver. One often sees this in e.g. Intel drivers coming out in the Linux kernel well before the hardware is available to the consumer.

        You're not entirely wrong either, though; it's more of a concern with LTS support. Though this is reduced somewhat by the fact that most things don't actually need custom drivers, because they use existing standards (e.g. HID) and/or userspace drivers.

        And, of course, you don't have it at all if you only buy hardware with Linux pre-installed and supported by the vendor. You know, like you do with Windows and Mac.

      • nixdev 7 hours ago ago

        LTS distros have "linux-image-newer" packages for where device drivers legitimately need access to a newer kernel.

        For all other scenarios, which are most of them, DKMS has existed for 21 years now.

        The Linux distros' approach is superior to Microsoft's in every way. There will always be some Windows-minded types out there who will struggle, maybe they should buy a Mac.

        https://en.wikipedia.org/wiki/Dynamic_Kernel_Module_Support

  • andrewstuart 13 hours ago ago

    If “gnu” gets headline billing for providing some userland tools, then really “systemd/Linux” should be the current operating system title, because systemd pervades the distros mentioned in this article. In many ways systemd IS the operating system outside the kernel.

    • nixdev 7 hours ago ago

      Yeah.. it's rather unfortunate. I am grateful for https://chimera-linux.org/ and https://wiki.gentoo.org/wiki/OpenRC

    • johnea 13 hours ago ago

      And in fact systemd/Linux is what I consider the biggest regression in Linux over the last decade.

      It pushes for a single upstream for all of userland. With IBM being that single upstream.

      After decades, I'll be leaving Arch Linux in my next migration because of this.

      • ChocolateGod 13 hours ago ago

        > It pushes for a single upstream for all of userland. With IBM being that single upstream.

        I hate to break it to you but what do you think the GNU Project is?

      • sho_hn 13 hours ago ago

        Neither of the systemd lead maintainers works for IBM (or the same company).

      • pessimizer 12 hours ago ago

        It's astounding how easily people slipped from defending Red Hat owning so much of Linux with their giant labyrinthine subsystems to literally defending IBM ownership. When Google or Facebook buy IBM, they'll be calling everyone purists for even mentioning it.

  • senzilla 13 hours ago ago

    The abysmal state of GNU/Linux is exactly why I moved to OpenBSD many years back. It's small, simple and very stable.

    The BSDs are definitely not for everyone, and they come with their own set of tradeoffs. However, it is safe to say that all BSDs are better today than 10 years ago. Small and steady improvements over time.

  • fullspectrumdev 13 hours ago ago

    It strikes me that Linux seems to have basically reinvented DLL Hell.

    • bachmeier 12 hours ago ago

      Not really. This has always been possible, and by definition, it has to be possible. If you use a stable distro and stick with your distro's repositories it's not a problem. If you want to install stuff outside the repos and you aren't willing to compile it yourself, it's absolutely going to be a problem.

  • marssaxman 13 hours ago ago

    > Shared dependencies were a mistake!

    Couldn't agree more.