Linux kernel security work

(kroah.com)

158 points | by chmaynard a day ago

70 comments

  • coppsilgold 12 hours ago

    Recently, things have been advancing which may finally allow a seamless virtualization experience on the Linux desktop (and match QubesOS in some security aspects).

    GPU drivers supporting native contexts with Mesa support.

    Wayland sharing between guest and host. It used to be somewhat sloppy (it involved protocol parsing; sommelier & wayland-proxy-virtwl), but recently someone undertook a project to do it properly that may soon bear fruit: https://codeberg.org/drakulix/wl-cross-domain-proxy

    A VMM to utilize these features: https://github.com/AsahiLinux/muvm

    And a solution which ties these things together: https://git.clan.lol/clan/munix

  • tuananh 4 hours ago

    This is why Red Hat is still relevant in 2025. There's always a need for this kind of work.

    • staticassertion 2 hours ago

      I think you could make a stronger case for the opposite. How does Red Hat know which commits to cherry-pick when upstream explicitly won't tell you which ones are relevant to security?

      • invokestatic 41 minutes ago

        Because Red Hat pays the salaries of dozens (hundreds?) of kernel maintainers all over different subsystems. So they’re subject matter experts, and know exactly which ones are relevant to Red Hat.

  • DebugDruid 21 hours ago

    Sometimes I dream about a 100% secure OS. Maybe formal verification is the key, or Rust, I don’t know. But I would love to know that I can't be hacked.

    • themafia 19 hours ago

      > But I would love to know that I can't be hacked.

      Cool. So social engineering it is. You are your own worst enemy anyways.

      • staticassertion 2 hours ago

        A world in which the only way to get hacked is to be tricked would be an insane improvement over today. There are also lots of ways to address social-engineering issues with technical solutions - FIDO2 is one example, as is app isolation, etc.

    • pjmlp 5 hours ago

      This has been done multiple times in research; see the Verve OS from Microsoft, where even the assembly is verified. That is where Dafny came from.

      https://en.wikipedia.org/wiki/Verve_(operating_system)

      However, worse is better on the market, and quality doesn't pay off, hence such ideas take decades to reach the mainstream.

    • jeffbee 20 hours ago

      The problem is that for the overwhelming majority of use cases the isolation features that are violated by security bugs are not being used for real isolation, but for manageability and convenience. Virtualization, physical host segregation, etc are used to achieve greater isolation. People don't necessarily care about these flaws because they aren't actually exposed to the worst case preconditions. So the amount of contributor attention you could get behind a "100% secure OS" might not be as large as you are hoping. Anyway if you want to work on such things there are various OS development efforts floating around.

      • nine_k 18 hours ago

        Isolation is one thing, correctness is another. You may have architecturally perfect, hardware-assisted isolation, but triggering a bug would breach it. This is how a typical break out of a VM, or a container, or a privilege escalation, happens.

        There is a difference between a provably secure-by-design system and a formally verified implementation like seL4.

    • sydbarrett74 11 hours ago

      Anything made by humans can be unmade by humans. Security is a perpetual arms race.

  • tuananh 4 hours ago

    If they really think that, they should have removed their CNA, no?

    • theamk an hour ago

      Nah, "removing CNA" = "letting any security researcher decide what counts as a kernel vulnerability".

      And unfortunately, there are plenty of security researchers who are only interested in personal CVE counts, and will try to assign highest priority to a mostly harmless bug.

  • miduil 18 hours ago

    > If you are forced to use encryption to report security problems, please reconsider this policy as it feels counterproductive (UK government, this means you…)

    LOL

  • anonnon 12 hours ago

    Meanwhile it's 2026 and Greg's own website still doesn't support TLS.

    • juliangmp 10 hours ago

      Honestly, until encrypted client hello has widespread support, why bother? I mean, I did it for fun the first time, and now with Caddy it's not a lot of effort. But for a personal blog, a completely static site, what benefit do you get from the encryption? Anyone monitoring the traffic will see the domain in clear text anyway. And they'd see the destination IP, which I imagine in this case is one server with exactly one domain pointed at it.

      • swinglock 10 hours ago

        Men in the middle, including predatory ISPs, can not only spy but also modify traffic. Injecting JavaScript and embedding ads is the best-case scenario. You don't want that.

        In addition even without bad actors TLS will prevent random corruption due to flaky infrastructure from breaking the page and even caching those broken assets, preventing a reload from fixing it. TCP/IP alone doesn't sufficiently prevent this.

        • psnehanshu 5 hours ago

          TCP ensures what gets sent on one side is received on the other side. TLS just encrypts the data. So even without TLS, random corruption won't happen unless someone performs a MITM attack.

          • swinglock 4 hours ago

            No, it does not. I've had this happen in legacy systems myself. The checksums of TCP/IP are weak and will let random errors through to L7 if there are enough of them. It's not even a CRC, and you must bring your own verification if it's critical for your application that the data is correct. TLS does that and more, protecting not only against random corruption but also against active attackers. The checks you get for free should be seen only as an optimization, letting most, but not all, errors be discarded quickly and cheaply. Just use TLS.
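
            To make the weakness concrete, here is a minimal sketch (the function name and sample data are mine, not from any real networking stack) of the RFC 1071 16-bit ones'-complement checksum that TCP uses, including a corruption it cannot detect:

```python
# RFC 1071-style 16-bit ones'-complement checksum, as TCP computes over
# its segment. Illustrative sketch only; names are invented.
def inet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"  # pad odd-length input with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF

# Swapping any two aligned 16-bit words leaves the sum unchanged, so this
# reordering corruption passes TCP's check undetected:
assert inet_checksum(b"ABCD") == inet_checksum(b"CDAB") and b"ABCD" != b"CDAB"
```

            Because the sum is commutative, whole classes of errors cancel out, which is why an end-to-end check like TLS's record MAC (or an application-level hash) is still needed.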

          • ppseafield 3 hours ago

            I saw for myself years ago that Verizon injected marketing/tracking headers into HTTP traffic. My ISP was the MITM.

            https://www.eff.org/deeplinks/2014/11/verizon-x-uidh

      • mqus 10 hours ago

        Integrity. TLS does prevent man-in-the-middle attacks. For a personal blog, that may not be important but you _do_ get a benefit, even if the encryption is not necessary.

  • staticassertion 2 hours ago

    "A bug is a bug" lol.

    There's a massive difference between "DoS requiring root" and "I can own you from an unprivileged user with one system call". You can say "but that DoS could have been a privesc! We don't know!" but no one is arguing otherwise? The point is that we do know the impact of some bugs is strictly a superset of other bugs, and when those bugs give control or allow a violation of a defined security boundary, those are security bugs.

    This has all been explained to Greg for decades; nothing will change, so it's best just to accept the state of things. I'm glad it's been documented clearly.

    Know this - your kernel is not patched unless you run the absolute latest version. CVEs are discouraged, vuln fixes are obfuscated, and you should operate under that knowledge.

    Attackers know how to watch the commit log for these hidden fixes btw, it's not that hard.
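
    As a rough illustration (the patterns, function names, and commit subjects below are hypothetical, not from any real tool or the actual kernel log), watching for silent fixes can be as simple as filtering `git log` subject lines:

```python
import re

# Wording that often signals a memory-safety fix; illustrative only.
HINTS = re.compile(
    r"use[- ]after[- ]free|out[- ]of[- ]bounds|overflow"
    r"|double[- ]free|uninitialized|refcount|race",
    re.IGNORECASE,
)

def flag_subjects(subjects):
    """Return commit subjects that look like unannounced security fixes."""
    return [s for s in subjects if "fix" in s.lower() and HINTS.search(s)]

# Hypothetical subjects, as if taken from `git log --oneline` output:
log = [
    "net: fix use-after-free in tcp_foo()",
    "docs: clarify wording in README",
    "fs: fix integer overflow in bar_ioctl()",
]
print(flag_subjects(log))
```

    Real attackers go further and diff the patches themselves, but even a crude filter like this shows why omission alone hides very little.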

    edit: Years later and I'm still rate limited so I can't reply. @dang can this be fixed? I was rate limited for posting about Go like... years ago.

    To the person who replies to me:

    > This is correct for a lot of different software, probably most of it. Why is this a point that needs to be made?

    That's not true at all. You can know if you're patched for any software that discloses vulnerabilities by checking if your release is up to date. That is not true of Linux, by policy, hence this entire post by Greg and the talks he's given about suggesting you run rolling releases.

    Sorry but it's too annoying to reply further with this rate limiting, so I'll be unable to defend my points.

    • tamirzb an hour ago

      > Know this - your kernel is not patched unless you run the absolute latest version.

      This is correct for a lot of different software, probably most of it. Why is this a point that needs to be made?

  • badgersnake 14 hours ago

    And then all our customers will demand fixes for them in our docker images, because they’re that smart.

    There must be a way to ship a docker image without a kernel, since it doesn’t get used for anything anyway.

    • derkades 10 hours ago

      Huh, how do you unintentionally ship a Linux kernel in a container image? The common base images definitely don't contain the kernel.

      • staticassertion 2 hours ago

        The only thing I can imagine is that they've somehow managed to rely on kernel headers in their image? idk

  • vlovich123 16 hours ago

    I think the most practical reason not to flag which bugs are security bugs is to avoid helping blackhat hackers by painting a giant neon sign over them, and that should be more than enough.

    I think all the other explanations are just doublethink. Why? If "bugs are just bugs" is really a genuinely held sentiment, why is there a separate disclosure process for security bugs? What does it even mean to classify a bug as a security bug during reporting if it's no different from any other bug report? Why are fixes developed in secret, and why are embargoes sometimes invoked? I guess some bugs are more equal than others?

    • fguerraz 15 hours ago

      As mentioned in the article, every bug is potentially a security problem to someone.

      If you know that something is a security issue for your organization, you definitely don't want to paint a target on your back by reporting the bug publicly with an email address <your_name>@<your_org>.com. In the end, it is actually quite rare (given the size of the code base and the popularity of Linux) that a bug has a very wide security impact.

      The vast majority of security issues don't affect organizations that are serious about security (yes really, SELinux eliminates or seriously reduces the impact of the vast majority of security bugs).

      • vlovich123 14 hours ago

        The problem with that argument is that the reports don’t necessarily come from the organization for whom it’s an issue. Unaffiliated security researchers not impacted by any such issue still report bugs this way (e.g. Project Zero reporting issues that don’t impact Google at all).

        Also, Android uses SELinux and still has lots of kernel exploits. Believing SELinux solves the vast majority of security issues is fallacious, especially since it’s primarily about securing userspace, not the kernel itself.

        • suspended_state 12 hours ago

          > The problem with that argument is that the reports don’t necessarily come from the organization for whom it’s an issue.

          You can already say that for the majority of the bugs being fixed, and I think that's one of the points: tagging certain bugs as exploitable makes it seem like the others aren't. More generally, someone's minor issue might be a major one for someone else, and not just in security. It could be anything the user cares about: data, hardware, energy, time.

          Perhaps the real problem is that security is just a view on the bigger picture. Security is important, I'm not saying the opposite, but if it's only an aspect of development, why focus on it in the development logs? Shouldn't it be instead discussed on its own, in separate documents, mailing lists, etc by those who are primarily concerned by it?

          • vlovich123 10 hours ago

            Are memory leak fixes described as memory leak fixes in the logs, or intentionally not? Are kernel panics or hangs not described in the commit logs even if they only happen in weird scenarios? That's clearly not what's happening, which means security bugs are still recorded and described differently - through omission.

            However you look at it, the only real justification consistent with the observed behavior is that pointing out security vulnerabilities in the development log helps attackers. That explains why known-exploitable bugs are reported differently beforehand and described differently after the fact in the commit logs. That wouldn't happen if "a bug is a bug" were actually a genuinely held position.

            • drysart 9 hours ago

              > However you look at it, the only real justification that’s consistent with observed behaviors is that pointing out security vulnerabilities in the development log helps attackers.

              And on top of your other concerns, this quoted bit smells an awful lot like 'security through obscurity' to me.

              The people we really need to worry about today, state actors, have plenty of manpower available to watch every commit going into the kernel and figure out which ones are correcting an exploitable flaw, and how; and they also have the resources to move quickly to take advantage of them before downstream distros finish their testing and integration of upstream changes into their kernels, and before responsible organizations finish their regression testing and let the kernel updates into their deployments -- especially given that the distro maintainers and sysadmins aren't going to be moving with any urgency to get a kernel containing a security-critical fix rolled out quickly because they don't know they need to because *nobody's warned them*.

              Obscuring how fixes are impactful to security isn't a step to avoid helping the bad guys, because they don't need the help. Being loud and clear about them is to help the good guys; to allow them to fast-track (or even skip) testing and deploying fixes or to take more immediate mitigations like disabling vulnerable features pending tested fix rollouts.

              • suspended_state an hour ago

                There are channels in place to discuss security matters in open source. I am by no means an expert, nor very interested in the topic, but just searching a bit led me to

                https://oss-security.openwall.org/wiki/mailing-lists

                The good guys are certainly monitoring these channels already.

              • vlovich123 2 hours ago

                There are lots of different kinds of bad guys. This probably has marginal impact on state actors. But organized crime or malicious individuals? It probably raises the bar a little, and part of defense in depth is employing a collection of mitigations to increase the cost of creating an exploit.

            • suspended_state 8 hours ago

              > Are memory leak fixes described as memory leak fixes in the logs or intentionally omitted as such? Are kernel panics or hangs not described in the commit logs even if they only happen in weird scenarios?

              I don't know nor follow kernel development well enough to answer these questions. My point was just a general reflection, and admittedly a reformulation of Linus's argument, which I think is genuinely valid.

              If you allow me, one could frame this differently though: is the memory leak the symptom or the problem?

              • vlovich123 2 hours ago

                No one is listing the vast number of possible symptoms a security vulnerability could be causing.

                • suspended_state 2 hours ago

                  Indeed, nobody does that, because it would be pointless; it doesn't expose the real issue. Is a security vulnerability a symptom, or the real issue, though? Doesn't it depend on the purpose of the code containing the bug?

    • staticassertion 2 hours ago

      > I think the most practical reason not to flag which bugs are security bugs is to avoid helping blackhat hackers by painting a giant neon sign and that should be more than enough.

      It doesn't work. I've looked at the kernel commit log and found vulnerabilities that aren't announced/marked. Attackers know how to do this. Not announcing is a pure negative.

  • JCattheATM a day ago

    Their view that security bugs are just normal bugs remains very immature and damaging. It is somewhat mitigated by Linux having so many eyes on it and so many developers, but a lot of problems in the past could have been avoided if they had adopted the stance the rest of the industry recognizes as correct.

    • tptacek a day ago

      From their perspective, on their project, with the constraints they operate under, bugs are just bugs. You're free to operationalize some other taxonomy of bugs in your organization; I certainly wouldn't run with "bugs are just bugs" in mine (security bugs are distinctive in that they're paired implicitly with adversaries).

      To complicate matters further, it's not as if you could rely on any more "sophisticated" taxonomy from the Linux kernel team, because they're not the originators of most Linux kernel security findings, and not all the actual originators are benevolent.

      • JCattheATM 21 hours ago

        > From their perspective, on their project, with the constraints they operate under, bugs are just bugs.

        That's a pretty poor justification. Their perspective is wrong, and their constraints don't prevent them from treating security bugs differently as they should.

        • ada0000 21 hours ago

          > almost any bugfix at the level of an operating system kernel can be a “security issue” given the issues involved (memory leaks, denial of service, information leaks, etc.)

          On the level of the Linux kernel, this does seem convincing. There is no shared user space on Linux where you know how each component will react/recover in the face of unexpected kernel behaviour, and no SKUs targeting specific use cases in which e.g. a denial of service might be a worse issue than on desktop.

          I guess CVEs provide some of this classification, but they seem to cause drama amongst kernel people.

        • samus 16 hours ago

          You have a pretty strongly worded stance, but you don't provide an argument for it. May I suggest you detail why exactly you think their perspective is wrong, apart from "a lot of problems in the past could have been avoided"?

          • JCattheATM an hour ago

            My view here isn't uncommon, even if it's a minority view. I've noticed a lot of people tend to just defend and adopt the stances of projects they like or use without necessarily thinking things through, and I assume that's at least partly the case here.

            There's been a lot of criticism written about the kernel devs' stance over the last, what, 20 years? One obvious problem is that without giving security bugs, i.e. vulnerabilities, priority, systems stay vulnerable until the bug gets patched at whatever place in the queue it happens to be at.

      • rwmj a day ago

        For sure, but you don't need to file CVEs for every regular bug.

        • Skunkleton 21 hours ago

          In the context of the kernel, it’s hard to say when that’s true. It’s very easy to fix some bug that resulted in a kernel crash without considering that it could possibly be part of some complex exploit chain. Basically any bug could be considered a security bug.

          • SSLy 21 hours ago

            plainly, crash = DoS = security issue = CVE.

            QED.

            • michaelt 20 hours ago

              BRB, raising a CVE complaining the OOM killer exists.

              • pamcake 19 hours ago

                Memory leaks are usually (accurately) treated as DoS. The OOM killer is a mitigation to contain them and not DoS the entire OS.

              • SSLy 6 hours ago

                You either get OOMed or the next malloc fails, and that's also going to wreak havoc.

              • worthless-trash 17 hours ago

                I could be wrong. But operation by design isn't considered a bug.

                • samus 16 hours ago

                  It is if some other condition is violated that is more important. Then the design might have to be reconsidered.

                • suspended_state 12 hours ago

                  If it is faulty, then it's not a bug, it's a flaw.

    • jacobsenscott 20 hours ago

      Classifying bugs as security bugs is just theater - and any company or organization that tries to classify bugs that way is immature and hasn't put any thought into it.

      First of all, "security" is undefined. Second, nearly every bug can be exploited in a malicious way, but that way is usually not easy to find. So should every bug be classified as a security bug?

      Or should a bug be classified as a security bug only when someone can think of a way to exploit it on the spot during triage? In that case only a small subset of your "security" bugs are classified as such.

      It is meaningless in all cases.

      • therealrootuser 19 hours ago

        > nearly every bug can be exploited in a malicious way

        This is a bit contextually dependent. "This widget is the wrong color" is probably not a security issue in most cases, unless the widget happens to be a traffic signal, in which case it is a major safety concern.

        Even the line between "this is a bug" and "this is just a missing, incomplete, or poorly thought out feature" can get a bit blurry. At a certain point, many engineers get frustrated trying to pick apart the difference between all these ways of classifying the code they are writing and just want to get on with making the system work better.

      • ykonstant 12 hours ago

        > "security"

        Security is not a dirty word, Blackadder.

      • JCattheATM 20 hours ago

        > First of all "security" is undefined.

        Nonsense.

    • schmuckonwheels 17 hours ago

      Linus has been very clear on avoiding the opposite, which is the OpenBSD situation: they obsess about security so much that nothing else matters to them, which is how you end up with a mature 30 year old OS that still has a dogshit unreliable filesystem in 2026.

      To paraphrase LT, security bugs are important, but so are all the other bugs.

      • JCattheATM 17 hours ago

        OpenBSD doesn't really stress about security so much as they made that their identity and marketing campaign - their OS is lacking too many basic capabilities a security focused OS should have.

        > To paraphrase LT, security bugs are important, but so are all the other bugs.

        Right, this is wrong, and that's the problem. Security bugs as a class are always going to be more important than certain other classes of bugs.

        • 6r17 11 hours ago

          I have to disagree; it's worse than you think. OpenBSD has so many mitigations in place that your computer will probably run 50% slower than on a traditional OS. In reality you do not want to pay for 100% safety everywhere, because it is simply expensive. You might prefer to create an isolated network on which you can set up un-mitigated servers; those will be able to run at 100% capacity.

          You can see this when compiling the Linux kernel: the mitigation options are rather numerous, and you'll also have to pick a timer frequency. What I'm saying is that currently Linux only lets you tune a machine to a specific requirement; it's not a spaceship on which you can dynamically change the timer frequency or shut down mitigations at runtime. In the same spirit, if you are holding keys on anything other than OpenBSD, I hope you have properly looked up what you were installing.

        • cedws 14 hours ago

          And their ‘no remote holes’ is true for a base install with no packages, not necessarily a full system.

          I think the OpenBSD approach of secure coding is outdated. The goal should have always been to take human error out of the equation as much as possible. Rust and other modern memory safe languages move things in that direction, you don’t need ultra strict coding standards and a bible of compiler flags.

          • JCattheATM an hour ago

            > I think the OpenBSD approach of secure coding is outdated.

            I don't think it's outdated; it's a core part of the puzzle. The problem with their approach is that they rely on it 100% and don't have enough in place (and yes, I'm aware of all the mitigations they do have) to protect against the bugs they miss. This is a lot less true now than it was 15-20 years ago, but it's still not great IMO.

    • akerl_ 21 hours ago

      This feels almost too obvious to be worth saying, but “the rest of the industry” does not in fact have a uniform shared stance on this.

    • redleader55 9 hours ago

      The rest of the industry relies on following a CVE list and ticking off vulnerabilities as a way to ensure "owners" are correctly assigned risk and sign it off - because there is nothing else that "owners" could do. The whole security-through-CVE approach is broken, and is designed to be useful for creating large "security organizations" whose single purpose is annoying everyone with reports without solving any issues.

    • firesteelrain 21 hours ago

      “A bug is a bug” is about communication and prioritization, not ignoring security. Greg’s post spells that out pretty clearly.

      • JCattheATM 16 hours ago

        Yes, that's what I was criticizing....

    • themafia 19 hours ago

      > a lot of problems in the past could have been avoided

      Such as?

  • beanjuiceII a day ago

    Did you read it? Because that's not their view at all.