It's a shame that this didn't end up going anywhere. When Qualcomm was doing their press stuff prior to the Snapdragon X launch, they said that they'd be putting equal effort into supporting both Windows and Linux. If anyone here is running Linux on a Snapdragon X laptop, I'd be curious to know what the experience is like today.
I will say that Intel has kind of made the original X Elite chips irrelevant with their Lunar Lake chips. They have similar performance/battery life, and run cool (so you can use the laptop on your lap or in bed without it overheating), but have full Linux support today and you don't have to deal with x86 emulation. If anyone needs a thin & light Linux laptop today, they're probably your best option. Personally, I get 10-14 hours of real usage (not manufacturer "offline video playback with the brightness turned all the way down" numbers) on my Vivobook S14 running Fedora KDE. In the future, it'll be interesting to see how Intel's upcoming Panther Lake chips compare to Snapdragon X2.
The iGPU in Panther Lake has me pretty excited about Intel for the first time in a long time. Lunar Lake proved they're still relevant; Panther Lake will show whether they can actually compete.
I'm typing this from a Snapdragon X Elite HP. It's fine, really, but my use is fairly basic: I only use it to watch movies, read, browse, draft Word and Excel documents, and do some light coding.
No gaming - and I came in knowing full well that a lot of mainstream programs don't play well with Snapdragon.
What has amazed me the most is the battery life, and the apparent absence of the lag or micro-stuttering that you get on some other laptops.
So, in all, fine for light use. For anything serious, use a desktop.
What is it about it that makes it unsuited for anything serious? The way you describe it, the only thing it's not suited for is gaming, which is not generally regarded as serious.
Many people, including myself, do serious work on a MacBook, which is also ARM. What's different about this Qualcomm laptop that makes it inappropriate?
> What's different about this Qualcomm laptop that makes it inappropriate?
Everything else around the CPU. Apple systems are entirely co-designed: the CPU is designed to work with the rest of the components, and everything together is designed to work with macOS.
While I'd love to see MacBook-level quality from other brands (looking at you, Lenovo), tight hardware+software co-design (and co-development) yields much better results.
Microsoft is pushing hard for UEFI + ACPI support on PC ARM boards. I believe the Snapdragon X2 is supposed to support it.
That still leaves the usual UEFI + ACPI quirks Linux has had to deal with for aeons, but it is much more manageable than (non-firmware) DeviceTree.
The dream, of course, would be an open-source HAL (which UEFI and ACPI effectively are). I remember that certain Asus laptops had a microstutter due to a non-timed loop doing an insane amount of polling. Someone debugged it with reverse engineering and posted it on GitHub, and it still took Asus more than a year to respond and fix it, only after it blew up on social media (including here). With an open-source HAL, the community could have introduced a fix in the HAL overnight.
I get the lacking Linux support, but what about Windows? Most serious work happens on Windows and their SoCs seem to have much better support there.
Apple's hardware+software design combo is nice for things like power efficiency, but in my experience so far, a MacBook and a similarly priced Windows laptop seem to be about equal in terms of weird OS bugs and actually getting work done.
I’m getting about 2 hours with current macos on an arm macbook pro. I used to get 4-5 last year.
This is out of the box. With obvious fixes like ripping busted background services out, it gets more than a day. There's no way normal users are going to fire up Console.app and start copy-pasting "nuke random Apple service" commands from "is this a virus?" forums into their terminal.
Apple needs to fix their QA. I’ve never seen power management this bad under Linux.
It’s roughly on par with noughties windows laptops loaded with corporate crapware.
That's unfortunate; perhaps your particular MacBook is having a hardware problem?
As a point of comparison, I daily two ARM Macs (work M4 14 + personal M3 14), and I get far better battery life than that (at least 8 hours of "normal" active use on both). Also, anecdotally, the legion of engineers at my office with Macs are not seeing battery life issues either.
That said, I have yet to encounter anyone who is in love with macOS Tahoe and its version of Liquid Glass.
The current issue is that iOS 26.1's wallpaper renderer crashes in a tight loop if the default wallpaper isn't installed, and under Xcode's Simulator it isn't.
I have macOS crash reporting turned off, but crashreport still pins the CPU for a few minutes on each iOS wallpaper-renderer crash. I always have the iOS Simulator open, so: two hours of battery, max.
I killed crashreport and it spun the CPU on some other thing.
In macOS 25, there's no throttle for mds (Spotlight), and running builds at a normal developer pace produces about 10x more indexing churn than the Apple silicon can handle.
Sorry, thought I had posted this, but it didn't go through. It's a T480 with the 72Wh and the 24Wh batteries, running FreeBSD. The screen has also been replaced with a low-power one, which helps a lot in saving battery while still giving good brightness.
Most of the time I am running StumpWM with Emacs on one workspace and Nyxt in another. So just browsing and coding mostly.
OpenBSD gets close, but FreeBSD has a slight edge battery-wise. To be fair, that is on an old CPU that still has homogeneous cores; more modern CPUs can probably benefit from a scheduler that understands heterogeneous cores.
Or they just got one of the 'good' models and tuned Linux a bit. I have a couple of Lenovos and it's hit or miss, but my 'good' machine is an AMD one which, after a bit of tuning, idles with the screen on at 2-3W and sits around 5W with light editing/browsing/etc. With the 72Wh battery that is >14h, maybe over 20 if I'm just reading documentation. Of course it's only 4-5h if I'm running a lot of heavy compiles/VMs, unless I throttle them, in which case it easily clears 8h.
One of my 'bad' machines is more like 10-100W, and I'm lucky to get two hours.
Smaller efficient CPU + low power sleep + not a lot of background activity + big battery = very long run times.
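The runtime figures above are just battery capacity divided by average draw; a minimal sketch of that arithmetic (the wattages are the ones quoted in the comment, not independent measurements):

```python
# Back-of-envelope laptop runtime: battery capacity (Wh) / average draw (W).
# The draw figures below are the ones quoted above, not measurements.
def runtime_hours(battery_wh: float, avg_draw_w: float) -> float:
    return battery_wh / avg_draw_w

print(runtime_hours(72, 5))   # light editing/browsing at ~5 W -> 14.4 h
print(runtime_hours(72, 3))   # idle with the screen on at ~3 W -> 24.0 h
print(runtime_hours(72, 9))   # heavy compiles/VMs at ~9 W -> 8.0 h
```

The same division explains why a big 72Wh battery plus a low idle draw dominates every other optimization: halving idle draw doubles runtime.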
For this to happen, we would need to see a second company that controls both the hardware and the software, and that's not realistic, economically. You can't just jump into that space.
You could argue that is exactly what Tuxedo is doing. In this case, they could not provide the end-user experience they wanted with this hardware so they moved on.
System76 may be an even better example as they now control their software stack more deeply (COSMIC).
When I say "control the software", what I mean is that we need another company that can say "hey, we are moving to architecture X because we think it's better" and have most developers rewrite their apps for the new arch within a year, because it's worth it for them.
There needs to be a huge, healthy ecosystem and economic incentive.
For end users, it's all about the software. I don't care what brand it is, which OS, or how much it costs; I want the most polished software, and I want it on release day.
Right now, it's Apple.
Microsoft tries to do this but is held back by the need for backward compatibility (enterprise adoption), and Google cannot do this because of Android fragmentation. I don't think anyone is even close to trying this with Linux.
Almost everything on regular Fedora works out of the box on Asahi Fedora on Apple Silicon.
You can get a full Ubuntu distribution for RISC-V with tens of thousands of packages working today.
Many Linux users would have little trouble changing architectures. For Linux, the issue is booting and drivers.
What you say is true for proprietary software of course. But there is FEX to run x86 software on ARM and Felix86 to run it on RISC-V. These work like Rosetta. Many Windows games run this way for example.
The majority of Android apps ship as Dalvik bytecode and should not care about the arch. Anything using native code is going to require porting though. That includes many games I imagine.
I was incredibly excited when they announced the chip alongside all kinds of promises regarding Linux support, so I pre-ordered a laptop with the intention of installing Linux later on. When reports came out that single core performance could not even match an old iPhone, alongside WSL troubles and disappointing battery life, I sent it back on arrival.
Instead I paid the premium for a nicely specced MacBook Pro, which is honestly everything I wanted, save for Linux support. At least it's proper Unix, so I don't notice much difference in my terminal.
Equal effort is far more likely from Qualcomm than hardware docs. They don't even freely share docs with partners, and many important things are restricted even from their own engineers. I've seen military contractors less paranoid than QCOM.
I'd have to say that full hardware documentation, even under NDA, is a prerequisite to claiming equal effort. The expectation on a desktop platform (that is, explicitly not mobile, like phones or tablets) is that development is mostly open for those who want it, and Qualcomm's business is sort of fundamentally counter to that. So either they're going to have to change those expectations (which I would prefer not to happen), provide more to manufacturers, or expect their market performance to be poor.
Qualcomm could've become "the Intel of the ARM PC" if they wanted to, but I suspect they see no problem with (and perhaps have a vested interest in) proprietary closed systems given how they've been doing with their smartphone SoCs.
Unfortunately, even Intel is moving in that direction whenever they're trying to be "legacy free", but I wonder if that's also because they're trying to emulate the success of smartphone SoC vendors.
The extent to which PCs are open is a historical accident, and one most OEMs would rather not repeat, as you can see everywhere from embedded all the way to cloud systems.
If anything, Linux-powered devices are a good example of how all of them end up with OEM-name Linux, with minimal contributions to upstream.
If everyone left Windows in droves, expect regular people to be getting Dell and HP Linux at the local PC store, with binary blobs, pre-installed stuff, and the same limitations when going outside their distros.
OEMs don't care about that. It's Qualcomm in particular that sucks. If you buy a Linux PC from System76 it comes with their own flavor of Linux but it's basically Ubuntu and there is nothing stopping you from putting any other version you want on it. The ones from Dell just use common distributions.
Meanwhile Linux is getting a huge popularity boost right now from all the PCs that don't officially support Windows 11 and run Linux fine, and those are distribution-agnostic too because they didn't come with it to begin with.
Usually what is stopping us are the drivers that don't work in other distro kernels, or small utilities that might not have been provided with source.
4% was last year; it was 5% by this summer (a significant YoY increase, and about what macOS had in 2010), and Windows 10's end of support was only last month, so the numbers from that aren't even in yet.
> Usually what is stopping us are the drivers that don't work in other distro kernels, or small utilities that might not have been provided with source.
A lot of these machines are pure Intel or AMD hardware, or 95% and then have a Realtek network controller etc., and all the drivers are in the kernel tree. Sometimes the laptops that didn't come with Linux to begin with need a blob WiFi driver but plenty of them don't and many of the ones that do will have an M.2 slot and you can install a different one. It's not at all difficult to find one with entirely open source drivers and there is no apparent reason for that to get worse if Linux becomes more popular.
Better do the math: that means 15 years to reach where macOS is nowadays, which is still largely irrelevant outside tier-1 economies, and that assumes nothing else will change in the computing landscape.
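The linear extrapolation behind that "15 years" can be spelled out. It assumes the ~1 percentage point per year of growth quoted above (4% to 5%) holds steady, and takes macOS at roughly 20% of desktops, a commonly cited ballpark:

```python
# Linear extrapolation of Linux desktop share toward macOS's share.
# Assumptions (not measurements): 1 pp/year growth, macOS at ~20%.
linux_now, macos_now = 5.0, 20.0     # percent of desktop share
growth_pp_per_year = 1.0             # percentage points gained per year
years = (macos_now - linux_now) / growth_pp_per_year
print(years)  # 15.0
```

Of course, the whole dispute in the surrounding comments is whether growth stays linear at all; a tipping-point dynamic would make this estimate far too pessimistic.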
I was around when everyone was supposed to switch in droves to Linux back in the Windows XP days, or was it Vista, maybe Windows 7, or Windows 8, eventually 8.1, I guess Windows 10 was the one, or Windows 10 S, nah really Windows RT, actually it was Windows 11, or maybe...
I understand; I used to have M$ in my email signature back in the 1990s, surely to be found in some USENET or mailing list archive. Yet we need to face reality: without Windows, Valve would not have a business.
> Better do the math, which means 15 years to reach where macOS is nowadays
macOS nowadays is closing in on 20%. And you can only buy macOS on premium-priced hardware and by now Linux supports more games than it does. The thing holding either of them back has always been third party software compatibility, which as the web has eroded native apps has been less of a problem, which is why both macOS and Linux have been growing at the expense of Windows.
And these things have tipping points. Can your company ignore Linux when it has 0.5% market share? Sure. Can you ignore it when it has 5% market share? There is less of a case for that, so more things support it, which allows it to get even more market share, which causes even more things to support it. It's non-linear. The market share of macOS would already be significantly higher than it is if a new Mac laptop didn't start at a thousand bucks and charge $200 extra to add 8GB of RAM. Linux isn't going to have that problem.
Now, is it going to jump from 5% to 50% in three days? Of course not. But it's probably going to be more tomorrow than it was yesterday for the foreseeable future.
> we need to face reality: without Windows, Valve would not have a business.
Valve makes money from selling games and Steam. If Linux had 70% desktop market share and Windows had 5%, what would change about how they make money?
I mean, part of that is the difference between how easy it is to build a platform on Linux vs. how hard it is to get code into the tree. This is, in my mind, a major change in the Linux development process.
Nobody expected Intel to provide employees to write support for 80386 pagetables, or Philips to write and maintain support for the I2C bus. The PC keyboard driver was not sponsored and supported by IBM. Getting the code into Linux was really easy (and it shows in a lot of the older code; Linux kernel quality standards have been rising over time), because everyone was mostly cooperating on a cool open-source project.
But at some point, this became apparently unsustainable, and the expectation is now that AMD will maintain their GPU drivers, and Qualcomm (or some other company with substantial resources) will contribute code and employees to deal with Adreno GPUs. This led to a shift in reviewer attitudes: constant back-and-forth about code or design quality is typical on the mailing lists now.
This means contributing code to the kernel is a massive chore, which any person with interest in actually making things work should prefer to avoid. What's left is language lawyers, evangelists and people who get paid to sit straight and treat it as a 9-5 job.
The Asahi and pmOS folks have been quite successful in upstreaming drivers to the kernel (even for non-trivial devices like GPUs) as enthusiast contributors with no real company backing. The whole effort to include Rust in the Linux kernel is largely about making it even easier to write future drivers.
Agreed, and I'm fairly impressed by the GPU effort. That said, it did take a very long time, even with the demonstrably extreme amount of excitement from the Linux community (Linus himself was thrilled to use a MacBook). What do you do for parts that are useful but don't get people this excited?
What really burned me on this kind of stuff was the disappearance of Xeon Phi drivers from the kernel. Intel backed it out after they discontinued the product line, and the kernel people gladly went with it ("who'll maintain this?"). Intel pulled a beautiful piece of process lawyership on it: apparently they could back it out without difficulty, because the product was never released! (Never mind it has been sold, retired and circulated in public.)
I don't know if the prospect of being the "Intel of ARM" is very appealing when you can manufacture high-margin smartphone SOCs instead. The addressable market doesn't seem to be very large; any potential competition is stifled by licensing on both Microsoft and Softbank's side.
The legend of Windows on ARM is decades old, and people have been seriously trying to make it happen for at least the past two decades. They're all bled dry. Apple is the only one who can turn a profit, courtesy of their sweetheart deal with Masayoshi Son.
Well that would have an obvious solution. Go make RISC-V CPUs for phones etc. until you get good enough at it to be competitive in laptops, at which point Microsoft gets interested in supporting you and you get to be the Intel of RISC-V without dealing with Softbank.
> I will say that Intel has kind of made the original X Elite chips irrelevant with their Lunar Lake chips.
Depends why the Snapdragon chips were relevant in the first place! I got an ARM laptop for work so that I can locally build things for ARM that we want to be able to deploy to ARM servers.
Cross compilation is a pain to set up, especially if you're relying on system libraries for anything. Even dynamically linking against glibc is a pain when cross compiling.
We do have ARM CI pipelines now, but I can only imagine what a nightmare they would have been to set up without any ability to locally debug bits that were broken for architectural reasons.
I guess you must be doing trickier things than I ever have. I've found docker's emulation via qemu pretty reliable, and I'd be pretty surprised if there was a corner case that wouldn't show on it but would show on a native system.
Not really trickier, but different stack - we’re a .NET stack with a pile of linters, analyzers, tests, etc. No emulation, everything run natively on both x86-64 and ARM64. (But prior to actually running/debugging it on arm64, had various hang-ups.)
Native is also much faster than qemu emulation - I have a personal (non-.NET) project where I moved the CI from docker/qemu for x86+arm builds to separate x86+arm runners, and it cut the runtime from 10 minutes in total to 2 minutes per runner.
It's more surprising to me that software isn't portable enough that you can develop locally on x86-64. And then have a proper pipeline that produces the official binaries.
Outside the embedded space, cross-compilation really is a fool's errand: either your software is not portable (which means it's not future-proof), or you are targeting an architecture that is not commercially viable.
> It's more surprising to me that software isn't portable enough that you can develop locally on x86-64. And then have a proper pipeline that produces the official binaries.
This is what we largely do - my entire team other than me is on x86, but setting up the ARM pipelines (on GitHub Actions runners) would have been a real pain without being able to debug issues locally.
Which doesn't mean that it's easy to use an ARM device in the way I'd want to (i.e. as a trouble-free laptop or desktop with complete upstream kernel support).
I have an AMD laptop from a couple of generations back that can 'standby' for months; it's called S4 hibernate. At the same time, it's set for S3 and can sit in S3 for a few days at least, recovering in less time than it takes to open the screen. The idea that you need instant wakeup when the screen has been closed for days is sort of a niche case; even Apple's machines hibernate if you leave the screen closed for too long.
That isn't to say that modern standby/s2-idle isn't super useful, because it is, but more for actual use cases where the machine can basically go to sleep with the screen on displaying something the user is interacting with.
I fully expected this. I really wanted to get the Snapdragon X Elite IdeaCentre just because I wanted an ARM target to run stuff on, but if I'm being honest, the Mac Minis are way better price/performance with support. Apple Silicon is far faster than any other easily available ARM processor (Ampere, Qualcomm, anything else) with good Linux support.
I am so grateful to the Asahi Linux guys who made this whole thing work. What a tour de force! One day, we'll get the M4 Mac Mini on Asahi and that will be far superior to this Snapdragon X Elite anyway.
I remember working on a Qualcomm dev board over a decade ago and they had just the worst documentation. The hardware wouldn't even respond correctly to what you told it to do. I don't know if that's standard but without the large amount of desire there is to run Linux on Apple Silicon I didn't really anticipate support approaching what Asahi has on M1/M2.
A tour de force indeed. Asahi Linux only works as well as it does because of the massive effort put in by that team.
For all the flak Qualcomm takes, they do significantly more than Apple to get hardware support into the kernel. They are already working to mainline the X2 Elite.
The difference is that Apple only makes a few devices and there is a large community around them. It would be far less work to create a stellar Linux experience on a Lenovo X Elite laptop than on a M2 MacBook. But fewer people are lining up to do it on Lenovo. We expect Lenovo, Linaro, and Qualcomm to do it for us.
Wrong documentation is perhaps worse than no documentation. Although Apple provides little, at least what it does provide is usually accurate, and you know the rest must be reverse engineered.
Unfortunately with the main reverse engineers of the Asahi project having moved on, I very much doubt we will see versions working on more recent M-series chips.
Qualcomm doesn't bother to upstream most of their SoCs. They maintain a fork of a specific Linux kernel version for a while, and when they stop updating it, or a new version of Android requires a newer kernel, updates for all devices based on that SoC end.
They have little experience producing code that is high enough quality it would be accepted into Linux kernel. They have even less experience maintaining it for an extended period of time.
Somewhat of a tangent: the x86-based laptops from this brand (it's new to me; I'd never come across Tuxedo Computers before) look attractive, but there is no information about their screens' main property: are they glossy or matte?
My wife is very sensitive to glossy screens, and we have big problems finding a new laptop for her, as most good ones are glossy now.
If she's OK with macOS, the new "nano-textured display" options on the MacBook Pros are very nice. I'm typing from one right now. It has the sharp color response of the glossy displays, but absolutely no noticeable glare.
It's ACPI: most laptops ship with half-broken ACPI tables and provide support for tunables through Windows drivers. It's convenient for laptop manufacturers, because Microsoft makes it very easy to update drivers via Windows Update, and small issues with sleep, performance, etc. can mostly be patched through a driver update.
Linux OTOH can only use the information it has from ACPI to accomplish things like CPU power states, etc. So you end up with issues like "the fans stop working after my laptop wakes from sleep" because of a broken ACPI implementation.
There are a couple of laptops with excellent battery life under linux though, and if you can find a lunar lake laptop with iGPU and IPS screen, you can idle around 3-4W and easily get 12+ hours of battery.
LG Gram user here with Debian as a daily driver. Can confirm, maybe not 15h, but I don't think about charging. Plus, it's super stable, not a single crash or hang-up over years. It just works. I hope LG will keep this up and not mess up next iterations of the hardware.
I had an LG gram before the battery in it gave out and now it won't boot with the battery plugged in. The battery life was amazing, it always slept properly, etc.
Now I have a Framework. It randomly reboots when I close the lid, the battery life is terrible, etc. I live with it since I like the idea of a repairable laptop.
What's standing in the way of doing something like NDISwrapper, but for ACPI? Is it just that nobody with the required skills has spent the effort, or is there something technical?
If the observable behavior is bad Linux performance, it's a Linux problem.
There's a saying in motorcycling: it's better to be alive than right. There's no upside in being correct if it leaves you worse off.
There are ways to make things better leveraging the Linux way. Make more usable tools for fixing ACPI deficiencies with hotloadable patches, ways of validating or verifying the patches for safety, ways of sharing and downloading them, and building a community around it.
Moaning that manufacturers only pay attention to where their profits come from is not a strategy at all.
Decompile your ACPI tables and then do a grep for "Linux". You are likely to find it, meaning the vendor took time to think about Linux on their hardware. Some vendors take the time to write good settings and code for the Linux ACPI paths, some dump you into no-man's land on purpose if your OSI vendor string is "Linux".
It's quite literally a vendor problem created by vendors leading anyone that doesn't run Windows astray in some cases.
If you run Linux, then dare to change your OSI vendor string to "Windows", you've entered into bespoke code land that follows different non-standard implementations for every SKU, where it's coded to work with a unique set of hardware and bespoke drivers/firmware on Windows. You also forgo any Linux forethought and optimizations that went into the "Linux" code paths.
My point is that from the Linux side, you're damned if you do and damned if you don't, no matter how you tackle the issue. If the layer above Linux is going to deliberately malfunction and lie on the Linux happy path, or speak some non-standard per-device driver protocol if you lie to use the Windows path, there's not much that can be done.
It's only a "Linux problem" if you're trying to run Linux on hardware that is actively hostile to it. There are plenty of vendors who supply good Linux happy paths in their firmware, using their hardware is the solution to that self-imposed problem.
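What that "decompile and grep" check looks like can be sketched offline. The ASL snippet below is synthetic (on real hardware you would dump the tables with `acpidump` and decompile with `iasl -d` from acpica-tools), but the `_OSI ("Linux")` branch is the kind of thing you're hunting for:

```python
# Synthetic slice of decompiled ACPI ASL. On real hardware, obtain it with
# `sudo acpidump -b` followed by `iasl -d dsdt.dat` (acpica-tools package).
sample_asl = """
If (_OSI ("Linux")) { Store (One, OSYS) }
If (_OSI ("Windows 2015")) { Store (0x07DF, OSYS) }
"""

# The grep: does this firmware branch on the "Linux" _OSI string?
linux_hits = [line for line in sample_asl.splitlines() if '"Linux"' in line]
print(len(linux_hits))  # 1 -> this (synthetic) vendor special-cases Linux
```

Whether that special case is a well-tuned path or a deliberate dead end is exactly the vendor-by-vendor lottery described above.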
I think the correct strategy in this case is to return your laptop to the store if it has linux compatibility issues, and keep trying until you find one that works.
i.e. don't support vendors whose laptops don't work in Linux.
> Does anyone know why Linux laptop battery life is so bad?
It's extremely dependent on the hardware and driver quality. On ARM and contemporary x86 that's even more true, because (among other things) laptops suspend individual devices ("suspend-to-idle" or "S0ix" or "Modern Standby"), and any one device failing to suspend properly has a disproportionate impact.
That said, to a first approximation, this is a case where different people have wildly different experiences, and people who buy high-end well-supported hardware experience a completely different world than people who install Linux on whatever random hardware they have. For instance, Linux on a ThinkPad has excellent battery life, sometimes exceeding Windows.
Newer laptops come with extra power peripherals and sensors. Some of them are in ACPI tables, some are not. Most of them are proprietary ASICs (or custom chips; Nuvoton produces quite a few of those). The Linux kernel and userspace have poor support for them. Kernel PCIe drivers require some tuning. The USB stack is kind of shaky, and its power management features are often turned off since they get unstable as hell.
If you have a dGPU, the Linux implementation of power management and offloading actually consumes more power than Windows' due to bad architectural design. Here is a talk from XDC2025 that plans to fix some of the issues: https://indico.freedesktop.org/event/10/contributions/425/
Desktop usage is a third-class citizen under Linux (servers come first, embedded a distant second). Phones have good battery life because SoC and ODM engineers spend months tuning them, and they have first-party proprietary drivers. None of the laptop ODMs do such work to support Linux. Even their Windows tooling is arcane.
Unless users get drivers for all the minute PMICs and sensors, you'll never get the battery life of a clean Windows install with all the drivers. MS, and especially the OEMs, shoot themselves in the foot by filling the base OS with so much bloat that Linux actually ends up looking better compared to stock OEM installs.
In addition to the other comments, it's worth noting macOS started adding developer documentation around energy efficiency, quality-of-service prioritization, etc. (along with OS support for them) around 2015-2016, when the first fanless USB-C MacBook came out: https://developer.apple.com/library/archive/documentation/Pe...
I think I'm arguing it's both: the OS itself can optimize things for battery life, while also instilling awareness and providing API support so developers can consider it too.
On top of this, they started encouraging adoption of multithreading and polished up the APIs to make doing so easier even in the early days of OS X, since they were selling PPC G4/G5 towers with dual and eventually quad CPUs.
This meant that by the time they started pushing devs to pay attention to QoS and such, good Mac apps had already been thoroughly multithreaded for years, making it relatively easy to toss things onto lower priority queues.
My Dell XPS had pretty good battery life on Linux, probably better than on Windows. But Dell sells the XPS with Linux preinstalled, so I assume it has a lot to do with the drivers. Many notebooks have custom chips inside, or some weird BIOS that works together with a Windows program. I'd say laptops are more diverse than desktop PCs with off-the-shelf hardware.
Yeah, my 3-ish year old 13.4" XPS Plus is currently consuming 3.9 W with around 150 open tabs across four Firefox windows, 3 active Electron apps, Libreoffice Writer & Impress, a text editor, and a couple of terminals.
That's in an extremely vanilla Debian stable install, running in the default "Balanced" power mode, without any power-related tuning or configuration.
That compares reasonably well with my 14" M3 Macbook Pro, which seems to be drawing around 3.5 W with a similar set of apps open.
Sure, the XPS is flattered in this comparison because it has a slightly smaller screen, but even accounting for that it would still be... fine? Easily enough to get through a full day of use, which is all I care about.
There's nothing special about this XPS, and I'd expect the Thinkpad models that have explicit Linux support to be equally fine. The key point is that the vendor has put some amount of care and attention into producing a supportable system.
Install powertop, the "tunables" tab has a list of system power saving settings you can toggle through the UI. I've seen them make a pretty big difference, but YMMV of course.
It mostly just breaks things unfortunately. You can faff around for ages trying to figure out which devices work and which don’t but you end up with not much to show for it.
I ran into this problem on a Slimbook some years ago now. I found that my battery drained way too fast in standby, and I remember determining that this was some (relatively common) problem with sleep states: some Linux machines can't really enter or stay in a deeper sleep state, so my Slimbook's standby wasn't much of a standby at all.
A lot of people say that lightweight desktops/distros help. Probably GNOME/KDE unnecessarily use your SSD, network, GPU and other resources even when you are idle, compared to using a minimal WM and only starting the daemons you actually need.
I personally never tested it, and I can't find definitive benchmarks that confirm and measure the waste.
While each of the comments here describes an individual failing, on a well-supported laptop it is possible to get better power efficiency than Windows if you're willing to spend the time manually tuning Linux. The powertop/etc suggestions are fine, but fundamentally the reason some of the 'lighter' DEs save so much power is that there is a lot of 'slop' in the default KDE/GNOME and application set. You have random things waking up too regularly and polling stuff, which pulls the cores out of deep sleep states. And then there are all the kernel issues with being unable to identify and prioritize/schedule for a desktop. E.g. the only thing that should be given free rein is the active foreground application, while background applications get grouped and suppressed, running on little cores at slow rates if they have work to do, etc. All that is a huge part of why macOS does so well vs Linux on the same hardware.
The comment about ACPI being the problem is slightly off base, since it's a huge part of the solution to good power management on modern hardware. There isn't another specification that allows the kind of fine-grained background power tuning of random buses/devices/etc. by tiny management cores whose entire purpose is monitoring activity and making adjustments, which modern machines require. If one goes the DT route as QC has done here, each machine needs a huge pile of custom mailbox interface drivers upstreamed into the kernel, customized for every device and hardware update/change. They get away with this in the Android space because each device is literally a customized OS, and they don't have the upstream turnaround problem because they don't upstream any of it. But that won't scale for general purpose compute, as the parent article discusses.
> We will continue to monitor developments and evaluate the X2E at the appropriate time for its Linux suitability. If it meets expectations and we can reuse a significant portion of our work on the X1E, we may resume development. How much of our groundwork can be transferred to the X2E can only be assessed after a detailed evaluation of the chip.
Apparently the Windows exclusivity period has ended, so Google will support Android and ChromeOS on Qualcomm X2-based devices in 2026, https://news.ycombinator.com/item?id=45368167
I wonder what made it so hard? I thought Snapdragon was already providing the Linux drivers? Does anyone know? Maybe those were not open source?
My guess is that it's the same situation as with Linux phones: they get large driver blobs from the board producer that aren't open. But then... maybe we should invest time in microkernels? Maybe Linux is a dead end because of its monolithic architecture? Because I doubt the big companies will change...
What Android phones prove is that you can get excellent performance and fantastic battery life from Linux on third-party hardware. This could and should carry over to Linux running on an ARM64 laptop, but for some reason it doesn't. Maybe it's economies of scale with respect to the investment on the phone driver side.
First of all, the userspace is completely different. Secondly, Android has over the years aggressively changed the way background processes work (in the context of Android activities, not bare-bones UNIX), so it isn't the same as GNU/Linux, where anything goes.
No it's not and never will be. Google says every year that ChromeOS and Android are merging but it's not happening. They are just merging some components, e.g. the Bluetooth Stack. ChromeOS got a new design a few months ago so they are still putting work into it.
This feels like BAU for PC vendors - you test out a product on a new combination of hardware, and it isn't mature/stable/ready for production, so you kick it down the road to develop later - this is especially true for Linux, where a LOT of the work would be done outside of your organisation.
I mean I feel like once one of the ARM chipmakers can lend a hand on the software side it should be a landslide.
Google and Samsung managed to make very successful Chromebooks together, but IIRC there was a bunch of back and forth to make the whole thing boot quickly and sip battery power.
What’s the primary need for ARM? Is it because Apple silicon showed a big breakthrough in performance to power with reduced instruction set? While it’s amazing on paper I barely notice a difference on my day to day use between an Intel Ultra and a M2 in performance. Battery life is where they are miles apart.
I’m guessing for most people it doesn’t much matter. Most people aren’t writing assembly. They do love an all day battery.
I think the competition really helps keep these companies honest.
Hardware companies generally start working on a laptop before a SOC is released, not after. They also need to secure manufacturer support, in this case Qualcomm to be able to deliver in time.
I mean updating it. Often the updates are Windows-only.
For example, I've had this Dell Elitebook where I installed Debian, wiping out Windows. On Windows, the system prompted for a BIOS update practically every week, but it's been years on Linux on the same BIOS. IIRC the updates were Windows-only, or required jumping through some complex rings of fire. Haven't bothered looking it up in a while.
I also had to disable some security protection before I could install Debian, though I guess there's a way around that if I research hard enough.
If it's Dell, they're one of the most prolific, if not the most, on LVFS. All of my Dell hardware gets firmware updates via fwupd. Dell is conservative with marking their BIOS updates, though, and you might have to enable the testing LVFS repository for regularly updated BIOS.
If you mean HP EliteBooks, it doesn't have to be any more complicated than 1) extract BIOS archive 2) drag & drop BIOS file 3) reboot.
You download the BIOS update .exe, run 7zip on it, and take the BIOS.BIN file and either stick it in the root of your EFI partition and it will install automatically on boot, or just run `fwupdtool install-blob BIOS.BIN` and it will install automatically on reboot.
HP publishes updates to LVFS regularly for both their laptops and their thunderbolt docks(!)
I believe Elitebooks are Ubuntu Certified, which I would imagine gets their firmware updates pushed to LVFS.
Personally, with the Thunderbolt Dock 4 and an HP ZBook G10, I have gotten timely (<30 days of release) automatic updates from LVFS for both on Ubuntu/Fedora with 0 effort on my part.
Just an update... learning something, thanks. I installed fwupd and it said something about a disabled service.
I ran fwupdmgr (thanks, Google) and it lists a whole bunch of hardware on my laptop as "device with no update found", including UEFI device firmware, UEFI dbx, System firmware, etc.
I will check further, thanks again. Didn't know about this.
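For anyone else starting out with fwupd, the usual flow is just a few commands. A sketch (all of these are standard fwupdmgr subcommands, though which remotes exist depends on your vendor):

```shell
# Refresh metadata from LVFS, then see what applies to your machine
sudo fwupdmgr refresh
sudo fwupdmgr get-devices      # lists the hardware fwupd can update
sudo fwupdmgr get-updates      # shows pending firmware updates, if any
sudo fwupdmgr update           # downloads and schedules them (often applied on reboot)

# Some vendors (Dell especially, per the comment above) ship BIOS updates
# to the testing remote first:
sudo fwupdmgr enable-remote lvfs-testing
```

"Device with no update found" usually just means the vendor hasn't published anything newer for that component, not that fwupd is broken.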
I have updated my HP laptop's UEFI 4-5 times now using LVFS with HP's officially published updates. Just did it 2-3 days ago, in fact, for the latest update. I got a GUI prompt that there was a firmware update for my system, I clicked install, it said it would reboot my system, and I said ok and went to go make tea. Came back to a login screen and the update installed successfully.
I have used Secure Boot with Linux for several years now, too. Microsoft signed the shim loader and most distros can do it out-of-the-box now, much like fwupdmgr (above).
I think the biggest thing is finding, for example, devices that are Ubuntu Certified. You don't have to use Ubuntu necessarily, but the whole ecosystem benefits from hardware manufacturers having a slight degree of accountability having done this.
That's a choice the vendor makes, and Tuxedo Computers is the vendor in this case. Since they control the product they're making, they should be able to provide nice firmware updates for Linux users. They just decided not to even try, I guess?
Was to be expected. Qualcomm is very bad at supporting open platforms.
I was disappointed to see that no good Linux-compatible XPS is available anymore, because they are now based on the latest Snapdragon for bullshit Windows "AI" reasons.
I wonder if MediaTek will try its hand at a laptop-oriented SoC now that their flagship mobile SoCs are competitive again and Google is merging Android and ChromeOS.
Generally, they are far nicer than Qualcomm when it comes to supporting standard technology.
They already have, and they are in Chromebooks. Last week, another HN:er posted that he uses a Lenovo Chromebook with a Mediatek SoC as his daily Linux dev machine.
We can nerd out about Linux this and S3 sleep that. How much money does the community need to raise, all in, for that notebook to happen? Where's the GoFundMe/AngelList platform that's a cult where I can pledge $10,000 to get this laptop of my dreams? Or are we all too busy shitposting?
> How much money does the community need to raise all in, for that notebook to happen.
> Where's the GoFundMeAngelList platform that's a cult where I can pledge $10,000 to get this laptop of my dreams?
The hard part isn't the money - it's identifying an addressable market that makes the investment worthwhile and assembling a team that can execute and deliver on it.
The market can't be a few hundred enthusiasts who want to spend $10k on a laptop. It has to be at least tens of thousands who would spend $1-2k. Even that probably won't get you to the break-even when you consider the size (and speciality) of team you need to do all this.
Here is a list of major ARM licensees, categorized by the type of license they typically hold.
1. Architectural Licensees (Most Flexible)
These companies hold an Architectural License, which allows them to design their own CPU cores (and often GPUs/NPUs) that are compatible with the ARM instruction set. This is the highest level of partnership and requires significant engineering resources.
Apple: The most famous example. They design the "A-series" and "M-series" chips (e.g., A17 Pro, M4) for iPhones, iPads, and Macs. Their cores are often industry-leading in single-core performance.
Qualcomm: Historically used ARM's core designs but has increasingly moved to its own custom "Kryo" CPU cores (which are still ARM-compatible) for its Snapdragon processors. Their recent "Oryon" cores (in the Snapdragon X Elite) are a fully custom design for PCs.
NVIDIA: Designs its own "Denver" and "Grace" CPU cores for its superchips focused on AI and data centers. They also hold a license for the full ARM architecture for their future roadmap.
Samsung: Uses a mixed strategy. For its Exynos processors, some generations use semi-custom "M" series cores alongside ARM's stock cores.
Amazon (Annapurna Labs): Designs the "Graviton" series of processors for its AWS cloud services, offering high performance and cost efficiency for cloud workloads.
Google: Has developed its own custom ARM-based CPU cores, expected to power future Pixel devices and Google data centers.
Microsoft: Reported to be designing its own ARM-based server and consumer chips, following the trend of major cloud providers.
2. "Cores & IP" Licensees (The Common Path)
These companies license pre-designed CPU cores, GPU designs, and other system IP from ARM. They then integrate these components into their own System-on-a-Chip (SoC) designs. This is the most common licensing model.
MediaTek: A massive player in smartphones (especially mid-range and entry-level), smart TVs, and other consumer devices.
Broadcom: Uses ARM cores in its networking chips, set-top box SoCs, and data center solutions.
Texas Instruments (TI): Uses ARM cores extensively in its popular Sitara line of microprocessors for industrial and embedded applications.
NXP Semiconductors: A leader in automotive, industrial, and IoT microcontrollers and processors, almost exclusively using ARM cores.
STMicroelectronics (STM): A major force in microcontrollers (STM32 family) and automotive, heavily reliant on ARM Cortex-M and Cortex-A cores.
Renesas: A key supplier in the automotive and industrial sectors, using ARM cores in its R-Car and RA microcontroller families.
AMD: Uses ARM cores in some of its adaptive SoCs (Xilinx) and for security processors (e.g., the Platform Security Processor or PSP in Ryzen CPUs).
Intel: While primarily an x86 company, its foundry business (IFS) is an ARM licensee to enable chip manufacturing for others, and it has used ARM cores in some products like the now-discontinued Intel XScale.
Sure, but I suspect that for basically all of us (maybe Elon is surfing HN today), that literally means nothing. Few of us have the hundreds of millions required to design and fab a competitive SoC, and for those who do, the ARM licenses are easier to acquire than the knowledge of how to build a competitive system (see RISC-V). You might as well complain about TSMC not publishing the information on how to fab 2nm parts, or the code used to generate the mask sets.
For the rest of us, what matters is whether we can open Digikey/Newegg/whatever and buy a few machines, whether they are open enough for us to achieve our goals, and their relative costs. So that list of vendors is the more relevant one, because they _CAN_ sell the resulting products to us. The problem is how much of their mostly off-the-shelf IP they refuse to document, resulting in extra difficulty getting basic things working.
It's a shame that this didn't end up going anywhere. When Qualcomm was doing their press stuff prior to the Snapdragon X launch, they said that they'd be putting equal effort into supporting both Windows and Linux. If anyone here is running Linux on a Snapdragon X laptop, I'd be curious to know what the experience is like today.
I will say that Intel has kind of made the original X Elite chips irrelevant with their Lunar Lake chips. They have similar performance/battery life, and run cool (so you can use the laptop on your lap or in bed without it overheating), but have full Linux support today and you don't have to deal with x86 emulation. If anyone needs a thin & light Linux laptop today, they're probably your best option. Personally, I get 10-14 hours of real usage (not manufacturer "offline video playback with the brightness turned all the way down" numbers) on my Vivobook S14 running Fedora KDE. In the future, it'll be interesting to see how Intel's upcoming Panther Lake chips compare to Snapdragon X2.
The iGPU in Panther Lake has me pretty excited about intel for the first time in a long time. Lunar Lake proved they’re still relevant; Panther Lake will show whether they can actually compete.
Lunar Lake had integrated RAM, right? Given certain market realities right now, it could be a real boon for them if they keep that design.
I'm typing this from a Snapdragon X Elite HP. It's fine, really, but my use is fairly basic. I only use it to watch movies, read, browse, draft Word and Excel documents, and do some light coding.
No gaming - and I came in knowing full well that a lot of the mainstream programs don't play well with snapdragon.
What has amazed me the most is the battery life and the seemingly no real lag or micro-stuttering that you get in some other laptops.
So, in all, fine for light use. For anything serious, use a desktop.
Running Linux?
WSL or Docker is the only way to run Linux on these, it seems :(
Windows 11 with all the bloatware removed isn't a terrible experience though.
What is it about it that makes it unsuited for anything serious? The way you describe it, the only thing it's not suited for is gaming, which is not generally regarded as serious.
Many people including myself do serious work on a macbook, which is also ARM. What's different about this qualcomm laptop that makes it inappropriate?
> What's different about this qualcomm laptop that makes it inappropriate?
Everything else around the CPU. Apple systems are entirely co-designed: the CPU is designed to work with the rest of the components, and everything together is designed to work with macOS.
While I'd love to see MacBook-level quality from other brands (looking at you, Lenovo), tight hardware+software co-design (and co-development) yields much better results.
Microsoft is pushing hard for UEFI + ACPI support on PC ARM boards. I believe the Snapdragon X2 is supposed to support it.
That still leaves the usual UEFI + ACPI quirks Linux has had to deal with for aeons, but it is much more manageable than (non-firmware) DeviceTree.
The dream, of course, would be an open-source HAL (which UEFI and ACPI effectively are). I remember that certain Asus laptops had a microstutter due to a non-timed loop doing an insane amount of polling. Someone debugged it through reverse engineering and posted it on GitHub, and it still took Asus more than a year to respond and fix it, and only after it blew up on social media (including here). With an open-source HAL, the community could have introduced a fix overnight.
I get the lacking Linux support, but what about Windows? Most serious work happens on Windows and their SoCs seem to have much better support there.
Apple's hardware+software design combo is nice for things like power efficiency, but in my experience so far, a MacBook and a similarly priced Windows laptop are about equal in terms of weird OS bugs and actually getting work done.
I’m getting about 2 hours with current macos on an arm macbook pro. I used to get 4-5 last year.
This is out of the box. With obvious fixes like ripping busted background services out, it gets more than a day. There’s no way normal users are going to fire up console.app and start copy pasting “nuke random apple service” commands from “is this a virus?” forums into their terminal.
Apple needs to fix their QA. I’ve never seen power management this bad under Linux.
It’s roughly on par with noughties windows laptops loaded with corporate crapware.
That's unfortunate, perhaps your particular macbook is having a hardware problem?
As a point of comparison, I daily-drive two ARM Macs (a work M4 14" and a personal M3 14"), and I get far better battery life than that: at least 8 hours of "normal" active use on both. Also, anecdotally, the legion of engineers at my office with Macs are not seeing battery life issues either.
That said, I have yet to encounter anyone who is in love with macOS Tahoe and its version of Liquid Glass.
The current issue is iOS 26.1’s wallpaper renderer crashes in a tight loop if the default wallpaper isn’t installed. It isn’t under Xcode.
I have macOS crash reporting turned off, but crashreport still pins the CPU for a few minutes on each iOS wallpaper renderer crash. I always have the iOS simulator open, so: two hours of battery, max.
I killed crashreport and it spun the cpu on some other thing.
In macOS 25, there's no throttle for mds (Spotlight), and running builds at a normal developer pace produces about 10x more indexing churn than the Apple silicon can handle.
I run an old T480 with FreeBSD and get about 17 hours of battery out of it. Sure, it’s a bit thicker but gets the job done as a daily driver.
There is literally no way. Spill the beans!
Sorry, thought I had posted, but it didn't go through. It's a T480 with the 72Wh and the 24Wh battery, running FreeBSD. The screen has also been replaced with a low-power one, which helps a lot in saving battery while still giving good brightness.
Most of the time I am running StumpWM with Emacs on one workspace and Nyxt in another. So just browsing and coding mostly.
OpenBSD gets close, but FreeBSD had a slight edge battery-wise. To be fair, that is on an old CPU that still has homogeneous cores. More modern CPUs would probably benefit from a scheduler that is aware of heterogeneous cores.
Probably has the extra big battery. Thinkpads have options for different sized batteries.
Or they just got one of the 'good' models and tuned Linux a bit. I have a couple of Lenovos and it's hit or miss, but my 'good' machine is an AMD which, after a bit of tuning, idles with the screen on at 2-3W, and sits around 5W with light editing/browsing/etc. With the 72Wh battery that is >14h, maybe over 20 if I'm just reading documentation. Of course it's only 4-5h if I'm running a lot of heavy compiles/VMs, unless I throttle them, in which case it's easily over 8h.
One of my 'bad' machines is more like 10-100W, and I'm lucky to get two hours.
Smaller efficient CPU + low power sleep + not a lot of background activity + big battery = very long run times.
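That equation is easy to sanity-check: runtime is just battery watt-hours divided by average draw. A quick sketch using the 72 Wh pack mentioned upthread (draw figures are the illustrative ones from that comment):

```shell
# hours of runtime = battery capacity (Wh) / average power draw (W)
battery_wh=72
for draw_w in 3 5 10 100; do
  awk -v wh="$battery_wh" -v w="$draw_w" \
    'BEGIN { printf "%3d W draw -> %4.1f h\n", w, wh / w }'
done
```

At 5 W that gives 14.4 h, which lines up with the ">14h" claim; at 100 W (the 'bad' machine) you're down to well under an hour of the big pack alone.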
72Wh + 24Wh batteries (one swappable, one internal), running FreeBSD-CURRENT.
!!! I can get my laptop to 7.5W under web browsing with powertop tuning, but not 5. What did you do?
For this to happen we would need to see a second company that controls both the hardware and the software, and that's not realistic economically. You can't just jump into that space.
You could argue that is exactly what Tuxedo is doing. In this case, they could not provide the end-user experience they wanted with this hardware so they moved on.
System76 may be an even better example as they now control their software stack more deeply (COSMIC).
When I say "control the software", what I mean is that we need another company that can say "hey, we are moving to architecture X because we think it's better", and within a year most developers rewrite their apps for the new arch, because it's worth it for them.
there needs to be a huge healthy ecosystem/economic incentive.
it's all about the software for end users. I don't care what brand it is or OS and how much it costs. I want to have the most polished software and I want to have it on release day.
Right now, it's Apple.
Microsoft tries to do this but is held back by the need for backward compatibility (enterprise adoption), and Google cannot do it because of Android fragmentation. I don't think anyone is even close to trying this with Linux.
Open Source has a massive advantage here.
Almost everything on regular Fedora works on Asahi Fedora out of the box on Apple Silicon.
You can get a full Ubuntu distribution for RISC-V with tens of thousands of packages working today.
Many Linux users would have little trouble changing architectures. For Linux, the issue is booting and drivers.
What you say is true for proprietary software of course. But there is FEX to run x86 software on ARM and Felix86 to run it on RISC-V. These work like Rosetta. Many Windows games run this way for example.
The majority of Android apps ship as Dalvik bytecode and should not care about the arch. Anything using native code is going to require porting though. That includes many games I imagine.
we are both right in different scopes but the context of the thread is the cancellation of an ARM notebook
Microsoft with their Surface line? They don't control every part of the hardware, but neither did Apple control even the majority before the M series.
I was incredibly excited when they announced the chip alongside all kinds of promises regarding Linux support, so I pre-ordered a laptop with the intention of installing Linux later on. When reports came out that single core performance could not even match an old iPhone, alongside WSL troubles and disappointing battery life, I sent it back on arrival.
Instead I paid the premium for a nicely specced MacBook Pro, which is honestly everything I wanted, save for Linux support. At least it's proper Unix, so I don't notice much difference in my terminal.
Forget equal effort: Start off with hardware docs.
Equal effort is far more likely from Qualcomm than hardware docs. They don't even freely share docs with partners, and many important things are restricted even from their own engineers. I've seen military contractors less paranoid than QCOM.
I'd have to say that full hardware documentation, even under NDA, is a prerequisite to claiming equal effort. The expectation on a desktop platform (that is, explicitly not mobile, like phones or tablets) is that development is mostly open for those who want it, and Qualcomm's business is fundamentally counter to that. So either they're going to have to change those expectations (which I would prefer not to happen), provide more to manufacturers, or expect their market performance to be poor.
If they don't provide hardware documentation for Windows either (a desktop platform), how can it be a prerequisite for equal effort?
Qualcomm could've become "the Intel of the ARM PC" if they wanted to, but I suspect they see no problem with (and perhaps have a vested interest in) proprietary closed systems given how they've been doing with their smartphone SoCs.
Unfortunately, even Intel is moving in that direction whenever they're trying to be "legacy free", but I wonder if that's also because they're trying to emulate the success of smartphone SoC vendors.
The extent to which PCs are open is a historical accident that most OEMs would rather not repeat, as you can see everywhere from embedded all the way to cloud systems.
If anything, Linux powered devices are a good example on how all of them end up with OEM-name Linux, with minimal contributions to upstream.
If everyone left Windows in droves, expect regular people to be getting Dell and HP Linux at the local PC store, with the same limitations on going outside their distros, with binary blobs and pre-installed stuff.
OEMs don't care about that. It's Qualcomm in particular that sucks. If you buy a Linux PC from System76 it comes with their own flavor of Linux but it's basically Ubuntu and there is nothing stopping you from putting any other version you want on it. The ones from Dell just use common distributions.
Meanwhile Linux is getting a huge popularity boost right now from all the PCs that don't officially support Windows 11 and run Linux fine, and those are distribution-agnostic too because they didn't come with it to begin with.
I would not call huge the 4% market share.
Usually what stops us are the drivers that don't work in other distro kernels, or small utilities that might not have been provided with source.
> I would not call huge the 4% market share.
4% was last year, it was 5% by this summer (a significant YoY increase and about what macOS had in 2010) and the Windows 10 end of support was only last month so the numbers from that aren't even in yet.
> Usually what is stopping us are the drivers that don't work in other distro kernels, or small utilities that might not have have been provided with source.
A lot of these machines are pure Intel or AMD hardware, or 95% of the way there with a Realtek network controller etc., and all the drivers are in the kernel tree. Sometimes the laptops that didn't come with Linux to begin with need a blob WiFi driver, but plenty of them don't, and many of the ones that do will have an M.2 slot so you can install a different card. It's not at all difficult to find one with entirely open source drivers, and there is no apparent reason for that to get worse if Linux becomes more popular.
Better do the math: that means 15 years to reach where macOS is nowadays, which is still largely irrelevant outside tier 1 economies, and that assumes nothing else changes in the computing landscape.
I was around when everyone was supposed to switch in droves to Linux back in the Windows XP days, or was it Vista, maybe Windows 7, or Windows 8, eventually 8.1, I guess Windows 10 was the one, or Windows 10 S, nah really Windows RT, actually it was Windows 11,or maybe....
I understand. I used to have M$ in my email signature back in the 1990s, surely to be found in some USENET or mailing list archive. Yet we need to face reality: without Windows, Valve would not have a business.
> Better do the math, which means 15 years to reach where macOS is nowadays
macOS nowadays is closing in on 20%. And you can only buy macOS on premium-priced hardware and by now Linux supports more games than it does. The thing holding either of them back has always been third party software compatibility, which as the web has eroded native apps has been less of a problem, which is why both macOS and Linux have been growing at the expense of Windows.
And these things have tipping points. Can your company ignore Linux when it has 0.5% market share? Sure. Can you ignore it when it has 5% market share? There is less of a case for that, so more things support it, which allows it to get even more market share, which causes even more things to support it. It's non-linear. The market share of macOS would already be significantly higher than it is if a new Mac laptop didn't start at a thousand bucks and charge $200 extra to add 8GB of RAM. Linux isn't going to have that problem.
Now, is it going to jump from 5% to 50% in three days? Of course not. But it's probably going to be more tomorrow than it was yesterday for the foreseeable future.
> we need to face the reality without Windows, Valve would not have a business.
Valve makes money from selling games and Steam. If Linux had 70% desktop market share and Windows had 5%, what would change about how they make money?
I mean, part of that is the difference between how easy it is to build a platform in Linux vs how hard it is to get into the tree. This is actually, in my mind, a major change in the Linux development process.
Nobody expected Intel to provide employees to write support for 80386 pagetables, or Philips to write and maintain support for the I2C bus. The PC keyboard driver was not sponsored and supported by IBM. Getting the code into Linux was really easy (and it shows in a lot of the older code; Linux kernel quality standards have been rising over time), because everyone was mostly cooperating on a cool open-source project.
But at some point, this became apparently unsustainable, and the expectation is now that AMD will maintain their GPU drivers, and Qualcomm (or some other company with substantial resources) will contribute code and employees to deal with Adreno GPUs. This led to a shift in reviewer attitudes: constant back-and-forth about code or design quality is typical on the mailing lists now.
This means contributing code to the kernel is a massive chore, which any person with interest in actually making things work should prefer to avoid. What's left is language lawyers, evangelists and people who get paid to sit straight and treat it as a 9-5 job.
The Asahi and pmOS folks have been quite successful in upstreaming drivers to the kernel (even for non-trivial devices like GPUs) as enthusiast contributors with no real company backing. The whole effort to include Rust in the Linux kernel is largely about making it even easier to write future drivers.
Agreed, and I'm fairly impressed by the GPU effort. That said, it did take a very long time, even with the demonstrably extreme amount of excitement from the Linux community (Linus himself was thrilled to use a Macbook). What do you do for parts that are useful but don't get people this excited?
What really burned me on this kind of stuff was the disappearance of Xeon Phi drivers from the kernel. Intel backed it out after they discontinued the product line, and the kernel people gladly went with it ("who'll maintain this?"). Intel pulled a beautiful piece of process lawyership on it: apparently they could back it out without difficulty, because the product was never released! (Never mind it has been sold, retired and circulated in public.)
> What really burned me on this kind of stuff was the disappearance of Xeon Phi drivers from the kernel
If you depend on that hardware, you can get it to be supported again. It just doesn't seem to be all that popular.
Note that the Rust effort is mostly sponsored by Google and Microsoft, thus the 9-5 example of the OP.
Correct me if I’m wrong but I’m pretty sure the Asahi GPU driver has not been upstreamed.
I don't know if the prospect of being the "Intel of ARM" is very appealing when you can manufacture high-margin smartphone SOCs instead. The addressable market doesn't seem to be very large; any potential competition is stifled by licensing on both Microsoft and Softbank's side.
The legend of Windows on ARM is decades old, and people have been seriously trying to make it happen for at least the past two decades. They're all bled dry. Apple is the only one who can turn a profit, courtesy of their sweetheart deal with Masayoshi Son.
Well that would have an obvious solution. Go make RISC-V CPUs for phones etc. until you get good enough at it to be competitive in laptops, at which point Microsoft gets interested in supporting you and you get to be the Intel of RISC-V without dealing with Softbank.
> I will say that Intel has kind of made the original X Elite chips irrelevant with their Lunar Lake chips.
Depends why the Snapdragon chips were relevant in the first place! I got an ARM laptop for work so that I can locally build things for ARM that we want to be able to deploy to ARM servers.
Surprising. Cross compilation too annoying to set up? No CI pipelines for things you're actually deploying?
(I'm keen about ARM and RISC-V systems, but I can never actually justify them given the spotty Linux situation and no actual use case)
Cross compilation is a pain to set up, especially if you're relying on system libraries for anything. Even dynamically linking against glibc is a pain when cross compiling.
We do have ARM CI pipelines now, but I can only imagine what a nightmare they would have been to set up without any ability to locally debug bits that were broken for architectural reasons.
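One trick that helps when debugging cross builds like this: the ELF header records the target ISA, so you can check what a build actually produced without any cross toolchain installed. A minimal sketch (the `arch_of` helper is just illustrative; 62 and 183 are the standard ELF `e_machine` values for x86-64 and AArch64):

```shell
# Read the ELF e_machine field (a little-endian u16 at byte offset 18)
# and map the two common values; anything else is reported as-is.
arch_of() {
  m=$(od -An -tu2 -j18 -N2 "$1" | tr -d ' ')
  case "$m" in
    62)  echo "x86-64"  ;;   # EM_X86_64
    183) echo "aarch64" ;;   # EM_AARCH64
    *)   echo "other ($m)" ;;
  esac
}

arch_of /bin/ls   # on an x86-64 host this prints: x86-64
```

Running this on your build artifact is a quick sanity check that the cross linker really targeted arm64 and didn't silently fall back to the host architecture.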
I guess you must be doing trickier things than I ever have. I've found docker's emulation via qemu pretty reliable, and I'd be pretty surprised if there was a corner case that wouldn't show on it but would show on a native system.
Not really trickier, but a different stack: we're a .NET shop with a pile of linters, analyzers, tests, etc. No emulation; everything runs natively on both x86-64 and ARM64. (But prior to actually running/debugging it on ARM64, there were various hang-ups.)
Native is also much faster than qemu emulation - I have a personal (non-.NET) project where I moved the CI from docker/qemu for x86+arm builds to separate x86+arm runners, and it cut the runtime from 10 minutes in total to 2 minutes per runner.
It's more surprising to me that software isn't portable enough that you can develop locally on x86-64. And then have a proper pipeline that produces the official binaries.
Outside the embedded space, cross-compilation really is a fool's errand: either your software is not portable (which means it's not future-proof), or you are targeting an architecture that is not commercially viable.
> It's more surprising to me that software isn't portable enough that you can develop locally on x86-64. And then have a proper pipeline that produces the official binaries.
This is what we largely do - my entire team other than me is on x86, but setting up the ARM pipelines (on GitHub Actions runners) would have been a real pain without being able to debug issues locally.
Linux on arm is probably the most popular computing device platform in the world.
Which doesn't mean that it's easy to use an ARM device in the way I'd want to (i.e. as a trouble-free laptop or desktop with complete upstream kernel support).
Do the Lunar Lake chips have the same incredible standby battery times as the Snapdragon X's? That's where the latter really shines in my opinion.
I have a couple-generations-back AMD laptop that can 'standby' for months... it's called S4 hibernate. At the same time it's set up for S3 and can sit in S3 for a few days at least, recovering in less time than it takes to open the screen. The idea that you need instant wakeup when the screen has been closed for days is sort of a niche case; even Apple's machines hibernate if you leave the screen closed for too long.
That isn't to say that modern standby/s2-idle isn't super useful, because it is, but more for actual use cases where the machine can basically go to sleep with the screen on displaying something the user is interacting with.
Yeah, Lunar Lake landed a hit on ARM, but Panther Lake should be an even stronger one.
Roughly the same on my Intel Lenovo. It’s a great little machine. And Linux runs nicely.
I fully expected this. I really wanted to get the Snapdragon X Elite Ideacentre just because I wanted an ARM target to run stuff on and if I'm being honest the Mac Minis are way better price/performance with support. Apple Silicon is far faster than any other ARM processor (Ampere, Qualcomm, anything else) that's easily available with good Linux support.
I am so grateful to the Asahi Linux guys who made this whole thing work. What a tour de force! One day, we'll get the M4 Mac Mini on Asahi and that will be far superior to this Snapdragon X Elite anyway.
I remember working on a Qualcomm dev board over a decade ago, and they had just the worst documentation. The hardware wouldn't even respond correctly to what you told it to do. I don't know if that's standard, but without the enormous desire to run Linux on Apple Silicon, I wouldn't have anticipated support approaching what Asahi has on M1/M2.
A tour de force indeed. Asahi Linux only works as well as it does because of the massive effort put in by that team.
For all the flak Qualcomm takes, they do significantly more than Apple to get hardware support into the kernel. They are already working to mainline the X2 Elite.
The difference is that Apple only makes a few devices and there is a large community around them. It would be far less work to create a stellar Linux experience on a Lenovo X Elite laptop than on a M2 MacBook. But fewer people are lining up to do it on Lenovo. We expect Lenovo, Linaro, and Qualcomm to do it for us.
Fair enough. But we should not be praising Apple.
Apple provide even less documentation than Qualcomm. Let that sink in.
Wrong documentation is perhaps worse than no documentation. Although Apple provides little, at least it is usually accurate, and what's left you know you must reverse engineer.
Unfortunately with the main reverse engineers of the Asahi project having moved on, I very much doubt we will see versions working on more recent M-series chips.
Qualcomm doesn't bother to upstream most of their SoCs. They maintain a fork of a specific Linux kernel version for a while, and when they stop updating it, or a new version of Android requires a newer kernel, updates for all devices based on that SoC end.
They have little experience producing code of high enough quality to be accepted into the Linux kernel. They have even less experience maintaining it for an extended period of time.
Related from July:
"Linux on Snapdragon X Elite: Linaro and Tuxedo Pave the Way for ARM64 Laptops"
291 points, 217 comments
https://news.ycombinator.com/item?id=44699393
The first comment there is worth reading again, just for this sentence:
If you want to change some settings oft[sic] the device, you need to use their terrible Electron application.
While I almost certainly wouldn't have done more than wished for one, it's a shame they're not getting any return for their effort.
Somewhat of a tangent: the x86-based laptops from this brand (it's new to me; I had never come across Tuxedo Computers before) look attractive, but there is no information about their screens' main property: are they glossy or matte?
My wife is very sensitive to glossy screens, and we have a hard time finding a new laptop for her, as most good ones are glossy now.
If she's ok with macOS, the new "nano-textured display" options on the MacBook Pros are very nice. I'm typing from one right now. It has the sharp color response of the glossy displays, but absolutely no noticeable glare.
I use their InfinityBook Pro 14. Its 2880x1800 display is matte.
One possible alternative is System76. Most of their laptops are matte: https://system76.com/laptops/lemp13/configure
FYI you can add a matte layer yourself on any screen
Yes, you can even add a privacy protection layer that blocks viewing from larger angles.
This is infuriating. Everything should be matte unless you live in the dark.
Does anyone know why Linux laptop battery life is so bad? Is it a case of devices needing to be turned off that aren't? Poor CPU scheduling?
It's ACPI - most laptops ship with half-broken ACPI tables and provide support for tunables through Windows drivers. It's convenient for laptop manufacturers because Microsoft makes it very easy to update drivers via Windows Update, and small issues with sleep, performance, etc. can mostly be patched through a driver update.
Linux OTOH can only use the information it has from ACPI to accomplish things like CPU power states, etc. So you end up with issues like "the fans stop working after my laptop wakes from sleep" because of a broken ACPI implementation.
There are a couple of laptops with excellent battery life under linux though, and if you can find a lunar lake laptop with iGPU and IPS screen, you can idle around 3-4W and easily get 12+ hours of battery.
Don't just leave us hanging, what model number laptops have that great of a battery life?
LG Gram laptops have excellent battery life. E.g. https://www.notebookcheck.net/Lightweight-with-power-and-20-...
I have an LG Gram 15 from 2021 and it gets 15+ hours under light usage in Linux.
LG Gram user here with Debian as a daily driver. Can confirm, maybe not 15h, but I don't think about charging. Plus, it's super stable, not a single crash or hang-up over years. It just works. I hope LG will keep this up and not mess up next iterations of the hardware.
I had an LG gram before the battery in it gave out and now it won't boot with the battery plugged in. The battery life was amazing, it always slept properly, etc.
Now I have a Framework. It randomly reboots when I close the lid, the battery life is terrible, etc. I live with it since I like the idea of a repairable laptop.
Lunar Lake Lenovo X1 Carbon. If you get the IPS screen, you'll get even better than 12 hours.
What's standing in the way of doing something like NDISwrapper but for ACPI? Is it just that nobody with the required skills has spent the effort? Or something technical?
ACPI has been a problem for Linux for so long now…
It's not a problem with Linux; it's a problem with laptop manufacturers not caring about designing their ACPI tables and firmware correctly.
If the observable behavior is bad Linux performance, it's a Linux problem.
There's a saying in motorcycling: it's better to be alive than right. There's no upside in being correct if it leaves you worse off.
There are ways to make things better leveraging the Linux way. Make more usable tools for fixing ACPI deficiencies with hotloadable patches, ways of validating or verifying the patches for safety, ways of sharing and downloading them, and building a community around it.
Moaning that manufacturers only pay attention to where their profits come from is not a strategy at all.
Decompile your ACPI tables and then do a grep for "Linux". You are likely to find it, meaning the vendor took time to think about Linux on their hardware. Some vendors take the time to write good settings and code for the Linux ACPI paths, some dump you into no-man's land on purpose if your OSI vendor string is "Linux".
It's quite literally a vendor problem created by vendors leading anyone that doesn't run Windows astray in some cases.
If you run Linux, then dare to change your OSI vendor string to "Windows", you've entered into bespoke code land that follows different non-standard implementations for every SKU, where it's coded to work with a unique set of hardware and bespoke drivers/firmware on Windows. You also forgo any Linux forethought and optimizations that went into the "Linux" code paths.
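For anyone who wants to try the grep described above: on real hardware the decompiled table comes from `sudo acpidump -b && iasl -d dsdt.dat` (acpica-tools package). A sketch of what you're looking for, using a tiny hypothetical excerpt standing in for the real `dsdt.dsl` (which runs to tens of thousands of lines):

```shell
# Hypothetical fragment of a decompiled DSDT. _OSI is how firmware asks
# "which OS am I running on?" and branches into OS-specific code paths.
cat > dsdt.dsl <<'EOF'
If (_OSI ("Linux"))
{
    Store (One, LINX)   // vendor's Linux-specific path
}
If (_OSI ("Windows 2020"))
{
    Store (One, WN20)   // bespoke Windows path, tuned for Windows drivers
}
EOF

grep -c '_OSI' dsdt.dsl   # prints: 2
```

Overriding the answer from the Linux side is a kernel boot parameter: `acpi_osi=! acpi_osi="Windows 2020"` clears the reported strings and claims to be Windows, with exactly the trade-offs described above.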
You seem to have totally ignored his point...
My point is that from the Linux side, you're damned if you do and damned if you don't, no matter how you tackle the issue. If the layer above Linux is going to deliberately malfunction and lie on the Linux happy path, or speak some non-standard per-device driver protocol if you lie to use the Windows path, there's not much that can be done.
It's only a "Linux problem" if you're trying to run Linux on hardware that is actively hostile to it. There are plenty of vendors who supply good Linux happy paths in their firmware, using their hardware is the solution to that self-imposed problem.
I think the correct strategy in this case is to return your laptop to the store if it has linux compatibility issues, and keep trying until you find one that works.
i.e. don't support vendors whose laptops don't work in Linux.
That sounds like a problem with linux.
> Does anyone know why Linux laptop battery life is so bad?
It's extremely dependent on the hardware and driver quality. On ARM and contemporary x86 that's even more true, because (among other things) laptops suspend individual devices ("suspend-to-idle" or "S0ix" or "Modern Standby"), and any one device failing to suspend properly has a disproportionate impact.
That said, to a first approximation, this is a case where different people have wildly different experiences, and people who buy high-end well-supported hardware experience a completely different world than people who install Linux on whatever random hardware they have. For instance, Linux on a ThinkPad has excellent battery life, sometimes exceeding Windows.
Are there any repositories of documented battery life behavior?
Newer laptops come with extra power peripherals and sensors. Some of them are in ACPI tables, some are not. Most of them are proprietary ASICs (or custom chips; Nuvoton produces quite a few of those). The Linux kernel and userspace have poor support for these. Kernel PCIe drivers require some tuning. The USB stack is kind of shaky, and power management features are often turned off since they get unstable as hell.
If you have a dGPU, Linux implementation of the power management or offloading actually consumes more power than Windows due to bad architectural design. Here is a talk from XDC2025 that plans to fix some of the issues: https://indico.freedesktop.org/event/10/contributions/425/
Desktop usage is a third class citizen under Linux (servers first, embedded a distant second). Phones have good battery life since SoC and ODM engineers spend months to tune them and they have first party proprietary drivers. None of the laptop ODMs do such work to support Linux. Even their Windows tooling is arcane.
Unless users get drivers for all the minute PMICs and sensors, you'll never get the battery life you can get from a clean Windows install with all the drivers. MS and especially OEMs shoot themselves in the foot by filling the base OS with so much bloat that Linux actually ends up looking better compared to stock OEM installs.
In addition to the other comments, it's worth noting macOS started adding developer documentation around energy efficiency, quality-of-service prioritization, etc. (along with support within the OS) around 2015-2016, when the first fanless USB-C MacBook came out: https://developer.apple.com/library/archive/documentation/Pe...
I think I'm arguing it's both: the OS itself can optimize for battery life, while also instilling awareness and providing API support so developers can consider it too.
On top of this, they started encouraging adoption of multithreading and polished up the APIs to make doing so easier even in the early days of OS X, since they were selling PPC G4/G5 towers with dual and eventually quad CPUs.
This meant that by the time they started pushing devs to pay attention to QoS and such, good Mac apps had already been thoroughly multithreaded for years, making it relatively easy to toss things onto lower priority queues.
> so developers can consider it too
Try writing Apple Watch software.
Everything is about battery life.
It's interesting how they still can't get into the same order of magnitude with Garmin then.
I suspect it’s because the processor is a lot heavier-duty.
Right now, it seems like overkill, but not sure what all the health and fitness stuff requires.
My Dell XPS had pretty good battery life on Linux, probably better than on Windows. But Dell sells the XPS with Linux preinstalled, so I assume it has a lot to do with the drivers. Many notebooks have custom chips inside, or some weird BIOS that works together with a Windows program. I'd say laptops are more diverse than desktop PCs built from off-the-shelf hardware.
Yeah, my 3-ish year old 13.4" XPS Plus is currently consuming 3.9 W with around 150 open tabs across four Firefox windows, 3 active Electron apps, Libreoffice Writer & Impress, a text editor, and a couple of terminals.
That's in an extremely vanilla Debian stable install, running in the default "Balanced" power mode, without any power-related tuning or configuration.
That compares reasonably well with my 14" M3 Macbook Pro, which seems to be drawing around 3.5 W with a similar set of apps open.
Sure, the XPS is flattered in this comparison because it has a slightly smaller screen, but even accounting for that it would still be... fine? Easily enough to get through a full day of use, which is all I care about.
There's nothing special about this XPS, and I'd expect the Thinkpad models that have explicit Linux support to be equally fine. The key point is that the vendor has put some amount of care and attention into producing a supportable system.
A big part of it is chipmakers deprecating S3 sleep in favour of Modern Standby.
If Windows, macOS, Android, and iOS can achieve great battery life, then isn't the problem Linux?
More like FOSS religion, because those get the capabilities via NDAs or binary drivers.
Surely it's IP religion that keeps this information locked away.
The IP religion pays for housing and family; the FOSS one unfortunately mostly doesn't, hence the usual licensing-change news every other week.
Install powertop, the "tunables" tab has a list of system power saving settings you can toggle through the UI. I've seen them make a pretty big difference, but YMMV of course.
It mostly just breaks things unfortunately. You can faff around for ages trying to figure out which devices work and which don’t but you end up with not much to show for it.
Yeah I tried that but it made no difference at all.
I ran into this problem on a Slimbook some years ago now. I found that my battery drained way too fast in standby, and I remember determining that this was some (relatively common) problem with sleep states, that some linux machines couldn't really enter/stay in a deeper sleep state, so my Slimbook's standby wasn't much of a standby at all.
But that's just one problem, I bet.
A lot of people say that lightweight desktops/distros help. Probably GNOME/KDE unnecessarily use your SSD, network, GPU and other resources even when you are idle, compared to using a minimal WM and only starting the daemons you actually need.
I personally never tested it, and I can't find definite benchmarks that confirm and measure the waste.
I've found that it can be made considerably better than Windows on the same hardware, but it requires substantial effort.
While the comments here each describe individual failings, on a well-supported laptop it is possible to get better power efficiency than Windows if you're willing to spend the time manually tuning Linux. The powertop/etc. suggestions are fine, but fundamentally the reason some of the 'lighter' DEs save so much power is that there is a lot of 'slop' in the default KDE/GNOME and application set. You have random things waking up too regularly and polling stuff, which pulls the cores out of deep sleep states. And then there are all the kernel issues with being unable to identify and prioritize/schedule for a desktop. E.g., the only thing that should be given free rein is the active foreground application, while grouping and suppressing background applications, running them on little cores at slow rates if they have work to do, etc. All that is a huge part of why macOS does so well vs. Linux on the same hardware.
The comment about ACPI being the problem is slightly off base, since it's a huge part of the solution to good power management on modern hardware. There isn't another specification that allows the kind of fine-grained background power tuning of random buses/devices/etc. by tiny management cores whose entire purpose is monitoring activity and making adjustments, which modern machines require. If one goes the DT route as QC has done here, each machine needs a huge pile of custom mailbox interface drivers upstreamed into the kernel, customized for every device and hardware update/change. They get away with this in the Android space because each device is literally a customized OS, and they don't have the upstream turnaround problem because they don't upstream any of it, but that won't scale for general-purpose compute, as the parent article notes.
> We will continue to monitor developments and evaluate the X2E at the appropriate time for its Linux suitability. If it meets expectations and we can reuse a significant portion of our work on the X1E, we may resume development. How much of our groundwork can be transferred to the X2E can only be assessed after a detailed evaluation of the chip.
Apparently the Windows exclusivity period has ended, so Google will support Android and ChromeOS on Qualcomm X2-based devices in 2026, https://news.ycombinator.com/item?id=45368167
I wonder what made it so hard? I thought Qualcomm was already providing the Linux drivers? Does anyone know? Maybe those were not open source?
My guess is that it's the same situation as with Linux phones: large driver blobs supplied by the board producer that aren't open. But then... maybe we should invest time in microkernels? Maybe Linux is a dead end because of its monolithic architecture? Because I doubt the big companies will change...
Perhaps they should pursue building around Mediatek CPUs.
Google has already built Chromebooks (which are Linux based) on them, so presumably the necessary drivers exist.
Outside of laptops, NVidia sells its Jetson Devkits and DGX workstations which run Linux and are pretty fast and ARM based.
And System76 also sells a high powered (and $$$) Linux workstation based on an NVidia ARM chipset
So at least for some ARM SOCs, performance issues have largely been solved.
How hard can it be to have an Android laptop? Basically most people just use a browser and the choice of applications is already extensive.
What Android on phones proves is that you can get excellent performance and fantastic battery life from Linux and third-party HW. This could and should be applied to Linux running on an ARM64 system, but I'm not sure why it hasn't been. Maybe economies of scale WRT investment on the phone driver side.
Except it isn't the same.
First of all, the userspace is completely different; secondly, Android has over the years aggressively changed the way background processes work (in the context of Android activities, not bare-bones UNIX), thus it isn't the same as GNU/Linux, where anything goes.
That's a Chromebook
That's Android nowadays: https://chromeunboxed.com/its-official-google-says-the-andro...
No it's not and never will be. Google says every year that ChromeOS and Android are merging but it's not happening. They are just merging some components, e.g. the Bluetooth Stack. ChromeOS got a new design a few months ago so they are still putting work into it.
That is what all those Android tablets with detachable keyboards already are, plenty models to chose from.
There used to be some laptops like the Toshiba AC100, actually an almost unusable device even for simple tasks.
This feels like BAU for PC vendors - you test out a product on a new combination of hardware, and it isn't mature/stable/ready for production, so you kick it down the road to develop later - this is especially true for Linux, where a LOT of the work would be done outside of your organisation.
>usually one of the strong arguments for ARM devices—were not achieved under Linux
I mean I feel like once one of the ARM chipmakers can lend a hand on the software side it should be a landslide.
Google and Samsung managed to make very successful Chromebooks together, but IIRC there was a bunch of back and forth to make the whole thing boot quickly and sip battery power.
What’s the primary need for ARM? Is it because Apple silicon showed a big breakthrough in performance to power with reduced instruction set? While it’s amazing on paper I barely notice a difference on my day to day use between an Intel Ultra and a M2 in performance. Battery life is where they are miles apart.
I’m guessing for most people it doesn’t much matter. Most people aren’t writing assembly. They do love an all day battery. I think the competition really helps keep these companies honest.
Hardware companies generally start working on a laptop before a SOC is released, not after. They also need to secure manufacturer support, in this case Qualcomm to be able to deliver in time.
HW companies generally have access to the prototype silicon. It's how they iron out bugs in the bringup HW.
The BIOS is an issue for most laptops under Linux, not just ARM.
LVFS doesn't exist? UEFI?
I mean updating it. Often the updates are Windows-only.
For example, I had this Dell Elitebook where I installed Debian, wiping out Windows. On Windows the system prompted for a BIOS update practically every week, but it's been years on Linux on the same BIOS. IIRC the updates were Windows-only, or required jumping through some complex rings of fire. Haven't bothered looking it up in a while...
I also had to disable some protections, such as Secure Boot, before I could install Debian, though I guess there's a way around that if I research hard enough.
If it's Dell, they're one of the most prolific, if not the most, on LVFS. All of my Dell hardware gets firmware updates via fwupd. Dell is conservative with marking their BIOS updates, though, and you might have to enable the testing LVFS repository for regularly updated BIOS.
If you mean HP EliteBooks, it doesn't have to be any more complicated than 1) extract BIOS archive 2) drag & drop BIOS file 3) reboot.
You download the BIOS update .exe, run 7zip on it, and take the BIOS.BIN file and either stick it in the root of your EFI partition and it will install automatically on boot, or just run `fwupdtool install-blob BIOS.BIN` and it will install automatically on reboot.
HP publishes updates to LVFS regularly for both their laptops and their thunderbolt docks(!)
I believe Elitebooks are Ubuntu Certified, which I would imagine gets their firmware updates pushed to LVFS.
Personally, with the Thunderbolt Dock 4 and an HP ZBook G10, I have gotten timely (<30 days of release) automatic updates from LVFS for both on Ubuntu/Fedora with 0 effort on my part.
Thanks I'll check out for my 840. Didn't know about this.
Just an update... learning something, thanks. I installed fwupd and it said something about a disabled service.
Ran fwupdmgr (thanks, Google) and it lists a whole bunch of hardware on my laptop as "device with no update found", including UEFI device firmware, UEFI dbx, system firmware, etc.
I will check further, thanks again. Didn't know about this.
Sorry for the mixup. I have both, but in this case I was referring to a Dell Latitude, not an HP Elitebook.
I'll check on the point you have made, see if I can update bios. Thanks.
I have updated my HP laptop's UEFI 4-5 times now using LVFS with HP's officially published updates. Just did it 2-3 days ago, in fact, for the latest update. I got a GUI prompt that there was a firmware update for my system, I clicked install, it said it would reboot my system, and I said ok and went to go make tea. Came back to a login screen and the update installed successfully.
I have used Secure Boot with Linux for several years now, too. Microsoft signed the shim loader and most distros can do it out-of-the-box now, much like fwupdmgr (above).
I think the biggest thing is finding, for example, devices that are Ubuntu Certified. You don't have to use Ubuntu necessarily, but the whole ecosystem benefits from hardware manufacturers having a slight degree of accountability having done this.
> Often the update are just windows only..
That's a choice the vendor makes, and Tuxedo Computers is the vendor in this case. Since they control the product they're making, they should be able to provide nice firmware updates for Linux users. They just decided not to even try, I guess?
The issue is that Qualcomm is on the critical path for that, and they don't cooperate nicely even with Microsoft.
Dell used to have a means to update the BIOS via a small FreeDOS image, I believe. Not sure why something similar couldn't be done from U-Boot.
It was to be expected. Qualcomm is very bad at supporting open platforms.
I was disappointed to see that no good Linux-compatible XPS is available anymore, because they are now based on the latest Snapdragon for bullshit Windows "AI" reasons.
I wonder if Mediatek will try its hand at a laptop-oriented SoC now that their flagship mobile SoCs are competitive again and Google is merging Android and ChromeOS.
Generally, they are far nicer than Qualcomm when it comes to supporting standard technology.
They already have, and they are in Chromebooks. Last week, another HNer posted that he uses a Lenovo Chromebook with a Mediatek SoC as his daily Linux dev machine.
https://news.ycombinator.com/item?id=45938410
BTW. I don't think Qualcomm SoCs running Windows was just about performance but more of a time-limited exclusivity deal with MS.
I hate to say it but it looks like Apple is winning, folks.
I'm disappointed, but not surprised.
We can nerd out about Linux this and S3 sleep that. How much money does the community need to raise, all in, for that notebook to happen? Where's the GoFundMe/AngelList platform that's a cult where I can pledge $10,000 to get this laptop of my dreams? Or are we all too busy shitposting?
> How much money does the community need to raise all in, for that notebook to happen?
> Where's the GoFundMeAngelList platform that's a cult where I can pledge $10,000 to get this laptop of my dreams?
The hard part isn't the money - it's identifying an addressable market that makes the investment worthwhile and assembling a team that can execute and deliver on it.
The market can't be a few hundred enthusiasts who want to spend $10k on a laptop. It has to be at least tens of thousands who would spend $1-2k. Even that probably won't get you to the break-even when you consider the size (and speciality) of team you need to do all this.
ARM was always a distraction, and a monopoly i.e. worse than x86's duopoly.
Only RISC-V is worth switching to.
Besides the sibling comment, RISC-V isn't free from proprietary extensions as each OEM can add their own special juice.
monopoly? this is from DeepSeek, ymmv
Here is a list of major ARM licensees, categorized by the type of license they typically hold.
1. Architectural Licensees (Most Flexible): These companies hold an Architectural License, which allows them to design their own CPU cores (and often GPUs/NPUs) that are compatible with the ARM instruction set. This is the highest level of partnership and requires significant engineering resources.
2. "Cores & IP" Licensees (The Common Path): These companies license pre-designed CPU cores, GPU designs, and other system IP from ARM, then integrate these components into their own System-on-a-Chip (SoC) designs. This is the most common licensing model.
>monopoly? Here is a list of major ARM licensees...
None of these companies is able to license cores to third parties.
Only ARM can do that. ARM holds a monopoly.
>this is from DeepSeek, ymmv
DeepSeek would have told you this much, given the right prompt. Confirmation bias is unfortunately one hell of a bias.
Sure, but I suspect that for basically all of us (maybe Elon is surfing HN today), that literally means nothing. Few of us have the hundreds of millions required to design and fab a competitive SoC, and for those that do, the ARM licenses are easier to acquire than the knowledge of how to build a competitive system (see RISC-V). You might as well complain about TSMC not publishing the information on how to fab 2nm parts, or the code used to generate the mask sets.
For the rest of us, what matters is whether we can open digikey/newegg/whatever and buy a few machines and whether they are open enough for us to achieve our goals and their relative costs. So that list of vendors is more appropriate because they _CAN_ sell the resulting products to us. The problem is how much of their mostly off the shelf IP they refuse to document, resulting in extra difficulties getting basic things working.
ARM holds a monopoly over ARM licences? Wow. Truly you are a genius unappreciated in your own time. /s