Apple discontinues the Mac Pro

(9to5mac.com)

613 points | by bentocorp a day ago ago

551 comments

  • chatmasta 16 hours ago ago

    I bet there’s gonna be a banger of a Mac Studio announced in June.

    Apple really stumbled into making the perfect hardware for home inference machines. Does any hardware company come close to Apple in terms of unified memory and single machines for high throughput inference workloads? Or even any DIY build?

    When it comes to the previous “pro workloads,” like video rendering or software compilation, you’ve always been able to build a PC that outperforms any Apple machine at the same price point. But inference is unique because its performance scales with high memory throughput, and you can’t assemble that by wiring together off the shelf parts in a consumer form factor.

    It’s simply not possible to DIY a homelab inference server better than the M3+ for inference workloads, at anywhere close to its price point.

    They are perfectly positioned to capitalize on the next few years of model architecture developments. No wonder they haven’t bothered working on their own foundation models… they can let the rest of the industry do their work for them, and by the time their Gemini licensing deal expires, they’ll have their pick of the best models to embed with their hardware.

    • whywhywhywhy 10 hours ago ago

      > But inference is unique because its performance scales with high memory throughput, and you can’t assemble that by wiring together off the shelf parts in a consumer form factor.

      Nvidia outperforms Mac significantly on diffusion inference and many other forms. It’s not as simple as the current Mac chips are entirely better for this.

      • rafram 10 hours ago ago

        But where are you going to find an Nvidia GPU with 128+ GB of memory at an enthusiast-compatible price?

        • dabockster 5 hours ago ago

          You don’t need it if you use llamacpp on Windows, or if you compile it on Linux with CUDA 13 and the correct kernel HMM support, and you’re only using MoE models (which, tbh, you should be doing anyways).

          • 0x457 3 hours ago ago

            What does MoE have to do with it? Aside from Flash-MoE, which supports exactly one model and only on macOS, you still need to load the entire model into memory. You also don't know which experts are going to be activated, so it's not like you can predict which ones need to be loaded.
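
            Back-of-the-envelope math (all numbers made up) of how much has to stay resident versus how much is actually read per token:

              # Hypothetical MoE: 64 experts, 8 routed per token.
              # Routing is decided per token at runtime, so every expert must
              # stay resident even though only a fraction is read each step.
              n_experts, active_per_token = 64, 8
              expert_params = 2e9     # params per expert (made up)
              shared_params = 10e9    # attention, embeddings, etc. (made up)
              bytes_per_param = 0.5   # 4-bit quantization

              resident_gb = (shared_params + n_experts * expert_params) * bytes_per_param / 1e9
              read_gb = (shared_params + active_per_token * expert_params) * bytes_per_param / 1e9
              print(f"resident: {resident_gb:.0f} GB, read per token: ~{read_gb:.0f} GB")
              # resident: 69 GB, read per token: ~13 GB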

        • ricardobayes 9 hours ago ago

          That might even be true, but how large is the TAM for such machines?

        • sippeangelo 7 hours ago ago

          Some Chinese sources sell modded Nvidia GPUs with extra VRAM. They're quite affordable in comparison to even a Mac Pro.

          • nextaccountic 6 hours ago ago

            Any links to them? Never heard of this..

            • noboostforyou 3 hours ago ago

              I've seen a guy who sells a modded 2080 Ti with 22GB for $500.

              https://www.tomshardware.com/pc-components/gpus/chinese-work...

              There's also unreleased Nvidia engineering samples of cards with doubled VRAM like this - https://www.reddit.com/r/nvidia/comments/1rczghu/update_unre...

            • giobox 5 hours ago ago

              It’s been going on for a while. Search YouTube or the web for 48gb 4090 (this is one of the most popular modded Nvidia cards), Nvidia of course never officially made a 4090 with this much memory.

              There are some on sale via eBay right now. The memory controllers on some Nvidia gpus support well beyond the 16-24gb they shipped with as standard, and enterprising folks in China desolder the original memory chips and fit higher capacity ones.

            • elorant 5 hours ago ago

              Go on eBay and search for RTX 4090 48GB. There are plenty of them with prices around $3.5k.

          • giwook 6 hours ago ago

            And how much do you trust Chinese hardware?

            • embedding-shape 6 hours ago ago

              Given that most of mine, and probably yours, and probably most of the world's computers are in fact made in China one way or another, some to a higher percentage than others, I'm guessing most of us trust our hardware enough to continue using it.

            • x______________ 6 hours ago ago

              When there's no one left to trust, maybe you need to re-evaluate your criteria.

              • sgc 5 hours ago ago

                I wouldn't say that's true or even likely. It's completely possible to be in a pit of vipers where every single snake is venomous, and that is pretty much what we are seeing: With technological advances, there is a certain subset of people that will use them primarily to solidify their power and control over others. There is no utopian society right now whose government doesn't look to spy through technology, which of course is best set up at time of manufacture.

                • x______________ 3 hours ago ago

                  Agreed. Unless you have full control over the production chain to fully produce a device, you are subject to the whims and desires of those who preside over such technological feats that we take for granted in our daily lives.

                  To the original point, it's safe to say that highlighting a nationality with regards to trust is baseless and without merit, as would be for any other topic (men/women from x are y, z food is better here, etc..). Real life is much more complicated and nuanced past nationalities. Some might call it FUD (fear, uncertainty and doubt) but there's always a deeper rationale at the individual level as well.

                  • sgc 3 hours ago ago

                    Rather than people being wary of Chinese in general, it's more that there is a high degree of government control exercised in China and they are known to be very strategic with long-term planning in regards to technology control both for spying and actual remote control of devices. We are all just looking for the least bad option. It's not like devices from other countries are immune, but they are often less organized so there is a better chance of avoiding the Chinese level of planned access.

                    It does seem like pretty low risk in this specific case so I agree OP's comment was bit over the top, but I would have no way to make anything resembling even an educated guess as to how far their programs go.

            • whywhywhywhy 2 hours ago ago

              The Mac is also Chinese hardware.

        • edelans 7 hours ago ago

          And that's not even counting energy consumption!

        • colechristensen 4 hours ago ago

          The Nvidia DGX Spark is exactly this and in the same price and performance bracket.

          • andreybaskov 2 hours ago ago

            Sadly, memory bandwidth is abysmal compared to Apple chips - 273 GB/s vs 614 GB/s on M5 Max for similar price. Even though fp4 compute is faster, it doesn't help for all the decode heavy agentic workflows.

        • angoragoats 6 hours ago ago

          You can still buy used 3090 cards on ebay. 5 of them will give you 120GB of memory and will blow away any mac in terms of performance on LLM workloads. They have gone up in price lately and are now about $1100 each, but at one point they were $700-800 each.

          • rybosworld 5 hours ago ago

            I don't see how 5x 3090's is a better option than an M3 Ultra Mac studio.

            The mac will just work for models as large as 100B, can go higher with quantized models. And power draw will be 1/5th as much as the 3090 setup.

            You can certainly daisy chain several 3090's together but it doesn't work seamlessly.

            • whywhywhywhy 2 hours ago ago

              > You can certainly daisy chain several 3090's together

              It's not "daisy chaining" 3090 has NVLink.

              • rybosworld 2 hours ago ago

                Really? How would you NVLink more than 2 3090's?

            • angoragoats 5 hours ago ago

              > The mac will just work for models as large as 100B, can go higher with quantized models. And power draw will be 1/5th as much as the 3090 setup.

              This setup will work for 100B models as well. And yes, the Mac will draw less power, but the Nvidia machine will be many times faster. So depending on your specific Mac and your specific Nvidia setup, the performance per watt will be in the same ballpark. And higher absolute performance is certainly a nice perk.

              > You can certainly daisy chain several 3090's together but it doesn't work seamlessly.

              Citation needed; there's no "daisy chaining" in the setup I describe, and low-level libraries like PyTorch as well as higher-level tools like Ollama all seamlessly support multiple GPUs.
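
              For illustration, splitting one model's layers across every visible GPU is only a few lines with Hugging Face transformers + accelerate (the model id below is a placeholder):

                # pip install torch transformers accelerate
                import torch
                from transformers import AutoModelForCausalLM, AutoTokenizer

                model_id = "some-org/some-big-model"  # placeholder

                tok = AutoTokenizer.from_pretrained(model_id)
                model = AutoModelForCausalLM.from_pretrained(
                    model_id,
                    torch_dtype=torch.float16,
                    device_map="auto",  # accelerate spreads layers across the GPUs it finds
                )

                inputs = tok("Hello", return_tensors="pt").to(model.device)
                print(tok.decode(model.generate(**inputs, max_new_tokens=32)[0]))

              (That's pipeline-style layer splitting rather than one big GPU, so it doesn't multiply single-request throughput, but it runs without any manual per-GPU plumbing.)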

              • rybosworld 4 hours ago ago

                I think it's bad form to say "citation needed" when your original claim didn't include citations.

                Regardless - there's a difference between training and inference. And pytorch doesn't magically make 5 gpus behave like 1 gpu.

              • lowbloodsugar 5 hours ago ago

                How much does it cost to have an electrician wire up 240v circuit just to power the thing?

        • embedding-shape 7 hours ago ago

          Where are you gonna find Apple hardware with 128GB of memory at enthusiast-compatible price?

          The cheapest Apple desktop with 128GB of memory shows up as costing $3499 for me, which isn't very "enthusiast-compatible"; it's about 3x the minimum salary in my country!

          • kaashif 6 hours ago ago

            Apple is not catering to minimum salaries in poor countries. Does this really need to be explained?

            $3499 is definitely enthusiast compatible. That's beefy gaming PC tier, which is possibly the canonical example of an enthusiast market.

            This isn't tens of thousands of dollars for top tier Nvidia chips we're talking about.

            • embedding-shape 6 hours ago ago

              Seems I misunderstood what an "enthusiast" is. I thought it was about someone "excited about something", but it seems the typical definition includes them having a lot of money too; my bad.

              • NikolaNovak 4 hours ago ago

                I'm an immigrant to Canada, and yes, English has both literal meanings and colloquial meanings.

                In the most literal meaning, absolutely, "Enthusiast" just means a person who likes something, is excited about something.

                When it comes to market and products though, typically you'll see the word "Enthusiast" as mid-tier - something like: Consumer --> Enthusiast --> Professional (may have words like "Prosumer" in there as well etc:)

                In that context, which is typically the one people will use when discussing product pricing and placement, "Enthusiast" is somebody who yes enjoys something, but does it sufficiently to be discerning and capable of purchasing mid-tier or above hardware.

                So while a consumer photographer may use their phone or a compact or all-in-one camera, an enthusiast photographer will probably spend $3000 - $5000 on camera gear. Equivalently, there are myriad gamers out there (on phones, consoles, GeForce Now, whatever:); an enthusiast gamer is assumed to have a dedicated gaming computer, probably a tower, with a dedicated video card, likely say a 5070 Ti or above, probably 32GB+ RAM, and a couple of SSDs which are not entry level, etc.

                Again, this is not to say a person with limited budget is "not a real enthusiast", no gatekeeping is intended here; simply, if it may help, what the word means when it comes to market segmentation and product pricing :)

                • brailsafe 3 hours ago ago

                  Additionally, "enthusiasts"/"hobbyists" tend to be willing to spend beyond practical utility, while professionals are more interested in pragmatism, especially in photography from what I can tell.

                  If you're an actual pro, you need your stuff to work properly, efficiently, reliably, when it's called for. When you're a hobbyist, it's sometimes almost the goal to waste money and time on stuff that really doesn't matter beyond your interest in it; working on the thing is the point, not the value it generates. Pros should spend money on good tools and research and knowledge, but it usually needs to be an investment, sometimes crossing over with hobbyist opinions.

                  A friend of mine who's a computer hobbyist and retail IT tech, making far far less than I do, spends comically more than me on hardware to play basically one game. He keeps up to date with the latest processors and all that stuff, he knows hardware in terms of gaming. I meanwhile—despite having more money available—have a fairly budget gaming PC that I did build myself, but contains entirely old/used components, some of which he just needed to get rid of and gave me for free, and I upgrade my main mac every 5 years or something. I only upgrade when hardware is really getting in my way.

                • sib an hour ago ago

                  >> So while a consumer photographer, may use their phone or compact or all-in-one camera, enthusiast photographer will probably spend $3000 - $5000 in camera gear.

                  It's interesting that you chose photographers as the example here. In many cases that I've seen, enthusiast photographers spend much more than professional photographers on their gear, because the professionals make their money with their gear and therefore need to justify it, while the enthusiasts are often tech people, successful doctors, etc., who spend lots and lots of money on their hobbies...

                  In any case, your point stands, that "enthusiast" computer users would easily spend $3-4K or more on gear to play games, train models, etc.

              • pchristensen 5 hours ago ago

                $3.5k is a lot of money, but not a ton by American hobby standards. It's easy to spend multiples, even orders of magnitude more than that on hobbies like fishing, wine, sports tickets, concerts, scuba, travel, being a foodie, golf, marathons, collectibles, etc.

                It's out of reach for lots of people, even in developed countries. But it's easily within reach for loads of people that care more about computing than other stuff.

                • brailsafe 3 hours ago ago

                  I'd argue that some of those are more consumption and activity than hobby depending on how they're engaged with, and that people use the word "hobby" too loosely, but would agree that Americans in-particular consume at obscene rates.

                  Golf equipment, mountaineering equipment, skiing and snowboarding lift tickets and gear, a single excessive graphics card that's only used for increasing frame rates marginally, or basically a single extra feature on a car, are all things that accumulate quite quickly. Some are clearly more superfluous than others and cater to whales, while some are just expensive by nature and aren't attempting to be anything else

                  • ua709 20 minutes ago ago

                    Those are the prices for just buying equipment, which at least retain some kind of value. 3 million+ American kids are enrolled in competitive soccer with annual clubs dues between $1K and $5K, and that money is just gone at the end of the year. Basically none of those kids are going to have a career in soccer, so it's clearly a hobby, and everyone knows it. And soccer isn't even the most popular sport!

                • oxfeed65261 4 hours ago ago

                  In June 1977, the base Apple II model with 4 KB of RAM was $1,298 (equivalent to about $6,900 in 2025), and with the maximum 48 KB of RAM it was $2,638 (equivalent to about $14,000 in 2025).

                  (Source: Wikipedia via Claude Opus)

                  • prewett 3 hours ago ago

                    Wow, 48k for $14000. Now you can get a MBP with a million times more memory for $3500 or so. Whereas that CPU was clocked at 1 MHz, so CPUs are only several thousand times faster, maybe something like 30,000 times faster if you can make use of multi-core.

                • chirau 4 hours ago ago

                  I live in America, I am very well compensated. Have been for 15 years now. $3500 is a lot of money. A lot. There is a tiny bubble of us tech folks who think it is accessible to most people. It is not. It is also the same reason Macs are still a niche. Don't take your circles to be the standard, it is very very far from it, especially if you think $3500 is not a lot of money.

                  It is easy to confirm this, just look at the sales number of these $3500 devices. It is definitely not an enthusiast price point, even in the US.

                  • tracker1 2 hours ago ago

                    It's not nothing for most people... it's more than a month of rent/mortgage for a significant number of Americans even. But if it's your primary hobby, it's not completely out of reach, and it's not something you necessarily spend every year. A lot of people will upgrade to a new computer every 3-5 years and maybe upgrade something in between those complete system upgrades.

                    I know plenty of people who don't make a lot of money (say top 25% or so) that will have a Boat or RV that costs more than a $3500 computer, and balk at the thought of spending that much on a computer. It just depends on where your interests are.

                  • pchristensen an hour ago ago

                    The first words I said: "$3.5k is a lot of money..."

                    There are tens of millions of top 10% income adults in America. So something can be both unaffordable to most people, and also easily accessible to very many people.

                  • 1123581321 3 hours ago ago

                    It’s a midrange to upper expense in the US if it’s your hobby. Most people don’t have a serious computer hobby but they golf, trade ATVs, travel, drink, etc.

                  • sib an hour ago ago

                    There are something like 24 million millionaires in the United States... Estimates are that Americans spent $157 billion on pets in 2025.

                    There are a lot of people who could easily choose to spend $3,500 on a computer.

                  • jltsiren 2 hours ago ago

                    $3500 would have been 3–4 months' discretionary spending as a PhD student in Finland 15 years ago. A sum you might choose to spend once a year on something you find genuinely interesting.

                    Some people succumb to lifestyle creep or choose it deliberately. Others choose to live below their means when their income grows. The latter have a lot more money to spend on extras, or to save if that's what they prefer.

              • Dylan16807 2 hours ago ago

                For an individual making median income in the US, it would cost 2% of your income to get a machine like this every 4-5 years. That's a matter of enthusiasm, not a matter of having a lot of money. Sorry that income is less where you are, but the people talking about the product tier are using American standards.

              • darkwater 6 hours ago ago

                An enthusiast in the hobby space is by definition someone willing to pour in much more money than someone who isn't that enthusiastic about whichever hobby we are talking about.

                • embedding-shape 5 hours ago ago

                  Well, and also has a bunch of money, not just the willingness. I guess locally we don't really have that distinction, as two other commenters here pointed out; that's why I had to update my local understanding of "enthusiast". Usually we use it for how engaged/interested a person is, regardless of how much money they can or are willing to spend.

                  Learned something new today at least, so that's cool :)

                  • sgc 4 hours ago ago

                    Yes, when tech gear is sold as 'enthusiast' gear, it is almost invariably the most expensive non-professional tier of equipment. That is roughly the common understanding: Expensive and focused on features more than security required for public use; while remaining within reach of at least some individuals, not only corporations.

                  • darkwater 5 hours ago ago

                    In a hobby where there are (strong) HW requirements, it's mostly taken for granted that you have money to shell out for your hobby, indeed.

            • darkwater 6 hours ago ago

              $1200 as the minimum salary probably covers 70% of Europe by population?

              • NetMageSCW 6 hours ago ago

                The Neo has enough power to do small LLM testing and pretty much anything else a bit slowly, and costs $600?

                • 0x457 3 hours ago ago

                  The Neo tops out at 8GB of RAM. What LLM are you going to run there? Functiongemma?

                  It can absolutely do some ML inference, but not much in terms of LLMs.

            • monsieurbanana 6 hours ago ago

              Did you need to add "poor"? Unless Apple isn't catering to the US.

          • tracker1 2 hours ago ago

            I spent around that on my current personal desktop... 9950X, 2x48GB DDR5 @ 6000, RX 9070 XT, 4TB Gen 5 NVMe + 4TB Gen 4 NVMe. I could have cut the CPU to a 9800X3D and the RAM to 32GB with a different GPU if my needs/usage were different. I'm running Linux and don't game too much.

            That said, a higher end gaming setup is going to cost that much and is absolutely in the enthusiast realm. "enthusiast" doesn't mean compatible with "minimum wage"

          • mprovost 6 hours ago ago

            The original Mac with 128KB of memory cost $2,495 when Apple released it in 1984. It would be about 3x that in today's money.

            • intrasight 2 hours ago ago

              I came here to say the same. Even with my student discount price of $1000, that's over 3K in today's dollars.

              We are so freaking spoiled by the cheap cost of compute now.

          • joe_mamba 7 hours ago ago

            > it's about 3x the minimum salary in my country!

            Enthusiast compute hardware doesn't cater to the people on the minimum salary in any country, let alone developing nations. When Ferrari makes a car they don't ask themselves if people on minimum salary will be able to afford them.

            I'm in one of the two poorest EU member states, and Apple and Microsoft (Xbox) don't even bother to have a direct-to-customer store presence here; you buy them from third-party retailers.

            Why? Probably because their metrics show people here are too poor to afford their products en masse, so it's not worth operating a dedicated sales entity. Even though plenty of people do own top-of-the-line MacBooks here, it's just the wealthy enthusiast niche, and it's still a niche at the volumes they (wish to) operate at. Why do you think Apple launched the Mac Neo?

            • embedding-shape 7 hours ago ago

              Right, I think maybe we're talking about "upper class enthusiasts" or something in reality then? I understood it to just be about the person, not what economic class they were in; maybe I misunderstood.

              • Heliosmaster 6 hours ago ago

                Yes, it's a different definition.

                Enthusiast in this context more or less means you are excited enough about something to get a level above what normal people would get, at just below professional pricing. An enthusiast camera body can be 2000 euros.

                I would say an enthusiast computer is 2-4k.

                It really depends on what you meant by minimum salary (yearly?), because paying 3 months of salary for a computer like that isn't far-fetched. You're not using this to generate cookie recipes. An enthusiast-level car is expensive as well.

              • 0x457 3 hours ago ago

                Enthusiast in computer hardware assumes enthusiasm about hardware, not about "hardware on a budget". It doesn't matter if it's affordable or not.

              • joe_mamba 6 hours ago ago

                >Right, I think maybe we're then talking about "upper class enthusiasts" or something in reality then?

                Why? Enthusiasts are by definition people for whom value for money is not the main driver, but rather top performance and cutting-edge novelty at any cost. Affording enthusiast computer hardware is not a human right, just as affording a Lamborghini or a McMansion isn't.

                But you don't need to buy a Lamborghini to do your grocery shopping or drive your kids to school, just as you don't need an Nvidia 5090 or a MacBook Pro Max to do your taxes or your school work.

                So the definition is fine as it is. It's hardware for people with very deep pockets, often called whales.

      • jiwidi 6 hours ago ago

        Tell me what PC with an Nvidia GPU you can buy with the same memory and performance.

        I never liked Apple hardware, but they are now untouchable since their shift to their own silicon for home hardware.

        • traceroute66 6 hours ago ago

          > tell me what pc with an nvidia gpu can you buy with same memory and performance.

          And power consumption !

          The performance per watt of Apple is unmatched.

          • dabockster 5 hours ago ago

            This needs to be sold as the big ticket item for low level devs. Their chips are some of the most power efficient chips on the market right now.

            Hoping they release a blade server version somehow.

            • Melatonic 4 hours ago ago

              Apple releasing anything enterprise or "server" related would be a pretty big pivot - let alone blades.

            • bigyabai 5 hours ago ago

              Nvidia's recent GPUs are more power-efficient than Apple Silicon in raster, training and inference workloads.

              A blade server would get cancelled just like the Mac Pro for exactly the same reasons: https://9to5mac.com/2026/03/02/some-apple-ai-servers-are-rep...

              • traceroute66 5 hours ago ago

                > Nvidia's recent GPUs are more power-efficient than Apple Silicon in raster, training and inference workloads.

                I think you can do better than the proverbial Apples and Oranges comparison.

                In terms of total system, "box on desk", Apple is likely to remain the performance per watt leader compared to random PC workstations with whatever GPUs you put inside.

                • bigyabai 2 hours ago ago

                  Then ignore me, and go ask your local datacenter why Apple Silicon isn't on any of their racks.

          • saltyoldman 5 hours ago ago

            I've owned some beefy computers in the past and this tiny little m4 mini on my desk blows them all out of the water easily. It's crazy.

        • elorant 5 hours ago ago

          Untouchable my ass. You get a machine that has an SSD glued to the motherboard, so if you run write-intensive workloads and that thing wears out, replacing it will have a significant cost. Then there's no PCIe slot to get any decent network card if you want to run more than one of them in unison; you're stuck with that stupid Thunderbolt 5 while InfiniBand gives 10x the network speed. As for memory bandwidth, it's fast compared to CPUs, but any enterprise GPU dwarfs it significantly. The unified RAM is the only interesting angle.

          Apple could have taken a chunk of the enterprise market now with the AI craze if they had made an upgradable and expandable server edition based on their silicon. But no, everything has to be bolted down and restricted.

        • angoragoats 6 hours ago ago

          This has changed since Sam Altman started buying up all the chip supply, raising prices on memory, storage, and GPUs for everyone, but it used to be the case that you could build a PC that was both cheaper and faster than a Mac for LLM inference, with roughly equal performance per watt.

          You would use multiple *90-series GPUs, throttled down in terms of power. Depending on the GPU, the sweet spot is between 225-350W, where for LLM workloads you only lose 5-10% of performance for a ~50% drop in power consumption.
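
          A rough sketch of setting that cap (the 300 W figure is just an example; the sweet spot varies by card and workload):

            # Needs root; shells out to nvidia-smi, the usual way to set power limits.
            import subprocess

            POWER_LIMIT_W = 300  # example cap for a 350-450 W card

            gpus = subprocess.run(["nvidia-smi", "-L"], capture_output=True,
                                  text=True, check=True).stdout.splitlines()
            for i in range(len([l for l in gpus if l.strip()])):
                subprocess.run(["nvidia-smi", "-i", str(i), "-pl", str(POWER_LIMIT_W)],
                               check=True)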

          Combined with a workstation (Xeon/Epyc) CPU with lots of PCIe, you can support 6-7 such GPUs (or more, depending on available power). This will blow away the fastest Mac studio, at a comparable performance per watt.

          Again, a lot of this has changed, since GPUs and memory are so much more expensive now.

          Macs are great for a simpler all in one box with high memory bandwidth and middling-to-decent GPU performance, but they are (or were) absolutely not "untouchable."

          • Detrytus 5 hours ago ago

            With 6-7 GPUs and EPYC cpu it will also cost 2-3x more than a Mac Studio.

            • deaddodo 5 hours ago ago

              I think OP’s point was that it would do more than 2-3x the workload, thus them stating “blow it out of the water” and specifying “performance-per-watt”.

      • chpatrick 10 hours ago ago

        But they're pretty fast and can have loads of RAM, which would be prohibitively expensive with Nvidia.

        • chocochunks 9 hours ago ago

          A 128GB 2TB Dell Pro Max with Nvidia GB10 is about $4200, a Mac Studio with 128GB RAM and 2TB storage is $4100. So pretty comparable. I think Dell's pricing has been rocked more by the RAM shortage too.

          • adgjlsfhk1 5 hours ago ago

            Unfortunately the GB10 is incredibly bandwidth-starved. You get 128GB of RAM, but only 270GB/s of bandwidth. The M3 Ultra Mac Studio gets you 820GB/s (the M4 Max is at 410GB/s). I'm not aware of any workload that gets the GB10 to its theoretical peak FLOPS.

            • chocochunks 4 hours ago ago

              You can't get a 128GB M3 Ultra, and it's also more expensive. For some workloads the Studio is better, for others the GB10.

          • midnight_eclair 8 hours ago ago

            ~not unified memory tho~

            • mciancia 8 hours ago ago

              It is unified memory on this one

            • ctxc 7 hours ago ago

              I took ~ to be a "singing tone" for some reason till I saw sibling and realized it might be an attempted strikethrough xD

            • benoau 6 hours ago ago

              That won't hold much benefit as SOCAMM2 and LPCAMM2 get more popular.

          • traceroute66 5 hours ago ago

            > So pretty comparable.

            The Mac Studio almost certainly uses at least half the power

            (educated guess, I'm too lazy to go look at all the spec sheets and run the numbers)

            • bigyabai 5 hours ago ago

              It's actually reversed. The GB10 chipset has a TDP of 140w, whereas M2/M3 Ultra pulls over 250w from the wall: https://support.apple.com/en-us/102027

              • traceroute66 5 hours ago ago

                > It's actually reversed. The GB10 chipset has a TDP of 140w, whereas M2/M3 Ultra pulls over 250w from the wall

                Come on mate ... I think you and I both know I was talking about complete system here, not discrete components.

                I'm pretty sure your total package (Dell Pro Max + GB10) will pull more from the wall.

                • bigyabai 5 hours ago ago

                  I'm pretty sure you need to look up what you're talking about instead of making a guess.

                  The Dell Pro Max PSU + enclosure is only rated for 240w, it literally can't pull more than 250w from the wall without shorting itself.

          • plagiarist 7 hours ago ago

            Not quite. What is the VRAM bandwidth of each? The bandwidth is a huge contributor to LLM performance.

            • embedding-shape 7 hours ago ago

              AFAIK, for the unified bandwidth, it depends mostly on the CPU; for the M4 Max (I think it's the default today?) it does ~550 GB/s, while the GB10 does ~270 GB/s, so about a 2x difference between the two. For comparison, the RTX Pro 6000 does 1.8 TB/s, pretty much the same as a 5090, which is probably the fastest/best GPU a prosumer could reasonably get.
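
              Those bandwidth numbers translate almost directly into a decode-speed ceiling, since generating one token streams roughly the whole model through memory once; illustrative math, not benchmarks:

                # Upper bound on dense-model decode speed: bandwidth / bytes read per token.
                weights_gb = 35.0  # e.g. a ~70B model at 4-bit (illustrative)

                for name, bw in [("GB10 ~270 GB/s", 270),
                                 ("M4 Max ~550 GB/s", 550),
                                 ("RTX Pro 6000 ~1800 GB/s", 1800)]:
                    print(f"{name}: at most ~{bw / weights_gb:.0f} tok/s")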

      • wappieslurkz 6 hours ago ago

        Do NVIDIA solutions also outperform the Apple M-series in performance per Watt?

        • whywhywhywhy 2 hours ago ago

          No, that's why Apple uses performance per watt rather than the actual performance ceiling as the metric. In actual workloads where you'd need this power, actual performance is what matters, not PPW.

        • Lalabadie 6 hours ago ago

          Probably comparable, but only with business-grade products; that's why Apple's current silicon is so remarkable on the market at the consumer level.

      • AdamN 10 hours ago ago

        Nvidia isn't selling one-off home computers afaik. But yes in terms of datacenter cloud usage Nvidia performs.

        • _zoltan_ 8 hours ago ago

          GB300 DGX Station was announced last Monday.

          • eitally 5 hours ago ago

            It's going to cost far more than a diy machine with multiple lower end GPUs. Which is fine -- it's aimed at enterprise, not home labs.

        • newsclues 9 hours ago ago

          • jamespo 9 hours ago ago

            Amusingly there's a macbook next to it in the pic, is this headless?

            • Tsiklon 9 hours ago ago

              It has a HDMI port and its USB-C ports also support display out. But I believe most who buy it intend to use it headless. The machine runs Ubuntu 24.04 and has a slightly customised Gnome (green accents and an nvidia logo in GDM) as its desktop.

    • HerbManic 15 hours ago ago

      Jeff Geerling doing that 1.5TB cluster using 4 Mac Studios was pretty much all the proof needed to demonstrate how the Mac Pro is struggling to find a place anymore.

      https://www.jeffgeerling.com/blog/2025/15-tb-vram-on-mac-stu...

      • pjmlp 12 hours ago ago

        That is proof that what's left is a workaround, just like piling Minis onto racks because Apple left the server space.

        Also why Swift nowadays has to have good Linux support, if app developers want to share code with the server.

        • coldtea 9 hours ago ago

          A workaround that works is better than an official solution that's barely adequate. Which is often the case.

          • pjmlp 8 hours ago ago

            Or just maybe, to use a Steve Jobs quote, one is holding it wrong and should look elsewhere.

      • zozbot234 15 hours ago ago

        But those Thunderbolt links are slower than modern PCIe. If there's actually an M5-based Mac Studio with the same Thunderbolt support, you'll be better off, e.g. for LLM inference, streaming read-only model weights from storage (as we've seen with recent experiments) than pushing the same amount of data over Thunderbolt. It's only if you want to go beyond local memory constraints (e.g. larger contexts) that the Thunderbolt link becomes useful.

        • wpm 15 hours ago ago

          Why everyone wants to live in dongle/external cabling/dock hell is beyond me. PCIe cards are powered internally with no extra cables. They are secure. They do not move or fall off of shit. They do not require cable management or external power supplies. They do not have to talk to the CPU through a stupid USB hub or a Thunderbolt dock. Crappy USB HDMI capture on my Mac led me to running a fucking PC with slots to capture video off of a 50 foot HDMI cable, that then streamed the feed to my Mac from NDI, because it was more reliable than the elgarbo capture dongle I was using. This shit is bad. It sucks. It's twice the price and half the quality of a Blackmagic Design capture card. But, no slots, so I guess I can go get fucked.

          • wtallis 14 hours ago ago

            For anything that's even somewhat in the consumer space rather than pure workstation/professional, the main reason is that dongles can be used with a laptop but add-in cards can't. When ordinary consumer PCs (or even office PCs) are in the picture, laptops are a huge chunk of the target audience.

            The market segments that can afford to ignore laptops and only target permanently-installed desktops are mostly those niches where the desktop is installed alongside some other piece of equipment that is much more expensive.

        • GeekyBear 14 hours ago ago

          Wasn't streaming models from storage into limited memory a case where it was impressive that you could make the elephant dance at all?

          If you want to get usable speeds from very large models that haven't been quantized to death on local machines, RDMA over Thunderbolt enables that use case.

          Consumer PC GPUs don't have enough RAM, enterprise GPUs that can handle the load very well are obscenely expensive, Strix Halo tops out at 128 Gigs of RAM and is limited on Thunderbolt ports.

          • zozbot234 12 hours ago ago

            The bad performance you saw was with very limited memory and very large models, so streaming weights from storage was a huge bottleneck. If you gradually increase RAM, more and more of the weights are cached and the speed improves quite a bit, at least until you're running huge contexts and most of the RAM ends up being devoted to that. Is the overall speed "usable"? That's highly subjective, but with local inference it's convenient to run 24x7 and rely on non-interactive use. Of course scaling out via RDMA on Thunderbolt is still there as an option, it's just not the first approach you'd try.

            • Dylan16807 an hour ago ago

              > If you gradually increase RAM, more and more of the weights are cached and the speed improves quite a bit

              It'll increase a lot based on the zero-ram baseline. But it's still complete garbage compared to fitting the model in RAM. Even if you fit most of it in RAM you're still probably an order of magnitude slower than fitting all of it in RAM, most of your time spent waiting for your SSD.
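
              Rough math on why even a small spill to SSD dominates (all numbers illustrative):

                # Per-token time = RAM-resident reads + SSD-resident reads;
                # the SSD term takes over almost immediately.
                weights_gb, ram_bw, ssd_bw = 40.0, 800.0, 6.0  # GB, GB/s, GB/s

                for in_ram in (1.0, 0.9, 0.5):
                    t = in_ram * weights_gb / ram_bw + (1 - in_ram) * weights_gb / ssd_bw
                    print(f"{in_ram:.0%} in RAM -> ~{1 / t:.1f} tok/s ceiling")
                # 100% -> ~20.0, 90% -> ~1.4, 50% -> ~0.3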

            • GeekyBear 4 hours ago ago

              If you don't care about performance, you have a lot of options.

      • mixdup 8 hours ago ago

        The proposition of a Mac Pro in the Apple Silicon world wasn't necessarily about performance; it was about the existence of the PCIe slots. I don't think AI becoming a workload for pro Macs means the Mac Pro doesn't have a place; people who were using Mac Pros for audio or video capture didn't stop doing that media work and switch to AI as a profession. That market just wasn't big enough to sustain the Mac Pro in the first place, and Apple has finally acknowledged that fact.

        • alsetmusic 7 hours ago ago

          I had a U-Audio PCI card in a Mac Pro during the Intel era of Macs. It was a chip to run their software plugins and the plugins are top of the line. I have a U-Audio box that runs over Thunderbolt now. I know there are people who need device slots, but it's vanishingly few. I'm disappointed that this category of machine is going away, but it stopped being for me in the Apple Silicon era.

        • grahamlee 8 hours ago ago

          so many peripherals now come in external boxes that communicate _incredibly quickly_ over Thunderbolt 4/5 that the need for PCIe is marginal, while the cost to support it is significant.

      • ActorNightly 3 hours ago ago

      Wow, spend $40k to get the same tokens/second in Qwen as you would on a 3090.

        I have a feeling that Mac fans obsess more about being able to run large models at unusably slow speeds instead of actually using said models for anything.

    • dragonwriter 10 hours ago ago

      > Apple really stumbled into making the perfect hardware for home inference machines

      For LLMs. For inference with other kinds of models, where the amount of compute needed relative to the amount of data transfer is higher, Apple is less ideal, and systems with lower memory bandwidth but more FLOPS shine. And if things like Google's TurboQuant work out for efficient kv-cache quantization, Apple could lose a lot of that edge for LLM inference too, since that would reduce the amount of data shuffling relative to compute for LLM inference.

      • NetMageSCW 5 hours ago ago

        Or just mean that you could run a 5x bigger model on Apple than before.

        • dragonwriter 5 hours ago ago

          Well, since its kv-cache that TurboQuant optimizes, it means five times bigger context fits into RAM, all other things being equal, not a five times bigger model. But, sure, with any given context size and the same RAM available, you can instead fit a bigger model—which also takes more compute to get the same performance.

          Anything that increases the compute needed to fully utilize RAM bandwidth in optimal LLM serving weakens Apple's advantage there.
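
          A rough sketch of the kv-cache math, with a made-up architecture, just to show why quantizing the cache stretches the same RAM to a much larger context:

            # KV bytes per token = 2 (K and V) x layers x kv_heads x head_dim x bytes/element
            layers, kv_heads, head_dim = 60, 8, 128  # made-up model

            def kv_bytes_per_token(bits):
                return 2 * layers * kv_heads * head_dim * bits / 8

            for bits in (16, 4):
                per_tok = kv_bytes_per_token(bits)
                print(f"{bits}-bit cache: ~{per_tok / 1024:.0f} KiB/token, "
                      f"~{64 * 1024**3 / per_tok / 1e3:.0f}k tokens of context in 64 GB")
            # 16-bit: ~240 KiB/token (~280k tokens); 4-bit: ~60 KiB/token (~1118k tokens)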

    • utopiah 5 hours ago ago

      > ...making the perfect hardware for home inference machines.

      I really don't get why anybody would want that. What's the use case there?

      If someone doesn't care about privacy, they can use for-profit services because they are basically losing money, trying to corner the market.

      If they care about privacy, they can rent cloud instances in order to setup, run, close and it will be both cheaper, faster (if they can afford it) but also with no upfront cost per project. This can be done with a lot of scaffolding, e.g. Mistral, HuggingFace, or not, e.g. AWS/Azure/GoogleCloud, etc. The point being that you do NOT purchase the GPU or even dedicated hardware, e.g. Google TPU, but rather rent for what you actually need and when the next gen is up, you're not stuck with "old" gen.

      So... what use case is left? Somebody who is both technical and very privacy conscious, AND wants to do it offline despite having 5G or satellite connectivity pretty much anywhere?

      I honestly don't get who that's for (and I did try dozens of local models, so I'm actually curious).

      PS: FWIW https://pricepertoken.com might help but not sure it shows the infrastructure each rely on to compare. If you have a better link please share back.

      • BeetleB 5 hours ago ago

        > If they care about privacy, they can rent cloud instances in order to setup, run, close and it will be both cheaper, faster (if they can afford it) but also with no upfront cost per project. This can be done with a lot of scaffolding, e.g. Mistral, HuggingFace, or not, e.g. AWS/Azure/GoogleCloud, etc.

        I'm a somewhat tech-heavy guy (I compile my own kernel, use online hosting, etc.).

        Reading your comment doesn't sound appealing at all. I do almost no cloud stuff. I don't know which provider to choose. I have to compare costs. How can I trust they won't peek at my data (no, a Privacy Policy is not enough - I'd need encryption with only me having the key). What do I do if they suddenly jack up the rates or go out of business? I suddenly need a backup strategy as well. And repeat the whole painful loop.

        I'll lose a lot more time figuring this out than with a Mac Studio. I'll probably lose money too. I'll rent from one provider, get stuck, and having a busy life, sit on it a month or two before I find a fix (paying money for nothing). At least if I use the Mac Studio as my primary machine, I don't have to worry about money going to waste because I'm actually utilizing it.

        And chances are, a lot of the data I'll use it with (e.g. mail) is sitting on the same machine anyway. Getting something on the cloud to work with it is yet-another-pain.

        • eitally 5 hours ago ago

          To your second issue/question, all the clouds provide CMEK services/features (and have for many years now).

        • utopiah 5 hours ago ago

          > suddenly jack up the rates or go out of business?

          There is basically no lock-in: you don't even "move" your image, and your data is basically some "context" or a history of prompts which probably fits on a floppy disk (not even being sarcastic). So if you know the basics of containerization (Docker, Podman, etc.), which most likely the cloud provider even takes care of, it takes literally minutes to switch from one to another. It's really not more complex than setting up a PHP server; the only difference is the hardware you run on, and that's basically a dropdown button on a web interface (if you don't want to have scripts for that too), then selecting the right image (basically NVIDIA support).

          Consequently, even if that were to happen (which I have NEVER seen! At worst it's like a 15% increase after years), it would actually not matter to you. It's also very unlikely to happen given the investment poured into the "industry". Basically everybody is trying to get "you" as a customer to rely on their stack.

          ... but OK, let's imagine that's not appealing to you, have you not done the comparison of what a Mac Studio (or whatever hardware) could actually buy otherwise?

          • BeetleB 4 hours ago ago

            Ok, I think I misunderstood. So the idea is to simply set up the LLM service on the server and access it with an API like I would with any LLM provider? This way whatever application I want to use it for stays at home?

            That's a bit more appealing. How much would it cost per month to have it continually online?

            • utopiah 4 hours ago ago

              Well it depends entirely on what you need. You can even do the training yourself on that infrastructure to rent if you want. The more you do yourself, the more private but also the more expensive it will be.

              I don't want to make an ad here but I'm going to point to HuggingFace https://endpoints.huggingface.co (and to avoid singling them out just https://replicate.com/pricing too but I don't know them well) as an example with pricing.

              The "beauty" IMHO of such solutions is that again you pay for what you want. If you want to use the endpoint only for 5min to test that the model and its API fits your need? OK. You want the whole month? Sure. You want 1 user, namely you? Fine, not a lot of power, you want your whole organization to use that endpoint? Scale up.

              I'm going to give a very rough approximation because honestly I'm not really into this, so someone please adjust with a source:

              Apple Mac Studio M3 Ultra 96GB = $4K

              ~NVIDIA A100 with 80G ~ 10x perf compared to M3 Pro (obviously depends on models)

              So on Replicate today one can get an A100 for ~$5/hr, which comes to roughly the Mac Studio's price after about a month of non-stop use. But that's for 10x the speed, with electricity included. So very VERY approximately, if you use a Mac Studio for AI non-stop (day and night) for 10 months, then it's arguably worth it.

              If you use it less, say 2 hrs/day only for inference, then I imagine it takes a few years to reach the equivalent, and by that time I bet Replicate or HuggingFace is going to rent a much faster setup for much cheaper, simply because that's what they have ALL done for the last few years.

              • BeetleB 3 hours ago ago

                Well, full disclosure (despite my comments above): I'm not interested in buying a Mac Studio. I was merely explaining why I thought people may prefer it.

                For my own use, I'm just looking at absolute price (and convenience).

                I haven't explored open weights models, so I have no idea which I'd want. It would be great to get a "frontier" model like Minimax-M2.5, but at $10/hr, it's not worth it - let alone $40/hr for GLM-5. I'd have to explore use cases for cheaper models. Likely for things related to reading emails, I can get by with a much cheaper model.

                If I set one of these up, how easy is it for me to launch it (on the command line on my home PC) and then shut it down? Right now, when I write any app (or use OpenCode), it's frictionless. My worry is that turning it on will be a hassle, or even worse, that I'll forget to turn it off and suddenly get a big pointless bill.

                If there are any guides out there on how people manage all this, it would be much appreciated.

                • utopiah 3 hours ago ago

                  Honestly I doubt it's worth it, hence my suggestion to make a "cold" estimation of both options.

                  Well it's not exactly a guide, and honestly it's quite outdated (because I stopped keeping track, as I just don't get the quality of results I hope for versus huge trade-offs that aren't worth it for me), but I listed plenty of models and software solutions for self-hosting, at home or in the cloud, at https://fabien.benetou.fr/Content/SelfHostingArtificialIntel...

                  Feel free to check it out, and if there is something I can clarify, happy to try.

      • jedberg 2 hours ago ago

        I think the main use case is home automation. You don't want details of your home setup leaking out.

      • pwython 5 hours ago ago

        Genuine question: If I were to fine-tune a model with 10 years of business data in a competitive space, would you feel safe with cloud training?

        • yomismoaqui 5 hours ago ago

        If you already have those 10 years of business data on Microsoft or Google services or their respective clouds, are you feeling safe?

        • utopiah 4 hours ago ago

        I'm not a lawyer, but technically most if not all cloud providers, AI-specific ("neo-cloud") or not, provide customer-managed encryption keys (CMEK), as someone else pointed out.

        That being said, if I were in such a situation and somehow the guarantees weren't enough, then I'd definitely expect to have the budget to build my own data center with GB300s or TPUs. I can't imagine running that on a Mac Studio.

        • justinhj 5 hours ago ago

        People store that data in databases in the same data centre, so it's really the same level of trust needed: that your provider adheres to its promise of no training on your data. Trust and lawyers.

    • robotswantdata 13 hours ago ago

      DGX workstations, expensive but allow PCI cards as well.

      https://marketplace.nvidia.com/en-us/enterprise/personal-ai-...

      • fooker 11 hours ago ago

        It's hilarious that not a single one of these has pricing listed anywhere public.

        I don't think they expect anyone to actually buy these.

        Most companies looking to buy these for developers would ideally have multiple people share one machine and that sort of an arrangement works much more naturally with a managed cloud machine instead of the tower format presented here.

        Confirming my hypothesis, this category of devices is more or less absent from the used market. The only DGX workstation on eBay has a GPU from 2017, several generations old.

        • chatmasta 9 hours ago ago

          Nvidia doesn’t list prices because they don’t sell the machines themselves. If you click through each of those links, the prices are listed on the distributor’s website. For example the Dell Pro Max with GB10 is $4,194.34 and you can even click “Add to Cart.”

          • fooker 7 hours ago ago

            I don't mean the small GB10s.

            If you try to find the pricing of the GB300 towers even on the manufacturer sites, you'll see that it's not listed for any of the six or so models.

            • tecleandor 6 hours ago ago

              Because that's a different price point, that's getting near 100K, and the availability is very limited. I don't think they're even selling it openly, just to a bunch of partners...

              The MSI workstation is the one with some pricing floating around. Seems like some distributors are quoting USD 96K and have a wait time of 4 to 6 weeks [0]. Others say 90K and also out of stock [1].

              --

                0: https://www.cdw.com/product/msi-nvidia-gb300-wkstn-72c-grace-cpu/9087313?pfm=srh
                1: https://www.centralcomputer.com/msi-ct60-s8060-nvidia-dgx-station-cpu-memory-up-to-496gb-lpddr5x-nvidia-blackwell-ultra-gpu-1x-10-gbe-2x-400-gbe.html
            • Melatonic 4 hours ago ago

              Isn't that because nobody has released one yet? They are brand new.

        • numpad0 10 hours ago ago

          I don't think it's so odd, very few products above ~$50k have final prices listed for anyone to buy 1-click.

          • fooker 7 hours ago ago

            Workstations above 50k are not that uncommon.

            Older xeon based workstations easily reach that number.

            • tecleandor 6 hours ago ago

              If you put a 50 or 80K workstation in the HP store, it will say:

              "Purchasing limit reached. To complete your order and provide you with the best customer experience, please call 1-877-888-8235"

        • bluedino 7 hours ago ago

          'Important' people in organizations get them. They either ask for them, or the team that manages the shared GPU resources gets tired of their shit and they just give them one.

          • fooker 6 hours ago ago

            Yes, I agree this is the use case.

            Since the user here is not paying for it directly, the manufacturer does not have any incentive to list prices anywhere.

        • deelowe 8 hours ago ago

          There were plenty of them around when I worked at Nvidia. They definitely exist.

          • fooker 7 hours ago ago

            You have seen plenty of third party GB300 DGX workstations?

      • QuantumNomad_ 12 hours ago ago

        How much do those workstations cost? All of the different manufacturers links on that page lack pricing info and you have to contact them for pricing.

        • fotcorn 11 hours ago ago

          Cheapest I know of is around $96k.

        • cudima 12 hours ago ago

          $4000

          • eitally 5 hours ago ago

            $4k is for GB10 (DGX Spark reference design). $90-100k is for GB300 (DGX Station reference design).

    • dabockster 5 hours ago ago

      CUDA 13 on Linux solves the unified memory problem via HMM and llamacpp. It’s an absolute pain to get running without disabling Secure Boot, but that should be remedied literally next month with the release of Ubuntu 26.04 LTS. Canonical is incorporating signed versions of both the new Nvidia open driver and CUDA into its own repo system, so look out for that. Signed Nvidia modules do already exist right now for RHEL and AlmaLinux, but those aren’t exactly the best desktop OSes.

      But yeah, right now Apple actually has price <-> performance captured a lot if you're buying a new computer just in general.

    • diabllicseagull 7 hours ago ago

      I'm not a big fan of reducing computing as a whole to just inference. Apple has done quite a bit besides that and deserves credit for it. The Mac Pro disappearing from the product line is a testament to that: their compact solutions can cover all needs, not just local inference, to a degree that an expandable tower is not required at all.

      • kllrnohj 4 hours ago ago

        Their compact solution doesn't cover all needs, they just decided that they didn't care about some of those needs. The Intel Mac Pro was the last Apple offering with high end GPU capabilities. That's now a market segment they just aren't supporting at all. They didn't figure out how to do it compactly, they just abandoned it wholesale.

        Similarly if your use case depends on a whole lot of fast storage (eg, the 4x NVME to PCI-E x16 bifurcation boards), well that's also now something Apple just doesn't support. They didn't figure out something else. They didn't do super innovative engineering for it. They just walked away from those markets completely, which they're allowed to do of course. It's just not exactly inspiring or "deserves credit" worthy.

        • Melatonic 4 hours ago ago

          You could argue they abandoned that market long before (around the era of the mac pro trashcan). Along with the pro software.

          • kllrnohj 3 hours ago ago

            They can abandon it multiple times ;)

            When they introduced the cheese grater Mac Pro the new high end GPUs were a showcase feature of it. Complete with the bespoke "Duo" variants and the special power connector doohickey (MPX iirc?). So I'd consider that an attempt to re-enter that market at least.

      • embedding-shape 7 hours ago ago

        > Mac Pro disappearing from the product line is a testament to it

        Apple removing/adding something to their product line means nothing; for all we know, they have a new version ready to be launched next month, or whatever. Unless you work at Apple and/or have any internal knowledge, this is all just guessing, not a "testament" to anything.

        • NetMageSCW 5 hours ago ago

          Did you read the article?

          “Apple has also confirmed to 9to5Mac that it has no plans to offer future Mac Pro hardware.”

          • embedding-shape 5 hours ago ago

            I did indeed! Did you read the article? Did you like it? Have you also read the HN guidelines by any chance?

            Nonetheless, what Apple says or doesn't say doesn't really matter. If their plan for a new Mac Pro is secret, they'll answer exactly that when someone asks them about it. That doesn't mean we won't see new Mac Pro hardware this summer. There have been plenty of cases in the past where they play coy and then suddenly, "whoops, we just had to keep it a secret, never mind".

    • alerighi 5 hours ago ago

      To me there is a fundamental difference. Even if PC hardware costs slightly more (right now because of the RAM situation; Apple, producing its chips in house, can get better deals of course), it's something that is more worth investing in.

      Maybe you spend $1000 more for a PC of comparable performance, but tomorrow, when you need more power, you change or add another GPU, add more RAM, add another SSD. A workstation you can keep upgrading for years, paying a small cost for each bump in performance.

      An Apple machine is basically throwaway: no component inside can be upgraded. You need more RAM? Throw it away and buy a new one. You want a new GPU technology? You have to change the whole thing. And if something inside breaks? You of course throw away the whole computer, since everything is soldered on the mainboard.

      There is then the software issue: with Apple devices you are forced to use macOS, which kind of sucks, especially for server usage. True, nowadays you can install Linux on it, but the GPU isn't that well supported, so you lose all the benefits. You're stuck with an OS that sucks, while in the PC market you have plenty of OS choices: Windows, a million Linux distributions, etc. If I need a workstation to train LLMs, why do I care about an OS with a GUI? It's only a waste of resources; I just need a thing that runs Linux that I can SSH into. Also, I don't get the benefit of using containers, Docker, etc.

      Macs suck even on the hardware side from a server point of view: for example, it's not possible to rack mount them, it's not possible to have redundant PSUs, they don't offer remote KVM capability, etc.

      • mjburgess 5 hours ago ago

        "Upgrades" haven't been a thing for nearly a decade. By the time you want to upgrade a machine part (c. 5yr+ for modern machines), you'd want to upgrade everything, and it's cheap to do so.

        It isn't 2005 any more, where RAM/CPU/etc. progress made upgrading every 6 months worthwhile. Now it's closer to 6 years before you really notice.

        • cesarb 4 hours ago ago

          > By the time you want to upgrade a machine part (c. 5yr+ for modern machines), you'd want to upgrade every thing,

          That's only the case for CPU/MB/RAM, because the interfaces are tightly coupled (you want to upgrade your CPU, but the new one uses an AM5 socket so you need to upgrade the motherboard, which only works with DDR5 so you need to upgrade your RAM). For other parts, a "Ship of Theseus" approach is often worth it: you don't need to replace your 2TB NVMe M.2 storage just because you wanted a faster CPU, you can keep the same GPU since it's all PCIe, and the SATA DVD drive you've carried over since the early 2000s still works the same.

          • secabeen 4 hours ago ago

            Even this is understating it; if you buy at the right point in the cycle, you can Ship-of-Theseus quite a while. An AM4 motherboard released in Feb 2017 with a Ryzen 1600X CPU, DDR4 memory and a GTX 780 Ti would be an obsolete system by today's standards. Yet that AM4 motherboard can be upgraded to run a Ryzen 5800X3D CPU, the same (or faster) DDR4 memory, and an RTX 5070 Ti GPU, and be very competitive with mid-tier 2026 systems containing all new components. Throughout all this, the case, PSU, cooling solution and storage could all be maintained, and only replaced when individual components fail.

            I expect many users would be happy with the above final state through 2030, when the AM6 socket releases. That would be 13 years of service for that original motherboard, memory, case and ancillary components. This is an extreme case, you have to time the initial purchase perfectly, but it is possible.

        • heraldgeezer 3 hours ago ago

          You can keep the CPU and RAM for way longer than the GPU if you game...

          Your point kind of disproves itself.

        • bigyabai 5 hours ago ago

          That's news to me. I see Mac Minis with external drives plugged-in constantly; I bet those people would appreciate user-servicable storage. I doubt they bought an external drive because they wanted to throw away the whole computer.

      • orangecat 4 hours ago ago

        > You need more RAM? Throw it away and buy a new one.

        Or sell it, which is much easier to do with Macs because they're known quantities and not "Acer Onyx X321 Q-series Ultra".

        > There is then the software issue: with Apple devices you are forced to use macOS, which kind of sucks, especially for server usage

        That's a fair point. Apple would get a ton of goodwill if they released enough documentation to let Asahi keep up with new hardware. I can't imagine it would harm their ecosystem; the people who would actually run Linux are either not using Macs at all, or users like me who treat them as Unix workstations and ignore their lock-in attempts.

      • infecto 4 hours ago ago

        I think most of that is really opinion and experiences. No doubt it’s not designed or built truly for racks but folks have been making rack mounts for Mac minis since they first came out.

        On the upgrade path I don’t think upgrades are truly a thing these days. Aside from storage for most components by the time you get to whatever your next cycle is, it’s usually best/easiest to refresh the whole system unless you underbought the first time around.

      • caycep 5 hours ago ago

        > Macs suck even on the hardware side from a server point of view: for example, it's not possible to rack mount them, it's not possible to have redundant PSUs, they don't offer remote KVM capability, etc.

        https://atp.fm/683

      • TechSquidTV 4 hours ago ago

        As others have said, that's just not the reality of a modern work machine. If I need a new GPU or more RAM, I'm positive I need everything else upgraded too

      • gk-- 3 hours ago ago

        > with Apple devices you are forced to use macOS that kind of sucks, especially for a server usage

        you can just install linux?

        • rantingdemon 2 hours ago ago

          Only really possible with the M1. If referring to Asahi.

      • appletrotter 4 hours ago ago

        > You're stuck with an OS that sucks, while in the PC market you have plenty of OS choices: Windows, a million Linux distributions

        Windows is 10x more enshittified than OSX.

        > An Apple machine is basically throwaway: no component inside can be upgraded. You need more RAM? Throw it away and buy a new one.

        Tell that to all the people rocking 5-10 year old MacBooks that still run great.

    • spacedcowboy 12 hours ago ago

      Agreed. I’m planning on selling my 512GB M3 Ultra Studio in the next week or so (I just wrenched my back so I’m on bed-rest for the next few days) with an eye to funding the M5 Ultra Studio when it’s announced at WWDC.

      I can live without the RAM for a couple of months to get a good price for it, especially since Apple don’t sell that model (with the RAM) any more.

      • wolfhumble 11 hours ago ago

        Just out of curiosity, where do you think is the best place to sell a machine like that with the lowest risk of being scammed, while still getting the best possible price?

        Wish you a speedy recovery for your back!

        • spacedcowboy 10 hours ago ago

          > Just out of curiosity, where do you think is the best place to sell a machine like that with the lowest risk of being scammed, while still getting the best possible price?

          There are none currently on eBay.co.uk, so I'm going to try there. I'll also try some of the reddit UK-specific groups.

          As far as not being scammed - it's a really high value one-off sale, so it'll either be local pickup (and cash / bank-transfer at the time, which happens in seconds in the UK) or escrow.com (for non-eBay) with the buyer paying all the fees etc.

          I'd prefer local pickup because then I have the money, the buyer can see it working, verify everything to their satisfaction etc. etc.

          > Wish you a speedy recovery for your back!

          Thank you :) It is a little better today. Sitting down is now tolerable for short periods... :)

          • Imustaskforhelp 10 hours ago ago

            Doesn't escrow.com charge a minimum fee of around $50/£50?

            I do know that escrow.com is one of the most reputable escrow platforms. On a more personal note, I would love to know of an escrow service where I can sell the spare domains I have (I got some .com/.net domains for $1 back during a provider's deal). Is there any particular escrow service that doesn't charge a lot, so I can get a few dollars from selling them? Some of those domains aren't being used by me.

            > Thank you :) It is a little better today. Sitting down is now tolerable for short periods... :)

            I am wishing you speedy recovery as well. A cowboy gotta have a strong back :-)

            • spacedcowboy 8 hours ago ago

              According to the calculator, it’d be about £280 assuming the purchase cost was £11k. I think that’s probably an upper-bound on the sale-price, though I can see bids of $20k on eBay.com for the same model.

              I sold a domain via escrow.com a long time ago now (20 years or so) but the buyer paid fees, so I don’t know what they charge for that. You could try the calculator they have though (https://www.escrow.com/fee-calculator)

              And thanks for the good wishes :)

        • ricardobayes 9 hours ago ago

          Probably ebay

        • asimovDev 11 hours ago ago

          Lowest risk is probably an Apple trade-in, if available, but I can't imagine how bad of a price hit it will be.

          • spacedcowboy 10 hours ago ago

            I checked, it's terrible. They don't take into account the size of the RAM in the machine, so you get the base-model trade-in value (£1280). Yeah, no.

            • polshaw 10 hours ago ago

              sounds like 100% risk of getting scammed

      • nottorp 6 hours ago ago

        Hey, didn't they drop the 512 GB model?

        https://appleinsider.com/articles/26/03/06/forget-512gb-ram-...

        You may want to hold on to your M3 Ultra! There's no guarantee there will be an M5 Ultra with 512 GB of RAM.

        • spacedcowboy 6 hours ago ago

          I don’t actually use the memory anywhere near as much as I thought I would. 256GB would be fine for me :)

          • nottorp 6 hours ago ago

            Heh, my main "heavy stuff" desktop only has 64GB.

            But it feels really good to have more ram than you can think of a use for.

            I have a faint memory of an interview ages ago with Knuth I think where he mentioned as an aside he was using a workstation with 3.2 Gb of storage and 4 Gb of ram :)

            • steve_adams_86 6 hours ago ago

              Around the year 2001 I recall watching 3d studio Max R3 tutorials in which the teacher had an electric purple desktop which possessed an entire 4 gigs of ram. It blew my mind. My computer had 128mb and an ATI Rage 128 Pro.

              I was young and dumb and never would have guessed I'd own a computer with 32gb of RAM that felt pitifully underpowered for today's tasks.

              • nottorp 2 hours ago ago

                Humm purple and 4 gigs of ram in 2001 sounds like SGI. But those purple SGIs ran Irix so no 3d studio.

                • steve_adams_86 an hour ago ago

                  You're right! Crazy, that brings me back. I wonder why he showed it off. I wish I could find it. He probably wasn't using it for the tutorial at all, just nerding out and talking about how beefy computers handle rendering and complex geometry better.

                  I was constantly constrained by my computers back then. Trying to navigate complex scenes or model very detailed meshes could get soooo slow. But man I loved it so much.

                  • nottorp 35 minutes ago ago

                    > I wonder why he showed it off.

                    Probably because it ran Maya. Which was a SGI product back then, not an Autodesk product yet.

    • port11 12 hours ago ago

      As to better or cheaper homelab: depends on the build. AMD AI Max builds do exist, and they also use unified memory. I could argue the competition was, for a long time, selling much more affordable RAM, so you could get a better build outside Apple Silicon.

    • fooker 11 hours ago ago

      The typical inference workloads have moved quite a bit in the last six months or so.

      Your point would have been largely correct in the first half of 2025.

      Now, you're going to have a much better experience with a couple of Nvidia GPUs.

      This is because of two reasons - the reasoning models require a pretty high number of tokens per second to do anything useful. And we are seeing small quantized and distilled reasoning models working almost as well as the ones needing terabytes of memory.

    • Melatonic 4 hours ago ago

      Apple abandoned the pro market long before ever releasing the current iteration of the Mac Pro. I doubt they care about getting it back, considering it's a smaller niche of consumers and probably significantly more investment on the software side.

      At best we probably get a chassis to awkwardly daisy chain a bunch of Mac Studios together

    • SwtCyber 8 hours ago ago

      The interesting question is whether they'll lean into it intentionally (better tooling, more ML-focused APIs) or just keep treating it as a side effect of their silicon design

      • chatmasta 7 hours ago ago

        I think we’ll see a much more robust ecosystem develop around MLX now that agentic coding has reduced the barrier of porting and maintaining libraries to it.
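
        For anyone who hasn't looked at it, the core MLX API is small, which is part of why porting libraries to it is tractable in the first place. A minimal sketch (assuming the mlx package is installed; arrays live in unified memory, and mx.eval forces the lazy graph to run):

            import mlx.core as mx

            # Two random matrices; CPU and GPU share the same unified-memory buffers.
            a = mx.random.normal((4096, 4096))
            b = mx.random.normal((4096, 4096))

            c = a @ b      # builds a lazy computation graph, nothing executes yet
            mx.eval(c)     # materialize the result on the default (GPU) device
            print(c.shape, c.dtype)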

    • tannhaeuser 14 hours ago ago

      For LLMs and other purely memory-bound workloads, yes, but for e.g. diffusion models their FPU/SIMD performance is lacking.

    • zadikian 5 hours ago ago

      What part of your workflow relies on home LLM inference?

    • alberth 6 hours ago ago

      Just a reminder that the old Intel Mac Pro could handle 1.5TB of RAM ... today's Mac Studio can only handle 0.25TB.

      Seems odd that a computer from a decade ago could take over 1TB more RAM than anything we can buy from Apple today.

      • NetMageSCW 5 hours ago ago

        The M5 Ultra Studio may support more as it becomes a replacement for the Mac Pro.

    • tantalor 7 hours ago ago

      > home inference machines.

      The market for this use case is tiny

      • chatmasta 7 hours ago ago

        For now. In a few years it will be part of every day life, because people will see Apple users enjoying it without thinking about it. You won’t consider it a “home inference machine,” just a laptop with more capabilities than any other vendor offers without a cloud subscription.

        • valzam an hour ago ago

          The average person self-hosts literally nothing, so why would it be different for inference? Inference benefits hugely from economies of scale and efficient 24/7 utilization.

    • FireBeyond 3 hours ago ago

      I do love the Mac Studio. I had a 2019 Mac Pro, the Intel cheesegrater, but my home office upstairs became unpleasant with it pushing out 300W+. I replaced it with the M2 Ultra Studio for a fraction of the heat output (though I did have to buy an OWC 4xNVMe bay).

      > I bet there’s gonna be a banger of a Mac Studio announced in June. Apple really stumbled into making the perfect hardware for home inference machines.

      This I'm not actually as sure about. The current Studio offerings have done away with the 512GB memory option. I understand the RAM situation, but they didn't change pricing they just discontinued it. So I'm curious to see what the next Studio is like. I'd almost love to see a Studio with even one PCI slot, make it a bit taller, have a slide out cover...

    • kranke155 8 hours ago ago

      The new M chips beat basically any PC on video editing. Their new ProRes accelerator chiplet is so good the competition can't even keep up.

      • Melatonic 4 hours ago ago

        Good luck storing all those 8K videos, plates, and other content on a soldered-in SSD.

    • _zoltan_ 8 hours ago ago

      how about the newly announced GB300 DGX Workstation?

      • NetMageSCW 5 hours ago ago

        Comparing a $100K workstation to a $4K desktop PC seems a bit Apples and oranges?

    • hermanzegerman 11 hours ago ago

      Framework offers the Ryzen AI Max with ̶1̶9̶6̶G̶B̶ 128GB of unified RAM for $2,699

      That's a pretty good deal I would think

      https://frame.work/de/de/products/desktop-diy-amd-aimax300/c...

      • eigenspace 11 hours ago ago

        The Framework desktop is quite cool, but those Ryzen Max CPUs are still a pretty poor competitor to Apple's chips if what you care about is running an LLM. Ryzen Max tops out at 256 GB/s of memory bandwidth, whereas an M4 Max can hit 560 GB/s of bandwidth.

        So even if the model fits in the memory buffer on the Ryzen Max, you're still going to hit something like half the tokens/second just because the GPU will be sitting around waiting for data.

        Personally, I'd rather have the Framework machine, but if running local LLMs is your main goal, the offerings from Apple are very compelling, even when you adjust for the higher price on the Apple machine.
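
        A quick back-of-envelope, since token generation is roughly bandwidth-bound: each generated token has to stream the active weights once, so tokens/sec tops out near bandwidth divided by bytes read per token. A rough sketch using the numbers from this thread (it ignores KV cache and compute, so treat it as an upper bound, not a benchmark):

            # tok/s upper bound ~ memory bandwidth / bytes of weights read per token
            def max_tokens_per_sec(bandwidth_gb_s, active_params_b, bytes_per_param=0.5):
                # 0.5 bytes/param is roughly a 4-bit quant
                return bandwidth_gb_s / (active_params_b * bytes_per_param)

            for name, bw in [("Ryzen AI Max, 256 GB/s", 256), ("M4 Max, 560 GB/s", 560)]:
                print(f"{name}: ~{max_tokens_per_sec(bw, 70):.0f} tok/s for a 70B dense model")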

        • rl3 9 hours ago ago

          There's also the DGX Spark. Granted, its price has been going up recently alongside everything else that has memory in it.

          • eigenspace 9 hours ago ago

            I haven't heard a single good thing about the DGX Spark from anyone using it, so I'd be pretty wary about that.

          • xienze 9 hours ago ago

            That also has pretty poor memory bandwidth. 283GB/s I think.

            • rl3 9 hours ago ago

              Yeah. The main selling point I'd say is the onboard ConnectX-7 hardware.

      • freeqaz 11 hours ago ago

        128GB is the max RAM that the current Strix Halo supports, with ~250GB/s of bandwidth. The Mac Studio is 256GB max with ~820GB/s of memory bandwidth. They are in different categories of performance, and even the price-to-performance ratio is worse. (~$2700 for the Framework Desktop vs $7500 for a Mac Studio M3 Ultra)

      • rl3 11 hours ago ago

        128GB*

        • hermanzegerman 10 hours ago ago

          Thanks for spotting the mistake. No Idea how I got to 192

          • rl3 9 hours ago ago

            For what it's worth, I really wish that was the actual number.

    • DeathArrow 12 hours ago ago

      Still, running 2 to 4 5090 will beat anything Apple has to offer for both inference and training.

      • Tsiklon 8 hours ago ago

        That won't work for the home hobbyist: 2.4kW of GPUs alone, plus a 350W Threadripper Pro with enough PCIe lanes to feed them. You're looking at close to twice the capacity of an average US household electrical circuit just to run the machine under load.

        A cluster of four of Apple's M3 Ultra Mac Studios, by comparison, will consume around 1,100W under load.

        • bluedino 7 hours ago ago

          I mean if a hobbyist can run a welder or cnc machine in their home workshop...

      • mitjam 5 hours ago ago

        I would say 1-2 RTX 6000 Pro Max-Q cards are more practical.

    • AJRF 7 hours ago ago

      > Apple really stumbled into making the perfect hardware for home inference machines

      Apple are winning a small battle for a market that they aren’t very good in. If you compare the performance of a 3090 and above vs any Apple hardware you would be insane to go with the Apple hardware.

      When I hear someone say this it’s akin to hearing someone say Macs are good for gaming. It’s such a whiplash from what I know to be reality.

      Or another jarring statement - Sam Altman saying Mario has an amazing story in that interview with Elon Musk. Mario has basically the minimum possible story to get you to move the analogue sticks. Few games have less story than Mario. Yet Sam called it amazing.

      It’s a statement from someone who just doesn’t even understand the first thing about what they are talking about.

      Sorry for the mini rant. I just keep hearing this apple thing over and over and it’s nonsense.

    • rubyn00bie 15 hours ago ago

      I don't think Apple just stumbled into it, and while I totally agree that Apple is killing it with their unified memory, I think we're going to see a pivot from Nvidia and AMD. The biggest reason, I think, is that OpenAI has committed to an enormous amount of capex it simply cannot afford. It does not have the lead it once did, and most end users simply do not care. There are no network effects. Anthropic at this point has completely consumed, as far as I can tell, the developer market, the one market that is actually passionate about AI. That's largely due to a huge advantage of the developer space: end users cannot tell if an "AI" coded it or a human did. That isn't true for almost any other application of AI at this point.

      If the OpenAI domino falls, and I'd be happy to admit if I'm wrong, we're going to see a near-catastrophic drop in RAM prices and in the hyperscalers' demand to, well... scale. That massive drop will be completely and utterly OpenAI's fault for attempting to bite off more than it can chew. To shore up demand, we'll see Nvidia and AMD start selling directly to consumers. We, developers, are consumers, and we drive demand at the enterprises we work for based on what keeps us both engaged and productive... the end result being the ol' profit flywheel spinning.

      Both Nvidia and AMD are capable of building GPUs that absolutely wreck Apple's best. A huge reason for this is that Apple needs unified memory to keep their money maker (laptops) profitable and performant; and while it helps their profitability, it also forces them into less performant solutions. If Nvidia dropped a 128GB GPU with GDDR7 at $4k, absolutely no one would be looking at a Mac for inference. My 5090 is unbelievably fast at inference even if it can't load gigantic models, and quite frankly the 6-bit quantized versions of Qwen 3.5 are fantastic, but if it could load larger open-weight models I wouldn't even bother checking Apple's pricing page.

      tldr; competition is as stiff as it is vicious. Apple's "lead" in inference exists only because Nvidia and AMD are raking in cash selling to hyperscalers. If that cash cow goes tits up, there's no reason to assume Nvidia and AMD won't definitively pull the rug out from under Apple.

      • AnthonyMouse 12 hours ago ago

        > A huge reason for this is Apple needs unified memory to keep their money maker (laptops) profitable and performant

        None of the things people care about really get much out of "unified memory". GPUs need a lot of memory bandwidth, but CPUs generally don't and it's rare to find something which is memory bandwidth bound on a CPU that doesn't run better on a GPU to begin with. Not having to copy data between the CPU and GPU is nice on paper but again there isn't much in the way of workloads where that was a significant bottleneck.

        The "weird" thing Apple is doing is using normal DDR5 with a wider-than-normal memory bus to feed their GPUs instead of using GDDR or HBM. The disadvantage of this is that it has less memory bandwidth than GDDR for the same width of the memory bus. The advantage is that normal RAM costs less than GDDR. Combined with the discrete GPU market using "amount of VRAM" as the big feature for market segmentation, a Mac with >32GB of "VRAM" ended up being interesting even if it only had half as much memory bandwidth, because it still had more than a typical PC iGPU.

        The sad part is that DDR5 is the thing that doesn't need to be soldered, unlike GDDR. But then Apple solders it anyway.
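
        The tradeoff is easy to see by just multiplying bus width by data rate. A rough sketch (nominal figures, not official specs, so treat the exact numbers loosely):

            # peak bandwidth in GB/s ~ (bus width in bits / 8) * transfer rate in GT/s
            def peak_gb_s(bus_bits, gt_per_s):
                return bus_bits / 8 * gt_per_s

            print("Typical PC, 128-bit DDR5-5600:     ", peak_gb_s(128, 5.6), "GB/s")   # ~90
            print("M3 Ultra, 1024-bit LPDDR5-6400:    ", peak_gb_s(1024, 6.4), "GB/s")  # ~819
            print("RTX 4090, 384-bit GDDR6X @ 21 GT/s:", peak_gb_s(384, 21), "GB/s")    # ~1008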

        • jeffffff 6 hours ago ago

          > None of the things people care about really get much out of "unified memory". GPUs need a lot of memory bandwidth, but CPUs generally don't and it's rare to find something which is memory bandwidth bound on a CPU that doesn't run better on a GPU to begin with. Not having to copy data between the CPU and GPU is nice on paper but again there isn't much in the way of workloads where that was a significant bottleneck.

          the bottleneck in lots of database workloads is memory bandwidth. for example, hash join performance with a build side table that doesn't fit in L2 cache. if you analyze this workload with perf, assuming you have a well written hash join implementation, you will see something like 0.1 instructions per cycle, and the memory bandwidth will be completely maxed out.

          similarly, while there have been some attempts at GPU accelerated databases, they have mostly failed exactly because the cost of moving data from the CPU to the GPU is too high to be worth it.

          i wish aws and the other cloud providers would offer arm servers with apple m-series levels of memory bandwidth per core, it would be a game changer for analytical databases. i also wish they would offer local NVMe drives with reasonable bandwidth - the current offerings are terrible (https://databasearchitects.blogspot.com/2024/02/ssds-have-be...)
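
          you can see the effect without a full join engine: a random gather over a table bigger than the last-level cache is already dominated by memory stalls. a toy numpy sketch (sizes are arbitrary; it's a stand-in for the probe side of a hash join, not a real implementation):

              import numpy as np
              import time

              table = np.random.rand(1 << 27)                    # ~1 GiB "build side"
              keys = np.random.randint(0, table.size, 1 << 25)   # random probe positions

              t0 = time.perf_counter()
              probed = table[keys]                               # cache-hostile random gather
              dt = time.perf_counter() - t0
              print(f"{keys.size / dt / 1e6:.0f} M probes/s")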

          • AnthonyMouse an hour ago ago

            > the bottleneck in lots of database workloads is memory bandwidth.

            It can be depending on the operation and the system, but database workloads also tend to run on servers that have significantly more memory bandwidth:

            > i wish aws and the other cloud providers would offer arm servers with apple m-series levels of memory bandwidth per core, it would be a game changer for analytical databases.

            There are x64 systems with that. Socket SP5 (Epyc) has ~600GB/s per socket and allows two-socket systems, Intel has systems with up to 8 sockets. Apple Silicon maxes out at ~800GB/s (M3 Ultra) with 28-32 cores (20-24 P-cores) and one "socket". If you drop a pair of 8-core CPUs in a dual socket x64 system you would have ~1200GB/s and 16 cores (if you're trying to maximize memory bandwidth per core).

            The "problem" is that system would take up the same amount of rack space as the same system configured with 128-core CPUs or similar, so most of the cloud providers will use the higher core count systems for virtual servers, and then they have the same memory bandwidth per socket and correspondingly less per core. You could probably find one that offers the thing you want if you look around (maybe Hetzner dedicated servers?) but you can expect it to be more expensive per core for the same reason.

        • NetMageSCW 5 hours ago ago

          >The sad part is that DDR5 is the thing that doesn't need to be soldered, unlike GDDR. But then Apple solders it anyway.

          Apple needs to solder it because they are attaching it directly to the SOC to minimize lead length and that is part of how they are able to get that bandwidth.

          • AnthonyMouse 43 minutes ago ago

            Systems with socketed RAM have had on-die memory controllers for more than two decades. CAMM2 supports the same speeds as Apple is using in the M5.

        • nsteel 4 hours ago ago

          Except they don't use DDR5. LPDDR5 is always soldered. LPDDR5 requires short point-to-point connections to give you good SI at high speeds and low voltages. To get the same with DDR5 DIMMs, you'd have something physically much bigger, with way worse SI, with higher power, and with higher latency. That would be a much worse solution. GDDR is much higher power, the solution would end up bigger. Plus it's useless for system memory so now you need two memory types. LPDDR5 is the only sensible choice.

        • actionfromafar 10 hours ago ago

          > Not having to copy data between the CPU and GPU is nice on paper but again there isn't much in the way of workloads where that was a significant bottleneck.

          Isn't that also because that's world we have optimized workloads for?

          If the common hardware had unified memory, software would have exploited that I imagine. Hardware and software is in a co-evolutionary loop.

      • wolfhumble 11 hours ago ago

        > tldr; competition is as stiff as it is vicious-- Apple's "lead" in inference is only because NVidia and AMD are raking in cash selling to hyperscalers. If that cash cow goes tits up, there's no reason to assume NVidia and AMD won't definitively pull the the rug out from Apple.

        These companies always try to preserve price segmentation, so I don’t have high hopes they’d actually do that. Consumer machines still get artificially held back on basic things like ECC memory, after all . . .

      • hermanzegerman 11 hours ago ago

        Nvidia is definitely preparing for this with the open-source LLMs they are currently developing.

      • pjmlp 12 hours ago ago

        No one cares about Metal in that space, plus CUDA has already had unified memory for a while.

        https://docs.nvidia.com/cuda/cuda-programming-guide/04-speci...

        Can we also stop giving Apple some prize for unified memory?

        It was the way of doing graphics programming on home computers, consoles and arcades, before dedicated 3D cards became a thing on PC and UNIX workstations.

        • UqWBcuFx6NV4r 12 hours ago ago

          Can we please stop treating this like some 2000s Mac vs PC flame war where you feel the need go full whataboutism whenever anyone acknowledges any positive attribute of any Apple product? If you actually read back over the comments you’re replying to, you’ll see that you’re not actually correcting anything that anyone actually said. This shit is so tiring.

          • pjmlp 11 hours ago ago

            You mean like the Neo marketing materials put out by Apple?

  • andrewl-hn 9 hours ago ago

    This would probably push some high-end audio professionals away from Logic. One of the niches where the Mac Pro has been popular is audio production, and with the cheesegrater, the ability to slot many different audio interfaces into one box instead of dangling out to various PCIe enclosures has been a big win.

    Here's a good video of how it looks: https://www.youtube.com/watch?v=kIQINCWMd6I&list=PLi2i2YhL6o... (at 1:40 Neil Parfitt shows his Mac audio setup before and after).

    • audunw 7 hours ago ago

      Feels like it'd just create a market for a big rack-mountable multi-bay PCIe enclosure, with its own internal power supply, that you could connect with one or more Thunderbolt cables. I don't see any reason why a solution built around a Mac Studio should have to be significantly more cluttered.

      I don't know if such a solution exists right now, but I'm thinking there's a fair chance it will soon as the Mac Pro disappearing creates a demand for something like it.

      • dabockster 5 hours ago ago

        Thunderbolt is really an unsung hero here. It is surprisingly nice to be able to move various components around my desk that would have otherwise sat in a huge tower hogging all the PCIe slots they can find.

        • randusername 5 minutes ago ago

          Agreed, I've been doing experiments and it's wild to me what "just works" in a secondhand eGPU case or music production PCIe boxes.

          Dual 10G NIC cards, way cheaper than a comparable dongle. 36 HDDs in JBOD, absolutely! 12 optical drives, sure!

      • kllrnohj 4 hours ago ago

        The Thunderbolt offerings on the current Mac lineup offer dramatically less bandwidth in total if that matters for a given use case. Thunderbolt 5 is the equivalent of PCI-E Gen 4 x4. So if all 4 of the Thunderbolt 5 ports on a Mac Studio can run at full speed, that's still only the equivalent of a single gen 4 x16 slot. That's less than half the bandwidth of a basic consumer x86 CPU, to say nothing of the Xeon that was in the previous Intel Mac Pro or a modern Epyc/Threadripper (Pro).

        This is a big reason why things like eGPUs kinda suck. Thunderbolt is fast for external I/O, but it's quite pathetic compared to internal PCI-E.

      • bitbckt 6 hours ago ago

        The DAD AX32/AX64 is such a thing.

    • raydev 20 minutes ago ago

      The video you linked is from 2019. A lot has changed with Thunderbolt capability, and the Studios now have enough ports/bandwidth to handle audio processing needs across multiple boxes.

  • jasoneckert 17 hours ago ago

    As someone who came from the SGI O2/Octane era when high-end workstations were compact, distinctive, and sexy, I’ve never really understood the allure of the Mac Pro, with the exception of the 2013 Mac Pro tube, which I owned (small footprint, quiet, and powerful).

    For me, aesthetics and size are important. That workstation on your desk should justify its presence, not just exist as some hulking box.

    When Apple released the Mac Studio, it made perfect sense from a form-factor point-of-view. The internal expansion slots in the M2 Mac Pro didn't make any sense. It was like a bag of potato chips - mostly air. And far too big and ugly to be part of my work area! I'm surprised that Apple didn't discontinue it sooner.

    • linguae 16 hours ago ago

      As much as I love alluring designs such as the NeXT Cube (which I have), the Power Mac G4 Cube (which I wish I had), and the 2013 Mac Pro (which I also have), sometimes a person needs a big, hulking box of computational power with room for internal expansion, and from the first Quadra tower in the early 1990s until the 2012 Mac Pro was discontinued, and again from 2019 until today, Apple delivered this.

      Even so, the ARM Mac Pro felt more like a halo car rather than a workhorse. The ARM Mac Pro may have been more compelling had it supported GPUs. Without this support, the price premium of the Mac Pro over the Mac Studio was too great to justify purchasing the Pro for many people, unless they absolutely needed internal expansion.

      I’d love a user-upgradable Mac like my 2013 Mac Pro, but it’s clear that Apple has long moved on with its ARM Macs. I’ve moved on to the PC ecosystem. On one hand ARM Macs are quite powerful and energy-efficient, but on the other hand they’re very expensive for non-base RAM and storage configurations, though with today’s crazy prices for DDR5 RAM and NVMe SSDs, Apple’s prices for upgrades don’t look that bad by comparison.

      • JumpCrisscross 2 hours ago ago

        > sometimes a person needs a big, hulking box of computational power with room for internal expansion

        Between cloud computing and server racks, is this still a real niche?

    • matthewfcarlson 17 hours ago ago

      As someone who worked on the M2 Mac Pro and has a real soft spot for it, I get it. It's horrendously expensive and doesn't offer much benefit over a Mac Studio and a Thunderbolt PCIe chassis. My personal dream is that VMs would support PCIe passthrough so you could just spin up a Linux VM and let it drive the GPUs. But at that point, why are you buying a Mac?

      Opinions are my own obvs.

      • zeusk 17 hours ago ago

        > My personal dream is that vms would support pci pass through and so you can just spin up a Linux vm and let it drive the gpus.

        SR-IOV is just that? and is well supported by both Windows and Linux.

        • AbanoubRodolf 17 hours ago ago

          SR-IOV and VFIO passthrough are different things. SR-IOV partitions a PCIe device across multiple VMs simultaneously (common for NICs and NVMe). VFIO passthrough gives one VM exclusive ownership of a physical device. For GPU compute you almost always want full passthrough, not SR-IOV partitioning.

          The harder problem on Apple Silicon is that the M2 Ultra's GPU is integrated into the SoC -- it's not a discrete PCIe device you can isolate with an IOMMU group. Apple's Virtualization framework doesn't expose VFIO-equivalent hooks, so even if you add a discrete AMD Radeon to the Mac Pro's PCIe slots, there's no supported path to pass it through to a Linux guest right now.

          On Intel Macs this actually worked via VFIO with the right IOMMU config. Apple Silicon VMs can do metal translation layers but that's not the same as bare-metal GPU access. It's a real limitation and I doubt Apple will prioritize solving it since it would undercut the "just use macOS" pitch.

      • spacedcowboy 12 hours ago ago

        Under a comment regarding the O2/Octane (both of which I own :) era, I first read “vms” as VMS, not multiple instances of a VM…

      • markn951 17 hours ago ago

        > Opinions are my own obvs.

        Whose else would they be?

        • TheDong 16 hours ago ago

          > as someone who worked on the m2 mac pro

          They're trying to make it very clear they're not speaking on behalf of Apple Inc, despite having worked (or working) there.

          Big companies like to give employees some minimal "media training", which mostly amounts to "do not speak for the company, do not say anything that might even slightly sound like you're speaking for the company".

        • OJFord 12 hours ago ago

          An employer's, especially as they stated having worked (and perhaps still) at Apple in the same comment.

          • code_duck 12 hours ago ago

            Oh, I interpreted it as “did work using a Mac Pro” vs helped develop the Mac Pro itself.

        • dotancohen 13 hours ago ago

          > > Opinions are my own obvs.

          > Whose else would they be?

          On the internet? Often the opinions of others they see getting upvotes.

        • 9wzYQbTYsAIc 10 hours ago ago

          >> Opinions are my own obvs.

          > Whose else would they be?

          takes a look at the user profile

          Oh, they are a journalist/writer for a big name outfit

        • mandeepj 17 hours ago ago

          Maybe he was trying to say he isn’t a spokesman for anyone else :-)

      • asimovDev 11 hours ago ago

        do / did you have to always work in the office or do you get to work from home by taking a test rig with you ? always been curious about this

    • JumpCrisscross 17 hours ago ago

      > aesthetics and size are important

      It's dumb from a practical perspective. But I keep hoping they'll vertically compress their trashcan design so it looks like their Cupertino headquarters.

    • mrheosuper 16 hours ago ago

      > That workstation on your desk should justify its presence

      The fact that it does the work you want it to do is not enough to justify its presence?

      • mixdup 7 hours ago ago

        Parent comment OP has to be trolling

    • moogly 16 hours ago ago

      > That workstation on your desk should

      Under your desk, right? Right?!

      • kgwgk 12 hours ago ago

        It’s a desktop computer, not a deskbottom computer.

      • petepete 10 hours ago ago

        I have a sit/stand desk so mine's on top, it makes organising the cables much easier.

        Nothing as swish looking as a Mac Pro though, it's a plain black Lian Li behemoth from the late 00s.

        • FinnKuhn 9 hours ago ago

          I also have a standing desk, and my desktop computer is still on the floor. That way I can just route all the cables to the back and then under the desk to my PC. Looks very clean as well.

          • mrkstu 8 hours ago ago

            Yep, with wireless keyboards and mice you really only need your monitor cables on the desk in this setup.

          • moogly 5 hours ago ago

            Yep, same.

      • fouc 12 hours ago ago

        It'd get mighty dusty under there after a while; best to keep it where you can see it so it doesn't get into trouble.

      • swiftcoder 13 hours ago ago

        I mean... if you spent $7,000 on it, do you really want to hide it away under the desk?

        • yoz-y 12 hours ago ago

          Yes, because you’re buying a tool, not a conversation piece.

          • swiftcoder 11 hours ago ago

            Why not both? ¯\(ツ)/¯

        • NetMageSCW 5 hours ago ago

          That’s why my Lian Li anniversary edition is next to my desk. Also because it is nearly as tall and wouldn’t fit under it.

    • SwtCyber 8 hours ago ago

      But I think the Mac Pro was never really trying to be on your desk in the first place. For a lot of its target users, it lived under the desk or in a rack, and the size wasn't about aesthetics so much as airflow, expansion, and serviceability

    • jayd16 5 hours ago ago

      I hope this is satire.

    • kmeisthax 17 hours ago ago

      I'm surprised they even tried selling an Apple Silicon Mac Pro - I expected that product to die the moment they announced the transition. Everything that makes Apple Silicon great also makes it garbage for high-performance workstations.

      The allure of the Mac Pro is that you could dodge the Apple Tax by loading it up with RAM and compute accelerators Apple couldn't mark up. Well, Apple Silicon works against all of that. The hardware fabric and PCIe controller specifically prohibit mapping PCIe device memory as memory[0], which means no GPU driver ever will work with it. Not even in Asahi Linux. And the RAM is soldered in for performance. An Ultra class chip has like 16 memory channels, which even in a 1-DIMM per channel routing would have trace lengths long enough to bottleneck operating frequency.

      The only thing the socketed RAM Mac Pros could legitimately do that wasn't a way to circumvent Apple's pricing structure was take terabytes of memory - something that requires special memory types that Apple's memory controller IP likely does not support. Intel put in the engineering for it in Xeon and Apple got it for free before jumping ship.

      Even then, all of this has gone completely backwards. Commodity DRAM is insanely expensive now and Apple's royalty-bearing RAM prices are actually reasonable in comparison. So there's no benefit to modularity anymore. Actually, it's a detriment, because price-discovery-enforcing scalpers can rip RAM out of perfectly working computers and resell the RAM. It's way harder to scalp RAM that's soldered on the board.

      [0] In violation of ARM spec, even!

      • AnthonyMouse 11 hours ago ago

        > An Ultra class chip has like 16 memory channels, which even in a 1-DIMM per channel routing would have trace lengths long enough to bottleneck operating frequency.

        CAMM fixes this, right?

        > Actually, it's a detriment, because price-discovery-enforcing scalpers can rip RAM out of perfectly working computers and resell the RAM. It's way harder to scalp RAM that's soldered on the board.

        Scalping isn't a thing unless you were selling below the market price to begin with which, even with the higher prices, Apple isn't doing and would have no real reason to do.

        Notice that in real life it only really happens with concert tickets, and that's because of the scam sandwich that is Ticketmaster.

        • chongli 9 hours ago ago

          Ticketmaster is a reputation management company. Their true purpose is to take the reputation hit for charging market value for limited availability event tickets. Artists do not want to take this reputation hit themselves because it impacts their brand too much.

    • jjtheblunt 17 hours ago ago

      I wish I'd never traded in my 2016 Mac Pro (the polished aluminum tube): it was beefy, it was silent, it had a clever thermal design (like the PowerPC Cube 20 years or so earlier), and I'd upgraded the living crap out of it for cheap.

  • readitalready 18 hours ago ago

    Apple really dropped the ball here. They had every ability to make something competitive with Nvidia for AI training as well as inference, by selling high end multi GPU Mac Pro workstations as well as servers, but for some reason chose not to. They had the infrastructure and custom SoCs and everything. What a waste.

    It really could have been a bigger market for them than even the iPhone.

    • A_D_E_P_T 18 hours ago ago

      Just about everybody who isn't Nvidia dropped the ball, bigtime.

      Intel should have shipped their GPUs with much more VRAM from day one. If they had done this, they'd have carved out a massive niche and much more market share, and it would have been trivially simple to do.

      AMD should have improved their tools and software, etc.

      Apple should have done as you say.

      Google had nigh on a decade to boost TPU production, and they're still somehow behind the curve.

      Such a lack of vision. And thus Nvidia is, now quite durably, the most valuable company in the world. Imagine telling that to a time traveler from 2018.

      • readitalready 18 hours ago ago

        I think for AMD, they were focused on competing against Intel. Remember AMD was almost bankrupt about 15 years ago because of competing against Intel. But the very first GPU use for AI was actually with an ATI/AMD GPU, not an Nvidia one. Everyone thinks Nvidia kicked off the GPU AI craze when Ilya Sutskever cleaned up on AlexNet with an Nvidia GPU back in 2012, or when Andrew Ng and team at Stanford published their "Large Scale Deep Unsupervised Learning using Graphics Processors" in 2009, but in 2004, a couple of Korean researchers were the first to implement neural networks on a GPU, using ATI Radeons: https://www.sciencedirect.com/science/article/abs/pii/S00313...

        And as of now I do believe AMD is in the second strongest position in the datacenter space after Nvidia, ahead of even Google.

      • gehsty 10 hours ago ago

        Why should Apple have done this? It doesn't fit their business in any way, shape or form. Where does data centre hardware sit relative to the electronics/humanities crossroads that is foundational for Apple?

        • DCKing 7 hours ago ago

          > Why should Apple have done this?

          For money, probably.

          Apple is presumably leaving a lot of money on the table by not trying to sell Apple Silicon for AI inference and training. They're the only ones who can attach reasonably large GPUs (M3 Ultra) to very large amounts of comparatively cheap memory (512GB of unified RAM per GPU). Apple could e.g. sell server SKUs of Mac Studios; heck, they could sell M3 Ultra chips on PCIe cards. And they could further develop Apple Silicon in that direction. Presumably they would be seen as a very legitimate competitor to Nvidia that way, perhaps more so than Intel and AMD. I'd assume that in the current climate this would be extremely lucrative.

          Now, actually doing this would disrupt Apple's own supply chain as well as force it to spend significant internal resources and cultural change for this kind of product line. There's a good argument to be made it would disproportionally negatively affect its Mac business, so this would be a very risky move.

          But given that AI hardware is likely much higher margin than the Mac business an argument could probably (sadly) be made that it'd be lucrative for them to try it. I personally don't think Apple is inclined to take this kind of risk to jeopardize the Mac, but I'm sure some people at Apple have considered this.

        • AdamN 8 hours ago ago

          Yeah nothing about Apple is server side and imho that's what training is. To be serious about it as a company you have all sorts of other tools (crawlers, etc...) helping with training so it basically has to be in the datacenter at any reasonable scale anyway. And that's just not where Apple lives. We saw with Swift that they couldn't focus on server side enough to make it a serious language there and they've consistently declined to enter that area over the years because it's outside their wheelhouse.

      • BeetleB 5 hours ago ago

        Trust me: If Intel could, it would.

        From inside news: They were not breaking even on their existing GPUs. The strategy was to take a loss just to have a presence in the space.

        • dabockster 5 hours ago ago

          Intel could position their cards as strong for certain workloads. They were first to market with AV1 support, for example.

      • 1W6MIC49CYX9GAP 10 hours ago ago

        Intel doesn't limit how much memory card makers can pair with their GPU. It's up to the card maker.

      • bigstrat2003 17 hours ago ago

        > And thus Nvidia is, now quite durably, the most valuable company in the world.

        Nvidia is the most valuable company in the world right up until the AI bubble pops. Which, while it's hard to nail down when, is going to happen. I wouldn't call their position durable at all.

        • gizajob 13 hours ago ago

          The crashing and burning of Nvidia stock has been predicted for a while now and keeps not really happening. It's gone pretty flat and volatile up there around $180, but they keep delivering the results to back it up. I was thinking this week that Apple is really primed to make a killing from people who want to run their LLM on-device coupled with an agent in the next couple of years. We're a long way off being able to train the models; that is going to need an Nvidia-powered datacentre for the foreseeable future. But local inference seems exactly like a market Apple could capture, gutting the most premium revenue from Anthropic and OpenAI by selling Macs with large amounts of integrated memory to anyone who would rather run their native OpenClaw/agent locally than pay ever-growing monthly bills for tokens.

        • HerbManic 15 hours ago ago

          It is definitely the case that they will fall a long way, but Nvidia will not fail as a whole. They have a way of relentlessly maximizing their position. CUDA keeps putting them in amazing positions on things like image recognition, AR, crypto, and now AI.

          For all the faults of their leaning in hard on these things for stock market and personal gains, Nvidia still has some of the best quality products around. That is their saving grace.

          They will not be the world's most valuable company once the bubble pops, and will probably never get back there again, but they will continue to be a decent enough business. I just want them to go back to talking about graphics more than AI; that would be nice.

        • user34283 12 hours ago ago

          I might as well say that no, it is not going to happen.

          As handwriting code is rapidly going out of fashion this year, it seems likely AI is coming for most of knowledge work next.

          And who is to say that manual labor is safe for long?

    • huslage 3 hours ago ago

      Apple makes AI inference and training servers by the thousands. They just don't sell them to anyone. They use them internally in their datacenters. They didn't drop the ball, they are playing a different game while not cannibalizing their existing customer base.

    • greggsy 11 hours ago ago

      They didn’t drop the ball at all?

      They want to be able to sell handsets, desktops and laptops to their customer base.

      Pursuing a product line that would divert finite silicon manufacturing resources away from that user base would be corporate suicide.

      Even nvidia has all but dropped support for its traditional gaming customer base to satisfy its new strategy.

      At any rate, the local inference capabilities are only going to get cheaper and more accessible over the coming years, and Apple are probably better placed than anyone to make it happen.

    • vlovich123 18 hours ago ago

      Don’t mistake stock market performance for revenue. NVIDIA makes ~200B annually, same as what Apple makes from iPhones. It’s a big market but GPUs aren’t just AI.

      • readitalready 18 hours ago ago

        I'm purely talking in terms of revenue. There's a huge demand for AI systems from personal workstations to datacenter servers, and Apple was one of the few companies in the world in a position to build complete systems for it.

        But for some reason Apple thought the sound recording engineer or the video editor market was more important... like, WTF dude? Have some vision at least!

        • Melatonic 3 hours ago ago

          Apple abandoned the pro video editor market many years ago with the trashcan Mac Pro - they're "prosumer" only at best.

        • aurareturn 17 hours ago ago

          Some people at Apple see it. That's why they added matmul to the M5 GPU and keep mentioning LM Studio in their marketing.

          • rudedogg 15 hours ago ago

            Their rule of only releasing major software updates once a year in June is holding them back IMO. Their local LLM apis were dated before macOS/iOS 26 was even released. Just because something worked 20 years ago doesn’t mean it works today, but I’m sure it’s hard to argue against a historically successful strategy internally.

        • GuB-42 7 hours ago ago

          Apple already seems to do pretty well when it comes to AI systems on personal computers. Datacenters simply aren't their business; it would need some major changes on their part. Also, AI is a bubble, it will burst eventually, and because Apple doesn't have the first-mover advantage Nvidia has, they have a lot to lose entering this market now.

          Sound recording engineers and video editors will not disappear after the AI bubble bursts, and Apple is wise to keep that market. Bursting the AI bubble will not make AI disappear, it will just end the crazy cashflows we are seeing now. And in that regard, with the capabilities of their hardware, Apple is in a pretty good spot I think.

        • vlovich123 16 hours ago ago

          It is more important. Both for the customer base that actually buys Apple machines, and for the cachet and mindshare of being used by the people who create American culture.

          Even if Apple had an amazing GPU for AI it wouldn’t matter hugely - local inference hasn’t taken off yet and cloud inference and training all uses servers where Apple has no market share and wasn’t going to get it since people had already built all the stacks around CUDA before Apple could even have awoken to that.

      • aurareturn 17 hours ago ago

        $280b and growing 70% YoY.

        $1t backlog in orders in next 2 years.

        • HerbManic 15 hours ago ago

          Those backlog orders are wild! One does wonder, if the bubble collapses or more global upsets happen in that time, how many of those will ever be fulfilled? Reality might be not so impressive, but even if it fell 80%, that is still $200B in revenue, and that is huge.

          Remember when a $1 billion valuation used to be a big thing? That is nothing compared with nowadays.

          • aurareturn 15 hours ago ago

            Just look at the price of H100 cloud rental prices. Demand is increasing.

    • root_axis 17 hours ago ago

      Nah, Apple made the right choice. Nobody except a niche market of hobbyists is interested in running tiny quantized models.

      • gizajob 13 hours ago ago

        About the same niche market as the people who bought the Apple I, and we know where that went.

        • wtallis 13 hours ago ago

          The Apple I was a pretty poor predictor of what mainstream mass-market computing was going to end up looking like. I don't think anybody has yet come up with the Apple II of local LLMs, let alone the VisiCalc or Windows 95.

    • gehsty 10 hours ago ago

      If my Grandma had wheels she would be a bicycle. Apple would need to transition from being a consumer electronics company to being a B2B retailer for data centre hardware to take advantage of this.

      Obviously Siri from WWDC two years ago was a disaster for Apple. Other than that they seem to have done pretty well navigating the new LLM world. I do think they would benefit from having their own SOTA LLM, but I don't think it is necessary for them. My mental model for LLMs and Apple is that they are similar to GarageBand: "now everyone can play an instrument" becomes "now anyone can make an app". Apple owns the interface to the user (I don't see anyone making nicer-to-use consumer hardware) and can use whatever stack in the background to deliver the technical features they decide to.

    • Almondsetat 12 hours ago ago

      If Apple doesn't offer a Linux product, they cannot be used seriously for headless computing tasks. They are adamant about controlling the whole stack, so unless they make some server version of macOS (and wait years for the community to accustom themselves to it), they will keep being a consumer/professional oriented company

    • EagnaIonat 6 hours ago ago

      > AI training as well as inference

      Inference has never been an issue for M series, and MLX just ramped it up further.

      You can do training on the latest MBPs, although for any serious models you are going to the cloud anyway.

    • bschwindHN 14 hours ago ago

      > They had the infrastructure and custom SoCs and everything. What a waste.

      What are they wasting, exactly?

    • kristopolous 13 hours ago ago

      this is what needs to come back with modern hardware and modern interconnect

      https://en.wikipedia.org/wiki/Xserve

    • zer00eyz 18 hours ago ago

      > something competitive with Nvidia for AI training

      Apple is counting on something else: model shrink. Everyone is now looking at "how do we make these smaller".

      At some point a beefy Mac Studio and the "right sized" model is going to be what people want. Apple dumped a 4 pack of them in the hands of a lot of tech influencers a few months back and they were fairly interesting (expensive tho).
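
      Rough sizing arithmetic (illustrative numbers only, not a claim about any particular model): the weights of a model with $P$ parameters quantized to $b$ bits take roughly

        $\text{weights} \approx P \times \tfrac{b}{8}\ \text{bytes}, \qquad \text{e.g. } 120\times10^{9} \times \tfrac{4}{8} \approx 60\ \text{GB},$

      which is why a big-RAM Studio plus a "right sized" model is an appealing combination (KV cache and activations add more on top).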

      • JumpCrisscross 17 hours ago ago

        > Apple is counting on something else: model shrink

        The most powerful AI interactions I've had involved giving a model a task and then fucking off. At that point, I don't actually care if it takes 5 minutes or an hour. I've queued up a list of background tasks it can work on, and that I can circle back to when I have time. In that context, smaller isn't even the virtue at hand–user patience is. Having a machine that works on my bullshit questions and modelling projects at one tenth the speed of a datacentre could still work out to being a good deal even before considering the privacy and lock-in problems.

        • jiggawatts 13 hours ago ago

          What "tooling" do you use to let AIs work unattended for long periods?

        • raincole 14 hours ago ago

          Cool? And it has nothing to do with what kind of consumer hardware Apple should sell. If your use cases are literally "bigger model better" then you should always use the cloud. No matter how much computing power Apple squeezes into their device, it won't be a mighty data center.

          • gizajob 13 hours ago ago

            For running the model once it’s been trained, all a datacenter does is give you lower latency. Once the devices have a large enough memory to host the model locally, the need to pay datacenter bills is going to be questioned. I’d rather run OpenClaw on my device plugged into a local LLM than rely on OpenAI or Claude.

      • root_axis 17 hours ago ago

        > At some point a beefy Mac Studio and the "right sized" model is going to be what people want.

        It's pretty clear that this isn't going to happen any time soon, if ever. You can't shrink the models without destroying their coherence, and this is a consistently robust observation across the board.

        • sipjca 17 hours ago ago

          I don’t think it’s about literally shrinking the models via quantization, but rather training smaller/more efficient models from scratch

          Smaller models have gotten much more powerful in the last 2 years. Qwen 3.5 is one example of this. The cost/compute requirements of running the same level of intelligence are going down

          • root_axis 6 hours ago ago

            There are no practically useful small models, including Qwen 3.5. Yes, the small models of today are a lot more interesting than the small models of 2 years ago, but they remain broadly incoherent beyond demos and tinkering.

          • HerbManic 15 hours ago ago

            I have said for a while that we need a sort of big-little-big model situation.

            The inputs are parsed with a large LLM. This gets passed on to a smaller hyper specific model. That outputs to a large LLM to make it readable.

            Essentially you can blend two model types. Probabilistic Input > Deterministic function > Probabilistic Output. Have multiple little deterministic models that are chosen for specific tasks. Now all of this is VERY easy to say, and VERY difficult to do.

            But if it could be done, it would basically shrink all the models needed. Don't need a huge input/output model if it is more of an interpreter.
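
            A minimal sketch of the shape I mean (everything here is made up for illustration; the "big" models are stand-in functions, and a real version would call actual LLMs):

              # Big-little-big, illustrative only: a large model parses free-form
              # input into a structured task, a small specialist handles the task
              # deterministically, and a large model writes the reply.

              def big_parse(user_input: str) -> dict:
                  # Stand-in for a large LLM that extracts a structured task.
                  # Here we pretend every input is a miles-to-km conversion.
                  value = float(user_input.split()[1])
                  return {"type": "miles_to_km", "value": value}

              def small_specialist(task: dict) -> float:
                  # Stand-in for a tiny, narrow, deterministic model/function.
                  return task["value"] * 1.60934

              def big_render(task: dict, result: float) -> str:
                  # Stand-in for a large LLM that writes the final answer.
                  return f"{task['value']} miles is about {result:.1f} km."

              def answer(user_input: str) -> str:
                  task = big_parse(user_input)
                  return big_render(task, small_specialist(task))

              print(answer("convert 26.2 miles to km"))

            The hard part is the router in the middle: picking, and trusting, the right specialist for each task.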

          • kyboren 15 hours ago ago

            Yes, but bigger models are still more capable. Models shrinking (iso-performance) just means that people will train and use more capable models with a longer context.

            • sipjca 13 hours ago ago

              Of course they are! Both are important and will be around and used for different reasons

      • Forgeties79 18 hours ago ago

        Cheaper than what you’d expect though. You could get a nice setup for $20-40k 6mo ago. As far as enterprise investments go, that’s a rounding error.

        • a1o 17 hours ago ago

          Not all enterprises are the same. I imagine many companies have different departments working toward local optima, so someone who could benefit from it to get more productivity might not have access to it, because the department doing hardware acquisition is being measured in isolation.

          • Forgeties79 6 hours ago ago

            I think it’s a little unnecessary to lecture somebody on HN about how enterprises come in different shapes and sizes. It’s pretty clear what I’m implying here if you aren’t actively trying to assume the most reduced, least charitable version of my statement.

        • zer00eyz 17 hours ago ago

          Drop that down to 5k, and make it useful.

          Give every iPhone family an in-house Siri that will deal with canceling services and pursuing refunds.

          Your customer screw-up results in your site getting an agent-driven DDoS on its CS department till you give in.

          Siri: "Hey User, here's your daily update, I see you haven't been to the gym, would you like me to harass their customer service department till they let you out of their onerous contract?"

          • Forgeties79 5 hours ago ago

            I’m running a modest setup using a Mistral model (24B) on a 9070 (AMD) and 32GB of RAM. $1800 machine at the time I built it. It ultimately boils down to what you want to do with it. For me, it’s basically a drafting tool. I use it to break through writer’s block, iterate, or just throw out some ideas. Sometimes summarize, but that can be hit or miss.

            I don’t need the latest and greatest, and I fine-tuned LM Studio enough that I get acceptable results in 30 to 90 seconds that help me keep moving ahead. I am not a software engineer, and I am definitely not as much of a “coder” as the average person on HN. So if I can do it for less than $2000, I bet a lot of (smarter/more experienced coding) people could see great results for $5000.

            You can get an M3 ultra Mac studio with 96gb ram for $4000. If you’re willing to go up to $6k it’s 256gb. Wayyyyy more firepower than my setup. I imagine plenty powerful for a lot of people.

    • zer0zzz 13 hours ago ago

      How is this dropping the ball? I think they dropped the ball a long time ago by waiting until M5 to do integrated tensor cores instead of only the separate ANE that was present before.

      For multi-gpu you can network multiple Macs at high speed now. Their biggest disadvantage to Nvidia right now is that no one wants to do kernel authoring in Metal. AMD learned that the hard way when they gave up on OpenCL and built HIP.

    • etchalon 18 hours ago ago

      Nothing is a bigger market than the iPhone, let alone expensive niche machines.

  • IFC_LLC 20 hours ago ago

    I think that's an expected thing.

    The G5 was the thing. And companies were buying G5s and other Macs like that all the time, because you were able to actually extend them with video cards and some special equipment.

    But now we have M chips. You don't need video cards for M chips. You kinda do, but truthfully, it's cheaper to buy a beefier Mac than to install a video card.

    The Pro was a great thing for designers and video editors, those freaks who need to color-calibrate monitors. And right now even the mini works just fine for that.

    And as for extensions - gone are the days of PCIe. Audio cards and other specialized equipment work and live just fine on USB-C and Thunderbolt.

    I remember how many months I spent trying to make a Creative Labs Sound Blaster work on my 486 computer. At that time you had to have a card to extend your system. Right now I'm using a Scarlett 2i2 from Focusrite. It works over USB-C with my iPhone, iPad and Mac. DJI's mics work just as well.

    Damn, you can buy an oscilloscope that works over USB-C or the network.

    It's not the Mac's or Apple's fault. We actually live in an age where systems are quite independent and do not require direct installations.

    • labcomputer 15 hours ago ago

      > And as for extensions - gone are the days of PCIe. Audio cards and other specialized equipment works and lives just fine on USB-C and Thunderbolt.

      Grumble grumble. Well, there used to be more than audio cards, back before the first time Apple canceled the Mac Pro and released the 2013 Studio^H^H Trash Can^H^H Mac Pro.

      Then everyone stopped writing Mac drivers because why bother. So when they brought the PCIe Pro back in 2019, there wasn't much to put in it besides a few Radeon cards that Apple commissioned.

      The nice thing about PCIe is the low latency, so you can build all sorts of fun data acquisition and real time control applications. It's also much cheaper because you don't need multi-gigabit SERDES that can drive a 1m line. That's why LabVIEW (originally a Mac exclusive) and NI-DAQ no longer exist on Mac.

      USB-C oscilloscopes work because the peripheral contains all the hardware, so it doesn't particularly matter that the device->host latency is high. They also don't require much bandwidth because triggering happens inside the peripheral, and only the triggered waveform record is sent a few dozen times per second.

      > It's not the Mac's or Apple's fault. We are actually live in the age where systems are quite independent and do not require direct installations.

      It is, and we don't. Maybe you don't notice it, but others do.

      • rkagerer 15 hours ago ago

        > USB-C oscilloscopes work because the peripheral contains all the hardware, so it doesn't particularly matter that the device->host latency is high.

        Yeah, that's basically the way accessories have gone. Powerful MCUs and SoCs have gotten cheap enough to make it viable. Makes me a little sad though, I liked having low-latency "GPIOs" straight to software running on my PC (but I'm thinking as far back as the parallel port... love how simple that was).

        • jmalicki 14 hours ago ago

          It's not just that - anything working with analog signals benefits hugely from not living inside the complete EM interference nightmare of the computer case.

      • buccal 14 hours ago ago

        Well there is https://www.crowdsupply.com/eevengers/thunderscope

        With USB4/TB you can get quite far in both latency and throughput. Actually, there are network adapters with a TB connection that are just a TB-to-PCIe adapter plus a PCIe network card.

    • magic_hamster 20 hours ago ago

      > gone are the days of PCIe.

      My GPU, NVMe drives and motherboard might disagree.

      • rayiner 19 hours ago ago

        The top Mac Studio has six thunderbolt 5 ports, each of which is a PCIe 4.0 x4 link. Each is a 8GB/sec link in each direction, which is a lot. Going from x16 down to x4 has less than a 10% hit on games: https://www.reddit.com/r/buildapc/comments/sbegpb/gpu_in_pci...
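
        Back-of-the-envelope for that figure (assuming the tunnel really delivers a full PCIe 4.0 x4's worth): PCIe 4.0 runs 16 GT/s per lane with 128b/130b encoding, so

          $16\ \mathrm{GT/s} \times 4\ \text{lanes} \times \tfrac{128}{130} \div 8 \approx 7.9\ \mathrm{GB/s}$ per direction,

        which is where the ~8GB/sec comes from.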

        • mrheosuper 16 hours ago ago

          Your example uses a GTX 1080, which is a very old GPU. Current flagship consumer GPUs will take a harder hit from low-bandwidth PCIe.

          • rayiner 13 hours ago ago

            Here’s more recent HW: https://www.pugetsystems.com/labs/articles/impact-of-gpu-pci...

            This is an RTX4080.

            “In the more common situations of reducing PCI-e bandwidth to PCI-e 4.0 x8 from 4.0 x16, there was little change in content creation performance: There was only an average decrease in scores of 3% for Video Editing and motion graphics. In more extreme situations (such as running at 4.0 x4 / 3.0 x8), this changed to an average performance reduction of 10%.”

            • AnthonyMouse 11 hours ago ago

              A 10% performance reduction seems like a lot to be leaving on the table.

              • rayiner 10 hours ago ago

                Not really.

            • mrheosuper 11 hours ago ago

              The article is nearly 3 years old, and the 4080 was not even top of the line at the time it was written.

              Still, a 10% difference is considerable, almost a gen-to-gen difference

        • zozbot234 16 hours ago ago

          PCIe 4.0 x4 is going to be a huge bottleneck; even recent SSDs have more throughput (they use PCIe 5.0), never mind GPUs.

        • washadjeffmad 18 hours ago ago

          Gaming isn't what people are using Mac Studios for. Thunderbolt also isn't a substitute for OCuLink.

          • rayiner 18 hours ago ago

            Sure, but it’s probably reflective of the fact that GPUs generally aren’t PCIe-bandwidth bound. Also, TB5 and OCuLink 2 both use PCIe 4.0 x4 links.

            • angoragoats 18 hours ago ago

              Oculink is generally faster than TB5 despite them both using PCIe 4.0, because Oculink provides direct PCIe access whereas Thunderbolt has to route all PCIe traffic through its controller. The benchmarks show that the overhead introduced by the TB5 controller slows down GPU performance.

              • wtallis 15 hours ago ago

                It's not just the controllers; the Thunderbolt protocol itself imposes different speed limits. The bit rates used by Thunderbolt aren't the same as PCIe, and PCIe traffic gets encapsulated in Thunderbolt packets.

              • rayiner 13 hours ago ago

                Apple Silicon has an integrated thunderbolt controller so that should have less latency than PCs that use a discrete thunderbolt controller.

                • adrian_b 7 hours ago ago

                  Many recent laptop CPUs from Intel and AMD have integrated Thunderbolt controllers (i.e. USB 4), so that has not been a difference for a long time.

                • angoragoats 6 hours ago ago

                  Maybe; I'm unable to find any benchmarks that specifically compare PCs with TB to Macs to test this. But there is certainly still overhead with TB no matter what, and therefore it'll never be as fast as Oculink.

            • izacus 12 hours ago ago

              That's just blatantly wrong, the performance loss of GPUs is very well documented and gets worse as you go towards higher end models. We're talking 30%+ loss of performance here.

          • spacedcowboy 12 hours ago ago

            Um, I have an M3 Ultra 512GB on my desk for development. Love me some Baldur’s Gate 3, everything turned up to 11…

        • matja 9 hours ago ago

          Yeah 80GB/s total I/O bandwidth is a lot for a Mac, but desktop PCs have been doing 1TB/s (128x PCIe5) for years (Threadripper etc).

          • rayiner 8 hours ago ago

            Sure. And lots of people need all that I/O. But my point is that it’s not like the Mac Studio has no I/O. The outgoing Mac Pro only has 24 total lanes of PCIe 4.0 going to the switch chip that’s connected to all the PCI slots. The advent of externally routed PCIe is a development of the last few years that may have factored into the change in form factor.

      • aprilnya 19 hours ago ago

        - GPU is integrated into the SoC
        - Surprisingly, it is possible to plug a drive into a TB/USB port

        …so what do you actually need PCIe for?

        • jltsiren 18 hours ago ago

          High-end Macs have moved to PCIe 5.0 speeds in their internal drives. Thunderbolt 5 is not fast enough to get the same performance from external ones.

          Thunderbolt is also too slow for higher-end networks. A single port is already insufficient for 100-gigabit speeds.

          • vlovich123 18 hours ago ago

            When people talk about 100-gigabit networks for Macs, I'm really curious what kind of network you run at home and how much money you spent on it. Even at work I’m generally seeing 10-gigabit network ports, with 100 gigabit+ only in data centers, where Macs don’t have a presence

            • jltsiren 18 hours ago ago

              Local AI is probably the most common application these days.

              Apple recently added support for InfiniBand over Thunderbolt. And now almost all decent Mac Studio configurations have sold out. Those two may be connected.

            • adrian_b 7 hours ago ago

              100 Gb/s Ethernet is likely to be expensive, but dual-port 25 Gb/s Ethernet NICs are not much more expensive than dual-port 10 Gb/s NICs, so whenever you are not using the Ethernet ports already included by a motherboard it may be worthwhile to go to a higher speed than 10 Gb/s.

              If you use dual-port NICs, you do not need a high-speed switch, which may be expensive; you can connect the computers directly into a network and configure them as either Ethernet bridges or IP routers.

            • Forgeties79 18 hours ago ago

              I work in media production and I have the same thought constantly. Hell I curse in church as far as my industry is concerned because I find 2.5 to be fine for most of us. 10 absolutely.

              • AdamN 8 hours ago ago

                100gbps is going to be for mesh networks supporting clusters (4 Mac Studios let's just say) - not for LAN type networks (unless it's in an actual datacenter).

              • nine_k 18 hours ago ago

                I suppose the throughput is not the key, latency is. When you split an operation that normally ran within one machine between two machines, anything that crosses the boundary becomes orders of magnitude slower. Even with careful structuring, there are limits to how little and how rarely you can send data between nodes.

                I suppose that splitting an LLM workload is pretty sensitive to that.
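
                As a rough illustration (assuming a hypothetical model with hidden size 8192 in fp16, split pipeline-style across two boxes), the activation crossing the link per token is only about

                  $8192 \times 2\ \text{bytes} = 16\ \mathrm{KB} \;\Rightarrow\; 16\ \mathrm{KB} / 10\ \mathrm{GB/s} \approx 1.6\ \mu\mathrm{s},$

                so the per-hop round-trip latency, not the link's throughput, is what you actually end up paying for.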

        • pjmlp 13 hours ago ago

          To have lots of them plugged in at once: high-end audio cards, electronics integrations, disks, without having cables all over the place.

        • gizajob 13 hours ago ago

          Things that aren’t graphics cards, such as very high bandwidth video capture cards and any other equipment that needs a lot of lanes of PCI data at low latency.

        • HeWhoLurksLate 19 hours ago ago

          but what about second GPU?

          • wtallis 13 hours ago ago

            Multiple GPUs was tried, by the whole industry including Apple (most notably with the trash can Mac Pro). Despite significant investment, it was ultimately a failure for consumer workloads like gaming, and was relegated to the datacenter and some very high-end workstations depending on the workload.

            Multi-GPU has recently experienced a resurgence due to the discovery of new workloads with broader appeal (LLMs), but that's too new to have significantly influenced hardware architectures, and LLM inference isn't the most natural thing to scale across many GPUs. Everybody's still competing with more or less the architectures they had on hand when LLMs arrived, with new low-precision matrix math units squeezed in wherever room can be made. It's not at all clear yet what the long-term outcome will be in terms of the balance between local vs cloud compute for inference, whether there will be any local training/fine-tuning at all, and which use cases are ultimately profitable in the long run. All of that influences whether it would be worthwhile for Apple to abandon their current client-first architecture that standardizes on a single integrated GPU and omits/rejects the complexity of multi-GPU setups.

        • wpm 15 hours ago ago

          Video capture

          I/O expansion

          Networking

    • tylerflick 19 hours ago ago

      > gone are the days of PCIe

      Thunderbolt is external PCIe.

      • eigenspace 11 hours ago ago

        No, oculink is external PCIe.

        Thunderbolt can kinda-sorta mimic PCIe, but it needs to chop up the PCIe signal into smaller packets, transmit them and then put them back together and this introduces a big jump in latency, even when bandwidth can be rather high.

        For many applications this isn't a big deal, but for others it causes major problems (gaming being the big one, but really anything that's latency sensitive is going to suffer a lot).

    • TheCondor 8 hours ago ago

      I’m at peace with the memory, and PCIe basically flows over Thunderbolt. At one point external GPUs were a thing. I think what I’d really love would be a couple of M.2 slots in my Studio for storage expansion.

    • oefrha 18 hours ago ago

      Does the M5 series have a better video encoding chip/chiplet/whatever it is called than the M4 series? Because while I’m happy with my M4 Pro overall, H.264 encoding performance with videotoolbox_h264 is disappointingly basically exactly the same as a previous 2018 model Intel Mac mini, and blown out of the water by NVENC on any mid to high end Nvidia GPU released in the last half-decade, maybe even full decade. And video encoding is a pretty important part of a video editing workflow.

      • zamadatix 18 hours ago ago

        If you mean editing, ProRes is a better fit; if you mean final export, software always beats hardware encoders in terms of quality; if you mean mass H.264 transcoding, a Mac workstation is probably not the right place though.

    • angoragoats 18 hours ago ago

      > gone are the days of PCIe

      This is a wild and very wrong take.

      Just about every single consumer computer shipped today uses PCIe. If you were referring to only the physical PCIe slots, that's wrong too: the vast majority of desktop computers, servers, and workstations shipped in 2025 had physical PCIe slots (the only ones that didn't were Macs and certain mini-PCs).

      The 2023 Mac Pro was dead on arrival because Apple doesn't let you use PCIe GPUs in their systems.

      • wtallis 15 hours ago ago

        > This is a wild and very wrong take.

        That's what happens when you quote only part of a statement. Taken in context, it was referring to a very real decline in expansion cards. Now that NICs (for WiFi) and SSDs have been moved into their own compact specialized slots, and Ethernet and audio have been standard integrated onto the motherboard itself for decades, the regular PCIe slots are vestigial. They simply are not widely used anymore for expanding a PC with a variety of peripherals (that era was already mostly over by the transition from 32-bit PCI to PCIe).

        Across all desktop PCs, the most common number of slots filled is one (a single GPU), and the average is surely less than one (systems using zero slots and relying on integrated graphics must greatly outnumber systems using more than one slot).

        Even GPUs themselves are a horrible argument in favor of PCIe slots. The form factor is wildly unsuitable for a high-power compute accelerator, because it's ultimately derived from a 1980s form factor that prioritized total PCB area above all else, and made zero provisions for cards needing a heatsink and fan(s).

        • AnthonyMouse 11 hours ago ago

          > Ethernet and audio have been standard integrated onto the motherboard itself for decades

          Unless the one it comes with isn't as fast as the one you want, or they didn't integrate one at all, or you need more than one.

          > Across all desktop PCs, the most common number of slots filled is one (a single GPU), and the average is surely less than one (systems using zero slots and relying on integrated graphics must greatly outnumber systems using more than one slot).

          There is an advantage in having an empty slot because then you can put something in it.

          Your SSD gets full, do you want to buy one which is twice as big and then pay twice as much and screw around transferring everything, or do you want to just add a second one? But then you need an empty slot.

          You bought a machine with an iGPU and the CPU is fine but the iGPU isn't cutting it anymore. Easy to add a discrete GPU if you have somewhere to put it.

          The time has come to replace your machine. Now you have to transfer your 10TB of junk once. You don't need 100Gbps ethernet 99% of the time, but using the builtin gigabit ethernet for this is more than 24 hours of waiting. A pair of 100Gbps cards cuts that >24 hours down to ~15 minutes. If the old and new machines have an empty slot.
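
          Back-of-the-envelope for that transfer (assuming the links actually run at line rate):

            $\frac{10\ \mathrm{TB} \times 8}{1\ \mathrm{Gb/s}} \approx 80{,}000\ \mathrm{s} \approx 22\ \mathrm{h}, \qquad \frac{80\ \mathrm{Tb}}{100\ \mathrm{Gb/s}} = 800\ \mathrm{s} \approx 13\ \mathrm{min};$

          real-world protocol and disk overhead is what pushes the gigabit case past a day.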

          • Orygin 8 hours ago ago

            My motherboard has 3 16x PCIe slots, but realistically only one is used for the GPU as the other two are under the mastodon of a cooler needed by the GPU. Can't use a 100G network card if I can't fit it under the GPU. Can't not use the GPU as I don't have an iGPU in my CPU.

            He's not advocating for removing PCIe slots, but in practice it's needed by way fewer consumers than before. There are probably more computers being sold right now without any PCIe slot than there are with more than one.

            • AnthonyMouse 9 minutes ago ago

              > My motherboard has 3 16x PCIe slots, but realistically only one is used for the GPU as the other two are under the mastodon of a cooler needed by the GPU.

              Discrete GPUs generally consume two PCI slots, not three, and even the mATX form factor allows for four PCI slots (ATX is seven), which gives board makers an obvious thing to do. Put one x16 slot at the top and the other(s) lower down, and use the space immediately under the top x16 slot for an x1 slot, which is less inconvenient to block, or an M.2 slot, which can be used even if there is a GPU hanging over it. This configuration is currently very common.

              It also makes sense to put one of the x16 slots at the very bottom because it can either be used for a fast single height card (>1Gb network or storage controller) or another GPU in a chassis with space below the board (e.g. mATX board in ATX chassis) without blocking another slot.

      • johnebgd 17 hours ago ago

        My post-mortem sentiments exactly. The lack of Nvidia GPU support for the M series Mac Pro models kneecapped the platform for professionals. If Apple had included that, they’d be the de facto professional workstation for many more folks working in AI tech.

        • chatmasta 9 hours ago ago

          On the other hand it forced developers to invest more in Metal which looks like an investment starting to bear fruit.

      • hazz99 18 hours ago ago

        Plus modern interconnects like CXL are also layers on top of PCIe, and USB4 supports PCIe tunnelling. PCIe is a big collection of specifications, the physical/link/transaction layers can be mixed and matched and evolved separately.

        I don't see it disappearing, at most we'll get PCIe 6/7/etc.

      • GeekyBear 18 hours ago ago

        Thunderbolt is PCIe running over a cable.

        • labcomputer 15 hours ago ago

          Sure, with expensive line drivers to send the data 1+ meters, instead of 10ish cm. And with only 2 channels instead of up to 16.

        • angoragoats 18 hours ago ago

          Yes, I know; this is part of what I was implying when I said "Just about every single consumer computer shipped today uses PCIe."

          I don't understand how this is a response to anything I said.

      • sundvor 17 hours ago ago

        Yup the 4090 and SoundBlaster ZXR in my AM5 7800X3D system would both like to upvote your reply.

        • Arcanum-XIII 15 hours ago ago

          Sound cards work fine on USB 2 (RME, for example, has interfaces on USB 2 that can manage 30/30 I/O at 192kHz without issue at low latency, if you have the CPU to deal with the load)

          With USB3 you have 94 i/o…

          For years PCI has not been mandatory for audio. UAD, Apogee, RME and other high-end brands will push you toward USB, or even only offer their devices as USB… even Thunderbolt is not needed here.

          And that’s been the case for a while! My Fireface UC from 15 years ago can deal with 16 channels at 96kHz at a 256-sample buffer. On PC and Mac.

          • Intermernet 15 hours ago ago

            Personally, I'd love to see / read / hear more about the way RME do what they do. I know they basically update the fpga on the devices in lock step with the drivers, which allows them to do all sorts of magic (low CPU usage, zero latency recording of each raw channel being one of them) but I'd love an interview or article from some of the hardware and software people from RME. They have been rock solid and basically future proof for decades and I think the entire hardware and software industries could learn something from the way they do things.

            Incredible products, definitely worth the premium.

          • wpm 15 hours ago ago

            Then they should start putting internal high-powered USB ports inside the case where I can literally bolt this shit into place, because my desk is a goddamn mess of cables and dongles and boxes that don't stack or interlock or interface at all, and I am so so utterly tired of being gaslit into believing that they're just as good as a fucking slot.

            • balamatom 11 hours ago ago

              Sounds like a 9.5" mini rack could help with the stacking, see Geerling.

          • sundvor 13 hours ago ago

            I have about 14 or 15 USB devices in addition to my 4 monitors, and whilst I'm sure you're right I'm very happy to have a high quality soundcard that is not part of that mix.

          • gizajob 13 hours ago ago

            Compared to video data and the speed the CPU is running at, audio trickles in at a snail's pace.

    • Lucasoato 20 hours ago ago

      Scarlett 2i2 has been amazing for me, I’d say unbeatable in terms of quality/price ratio.

    • whalesalad 18 hours ago ago

      it's not just about pcie, it's socketed memory and disks. I guess disks are just pcie technically - but memory sockets are great. hell, in the pro chassis I am surprised they didn't opt for a socketed cpu that could be upgraded.

      • zozbot234 16 hours ago ago

        The latest M2-based Mac Pro did not take socketed memory AIUI.

        • angoragoats 6 hours ago ago

          This is correct; Apple has refused to implement socketed memory on any M-series machine.

  • _ph_ 10 hours ago ago

    The latest Mac Pro really didn't make much use of its size, as there were too few useful things to put into it. Especially as the GPU is now part of the package anyway. Also, the Mac Studio is the perfect workstation for the desk.

    Still, there are a few things which could be improved relative to the current Studio. First, the ability to easily clean the internals from dust. You should be able to just lift the lid and clean the computer. Also, it would be great to have one Mac into which you could just plug a bunch of NVMe disks.

    On the other side, they might replace the Mac Pro with a rack-mountable machine as the demand for ARM servers in the cloud rises.

    • SwtCyber 8 hours ago ago

      Studio for desks, something rackable for scale

  • GeekyBear 19 hours ago ago

    The Ultra variants of the M series chips had previously consisted of two of the Max chips bonded together.

    The M5 generation Pro and Max chips have moved to a chiplet based architecture, with all the CPU cores on one chiplet, and all the GPU cores on another.

    https://www.wikipedia.org/wiki/Apple_M5

    So what will the M5 Ultra look like?

    If you integrate two CPU chiplets and two GPU chiplets, you're looking at 36 CPU cores, 80 GPU cores, and 1228 GB/s of memory bandwidth.

    • muro 10 hours ago ago

      Or it could be the same CPU as in pro/max with more GPU chiplets.

      • GeekyBear 4 hours ago ago

        Sure.

        AI workloads would really benefit from having more RAM, GPU cores, and memory channels.

        • bigyabai 2 hours ago ago

          Something tells me this mentality is how Apple ended up with thousands of idling Private Cloud Compute servers.

          But sure, more GPU cores will definitely fix it this time.

  • mrkpdl 14 hours ago ago

    The 2019 Mac Pro’s main purpose was to provide much needed reassurance that Apple cared about the Mac. In prior years the quality of the Macs had fallen across all product lines, and the question of whether Apple cared about the Mac at all was a legitimate one.

    This Mac Pro was about resetting and giving a clear signal that Apple was willing to invest in the Mac far more than it was about ‘slots’.

    Today, Mac hardware is the best it has ever been, and no one is reasonably questioning Apple’s commitment to Mac hardware.

    So it makes sense for the Mac Pro to make a graceful exit.

    • hbn 3 hours ago ago

      The hardware teams have done a great job proving Apple cares about the Mac.

      It'd be nice if the people in charge of the software would get the message.

  • brailsafe 20 hours ago ago

    Pour one out for John Siracusa

    • sgerenser 20 hours ago ago

      I guess not enough people bought the shirt: https://cottonbureau.com/p/4RUVDA/shirt/mac-pro-believe-dark...

      • philistine 8 hours ago ago

        Or too many people bought the shirt instead of a Mac Pro.

        • bombcar 8 hours ago ago

          The shirt was a bit cheaper. And probably a bit faster processing, too.

      • bombcar 18 hours ago ago

        Is it strange that only now do I want the shirt?

        • brailsafe 15 hours ago ago

          I think we're all there with you :)

          • wpm 15 hours ago ago

            They better do another run.

            • brailsafe 14 hours ago ago

              Mac Pro... We Believed

    • longislandguido 18 hours ago ago

      Here's an interesting fact: one of the more famous and fanatical fanboy Mac Pro users was the late radio host Rush Limbaugh (he owned four of them), who dedicated an entire segment to the topic on his normally all-politics show when Apple dropped the ball on Thunderbolt back in the day.

    • jryio 19 hours ago ago

      If you're reading this, we're sorry John!

    • UqWBcuFx6NV4r 11 hours ago ago

      But what will we do without “this doesn’t work on intel macOS” corner!?

      • brailsafe 2 hours ago ago

        It'll be replaced with an extended "shill arbitrary Apple product corner", iPhones r still interesting right, or should we replace our cars again!?

        Although to be fair the latest two eps have been refreshingly technical

    • kuekacang 14 hours ago ago

      F

  • w-m a day ago ago

    While the trash can generation was somewhat present and around, I don't think I ever saw a cheese grater in the flesh. Did it have any users? Were there any actual useful expansion cards? Did anybody continue buying this at all after it didn't get the M3 Ultra bump that the Mac Studio got last year?

    • waz0wski 20 hours ago ago

      I just replaced a 2009 MacPro

      It had many hardware upgrades over the years - upgraded CPUs, 128GB RAM, 4TB NVME storage, a modern AMD GPU, USB3/c, thunderbolt, etc

      The only reason it got replaced is because it became too much of a PITA to keep modern OSX running on it (via OCLP)

      Replaced it with an M4 Max Mac Studio, which is a nice, faster machine, but with no ability to upgrade anything and much worse hardware resale value on M-series, I'll have to replace it in 2-3 years

      • bombcar 8 hours ago ago

        At the price of the Mac Pro you could buy two Mac Studios (at least) - one today and one three or more years in the future.

      • ProllyInfamous 18 hours ago ago

        I'm a former 4,1 user, myself — replaced with an M2Pro mini Jan 2023 (finally retired fully 2025).

        Absolutely recommend you purchase the 4-bay Terramaster external enclosure — gives you four SATA slots that are hot-swappable (unlike MacPro's). 10gbps via USB-C.

        • Marsymars 16 hours ago ago

          Are SMART attributes independently readable on that? (Or any multi-drive USB-C enclosures?)

          • ProllyInfamous 2 hours ago ago

            It does not appear that SMART is supported on either version of my TerraMaster enclosures.

            Still an inexpensive solution to help ease your transition away from MacPro"5,1"land.

            As USB-C is a physical form factor (capable of supporting multiple protocols), I would think that the ability to have multidrive external SMART support would be up to the vendor's choice of datachip/datastream. Again: my Acasis does support SMART for nVMEs.

          • ProllyInfamous 5 hours ago ago

            >or any USB-C enclosure

            SMART is supported on my external Acasis NVMe.

            ----

            I have two TerraMaster enclosures; this one [typing] is years older [only 5gbps, macOS Ventura] and shows `SMART: not supported` [1]

            [1] It's mirrored WD_blacks (RAID1) so I have at least some redundancy... I know: a RAID does NOT count as "backed-up".

            ----

            Within an hour I'll have checked the newer system (I suspect it'll be similar — definitely faster!).

            #TodayIlearnt

      • KennyBlanken 15 hours ago ago

        If you were using a 2009 Mac Pro for work until a year or two ago then you seriously need to think about how much your time is worth and how much of your time you were wasting by "saving money" on not buying a new computer.

        If you're self employed, the cost of equipment and depreciation make hanging on to that 2009 system even more of a poor choice.

        If you were still using a 2009 system I don't see why you'd "have to replace in 2-3 years."

    • m463 a day ago ago

      The cheese grater mac pros were very popular, in that people got them and continued to use them.

      The most notable feature was that there were mac-specific graphics cards, and you could also run PC graphics cards (without a nice boot screen). They had a 1.4kw power supply I believe, and there was extra pcie power for higher-end graphics cards. You could upgrade the memory, add up to 6 or more sata hard disks (2 in dvd slot). You could run windows, dual booting if you wanted and apple supported the drivers.

      The 2013 was kind of a joke. small and quiet, but expansion was minimal.

      2019 looked beefy, but the expansion was more like a cash register for Apple, not really democratic. There were 3rd party SATA hard disk solutions, at least.

      the 2023 model was basically a joke. I think maybe the pcie slots were ok for nvme cards, not a lot else (unless apple made it).

      nowadays an apple computer is more like an iphone - apple would prefer if everything was welded shut.

    • macintux 18 hours ago ago

      My first non-Linux PC was a cheese grater, way overkill for my needs but served me well for many years.

  • lukeh 17 hours ago ago

    Still rocking a 2019 (Intel) Mac Pro here, all slots filled with various Pro Tools and UAD DSP cards, SSD, GPU, etc. I'm planning to get as much mileage out of it as I can. I'm sure a Studio would be more performant, but the Thunderbolt to PCIe chassis are not cheap.

    • alifeinbinary 11 hours ago ago

      I’m in the same boat. I bought mine back in 2021 and honestly I don’t regret my decision. It’s my main software development and music production computer, plus every Sunday night I get to play Counter-Strike with the boys by dual-booting into Windows. I’m able to service, repair and upgrade it myself, and one day when I’m ready to move on I’ll use it as my home server. The crazy thing is that my next upgrade will most likely be going back to a MacBook Pro, because the Thunderbolt connectivity will be able to handle the Blackmagic 4-camera broadcast capture card and NVMe PCIe storage card that are in my Mac Pro right now through some external enclosure.

      The only real drawback that I’ve experienced with the Mac Pro has been the lack of support for large language models on the AMD GPU due to Apple's lacklustre Metal drivers, but I’ve been working with a couple of other developers to port a MoltenVK translation layer to Ollama that enables LLMs on the GPU. We’re trying to get it on the main branch since testing has gone well.

      One thing a lot of commenters in this thread are overlooking is that this is the death knell for repairable and upgradable computing on the Mac, which is super disappointing.

  • internet2000 20 hours ago ago

    I hope I can get a cheap one on Craigslist eventually, just for the novelty. It looks so cool.

  • al_borland a day ago ago

    They've been trying to kill the Mac Pro for over a decade. I wonder how long before they backtrack again? It seems like they should at least have a migration path for users who needed the expansion cards the Mac Pro supported. Pushing them to the PC seems pretty bad.

    Apple's new "Pro" definition seems more like "Prosumer".

    • wmf 19 hours ago ago

      The migration path is Thunderbolt PCIe enclosures (basically eGPU enclosures but you don't have to use a GPU).

      • bigyabai 17 hours ago ago

        > but you don't have to use a GPU

        That's a cute way of saying that GPUs aren't supported.

      • eigenspace 10 hours ago ago

        Not only are third party GPUs not supported on apple silicon, but thunderbolt has significantly more latency and lower bandwidth than 'real' PCIe implementations, even ones with similarly cut down lanes like oculink.

        Apple tried before to push everything out into external PCIe enclosures and people hated it. Maybe this'll go differently this time, the Mac Studio is certainly a much more compelling offering than the trashcan Mac Pro. But I think this is still a shitty and painful situation for a lot of specific users.

    • bigyabai a day ago ago

      The form-factor always felt like a weird fit for Apple Silicon. With the Intel boxes it was understandable; you want a few liters of free space for a couple AMD cards or some transcode hardware. The system was designed to be expandable, and the Mac Pro was the apex of Apple's commitment to that philosophy after bungling the trashcan Mac Pro.

      None of the Apple Silicon hardware can seemingly justify this form factor, though. The memory isn't serviceable, PCIe devices aren't really supported, the PSU doesn't need much space, and the cooling can be handled with mobile-tier hardware. Apple's migration path is "my way or the highway" for Mac Pro owners.

      • redwall_hp a day ago ago

        I suspect we'll start seeing higher-spec Mac Studio options.

        One of those with an M* Ultra, and some sort of Thunderbolt storage expansion would probably cover most of the Pro's use cases. And Apple probably doesn't want to deal with anything more exotic than those.

      • al_borland a day ago ago

        Their justification for the form factor, when it was released, was that pro users need various PCI cards to interface with some of their equipment, and this would allow them to do that.

        It seemed like the guts of the Mac Pro were essentially shoved inside of a box and stuck in the corner of the tower. It would seem like they could decouple it and sell a box that pro users could load cards into (like other companies do for eGPUs). It wouldn’t feel like a very Apple-like setup, but it would function and allow Apple to focus where they want to focus without simply leaving those users behind.

        I suppose the other option would be to dispense with the smoke and mirrors and let people slot a Mac Studio right into the Mac Pro tower, so it could be upgraded independently of the tower.

        The alternative is people leave the platform or end up with a bunch of Thunderbolt spaghetti. Neither of which seem ideal.

        • testing22321 21 hours ago ago

          It was always strange that the Apple Silicon Mac Pro kept the 1.3kW power supply, which was massive overkill.

          I always hoped we’d get a consumer version of what they have internally - 10 or 20 or more Apple Silicon chips for 1000 cores or so.

          • bombcar 18 hours ago ago

            A bunch of the Mac Pro decisions seem to have been more driven by "we have a warehouse of these parts" than "this is what the system needs".

            • dijit 16 hours ago ago

              A lot of Apples offerings feel a bit like that actually.

              To be expected when lord of the supply chain Tim Cook is running the show.

  • musicale 17 hours ago ago

    I feel like Apple is going back to the days of toaster "appliance" Macs. No slots, no upgrades, just buy a new one in 3-7 years.

    • steve-atx-7600 16 hours ago ago

      Even the toaster appliance Macs had upgradeable RAM and hard drives though. But it does seem like that to me also.

    • steve-atx-7600 16 hours ago ago

      I hate how I can't buy a new Apple Silicon Mac with upgradeable RAM or SSD. Is there a legit reason why they couldn't make these things upgradeable at all, even on a Studio machine? 4TB is the smallest SSD I ever want in a new machine, but buying one from Apple is stupidly expensive. Back in the Intel days, I'd buy a MacBook Pro, for example, with less RAM and a smaller SSD than the max available and then upgrade to much cheaper aftermarket parts a few years later when prices dropped.

      I'm still not going to use Windows or Linux. Don't want to be an IT guy on the side just to keep Linux machines working. This may not be obvious to some unless you try to use printers and scanners that are more than 5 years old and want them to be on the network. And you don't install virtualization tools like VMware that require compiling and loading kernel drivers, which end up being incompatible with new OS releases... etc.

      Windows is just too much of a painful acceptance of mediocrity and apathy in product design for me.

      • aloha2436 15 hours ago ago

        > Is there a legit reason why they couldn't make these things upgradeable at all even on a studio machine?

        For the SSD, no. For the memory, yes. The memory lives on the same chip as the CPU and the GPU, it's even more tightly bound than just being soldered on. The memory being there has legitimate technical benefits that make it much easier/cheaper for them to reach the extremely high memory bandwidths that they do.

        • lloeki 14 hours ago ago

          This, although it's not merely "easier/cheaper", it's "impossible" (unless you sacrifice a ton of performance)

          Same reason as a) GDDR on dGPUs (I think I read somewhere that GDDR is very much like regular DDR, just with much tighter paths and thus soldered in) and b) Framework Desktop (performance would reportedly halve if RAM were not soldered)

          SSD reasons I seem to recall are architectural for security: some parts (controller?) that usually sit on a NVMe SSD are embedded in the SoC next to (or inside?) the secure enclave processor or whatever the equivalent of the T2 thing is in Mx chips, so what you'd swap would be a bank of raw storage chips which don't match the controller.

          • razakel 12 hours ago ago

            Apparently upgrading the SSD can be done, but it's a weird form factor and you need another Mac to restore it.

        • eigenspace 10 hours ago ago

          The memory does *not* live on the same chip as the CPU and GPU, you appear to be thinking of HBM. Apple is using regular LPDDR5 RAM on separate chips, but soldered near to the CPU/GPU.

          The soldering does serve a purpose though, the shorter traces allow for better signal integrity at higher speeds. This isn't something special about what Apple is doing though, Intel and AMD are doing the exact same thing with the exact same LPDDR5 chips on their respective APUs.

          HBM is still almost purely reserved for datacentre GPUs.

        • StingyJelly 13 hours ago ago

          >it's even more tightly bound than just being soldered on

          No. There is a reason for it but no, it's just soldered on the same carrier board as the APU, in order to be really close to it. Apple could have used a form factor like CAMM2 and it would have worked the same, be it at slightly higher cost. The reason is simply to kill upgrade options and cut manufacturing costs - same as for any other soldered ram.

      • _ph_ 8 hours ago ago

        The SSDs in the Studio are on modules, you can exchange those. They are in a custom format though.

  • arvinsim 4 hours ago ago

    The Mac Pro would have been much more popular if MacOS was still compatible with Nvidia GPUs.

  • SwtCyber 8 hours ago ago

    It totally makes sense. Mac Studio basically ate the Mac Pro's lunch. But it's still kind of sad. The Mac Pro used to represent this idea that Apple cared about the absolute high-end, no-compromises workstation crowd

    • m132 5 hours ago ago

      Yup, exactly my thoughts.

      To me, this discontinuation is less about the product and more about making a statement. The M2 Mac Pro was a dysfunctional product of an internal conflict of interests, but it cast a ray of hope that the M series would develop past the current scaled-up-but-still-disposable phone/embedded SoCs and that Apple had some interest in bringing them closer to the offerings of the competitors from the workstation/server market. Now, with this move, they've made it clear that they would rather give up an entire segment than make at least a narrow part of their ecosystem open enough for the PCIe slots of the Mac Pro to find any serious use.

  • BewareTheYiga 6 hours ago ago

    I am incredibly saddened that the inevitable finally happened. The OG 5,1 cheese grater sparked so much joy. I added and expanded so much over the years before I finally donated it to a computer museum and moved on to Apple Silicon. I did everything from scientific computing, ripping movies, serving files, running websites, and everything in between.

  • stego-tech 17 hours ago ago

    Not surprising, as the market has broadly moved on from add-in cards in favor of smaller form factors and external devices, absent some notable holdouts in specific verticals.

    Gonna miss it, though. If they had reduced the add-in card slots to something more reasonable, lowered the entry price, and given us multi-socket options for the CPU (2x M# Ultras? 4x?), it could have been an interesting HPC or server box - though they’ve long since moved away from that in software land, so that was always but a fantasy.

    At least the Mac Studio and Minis are cute little boxes.

  • karim79 18 hours ago ago

    I have three of the trash can ones. They are absolute pieces of art, as useless as they are computationally these days (energy-to-performance wise at least). I will never sell nor give them away.

  • stephen_g 8 hours ago ago

    I never understood the point of the Mac Pro for the last decade or so - especially after the Mac Studio was released, Apple should have worked out what professionals actually want - basically a Mac Studio but with three or four PCIe slots and a few SSD slots. That’s literally all it should be!

    The Mac Pro was at the same time bizarrely over the top while also weirdly limited in some ways - while also being way too expensive…

    • BillinghamJ 8 hours ago ago

      Isn't that... exactly what the Mac Pro was? PCIe support was the primary point

      • stephen_g 8 hours ago ago

        What I’m trying to say is that with 7 slots and being more than double the size that it needed to be, it was way more machine than 95% of the market who might actually want it would buy, with a price that was correspondingly way too expensive…

  • ge96 4 hours ago ago

    I still remember that $1,000 for a wheel or maybe it was for all 4

  • cco 17 hours ago ago

    I kinda would have loved a new Mac Pro, same case, but just stick 4 Mac Studios in there and connect them all via MLX.

    Would be a killer local AI setup...for $40k.
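
    A minimal sketch of what the software side could look like (assuming MLX's distributed module, mlx.core.distributed, is available and the nodes are launched into one process group with something like mlx.launch or mpirun; this just sums one number per node to prove the cluster is wired up):

      import mlx.core as mx

      # Each process joins the distributed group; a plain single-process run
      # still works, it just reports a group of size 1.
      group = mx.distributed.init()

      # Every node contributes its rank; all_sum reduces across the cluster.
      contribution = mx.array(float(group.rank()))
      total = mx.distributed.all_sum(contribution)

      print(f"node {group.rank()} of {group.size()}: sum of ranks = {total.item()}")

    Actually sharding a model across the boxes is the hard part; the point here is just that the collective-ops plumbing for a small Studio cluster already exists.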

  • BirAdam 18 hours ago ago

    Reading comments, I don’t think people are being completely fair here. For Intel and AMD to approach what Apple has accomplished they’re making many of the same compromises with Panther Lake and Ryzen AI Max. Apple chose to put disk controllers on their SoP rather than having them on the storage module. This shaves a tiny bit of latency. Worth it? No idea. I’m shit at hardware design.

    As for not having a Pro or otherwise expandable system? It’s shit. They make several variations of their chips, and I don’t think it would hurt them to make an SoP for a socket, put a giant cooling system in it, and give it 10 or 12 PCIe slots. As for what would go in those slots? Make this beast rack mountable and people would toss better network cards, sound/video output or capture, storage controllers, and all kinds of other things in there. A key here would be to not charge so much just because they can. Make the price reasonable.

    • bombcar 18 hours ago ago

      They have tried variations of this since time immemorial (we can argue about "price reasonable") but there's just not much you can do with it that you can't do much cheaper or simpler in other ways.

      The Xserve has been dead for 15 years now, and it was never tremendously amazing (though it was nice kit).

      Apple apparently has some sort of "in-house" xserve-like thing they don't sell; but turning that into a product would likely be more useful than a Mac Pro, unless they add NUMA or some other way of allowing an M5 to access racks and racks of DIMMs.

      • matthewfcarlson 17 hours ago ago

        WSJ recently did a thing about it but details were rather light

  • sylens 10 hours ago ago

    A Mac Pro without external GPU support was always a dumb idea. They just made this to shut up the hard core fans who were complaining about the outdated Mac Pro in 2018.

  • abmmgb 8 hours ago ago

    It seems they are consolidating their proposition and slimming down their pipelines, not a bad thing

  • ks2048 18 hours ago ago

    With the popularity of mac mini (and macbooks for that matter) for doing ML/AI work, I would have thought Apple could make a Mac Pro that could make for a good workstation for doing in-house ML/AI stuff.

    I bought a GPU maybe a decade ago for this, and it's not worth the hassle (for me at least), but a nice out-of-the-box solution I would pay for.

    • bombcar 18 hours ago ago

      The problem is that the M1 chips foretold the doom of the Mac Pro unless they could figure out some way to do something that you couldn't do with a Mac Studio - thunderbolt is so good that it's hard to justify anything else.

      If they had done more with NUMA in the M series maybe you could have a Mac Pro with M5 Ultras that can take a number of M5 "daughter cards" that do something useful.

  • lapcat 19 hours ago ago

    The 2013 trash can was the end of the Mac Pro. It was never the same after that. The 2012 and earlier Mac Pros were awesome. I had a 2010 model. Here's what I loved:

    • Multiple hard drive bays for easy swapping of disks, with a side panel that the user could open and close

    • Expandable RAM

    • Lots of ports, including audio

    • The tower took up no desktop space

    • It was relatively affordable, starting at $2500. Many software developers had one. (The 2019 and later Mac Pros were insanely expensive, starting at $6000.)

    The Mac Studio is affordable, but it lacks those other features. It has more ports than other Macs but fewer in number and kind than the old Mac Pro, because the Mac Studio is a pointlessly small desktop instead of a floor tower.

    • longislandguido 19 hours ago ago

      That's when they stopped designing computers for the pro market and started selling mid-century Danish furniture that can also edit videos.

      I knew it was all over when third party companies had to develop the necessarily-awkward rack mount kits for those contraptions. If Apple actually cared about or understood their pro customers, they would have built a first party solution for their needs. Like sell an actual rack-mount computer again—the horror!

      Instead, an editing suite got what looked like my bathroom wastebasket.

    • SpecialistK 19 hours ago ago

      When it was introduced, Apple said the trash can was a revolution in cooling design.

      Then they said they couldn't upgrade the components because of heat. Everyone knows that wasn't true.

      By the time Apple said they had issues with it in 2017, AMD were offering 14nm GCN4 and 5 graphics (Polaris and Vega) compared to the 28nm GCN1 graphics in the FirePro range. Intel had moved from Ivy Bridge to Skylake for Xeons. And if they wanted to be really bold (doubtful, as the move to ARM was coming) then the 1st gen Epyc was on the market too.

      Moore's Law didn't stop applying for 6 years. They had options and chose to abandon their flagship product (and most loyal customers) instead.

      • dijit 18 hours ago ago

        The biggest issue was actually that the Mac Pro was designed specifically for dual GPUs - in the era of SLI this made some sense, but once that technology was abandoned it was a technological dead end.

        If you take one apart you'll see why, it's not the case that you could have ever swapped around the components to make it dual-CPU instead; it really was "dual GPU or bust".

        Somewhat ironically, in today's ML ecosystem, that architecture would probably do great. Though I doubt it could possibly do better than what the M-series is doing by itself using unified memory.

        • SpecialistK 18 hours ago ago

          I'll admit that while I've used the trash can, I've never taken one apart myself. But I can't imagine it would have been impossible to throw 2x Polaris 10 GPUs on the daughterboards in place of the FirePros.

          • dijit 18 hours ago ago

            I think on a technical level you're right, but you need to run two of them and they'd need a custom design like so:

            https://i.ebayimg.com/images/g/RQIAAOSwxKFoTHe3/s-l1200.jpg

            For what is essentially a dead-end technology, I'm somewhat doubtful people would have bought it (since the second GPU is going to be idle and add to the cost massively).

            Upgrading the CPU would have been much easier, though, I think.

            • SpecialistK 12 hours ago ago

              That's the crux, I think.

              Apple even in 2017 had the money and engineering resources to update or replace their flagship computer - whether with a small update to Skylake & Polaris and/or a return to a cheesegrater design as they did in 2019.

              But they chose not to. They let their flagship computer rot for over 2000 days.

      • jasomill 18 hours ago ago

        Aside from the GPU mess, the 2013 was a nice machine, basically a proto-Mac Studio. Aside from software, the only thing that pushed me off my D300/64GB/12-core as an everyday desktop + front-end machine is the fact that there's no economically sensible way to get 4K video at 120 Hz given that an eGPU enclosure + a decent AMD GPU would cost as much as a Mac mini, so I'm slumming it in Windows for a few months until the smoke clears from the next Mac Studio announcement.

        At which point I'll decide whether to replace my Mac Pro with a Mac Studio or a Linux workstation; honestly, I'm about 60/40 leaning towards Linux at this point, in which case I'd also buy a lower-end Mac, probably a MacBook Air.

        • SpecialistK 18 hours ago ago

          I'm in the Linux desktop / Mac laptop camp, and it works well for me. Prevents me getting too tied up in any one ecosystem so that I can jump ship if Apple start releasing duds again.

    • __loam 19 hours ago ago

      The studio is also like 5x as fast as those machines.

      • lapcat 19 hours ago ago

        What's your point? Of course processors have gotten a lot faster between 2012 and 2025.

        I was talking about the form factor of the machine.

  • dhruv3006 4 hours ago ago

    I think they have something in the works.

  • chvid 17 hours ago ago

    The only thing that is missing from the current Mac line up is a one rack unit machine.

    • magarnicle 15 hours ago ago

      They had that with the previous gen mini, but the new ones are too tall now.

  • giancarlostoro 19 hours ago ago

    Honestly the Mac Studio is the new Mac Pro, this makes more sense to me.

    • therealmarv 18 hours ago ago

      but even that one looks kinda outdated when looking at the latest M5 Max laptops.

      • jshier 18 hours ago ago

        Mac Studio waits for the Ultra chips to ship, which are always last in a generation. Perhaps the M5's chiplet architecture will help them move faster there.

  • timnetworks 8 hours ago ago

    Of course it's Made on iPhone! They stopped making anything else!

  • andy_ppp 10 hours ago ago

    Damn I was hoping they would build proper GPUs at some point.

  • bredren 17 hours ago ago

    I’m much less concerned about the Mac Pro and more concerned we won’t see an XDR Ultra Display to replace the 32”.

    • matthewfcarlson 17 hours ago ago

      They did come out with a new XDR. It’s just not 32”.

  • throwaway85825 16 hours ago ago

    For AI, Apple internally has rack-mounted M chips, but they won't sell them to the public.

  • lwhi 12 hours ago ago

    Apple need to sort out their software.

    Mac OS is a horrible experience.

    • dwayne_dibley 12 hours ago ago

      Have you seen windows lately?

      (but yes, Apple seems happy to ship buggy software these days)

      • lwhi 12 hours ago ago

        Sure, Windows is awful, but that's no reason to ship terrible software.

        Apple's hardware is great, but without choice of software, they need to provide an amazing default option.

  • drnick1 17 hours ago ago

    This makes sense: for that kind of money you could always build a beastly workstation in a real ATX case with standard components. Install Linux and the Mac looks like an expensive toy in comparison.

  • looopTools 13 hours ago ago

    This honestly saddens me a little. From the PowerMacs to the Mac Pro, I always loved working with them whenever I had the opportunity. Plus I loved the expandability they offered.

    I don't find the external GPU enclosures for the Mac Studio as appealing to use.

    • bombcar 4 hours ago ago

      They were a great aspirational product in the "when I win the lottery" category.

  • ruptwelve 17 hours ago ago

    I mean, for the cost of the Mac Pro wheels you can get a MacBook Neo these days!

  • Simulacra 8 hours ago ago

    I may be in the minority, but I liked the cheese grater; it was a machine I could upgrade and use as a powerful workstation. The trash can really turned me off of the Mac Pro series. I think Apple really missed an opportunity here, but hope springs eternal.

  • openports 20 hours ago ago

    R.I.P. to the cheese grater

  • dangus 20 hours ago ago

    > Serviceable, repairable, upgradable Macs are officially a thing of the past.

    Well, not exactly. Apple's desktop Macs actually all have modular SSD storage, and third parties sell upgrade kits. And it's not like Thunderbolt is a slouch as far as expandability goes.

    I can see why the Mac Pro is gone. Yeah, it has PCIe slots…that I don’t really think anyone is using. It’s not like you can drop an RTX 5090 in there.

    The latest Mac Pro didn’t have upgradable memory so it wasn’t much different than a Mac Studio with a bunch of empty space inside.

    The Mac Studio is very obviously a better buy for someone looking for a system like that. It’s just hard to imagine who the Mac Pro is for at its pricing and size.

    I think what happened is that the Studio totally cannibalized Mac Pro sales.

    • wpm 15 hours ago ago

      Thunderbolt absolutely is a slouch.

      Every PCIe card I have requires its own $150+ PCIe-to-Thunderbolt dock and its own picoPSU plus a 12V power supply.

      External PCIe is convenient for portables. Not for desktops. It's a piss-poor replacement for a proper PCIe slot.
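
      For a rough sense of scale, here is a back-of-the-envelope comparison of nominal link rates (a sketch only: the variable names are illustrative, the figures are headline signaling rates, and they ignore encoding/protocol overhead plus whatever PCIe-tunneling cap a given Thunderbolt controller imposes, so real throughput is lower across the board):

          # Nominal link rates; real-world throughput is lower.
          tb5_symmetric_gbps = 80   # Thunderbolt 5, symmetric mode
          pcie4_lane_gbps = 16      # PCIe 4.0: 16 GT/s per lane
          pcie5_lane_gbps = 32      # PCIe 5.0: 32 GT/s per lane

          links = {
              "Thunderbolt 5 (one port)": tb5_symmetric_gbps,
              "PCIe 4.0 x4 slot": 4 * pcie4_lane_gbps,               #  64 Gb/s
              "PCIe 5.0 x4 slot": 4 * pcie5_lane_gbps,               # 128 Gb/s
              "PCIe 4.0 x16 slot (GPU-class)": 16 * pcie4_lane_gbps, # 256 Gb/s
          }

          for name, gbps in links.items():
              print(f"{name:31s} ~{gbps:>3} Gb/s nominal")

      Even before counting the cost of a dock per card, a single port tops out well below what a full-length slot offers.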

    • bombcar 17 hours ago ago

      Apparently the Neo is surprisingly repairable - in that parts can be replaced, not that you can buy stuff at Microcenter or Fry's (RIP) and shove them in.

    • bigyabai 20 hours ago ago

      > Apple’s desktop Macs actually all have modular SSD storage

      "Modular" does not mean that it's serviceable, repairable or upgradable. Apple's refusal to adopt basic M.2 spec is a pretty glaring example of that.

      • internet2000 20 hours ago ago

        > Apple's refusal to adopt basic M.2 spec

        I get the ideological angle, but in practical terms that's not a barrier: https://www.aliexpress.us/w/wholesale-apple-ssd-adapter.html...

        • wtallis 19 hours ago ago

          Those are all for Intel Macs, and not even the recent Intel Macs. You can't use a passive adapter to put a NVMe SSD into a current Mac like you could a decade ago, because back then the only thing non-standard about the SSD was the connector. Now most of the SSD controller itself has moved to the SoC and trying to put an off the shelf SSD into the current slot makes no more sense than trying to put an SSD into a DIMM slot.

        • bigyabai 20 hours ago ago

          This is the USB-C dongle argument all over again, but with a proprietary connector that a total of one (1) company uses.

          • dijit 18 hours ago ago

            Honestly I don't care, but Apple's SSDs don't have a storage controller on them, and those adapters are designed to "bypass" the controller on M.2 drives.

            You can argue that it's different for the sake of being different, but

            A) I personally don't always hold that a monopoly is a good thing; even if we agree M.2 is fairly decent, that doesn't make it universally the best.

            B) I'd make the argument that Apple is competing very well on performance and reliability.

            C) IIRC there are some hardware guarantees that the new filesystem needs to be aware of (for wear levelling and error correction), and those would be obfuscated by a controller that thinks it's smarter than the CPU and OS.

            If we're talking about Intel-era Macs, then that proprietary connector predates M.2 entirely and is actually even thinner and smaller (which is pretty important when the primary use case is thin-and-lights); though I suppose the fact that the adapter fits is a sign that it would have been possible to use a larger connector...

            • bigyabai 17 hours ago ago

              That is an absolutely awful argument against what I just said. I can tell that you don't care.

              Tens of thousands of mini PC and laptop boards ship with multiple M.2 slots. Apple can use both connectors, with the exact same caveats that normal M.2 SSDs have on ordinary filesystems. Apple does not have to enable swap, zram, or other high-wear settings on macOS if they are uncomfortable with the inconsistency of M.2 drives. Now, I'd make the argument that people don't complain about APFS wear on external SSDs, but maybe I'm wrong and macOS does have some fancy bypass saving thousands of TBW/year.

              Whatever the case is, "the annoying thing is competitive" was not a justification for the Lightning cable when it reached the gallows. It did not compete; it specifically protected Apple from the competitive pressure of higher-capacity connectors. The same is true of Apple's SSD racket and the decade-old meme of $400 1TB NVMe drives.

              • dijit 17 hours ago ago

                I don't buy that argument; "a PC by any other name" is what made Intel Macs somewhat uncompetitive when compared to the M-series laptops, which are currently dominating with total vertical integration of the OS and hardware.

                Also: all things being equal, the Lightning connector was technically superior to USB-C and arrived much earlier... so it's somewhat on the same path.

                USB-C succeeded due to a confluence of:

                A) Being a standard people can get behind (Lightning was, of course, much more awkwardly licensed).

                B) Lightning never got a sufficient uplift from USB 2.0 performance.

                C) The EU eventually killed Lightning through regulation.

                It was, however, smaller, more durable and (as mentioned) earlier.

                I'm totally not against our new USB-C everywhere situation w.r.t. phones, but if anything it reinforces the point: The technically superior thing being too proprietary caused its death (despite being early).

    • kelnos 20 hours ago ago

      It's sad that "you can replace the SSD" is in some people's eyes "serviceable, repairable, and upgradeable".

      We should demand better of our computer-manufacturing overlords.

      > It’s not like you can drop an RTX 5090 in there.

      Why not? Oh, right, because Apple won't let you. Sad.

      • dangus 19 hours ago ago

        I didn’t phrase myself very well. What I’m saying is that the loss of the Mac Pro didn’t reduce the repairability or modularity at all in the product lineup.

        It was exactly as modular as the Mac mini and Mac Studio.

        The only difference is that it had some PCIe slots that basically had no use, since you couldn't throw a GPU in there and because Thunderbolt 5 exists.

        Yeah, sure, there were some niche PCIe things that two people probably used. Hence the discontinuation.

        I am an ex-Mac user, I own a Framework. Don’t worry, you’re preaching to the choir.

  • shevy-java 10 hours ago ago

    > It has gone without an update since then, languishing at its $6,999 price point

    What I find fascinating is how people pay so much for Apple-related products. Perhaps the quality requires a premium (I don't share that opinion, but for the sake of argument, let's have it as an option here), but this seems more like deliberate milking by Apple with such price tags. People must love being milked, it seems.

  • adolph 18 hours ago ago

    In other news, the Mac Pro Wheels Kit was also discontinued.

    https://www.macrumors.com/2026/03/26/mac-pro-wheels-kit-disc...

  • pjmlp a day ago ago

    Now everyone that needs classical workstations can finally move on to Linux or Windows.

    The "Believe" t-shirts at WWDC were not enough.

    Thus the workstation market joins OS X Server.

    • Amorymeltzer a day ago ago

      For those who don't know what the t-shirt reference is, it's a creation by John Siracusa/The Accidental Tech Podcast: <https://cottonbureau.com/p/4RUVDA/shirt/mac-pro-believe-dark>.

      • nosrepa a day ago ago

        And I still don't get it.

        • Amorymeltzer a day ago ago

          Siracusa—probably best known here for doing fabulous OS X reviews for Ars—is a co-host of ATP. He is also known in such circles for having Mac Pros, and using them for a long time (sometimes by choice, sometimes by circumstance). He thinks Apple should make a Mac Pro, not necessarily because it's a big seller, but because he thinks Apple should make a "best computer," much in the same way car companies might make a halo car that will never sell in volume but pushes engineers, etc.

          They made a shirt. It was fun.

          • WillAdams 18 hours ago ago

            Ages ago, when new Mac hardware came out, I'd amuse myself by putting together an "ultimate Mac workstation" in the configurator --- once upon a time, one could hit 6 figures pretty easily --- these days, well, I panic-bought a duplicate computer because I was worried a chipped/cracked display was going to make it unusable (turns out a screen protector has worked thus far).

            I agree with the reasoning, and would like to see Apple continue to make aspirational hardware, but maybe the mainstream stuff is good enough?

            • bombcar 17 hours ago ago

              > maybe the mainstream stuff is good enough?

              Even Siracusa admits that - he's found it hard to articulate what a true "Mac Pro" would do that you can't do with other things.

              Back in the heyday of the $100k Mac Pro you could certainly imagine it doing things that wouldn't be easily done by anything under $50k, and it would look good doing it.

    • badc0ffee 21 hours ago ago

      Apple still sells a workstation-type machine: the Mac Studio.

      • pjmlp 14 hours ago ago

        No it isn't; it is a Mini where you can add audio cards, which are basically the only extensions it has available.

        Hardly workstation class.

        • badc0ffee 13 hours ago ago

          It's certainly beefier than a Mini - 6 TB5 ports (which can drive 6 PCIe 5.0 x4 slots in an enclosure if you want), M3 Ultra, up to 256GB RAM.

          • pjmlp 12 hours ago ago

            The detail you are missing is that one has to buy an additional enclosure, and the set of supported PCIe cards is actually limited; hardly workstation class.

            This is a workstation,

            https://www.dell.com/en-us/shop/desktop-computers/precision-...

            • badc0ffee 5 hours ago ago

              Damn, that Dell case, fancy Xeon processor, and Nvidia card must really be worth a lot, because the rest of what you get for $9k is 32 GB RAM, a 512 GB SSD, and a Windows 11 Pro license, all while consuming hundreds of watts of power.

            • tonyedgecombe 10 hours ago ago

              Interesting that all the memory options are "no longer available".

              • pjmlp 8 hours ago ago

                That is an AI bros problem that will affect most of the industry, not only workstations.

      • wpm 15 hours ago ago

        It's not at all a workstation type machine. It's a Mac Mini with bigger SoCs and better cooling.

      • bigyabai 20 hours ago ago

        What is this, a workstation for ants?

        • badc0ffee 20 hours ago ago

          It's a pizza box, for a 6" pizza.

  • WesolyKubeczek 13 hours ago ago

    Sad. I had this pipe dream of an Apple Silicon system made as a PCIe endpoint, so a Mac Pro could be a coordinator and host to, say, four such systems in a cluster with a very fast interconnect. Imagine the possibilities.

  • zer0zzz 13 hours ago ago

    It is kind of a bummer they never supported CUDA/HIP GPGPU using those slots.

  • karel-3d 13 hours ago ago

    They also discontinued the $1,000 Pro Stand, apparently.

    • karel-3d 10 hours ago ago

      oh and those wheels

  • jeffbee 16 hours ago ago

    I guess A/V pros are used to getting screwed constantly, but it must be really irritating to face the prospect of eventually having to move PCI add-in cards to TB5 enclosures that cost $1000 per slot.

  • chaostheory 17 hours ago ago

    The price to value for the Mac Pro kept going down. Mac Studio made more sense.

  • yalogin 17 hours ago ago

    Another company would have killed it a long time ago. It lasted this long because it’s Apple.

  • pipeline_peak 16 hours ago ago

    Good, Mac Pro was fugly.

    I like Apple when they make pretty stuff. Especially small, shiny, and quiet.

  • system2 19 hours ago ago

    If I remember correctly, the maximum configuration was something like $35k back in the day. I wonder what those people feel like now. On the other hand, if they have $35k to burn, they probably don't even think about it.

    • giantrobot 15 hours ago ago

      If you spend $35k and just idle the machine or just check e-mail, you've burnt the money. If it's your work machine and you've got a $100/hr billable rate, it's paid for in a little over a month. Three months at a $50/hr rate.

      If you bought the $35k Mac Pro in 2023 when it was released and have a $50/hr rate it's been paid off for about 30 months. So as of today those owners probably aren't too broken hearted. They'll likely get at least another three years out of them.

      People buying $35k Mac Pros probably paid them off after a single contract. So they've just been making money rather than costing money.

      • gjm11 8 hours ago ago

        I think these calculations are a bit bogus.

        If you spend $35k on a nice computer, and then earn $35k from doing some work using it, that doesn't mean that buying the computer has paid for itself unless the computer is solely responsible for that income. It probably isn't.

        It's not necessarily even true that after doing that work it's "paid for", in the sense that getting the $35k income means that you were able to afford the $35k computer: that only follows if you didn't need any of that income for other luxuries, such as food and shelter.

        If you're earning $50/hour at 40hr/week, then what you've done after 17.5 weeks is earn enough to buy that $35k computer. Assuming you don't need any of that money for anything else, like food and shelter.

        If the fancy computer helps you get that income then of course it's perfectly legit to estimate how much difference it makes and decide it pays for itself, but it's not as simple as comparing the price of the computer with your total income.

        Regardless of how much it contributes, if you have plenty of money then it's also perfectly legit to say "I can comfortably afford this and I want it so I'll buy it" but, again, it's not as simple as comparing the price of the computer with your total income.

      • klausa 12 hours ago ago

        >If it's your work machine and you've got a $100/hr billable rate it's paid for in a little over a month.

        Are you working 996 weeks or something?

        At a standard 40h work week, the math works out to 8.75 weeks to "pay for itself".
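
        For what it's worth, here is a quick sketch of that payback arithmetic using a small hypothetical helper (it assumes a 40-hour week and, as gjm11 notes above, the generous assumption that the machine alone is responsible for every billed hour):

            def payback_weeks(price_usd, rate_per_hour, hours_per_week=40):
                """Weeks of billing needed to gross the purchase price."""
                return price_usd / (rate_per_hour * hours_per_week)

            for rate in (100, 50):
                weeks = payback_weeks(35_000, rate)
                print(f"${rate}/hr: {weeks:.2f} weeks (~{weeks / 4.33:.1f} months)")
            # $100/hr: 8.75 weeks (~2.0 months)
            # $50/hr:  17.50 weeks (~4.0 months)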

        • bluedino 7 hours ago ago

          What if that machine lets you do your job 4x faster?

          • klausa 6 hours ago ago

            I don't think working 4x faster makes you experience time dilation to the degree that you experience 8.75 weeks as 4 in your frame of reference; but my relativity math is a bit rusty, I could be wrong.

  • hulitu 14 hours ago ago

    > Apple discontinues the Mac Pro

    They replaced it with the Mac Neo. Did you notice the wonderful build quality, the accessible price, and that everyone is buying it? And it has USB: U from universal.

  • longislandguido 20 hours ago ago

    Apple betrayed their pro customers years ago, right around the time they went to version X of the Pro apps; it's all been a slow death by a thousand paper cuts since then.

    The money's all in selling phones to teen girls now, and taking their mafia cut of app store sales.

  • razkaplan 7 hours ago ago

    My autonomous YouTube show picked this up for today's rabbit hole episode — worth watching: https://www.youtube.com/watch?v=g89_Me4iAi4