435 comments

  • 0xbadcafebee 2 days ago

    The Colossus 1 datacenter is the one using illegal power, poisoning the air for poor communities near Memphis, and potentially poisoning the water. The additional demand on the grid will likely cause massive blackouts during extreme weather events, putting residents at further risk. https://en.wikipedia.org/wiki/Colossus_(supercomputer)#Envir...

    So you can put Anthropic on your list of companies that like to talk big about safety, but when the rubber hits the road, profits matter more than safety.

    • boldlybold 2 days ago

      "Illegal" is a strong term here. While the wiki link you included indicates there might be some permitting nuances, I've seen nothing claiming the power is "illegal."

      • Thrymr 2 days ago

        xAI removed its illegal gas turbines and obtained permits for the others only after being sued by the Southern Environmental Law Center. They then built another unpermitted site (Colossus 2) across the state line in Mississippi, and they are being sued again. [0]

        "The company began operations at its first site, Colossus 1, in June of 2024 and used as many as 35 unpermitted gas turbines to power the facility. Despite receiving intense public pushback over the use of illegal turbines and the lack of public input and transparency around Colossus 1, xAI officials said it planned on “copying and pasting” its unlawful turbine strategy to power Colossus 2."

        "xAI removed its unpermitted turbines at the Colossus 1 data center after SELC, on behalf of the NAACP, sent a notice of intent to sue under the Clean Air Act. The company obtained permits for its remaining 15 turbines."

        [0] https://www.selc.org/news/xai-built-an-illegal-power-plant-t...

        • 486sx33 a day ago

          They did not require permits at the time, as the turbines were portable (think transport-trailer sized). If you use portable power for under 365 days a year, an EPA permit was not required. The permitting rules were changed afterward, and xAI complied.

          • eli 17 hours ago

            Yes, I believe it's xAI's position that they were technically in compliance at the time. I don't know that a judge would agree. The new EPA rule is more of a clarification; they do not concede that point.

      • fancyfredbot 2 days ago

        The ethics are questionable, legal or not. Anthropic are tarnishing their image again here.

        Not sure how much it hurts them compared to blocking openclaw, though.

        • DonsDiscountGas 21 hours ago

          I don't quite understand the business logic behind "blocking" openclaw (you can still use it at API rates), but I never saw how this was unethical. Anthropic has no ethical obligation to support other people's software.

          • fancyfredbot 17 hours ago

            Blocking openclaw made everyone realise that what Anthropic giveth, Anthropic can take away.

            It is similar to the xAI gas turbines in that it tarnished their image - at least amongst those naive people who saw them as a plucky startup rather than a profit seeking corporation who don't like competition.

            I agree with you that the ethics are very different.

          • butlike 20 hours ago

            I don't get it. On the one hand we had Steve Jobs saying "No App Store!" and everyone getting up in arms; here we have "no obligation for Anthropic to support other people's software," and that being OK. So which is it? Or does the answer change daily depending on what makes us feel good?

        • port11 a day ago

          I find the ethics of power generation, resource use, and pollution in a world struggling with climate change to be more of a challenge than whether a few people can run some software. And that’s coming from a Claude user who’s getting tired of their shenanigans.

      • breadsniffer a day ago

        from perplexity deep research: "Colossus‑related gas‑turbine power plants have been run in ways alleged to violate the Clean Air Act, in already over‑polluted Black and low‑income communities near Memphis, and Anthropic has now become the main user of that infrastructure."

        sources: https://www.tba.org/?pg=Hastings2025AIX (Tech, Toxins, and Memphis: Evaluating the Environmental Footprint of the xAI Facility)

        • lostmsu a day ago

          Any specifics? What are they doing and what statutes are allegedly being violated?

          • 0xbadcafebee a day ago

            Report from February 2026:

              The independent study, conducted by EmPower Analytics Group and commissioned
              by the Southern Environmental Law Center, was led by a Harvard-trained 
              environmental health scientist Dr. Michael Cork and shows that operation
              of xAI’s proposed permanent gas turbines would measurably increase 
              health risks for families throughout the area—even in places as far away
              as Germantown and North Memphis.
            
            https://www.memphiscap.org/

            • londons_explore a day ago

              The southern environmental law center is a political action group, not a government agency.

          • SirSavary a day ago

            Emphasis my own:

            > "The xAI facility has already deployed *nearly 20 gas turbines, including four large units with a combined capacity of 100MW*, to power its AI system Grok... There are plans to add *15 more gas turbines between June 2025 and June 2030*, and the turbine application projects *annual emissions of around 11.51 tons of hazardous air pollutants*."

            > "it is currently *running gas turbines without the necessary permits from the Shelby County Health Department*"

            > "findings from the Southern Environmental Law Center indicate that the facility has 'installed' gas turbines. This suggests that new industrial systems are in place and that *xAI is obligated to comply with the new NSPS* [New Source Performance Standards] *to avoid violating the Clean Air Act*"

            > "NSPS are authorized under *Section 211 of the Clean Air Act*... All new sources must comply with the *Best System of Emission Reduction (BSER)*, which mandates the use of state-of-the-art technology to minimize air pollutants."

            > "there is a history of Elon Musk's companies, such as *SpaceX and the Boring Company, being fined thousands of dollars for violating environmental law* to circumvent regulation"

            • i_think_so a day ago

              I wonder what the pollution from these gas turbines is like. SO2 from trace sulfur compounds? Is it much worse than a traditional gas-fired power plant for some reason? I can't imagine it would be, but I have to plead ignorance and beg for hints here.

            • lostmsu a day ago

              So they haven't gotten permits, but why? Why were the permits denied?

              Just the other day we had news that a Californian environmental protection agency denied permits for SpaceX for political reasons, as opposed to following objective rules, as ruled by a judge. So the fact that some permits were not issued doesn't tell me anything.

              • SirSavary 9 hours ago

                To my understanding: the permits weren't denied, they were never applied for.

                Edit: I re-read https://www.tba.org/?pg=Hastings2025AIX and yes, it seems that xAI never applied for permits related to the gas turbines as they're making the argument that the permits aren't required.

    • ethagknight 20 hours ago

      I live in Memphis, and none of this is true. What is true is that there is a concerted effort to smear anything related to xAI’s presence in Memphis, for some reason.

      For some facts: the Colossus data center is next door to a steel mill and a city sewage treatment plant, a vacated gigawatt-scale coal power plant complete with nasty coal ash ponds, and a brand-new combined-cycle gas power plant. The area is at the far edge of Memphis city limits, up against the river, in a heavy industrial area. There’s even a major Valero oil refinery right there too.

      Memphis has trillions and trillions of gallons of water, both in gigantic underground aquifers and in the Mississippi River itself. xAI has agreed to shed load in case of impending brownouts. The fear mongering is out of control.

      They had a ton of portable turbines that were operating under a temporary permit, and that was the disputed part. However, the blame should rest with TVA and/or Memphis Light, Gas and Water for not being able to run an appropriate high-voltage connection less than a mile from the plant to the data center in a timely manner. And what difference does it make if the natural gas is burned at a TVA plant or in very similar gas turbines on site in the same neighborhood? Environmental groups and the county health department tried suing and were struck down; xAI works closely with the State, but the whining continues. xAI is paying gargantuan taxes to the city, with no tax breaks.

      These environmental groups do not care about the nasty unregulated cars burning oil that I have to breathe every day. We terminated our motor vehicle inspection requirements due to the “burden” they place on the low-income population. So they can burn their oil in my face, but then they sue to stop a SOTA turbine in an industrial area? There are junkyards in these same areas that burn their piles of waste tires every year or so, “on accident.” No lawsuits there either.

      • pathartl 18 hours ago

        We have similar issues here in Wisconsin, especially when it comes to solar and battery storage facilities. I absolutely think there need to be more regulations carved out for data centers, just as there are for any other industrial building, but yeah, the fear mongering is incredible to see. Especially when the argument of "save our beautiful farmlands" is brought up. Do you even know how nasty agricultural runoff is?

      • whamlastxmas 18 hours ago

        Agreed, there is a huge effort to smear as much as possible. Between the parent comment being very highly voted, the Wikipedia page being militantly updated, and these tired, wrong talking points showing up everywhere, it's pretty obvious.

    • causal 2 days ago

      So I was just Googling this, and apparently most datacenters don't pay any state tax on revenue generated by said datacenter? Huge loophole if true, no wonder capital investment in datacenters is so high. [0]

      [0] https://www.datacenterknowledge.com/regulations/how-are-data...

      • polski-g 2 days ago

        I like how you said "googling this", but then didn't actually read the article you linked.

        • causal a day ago

          > In general, data centers only pay corporate income tax if they generate revenue. Not all data centers do this because many don’t sell goods or services; they simply house servers. By qualifying as business expenses rather than revenue generators, they reduce the tax liability of their parent companies.

          > Thus, when it comes to income tax, at least, many data centers – especially hyperscale data centers owned by large companies – don’t generate tax revenue because they don’t generate direct operating income.

          • osiris970 a day ago

            They pay property taxes. The best tax there is, as of now. (LVT when?)

            • skinfaxi a day ago

              I would be surprised if these developments didn't have tax abatement clauses.

            • causal 19 hours ago

              For a datacenter that generates billions, that is not much.

              • polski-g 17 hours ago

                "The datacenter" doesn't generate billions. The computations performed inside might, but almost never is the datacenter owner the same person running the servers inside. The owner is just leasing space; selling electricity and cooling to a third-party company. Their margins (ie: taxable income) are thin because competition is high. A landlord's taxable income is not determined by his tenant's income.

                • causal 16 hours ago

                  You are being pedantic and pretending you don't understand. It's tiring and unproductive.

                  At the end of the day, people are paying money to utilize the servers within the datacenter. That money is revenue. That revenue ought to be taxed by the state.

    • rcbdev 20 hours ago

      > profits matter more than safety

      For all the big talk from U.S.-Americans on European 'overregulation', they sure seem to have much more dystopian societal failure modes materialize.

    • timmmmmmay a day ago

      It's in a former appliance factory that's right next to two pre-existing TVA power plants, a Nucor steel mill, and a sewage treatment facility. You've been lied to about how close it is to a residential area; just look at a map.

      • 0xbadcafebee a day ago

        "The independent study, conducted by EmPower Analytics Group and commissioned by the Southern Environmental Law Center, was led by a Harvard-trained environmental health scientist Dr. Michael Cork and shows that operation of xAI’s proposed permanent gas turbines would measurably increase health risks for families throughout the area—even in places as far away as Germantown and North Memphis." - https://www.memphiscap.org/

      • alsetmusic a day ago

        Air pollution travels.

    • matthewiiiv a day ago

      Are you going to stop using Claude Code then?

      • Zetaphor 21 hours ago

        Yes. Was that supposed to be a gotcha? Local models are becoming more useful, and I still remember how to write code.

        • andsoitis 21 hours ago

          Now that you have stopped using Claude Code, what have you replaced it with? I would love to know your setup. I am experimenting with local models too, but nothing comes close to Claude (Code), at least for me, and not just for coding, mind you.

          • zamalek 20 hours ago

            FWIW, if you want frontier-level performance as it was a few months back, Deepseek v4 and K2.6 are there. Almost zero chance you can run them locally, but you do have a choice of providers.

            Qwen-coder-next is considered SOTA among models you could actually run locally.

    • ETH_start 2 days ago

      Not every allegation that appears in print is true. One should be very skeptical about these kinds of allegations, especially when there are deep-pocketed corporations involved who can be sued or pressured to settle in the face of sufficiently "plausible and persistent" (to borrow Hazlitt's term) claims of harm done by their operations.

    • MagicMoonlight 2 days ago

      How would a data centre poison the water? They don’t produce any chemicals or do anything.

    • Rover222 19 hours ago

      Get out of your bubble, my god.

    • Bombthecat a day ago

      Who cares about Memphis! We need more AI bro! /s

  • arian_ 2 days ago

    Anthropic renting the data center Elon built for Grok is the kind of plot twist you can't make up.

    • brokencode 2 days ago

      Pretty smart for SpaceX, though. They’re turning an asset they built for a money pit (Grok) into what is probably a major source of revenue ahead of their IPO.

      • floatrock 2 days ago

        We all remember two weeks ago, when SpaceX bought $10B of Cursor services. https://news.ycombinator.com/item?id=47855293

        Since Cursor often relies on Claude models, some of those services will flow back to their own datacenter compute. Especially if there are, let's call them, "customer demand loadbalancing optimization agreements" that make those Cursor services prioritize Claude models using the app keys that get load-balanced onto the SpaceX datacenter.

        Did SpaceX just spend $10B to rent out its own datacenter, juicing their recurring revenue metrics with their own AI services investment?

        • selicos a day ago

          If the question involves Elon and fraud or pumping numbers, then the answer is yes.

        • brikym 6 hours ago

          With Anthropic's help. And when it's time for Anthropic to hype their IPO, maybe SpaceX will return the favour and offer some deal that looks great to retail investors.

        • predkambrij a day ago

          Either way, those datacenters now run Claude when they didn't before.

        • jordanb a day ago

          Big wheel keeps on turning

        • giwook 2 days ago

          I don't think it's the conspiracy theory that you're making it out to be.

          It is publicly known that the vast majority of deals in the AI space are circular in nature, without the need to explicitly encode any of it in a legal contract or even in tacit agreements.

          e.g. Nvidia has invested significantly in many AI companies, including both Anthropic and OpenAI, which rely heavily on Nvidia's hardware and will undoubtedly use some of said investment towards that end.

          • floatrock 2 days ago

            Nvidia and Oracle are already public companies; they're just aiming for their next quarterly statements.

            SpaceX is getting dressed for its debutante ball and putting on the makeup to make a grand entrance on the auction floor.

            Is there a difference? I legitimately have no idea. You are right that we can add another entry to the list of interconnected circular dealmaking. All this ain't gonna end well the next time the music stops playing.

          • svnt a day ago

            Your argument is that since it is common in a bubble to make circular deals, there is no conspiracy. But you seem to suggest that people committing tens of billions of dollars aren't looking any further down the pipeline than the name on the receiving bank account. Have you ever been anywhere near a large deal?

            • giwook a day ago

              That's a lot to imply from my simple comment. My viewpoint is actually the exact opposite of what you claim: it all feels like a house of cards that is set to collapse at any moment. I can also tell you're quite passionate about this and I wonder if that emotion is clouding your interpretation of what was meant to be an innocuous comment.

              My point was that there is a lot of this happening, it is not a unique statement nor is it surprising to see at this point.

              I made no attempt to dismiss or justify any of it.

              • svnt a day ago

                > I made no attempt to dismiss or justify any of it.

                > I don't think it's the conspiracy theory that you're making it out to be.

                Which is it then?

        • nutjob2 a day ago

          It's the circle jerk economy.

          Companies appear to be spending endless billions on AI but ultimately it's a huge wank.

        • dlev_pika a day ago

          When you put it like that, it sounds like another subprime crash in the making lol

      • Thrymr a day ago

        Sure, if "pretty smart" means overinvesting in capital spending on a dirty datacenter powered by unpermitted gas generators that you don't even need anymore because of a lack of demand for your product, so that you lease it to a competitor (presumably at a huge loss). I am not sure that "major source of revenue" as a datacenter provider is the kind of growth opportunity IPO investors are looking for.

        • cavisne a day ago

          They are definitely not going to be leasing it at a loss. GPUs are sold out; Anthropic will be paying a significant premium.

          • AntiUSAbah a day ago

            Anthropic doesn't have that much pressure to pay, while Musk has an IPO coming up and wants to clean up his numbers.

            It's also not a good sign, because he should be able to leverage Grok, his billion-dollar investment, instead of renting the datacenter out to Anthropic. But hey, what does it matter to investors? If the IPO explodes, it is clear that people either can't read, don't care, or don't understand.

        • hx8 a day ago

          > presumably at a huge loss

          Why do you say that? I was under the impression that everyone in the datacenter business was printing money.

          • XorNot a day ago

            Oracle certainly isn't.

            • signatoremo a day ago

              Says who? Oracle is spending a lot of money to get ready for AI customers like OpenAI. They aren't there yet. They can't lose money serving capacity they don't have.

      • 23rf 2 days ago

        It's not even that. It's better to be in the game with a leader, helping out a competitor who is competing against someone you don't like and don't want to win, than to sit it out.

        • giwook 2 days ago

          The enemy of my enemy is my friend.

          • bombcar a day ago

            Maxim 29: The enemy of my enemy is my enemy's enemy. No more. No less.

            • thrownthatway a day ago

              Ferengi Rule of Acquisition #76:

              Every once in a while, declare peace. It confuses the hell out of your enemies.

          • a4isms a day ago

            This is something you say aloud, while muttering "useful idiot" under your breath.

        • AntiUSAbah a day ago

          That's just bullshit in this corporate world.

          If he could fill his datacenter with Grok usage, he would make a lot more money.

          This is not a good sign at all.

      • runako a day ago

        The financials of a Musk company do not, and will not, affect investor sentiment in the slightest.

        Investors in the SpaceX IPO are buying a call option on Musk.

      • AntiUSAbah a day ago

        The weird thing is that the IPO might still work out for him and save his ass again.

        At least he doesn't come across as a happy person...

        I'm really curious, though, when someone might hit him back after all the garbage he did and still does.

      • BrianGragg 2 days ago

        I see it more as: let's make money off the hardware we aren't using anymore.

        From Elon on X: ... After that, I was ok leasing Colossus 1 to Anthropic, as SpaceXAI had already moved training to Colossus 2.

        https://x.com/elonmusk/status/2052069691372478511

        • philipwhiuk a day ago

          But like... most companies are so short of GPUs they'd run it on anything. SpaceXAI not needing the compute is not really a good sign imo.

          • scottyah a day ago

            Are you worried about Google too? They're selling compute. Same with Microsoft and Amazon. As far as I know, Anthropic is really the only one that's compute-bound.

            • lmm a day ago

              Amazon is a compute specialist, their competitive advantage is in the compute business. And conversely they're not really trying to play in the AI business, so it's not at all suspicious that they don't want to use all their compute themselves.

              I am worried about Google and Microsoft, yes.

              • scottyah a day ago

                Amazon tries to make money at pretty much everything they can. They are investing a LOT in AI even if it's not a consumer-facing chatbot.

            • fhn a day ago

              Google's, Microsoft's, and Amazon's business models include selling compute; SpaceX's, not so much.

              • signatoremo a day ago

                Why not? They can sell what makes sense for them, like surplus capacity, especially when there are desperate buyers.

              • scottyah a day ago

                Amazon is a bookseller and Google is just a web indexer. GCP didn't even open its preview until 2008. Not sure why you think a business model is in any way a static thing.

              • skinfaxi a day ago

                Add Lidl to your list.

            • dzhiurgis a day ago

              > As far as I know Anthropic is really the only one that's compute-bound.

              I use Gemini models daily. JetBrains tells me when they are overloaded and switches to an alternative (usually OpenAI, which turns everything to shit). I'd say this happens about fortnightly.

              It's a good litmus test and forecaster of AI demand, and I wish we had more visibility.

              • elxr a day ago

                Is Gemini really better than GPT 5.5 currently? I haven't seen much sentiment in that direction.

                • igor47 a day ago

                  No, it's really bad

    • cedws 2 days ago

      It was pretty obvious to me that the merger was a way of quietly shutting xAI down in a way that keeps investors happy. With it also being used as a vehicle to offload the Twitter debt onto the public, he certainly has good accountants.

      • HarHarVeryFunny 2 days ago

        Yep - and in the meantime it's an asset of SpaceX to boost their IPO price, as long as this is done before people realize that xAI is apparently becoming a datacenter company, not an AI one.

        Then you've got SpaceX buying 1200 Cybertrucks from Tesla, so it's serving as a failure-laundering vehicle for all his endeavors.

        • nerdsniper 2 days ago

          > it's serving as failure laundering vehicle for all his endeavors.

          Which would be fine with me if Tesla weren't a publicly traded company and SpaceX weren't about to IPO. Juicing companies in a way that affects the open stock market feels very inappropriate.

        • kcb a day ago

          Elon Musk has been "failing any minute now" since, like, what? 2015?

          • HarHarVeryFunny a day ago

            I didn't say he's failing at everything - SpaceX certainly seems a huge success. Tesla had been doing well, although sales are now declining fast, and the Cybertruck has been a failure. He massively overpaid for Twitter, ruined the site, then got X.ai to bail him out. X.ai seems like a failure - evidently not enough demand to utilize the data center he built for it, and when have you seen anyone say they use Grok for anything?

            And now SpaceX investors are going to be left as the bag holders for X.ai/Twitter.

            • 23rf a day ago

              He's a big gambler with some judgement. But being a big gambler by definition means you will not always get it right.

              • scottyah a day ago

                It is always so odd seeing how many internet people consider any new attempt that doesn't immediately go viral with success to be a bad mark on someone's character.

                • HarHarVeryFunny 21 hours ago

                  If you're not able to see a whole slew of "bad marks" on Musk's character, then you haven't been paying much attention. It's not either/or - you can be successful in some areas while being a childish twit and moron in others.

                  • scottyah 16 hours ago

                    I think I've just seen many more fake/exaggerated "bad marks" than real ones, so I've become bitter. He's definitely not been perfect, but the areas where I see flaws (he can be extremely rude, some drug abuse, doesn't treat close/loved ones well, seems to lash out when getting too close to someone, can be very ego-driven and doesn't admit it until it's too late, constantly needs to be at war with someone) are always just passed over for "He's a LiTErAL NAzI" and "He is HOARDING all the MonEY because he's SO GreedY", which are just so demonstrably false.

                    Overall, though, to classify the work he's done and his impact on the world as unsuccessful is just insane. It's almost always from someone who hasn't even managed to lead a team of 10 through one project.

            • HumanOstrich a day ago

              Hey, Grok is pretty good for meme videos and pics. For anything serious, not so much.

            • simianwords a day ago

              He didn’t ruin the site. I think you don’t use it much, so you don’t know. It’s pretty good now.

              • HarHarVeryFunny 21 hours ago

                It's unusable nowadays if you use the "For You" feed, and even if you stick to people you follow, most of the interesting people have left.

          • AntiUSAbah a day ago

            Yeah, his luck is annoying. But he stopped having any character and ethics. He literally stood with his Tesla in front of the White House and bought himself a seat next to a clown.

            But he also plays in areas where market disruption can't be done by many people at all.

            But look at Tesla: he did the Cybertruck debacle, he tanked Tesla as a brand, he is burning money on xAI and Twitter, and he destroyed Twitter, a beloved brand. He did the Boring Company garbage.

            The only thing this shows is some kind of masterclass in manipulation, public ignorance, luck, and the economics of high-investment, high-risk, risk-averse industries.

            Starlink doesn't scale very well and is a low-margin business, especially as Amazon and the others join the club.

            xAI is just a loss.

            Twitter is probably still a loss.

            Tesla made a lot of money with CO2 certificates, and in a market where people were quite ignorant for a long time.

            SpaceX he wants to push to the death, without a real end plan. He now talks about Mars and datacenters in space like there is any real business up there.

          • blueaquilae a day ago

            If only those people listened to your guidance!

      • redox99 2 days ago

        Why would they spend $10B, and potentially $60B, on Cursor if they were going to shut xAI down? And I'm pretty sure Elon wants to have a model of his own, even if weaker, so it's "not woke".

        • whamlastxmas 18 hours ago

          The idea that they're shutting down xAI is outrageously dumb, and to me it just sounds like FUD before the IPO.

      • charlieflowers 2 days ago

        Not a merger, right? Unless I missed something (admittedly skimming).

      • nprateem 2 days ago

        Yeah, it's corporate subprime. Bundle a load of overpriced "assets" with made-up valuations into something that's actually valuable, then shove it onto the public markets so everyone has to buy it in their index trackers.

    • freakynit a day ago

      Excess money and influence make a lot of things possible. Evil or good, that's a separate discussion.

    • aurareturn 2 days ago

      Plot twist, but it makes perfect sense for both companies.

      Anthropic gets the compute they so desperately need to keep growing. Elon rents out compute that xAI couldn't make use of due to little demand for Grok. SpaceX gets revenue on the books for the IPO.

      PS. I want to translate this part:

        We’re very intentional about where we’ll add capacity—partnering with democratic countries whose legal and regulatory frameworks support investments of this scale
      
      To real speak:

        We're putting profits above anything else. Yes, Elon is a far right guy who supported Trump, a president who isn't very democratic, but we're just really desperate for more money. We're also trying to make you forget that xAI is funded by Middle East non-democratic governments. Heck, we'll even buy compute from China if we can sell Anthropic models there.

      • VortexLain 2 days ago

        >we'll even buy compute from China if we can sell Anthropic models there.

        Considering that Anthropic mass-bans Chinese users' accounts based on VPN use (VPNs being used to circumvent the Chinese firewall), and then demands an ID or a residence permit from a country where Claude officially works to ensure that the user doesn't live in China, that seems unlikely.

        • aurareturn 2 days ago

          If the Chinese government tells Anthropic they can freely sell Claude in China, Dario will suddenly be kissing China's ass instead of saying we can't let China win the AGI race, for the sake of democracy and Western values.

          • BrianGragg 2 days ago ago

            After they told the US government no on a very large contract, I find that hard to believe.

            • SyneRyder a day ago ago

              While I agree with the sentiment, $200 Million is really not a big contract for Anthropic when they're on $44 Billion annual revenue. It's less than half a percent.

              https://www.wsj.com/tech/ai/anthropic-ai-defense-department-...

            • XorNot a day ago ago

              They told the US government no on using Claude for approving lethal military strikes.

              China can get plenty of value from Claude without needing to use it for anything similar.

              They very specifically avoided a trap where, the next time the US blows up a school full of children, Claude would very obviously get the blame.

      • phatfish a day ago ago

        Which naive souls are downvoting this? Anthropic is speed running Google's "don't be evil" mantra.

      • toephu2 2 days ago ago

        > funded by Middle East non-democratic governments

        What's the problem here exactly? Are you insinuating any non-democratic government is bad and evil and only democratic governments are the correct and right way to govern? sort of like: "there is only one true prophet, and it's the one I follow, and all the others are false!"

        • driverdan a day ago ago

          > Are you insinuating any non-democratic government is bad and evil

          The ones run by people who chop up journalists certainly are.

        • aurareturn 2 days ago ago

          No, I didn't say that.

          My point is that Anthropic cares a lot about "democracy" but will buy compute from a data center mostly funded by non-democratic nations.

          • trollbridge 2 days ago ago

            Who do you think are the sources of funding for Anthropic's lead investors?

            • aurareturn 2 days ago ago

              Your tone suggests I'm unaware of the fact that Middle East money heavily invests in American AI companies and data centers.

        • lern_too_spel a day ago ago

          Anthropic brought up the "democratic" justification, not GP. GP was just pointing out that Anthropic doesn't actually care. If it can get a sweetheart deal from an autocrat, it'll take it.

          But assuming there are people that care, if a government doesn't derive its right to govern from the will of the people it governs, under what definitions can it be considered legitimate? Divine right of kings?

        • whamlastxmas 18 hours ago ago

          I would say ruling over people without their consent is blatantly morally wrong, yes. In the same way anything non-consensual is wrong

        • shimman 2 days ago ago

          Yes, especially in the context of supporting US imperialism and capitalist interests (perpetual war + extraction machine) over what would actually benefit Americans: peace + cooperation initiatives. Something also tells me that American civilians would rather cooperate with peaceful governments than those that feed the blood machine.

          America could do so much to compel the world to work from a human rights perspective rather than petrodollars. I can't imagine any serious person would say the average American benefits from US imperialism. All US politicians did was trade away a secure middle-class lifestyle for cheaper widgets, hardly anything worth caring about.

          Who benefits from American petrodollar policies? Not Americans, all the wealth gets extracted to the elites while civilians suffer from the imperial blowback/boomerang.

          Look at what the New Deal coalition brought in; they nearly burnt out enough to allow neoliberalism to flourish during their fall. What do we have in return? No universal healthcare, no universal childcare, a broken welfare system, increasing income inequality, and the loss of the ability to make a better life.

          • Footnote7341 a day ago ago

            IDK what world you're living in, but in the real world Americans are the richest people on earth and richer than ever before in real terms. And yes international seigniorage is part of that.

      • 2ndorderthought 2 days ago ago

        Don't forget the whole "maybe this will make it easier for xAI to distill Anthropic models and we can make another attempt at MechaHitler" angle

      • foobar_______ 2 days ago ago

        Thank you for the "real speak" section. Accurate and hilarious.

    • dboreham 2 days ago ago

      I'm just relieved to read that it isn't in fact...in space.

      • stevefan1999 a day ago ago

        SPAAAAAAAACEEEEEEEEEE (it is a Portal 2 space sphere reference)

      • georgemcbay a day ago ago

        I'd rather it be in space than where it is now, poisoning people in the rural parts of Memphis with off-gassing from their methane turbines.

        • robwwilliams a day ago ago

          Urban/industrial and refinery complex; not at all rural. Located about 8 miles southwest of downtown Memphis (3231 Paul Lowery Rd) on a bend of the river.

  • gpugreg 2 days ago ago

    > As part of this agreement, we have also expressed interest in partnering with SpaceX to develop multiple gigawatts of orbital AI compute capacity.

    Anthropic is either taking this space business more seriously than the general public, or posting this sentence was part of the deal to get the compute.

    • airspresso a day ago ago

      > posting this sentence was part of the deal to get the compute

      This 100%

      • londons_explore a day ago ago

        Expressing an interest is presumably free... So anthropic might as well.

      • theptip 18 hours ago ago

        But also - anyone would be interested in purchasing orbital compute at the price Elon is quoting.

      • scottyah a day ago ago

        Source? I'd like to read more on Anthropic's views of space compute

        • DANmode a day ago ago

          > As part of this agreement, we have also expressed interest in

          • stingraycharles a day ago ago

            Which is now their official position I guess as this whole “AI space datacenter” stuff is a significant part of the whole SpaceX IPO.

            I assume privately they may not share that opinion, but it’s not in Anthropic’s interest to talk about this (very little to gain, and may ruffle a lot of feathers if they say the wrong thing).

            • ethbr1 a day ago ago

              AI space datacenters only make sense from one perspective -- sovereignty.

              If you're someone with a lot of money, who dislikes governments meddling in your business, and often pisses off governments...

              ... oh, I see why this is an Elon talking point now.

    • Sevii 2 days ago ago

      Anthropic needs any compute they can get, so if Elon wants to build orbital data centers, Anthropic would be happy to run models on them. There isn't really any doubt that Elon can build orbital data centers; the question is whether they are economical compared to earth-based ones.

      • 23rf 2 days ago ago

        I love how this line of thinking completely avoids the issue re. improvements in local models.

        I suppose if you are desperate to justify a large investment, this is what you would do - frame the story in a particular way.

        • impulser_ a day ago ago

          Local models are always going to be useless unless compute gets significantly cheaper, and it's not getting cheaper. TSMC might literally run out of capacity to build any consumer compute product.

          Once compute constraints ease up, you will see much larger models. The reason LLMs seem to have stalled a bit is that there's just not enough compute.

          You have more people using AI, which requires more compute, and you want to build larger models, which also requires more compute, and you have limited compute. What do you do?

          • 23rf a day ago ago

            Right.. and computers were once the size of a large room vs now fit into a pocket.

            " The reason LLM seems to have stalled a bit is because there just not enough compute."

            lol okay mate.

            • s08148692 a day ago ago

              > Right.. and computers were once the size of a large room vs now fit into a pocket.

              and yet now we have far bigger rooms with far bigger computers anyway

              Hardware may improve exponentially, but demand for compute increases double-exponentially. We'll always need more, bigger computers

      • cyclopeanutopia 2 days ago ago

        What are you talking about

        There is no doubt that it's not a serious idea.

        • charlieflowers 2 days ago ago

          Help me understand why not? I know solar power generation in space, and "beaming" the power back, was a naive idea. But this would actually use the power up there, mostly for training, but also for inference.

          That claim seems reasonable. I have zero knowledge of the economics of launching and maintaining satellites though.

          • PufPufPuf 2 days ago ago

            As I understand it, the problem is cooling. There isn't any medium to take away the heat, so the only option is to slowly radiate it away.

            • Gagarin1917 a day ago ago

              Which is apparently manageable. Scott Manley isn’t an industry veteran, but he does know a lot about space engineering and science. Here’s his breakdown of the feasibility, and heat management is not really a major issue:

              https://youtu.be/DCto6UkBJoI

              • throwaway85825 a day ago ago

                Heat management is not a technical issue but a reliability issue.

                • londons_explore a day ago ago

                  These satellites will be in orbits where they are always illuminated. That means constant temperatures, which means no thermal cycling and no reliability concerns.

                  When people say 'running it hot is bad for reliability', they mean 'running it hot and then bringing it back to room temp from time to time will eventually kill it'.

                  • throwaway85825 a day ago ago

                    It's in space, which requires liquid cooling. No rocket is big enough, so it has to be assembled on orbit. No terrestrial liquid cooling system is 100% leak-free.

            • Sammi 2 days ago ago

              Anyone who has googled just once to ask if datacenters in space make any sense, has found out they don't because they can't get rid of heat.

              That leaves only two kinds of people left who are still talking excitedly about datacenters in space: The uninformed and the grifters.

              • dgfl a day ago ago

                The existence of Starlink proves that this is false. Look at most current pitches; they don’t talk about GW-class monsters anymore. There’s absolutely nothing stopping a 20-30kW satellite bus the size of Starlink’s (or I guess up to 100kW once Starship is available - it’s all about payload fairing diameter) from hosting ~1 rack of compute and antennas. The economics may or may not make sense; we’ll have to see.

                There’s very little research work needed to make this happen; it’s all about engineering some satellite buses and having them fly in close formation to get a “data center”. This group of satellites in sun-synchronous orbit would relay to a comms constellation (e.g. Starlink itself) and operate as a global-scale data center. The heat management and orbital mechanics are all straightforward, really.

                • Sammi a day ago ago

                  I've heard this before. A datacenter and a Starlink satellite are not in the same ballpark of power usage and heat dissipation needs. They are orders of magnitude apart.

                  • dgfl 20 hours ago ago

                    The point is that you don’t need to put a whole datacenter into a single satellite. You can put a single rack per satellite and have different racks communicate via antennas, laser links, or perhaps even wires since they’ll be launched in groups of 10-50 anyway. You could also dock them to each other, but that’s not necessarily needed.

                    • niam 17 hours ago ago

                      I don't understand what makes these "datacenters" if they're distributed across satellites with WAN-esque interconnect.

                      Are we overloading the term "datacenter"? Or is it not overloaded but somehow able to achieve datacenter-like speeds / (tail) latency even when distributed across satellites?

                    • Sammi 14 hours ago ago

                      Ok, but then what's the point? How is having a small amount of compute in space useful?

                • dmlittle a day ago ago

                  It's worth noting that GPUs have a much higher failure rate than traditional CPUs - over 10x, due to thermal stress. The amount of heat generated is very different. You can't really replace a GPU in a satellite (at least today?), which would leave most of these satellites as space debris on a ~5 year horizon.

                  • sroussey a day ago ago

                    Usually satellites use an older process node, as newer nodes are easily bit-flipped by radiation. And blocking radiation is heavy.

                    AI calculations may tolerate wrong results better than CPUs, where software will tend to panic.

                  • lijok a day ago ago

                    Which is the same lifetime as a starlink sat

                    • gambiting a day ago ago

                      So what exactly is the benefit of having that thing in orbit then, where it costs you millions of dollars to put it there?

                      • Footnote7341 a day ago ago

                        The current bottleneck on compute is power and zoning. Solar panels are 5x more efficient in space, and there is no zoning in space.

                        • jsnell a day ago ago

                          The current bottleneck is silicon. Every chip that is manufactured gets housed and powered. (It makes sense: the cost of compute is dominated by capex, the power costs are irrelevant, so they're ok paying a premium for power).

                          The space data center hypothesis relies on compute supply growing faster than power supply. (Both are bottlenecked on parts of the supply chain that will take ages to scale.)

                          Even if you believe that's the case, the point at which orbital data centers start making sense is incredibly sensitive to the exact growth rates.

                          • lijok a day ago ago

                            The current bottleneck is not silicon. There is plenty of silicon locked up in previous gen GPUs that are no longer efficient enough to run relative to newer models. The bottleneck is the economics of owning the older GPU models - which is why all the GPU neoclouds are gonna go bust unless they can get customers to continue renting old GPUs.

                            The economics are vastly different when opex is near zero for these things

                            • jsnell a day ago ago

                              All of that is incorrect.

                              H100 rental prices are still as high as when the cards were brand new. The prices vastly exceed the power costs.

                              In a world where power or DC permits are the current bottleneck those H100s would be getting retired in favor of Blackwells. But they aren't. They are instead being locked in for years long contracts.

                              • lijok a day ago ago

                                Why exactly would the H100s get retired for Blackwells if specifically power and DC permits were the bottleneck?

                                • jsnell 15 hours ago ago

                                  Because they are >10x more power efficient.

                                  If silicon were relatively abundant and power/DC space scarce, you'd get an order of magnitude more bang for the Watt by replacing the H100s with newer GPUs.

                                  But nobody is doing that. Blackwells are being installed as additional capacity, not Hopper replacements.

                                  So it is pretty clear that silicon is the primary bottleneck.

                                • brohee 13 hours ago ago

                                  Because you'd need to trash the old GPUs in order to make room for new GPUs. Right now new GPUs get online mostly in new DCs. TSMC fab capacity is much more limiting than DC building and it will likely keep being the case. It's much easier to build a DC than a fab.

                      • lijok a day ago ago

                        Millions of dollars? Where did you get that number from?

                        • gambiting a day ago ago

                          ...how much do you think each rocket launch costs?

                          • lijok a day ago ago

                            Not millions of dollars per sat. Are you being intentionally obtuse?

                            • gambiting a day ago ago

                              Are you intentionally misreading what I'm saying?

                      • dzhiurgis a day ago ago

                        Self destruction is a feature, not a bug.

                        That said eventually they can be lifted to higher orbits and have robots deliver and swap updated compute (if not made in space itself!).

                • zbentley a day ago ago

                  "Space datacenter" -> overpriced starlink with some shitty edge compute -> "look guys, we built a space datacenter; earnings results to follow" -> number go up.

                • dpkirchner 19 hours ago ago

                  How much power could we get out of the fuel required to launch a 20-100kW rack in to space, if we were to burn it on the ground?

                • aetherspawn a day ago ago

                  I don’t think sun-synchronous orbit is possible except in LEO.

                  LEO is high risk, and Starlink satellites deorbit or burn up all the time. Not good from a capex POV on graphics cards.

              • redox99 2 days ago ago

                The area you need in radiators is only half the area you need in solar panels. So it's definitely not a deal breaker.

                It's still very dumb because of economics, logistics, serviceability and more.
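
For what it's worth, the claimed ratio survives a napkin check. A minimal sketch, assuming a two-sided radiator at 300 K with emissivity 0.9 versus 25%-efficient panels at the 1 AU solar constant (all numbers are my assumptions, not from the comment):

```python
# Napkin check: radiator area vs. solar-panel area for a satellite bus.
# Assumptions (mine): 300 K two-sided radiator, emissivity 0.9,
# 1361 W/m^2 solar constant, 25% panel efficiency, 1 MW of load.
SIGMA = 5.670e-8                                  # Stefan-Boltzmann, W/m^2/K^4
T_RADIATOR = 300.0                                # radiator temperature, K
EMISSIVITY = 0.9
radiated_per_m2 = 2 * EMISSIVITY * SIGMA * T_RADIATOR**4   # both faces radiate

SOLAR_FLUX = 1361.0                               # W/m^2 at 1 AU
PANEL_EFF = 0.25
generated_per_m2 = SOLAR_FLUX * PANEL_EFF

power = 1e6                                       # 1 MW of compute to reject
solar_area = power / generated_per_m2
radiator_area = power / radiated_per_m2
print(f"solar: {solar_area:.0f} m^2, radiator: {radiator_area:.0f} m^2, "
      f"ratio: {radiator_area / solar_area:.2f}")  # ratio comes out ~0.41
```

Under these assumptions the radiator needs roughly 40% of the panel area, so "half" is in the right ballpark; running the radiator hotter shrinks it further thanks to the T^4 scaling.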

                • prepend a day ago ago

                  Solar on earth was dumb because of logistics, right?

                  Things get cheaper.

                  • redox99 a day ago ago

                    Long term maybe, but I don't think it makes sense now, and won't for many years.

                • scottyah a day ago ago

                  Pretty much everything has been "very dumb because of economics, logistics, serviceability and more". What kind of hacker are you to be on this site lol

              • small_model a day ago ago

                SpaceX has presented on this and it's fairly straightforward; they already do it with Starlink satellites, just at a larger scale. Sounds like you are the uninformed one (or an EDS victim)

                • ceejayoz a day ago ago

                  Starlink satellites don't generate the sort of heat a datacenter full of GPUs does. The ISS has enormous radiators, and it's only in space because it's a space station. Putting datacenters there is just goofy given the amount of available space on the ground.

                  • ericd a day ago ago

                    All of that has been repeatedly addressed in anything that discusses it, if you care to try to understand. It has ~nothing to do with available space, the US grid can’t handle the current rate of expansion. It’s bad enough that apparently Span, the smart electrical panel company, is pitching a box full of Blackwells that’ll sit outside new construction homes and use all the headroom on residential 200A circuits. Space is starting to look reasonable.

                    Related, US readers should call their reps and ask them to support a successor to EPRA, the Energy Permitting Reform Act, the vast majority of the generation that’s waiting for approval is from clean energy sources. It nearly got over the line before the last Congress ended, and it’s one of the most impactful things we can do to combat climate change, combined with electrifying various carbon intensive activities.

                    • ceejayoz a day ago ago

                      > the US grid can’t handle the current rate of expansion

                      This is a self defeating argument. Neither can space!

                      Any scenario in which you can get data centers and power into orbit is easier on land.

                      • ericd a day ago ago

                        Not quite, I'm rooting for the solar/battery microgrids down here, one of the startups I've invested in is working on those, but you don't really even need batteries for panels in a dawn-dusk sun synchronous orbit, which is a pretty huge advantage. Also, there aren't weeks where you have 1/4 the output because it's just cloudy all week, and your output isn't crushed during winter.

                        And the hardest part of my home solar install, by far, was the counterparties (inspectors, power company, and subcontractors). My understanding is that it's much worse when you're trying to get a grid scale install online, the interconnection queue is currently years long. This avoids most counterparties except the ones they're already routinely dealing with.

                  • lijok a day ago ago

                    Why are you comparing the output of a datacenter to the output of a single sat?

                    How much power do starlink sats draw and how does it compare to say 8x H200s?

                    • ceejayoz a day ago ago

                      > This gives us access to more than 300 megawatts of new capacity (over 220,000 NVIDIA GPUs) within the month.

                      27,500 satellites need launching - fast! - just for Claude to meet a demand spike?
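
For what it's worth, the arithmetic behind that satellite count, assuming ~11 kW of usable power per Starlink-class bus (my assumption; the per-satellite figure isn't stated in the thread):

```python
# How many Starlink-class satellites to host 300 MW of compute?
# 11 kW usable per bus is an assumption; a 20-30 kW bus would roughly
# halve the count.
capacity_w = 300e6           # the "300 megawatts" quoted from the announcement
per_sat_w = 11e3             # assumed usable electrical power per satellite
sats = capacity_w / per_sat_w
print(f"{sats:,.0f} satellites")   # → 27,273 satellites
```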

                      • lijok a day ago ago

                        There are over 10k starlink sats in orbit already. They’re obtaining approval for another ~30k.

                        So clearly not a problem for them.

                • Sammi a day ago ago

                  I've heard this before, and these are not comparable at all. Starlink is missing a few digits in its power usage and heat dissipation needs compared to a datacenter.

                • kristjansson a day ago ago

                  Why did we start saying EDS this week, just in time for the IPO?

                • wbxp99 a day ago ago

                  EDS? Like still believing Elon's claims are truthful?

              • lijok a day ago ago

                what do you mean they can’t get rid of heat? radiators exist

                • ceejayoz a day ago ago

                  https://en.wikipedia.org/wiki/External_Active_Thermal_Contro...

                  All that gets you 70kW of cooling. Radiating to vacuum isn't very efficient.

                  • lijok a day ago ago

                    And that’s sufficient for roughly 100 unoptimized DC-grade H200s.

                    Not efficient, and it doesn’t have to be, because the cooling system has 0 opex cost. And the capex clearly can be made to work.

                    • ceejayoz 17 hours ago ago

                      OK, so you've got a football-field-sized solar and radiator array to support 100 H200s.

                      Why are we not building it on land again in some abandoned mall's parking lot?

                      • lijok 16 hours ago ago

                        Because it’s allegedly more expensive

                  • scottyah a day ago ago

                    Dang, sucks we can never improve any technology. Let's just call it quits, guys.

                    • cyclopeanutopia a day ago ago

                      Maybe one day we can, but it's definitely not in a category "there is no doubt".

                      • scottyah a day ago ago

                        Of course not, where's the fun in that category?

                    • ceejayoz a day ago ago

                      Technology is wonderful.

                      Physics still gets a say.

                  • dzhiurgis a day ago ago

                    Cooling a space station is very different from cooling chips. One requires extensive piping, the other a simple radiator.

                    • ceejayoz a day ago ago

                      Both require the same thing - moving heat - and you’ll find plenty of piping in a datacenter for this reason.

                • joe_mamba a day ago ago

                  Space radiators are not very efficient due to lack of airflow in space.

                  • lijok a day ago ago

                    Efficiency in the cooling loop is of no consequence as it has 0 opex cost in space. Do the capex numbers make sense?

              • dzhiurgis 2 days ago ago

                Scott Manley, I’d say one of the top pop space YouTubers, says otherwise. If anything it’s easier in space. On earth most of the complexity in a datacenter is cooling. In space you just radiate it away.

                And SpaceX has already proven they can launch sort-of datacenters 10k times by launching Starlink (up to 20KW of solar each IIRC).

                FWIW Musk should support Bernie Sanders more. Putting moratoriums on datacenters would make space-based ones far more economical.

                • vel0city a day ago ago

                  He just mentions and walks through the idea of having some amount of compute up there and what the heat rejection calculations roughly look like. He doesn't actually explore the economics of doing such a thing or discuss whether it's actually worth doing.

                  It's not that you can't put a server in space, but the costs to do it almost assuredly don't make any sense. Because if you can do it in space, you can do it more easily on the ground and save yourself millions in launch cost and extra complexity. Your cooling challenges are way cheaper and simpler in an atmosphere.

                  There's nothing much being in space really gets you, other than it makes it harder for a government to take your computers away. Not impossible, just harder.

                  • scottyah a day ago ago

                    Especially with everyone clamoring to have datacenters built in their backyards. There's absolutely no way there can be an advantage to figuring out compute outside Earth's magnetosphere, especially since none of the engineers at SpaceX would ever think of any long-term benefits of that.

                  • dzhiurgis a day ago ago

                    I'm just responding to op saying it's impossible to get rid of heat. None of us touched economics.

                • mcmcmc a day ago ago

                  “YouTuber” is an extremely poor qualification for a supposedly trusted source

                  • boredatoms a day ago ago

                    He's a physicist though, not just a mic jockey

                  • kibibu a day ago ago

                    Let's hear what Big Money Salvia has to say about all this

          • runako a day ago ago

            Cost.

            The economics don't work unless Starship is doing flights in quantity, and it has met or exceeded its cost targets.

            Roughly, a single rack plus solar to power it in the $15m+ range just to launch. (This assumes power dissipation is handled via some means that does not require launch to orbit. Also does not include batteries.) Choose your own hardware for the rack, but call it < $5m.

            SpaceX earning $15m every time someone launches a $5m rack would be a great business for SpaceX.

            Use your own calculator/LLM, but mine is suggesting that the ~$7B Colossus 1 data center in TFA would be around $50B if launched on Falcon 9 (still ignoring cooling and batteries).

            (There are obviously a lot of other asterisks. I'm ignoring power storage and heat dissipation. Maintenance probably doesn't matter given 75% of cost is in the launch. Network bandwidth could be a problem considering how DCs are used. Competition - if Company A spends $100B for $25B of actual AI infra, how competitive will they be against Company B who gets $100B for their $100B by spending it in Canada or Mexico, which they can do right now? Etc.)

            None of this works without Starship, which has not set a date for its first LEO insertion test yet. Yet the whole point of orbital DCs is nothing on the ground can move fast enough, hence the rush to orbit...which can't really move at all right now.

            No, it doesn't make any sense.
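
For what it's worth, the parent's ~$50B figure can be reproduced like this (the $15M launch and $5M hardware numbers are from the comment above; the rack density is my assumption):

```python
# Sketch of the parent's Falcon-9 launch-cost estimate for a
# Colossus-1-scale cluster. Rack density is assumed, not from the comment.
gpus = 220_000               # GPU count quoted earlier in the thread
gpus_per_rack = 88           # assumption: 11 servers x 8 GPUs per rack
launch_per_rack = 15e6       # parent's launch cost per rack + solar, USD
hw_per_rack = 5e6            # parent's hardware cost per rack, USD

racks = gpus / gpus_per_rack                 # 2,500 racks
total = racks * (launch_per_rack + hw_per_rack)
print(f"~${total / 1e9:.0f}B all-in")        # → ~$50B
```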

          • super256 a day ago ago

            In space you get bit flips fairly quickly when using very small transistors. You would have to run stuff on fairly old hardware, which probably makes the whole thing economically inefficient for serious "computation in space".

    • JMKH42 2 days ago ago

      I don't think space compute is going to work out, but I would certainly say "yes happy to buy space compute from you in the future if you offer it at a good price"

      If it happens it happens, if not, it doesn't.

      • CamperBob2 2 days ago ago

        It makes no sense. We're being presented with a forced choice -- put them in space, or put them in the middle of downtown Seattle.

        This is stupid. I don't understand what's happening... specifically, what mental virus is spreading that lowers everybody's IQ by 10-20 points, evidently including my own. Put the data centers in the ocean, powered by solar and networked with Starlink or LEO. Put them in the desert. Put them 20 miles south of Nowhere, Idaho.

        But space?!

        • Karrot_Kream 2 days ago ago

          Because the US has levied high tariffs on solar cells, can't build its own solar cells economically enough, and has such a torturous permitting system that it can't build transmission lines. Natural gas is the only form of generation that's easy to permit outside cities (due to pipeline agreements and this admin fast-tracking natural gas generation approval), but few cities will allow one. DCs need to be built within low-latency interconnect of urban areas or else they become uncompetitive.

          Elon claims (which I take with a huge grain of salt because he's made endless broken promises in investor calls and interviews) that he disagrees with the administration's stance on solar and would use it to power his DCs if he could, but contends that permitting is a huge problem.

          The US needs to figure out how to build again.

          > This is stupid. I don't understand what's happening... specifically, what mental virus

          "Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes"

          • CamperBob2 2 days ago ago

            What does that have to do with my point? Space-based data centers need solar cells too. They are just like terrestrial data centers, only more expensive. For every dollar you save on the PV array, you'll spend two more on radiators.

            And you don't need permits in international waters, any more than you need them in orbit. Lease space on container ships.

            • Karrot_Kream 2 days ago ago

              The argument is that it's too hard to gain the necessary approvals on Earth such that space is faster and easier. Not sure I buy it fully (I do see it somewhat), but that's the argument.

              • 0cf8612b2e1e a day ago ago

                The permitting slowdown is if you want to connect to the grid. If you want to run solar behind the meter, you can go nuts.

                • Karrot_Kream a day ago ago

                  Sure but acquiring enough land to build solar profitably near a DC that can hook up to the big US interconnects is very expensive and will often be blocked by residents. If you want to build solar where there are few people and cheap input costs, then you need to transmit the power to the DC.

                  • 0cf8612b2e1e a day ago ago

                    I keep hearing about brand new data centers they want to create. Seems reasonable to go to sunny, enormous, business friendly Texas and surround the data center site with acres of solar panels, batteries, emergency gas, and whatever sized grid connection you can get approved immediately.

                    If the DC is for training or text inference, latency seems irrelevant, so go where you can quickly plop down power.

                    • Karrot_Kream a day ago ago

                      Texas is actually absorbing a lot of the US's new generation capacity (though the grid there remains dirty).

                      It's fraught to build a DC for a single purpose because it reduces the value of the DC; a DC that serves multiple purposes can handle other workloads. Moreover, even if inference tolerates delay, latency does still matter, and it costs quite a bit to light up new capacity (you still have to run fiber to an interconnect, and depending on how far you are, this can get expensive fast).

                      • 0cf8612b2e1e 15 hours ago ago

                        I guess, but the amount of money getting thrown around is just stupid. Having to spend a few million to light up some more fiber is a drop in the bucket.

                        Supposedly some of the behind the meter gas turbines that have been getting installed are rated for a ten year service life. The DCs are burning them out in 10 months from rapid cycling. If they are willing to treat $10-100 million generators as disposable, cost seems irrelevant.

        • zeafoamrun a day ago ago

          The middle of downtown Seattle would be greatly improved if it were replaced by a giant data center.

    • joshstrange 2 days ago ago

      Ehh, I think they are just "kissing the ring". This was part of the agreement for the terrestrial datacenter access: pretending that orbital compute is more than the boondoggle it clearly is.

      I want to be clear, I do think that one day something like that will exist, I just don't think it's anywhere close to being a reality, much like FSD.

      Also it costs them almost [0] nothing to say it and then later come up with some reason why they are no longer interested.

      [0] Maybe a little bit of respect

    • re-thc 2 days ago ago

      > or posting this sentence was part of the deal to get the compute

      All it says is expressed interest.

      That's like a casual "how are you"...

    • anthonypasq 2 days ago ago

      Most of the big tech CEOs have mentioned this.

      • shimman 2 days ago ago

        Most big tech CEOs are people who only "succeeded" due to having an unregulated monopoly or picking the right lotto ticket, not due to any innate above-average intelligence. Go look at the hundreds of billions in wasted capital and tell me who benefited from such waste while workers and children suffer from lack of medical care.

        You honestly expect this trajectory to continue unabated?

        • pdimitar a day ago ago

          > You honestly expect this trajectory to continue unabated?

          Knowing humanity's history, yes. Not sure we're ever going to see a second French Revolution. People are pacified and are not rioting. And they really should. Most of us are kind of privileged. I know people out there who are barely holding on and the recent fuel + food price increases might push them over the edge to actual poverty.

    • Rover222 2 days ago ago

      It’s weird to not take this seriously. It’s obvious it’s serious and they’re pursuing it.

      • s08148692 a day ago ago

        The whole armchair engineer debate online about this is hilarious

        I'm just a software engineer, all I need to know is SpaceX is aggressively pursuing this - that's enough for me to believe it's viable

        SpaceX operates literally orders of magnitude more satellites than anyone else. If anybody understands the physics and engineering of space compute, it's SpaceX. Laypeople debating this online are just showing their ignorance as far as I'm concerned, and it mostly comes from an emotional place of wanting Musk enterprises to fail

      • scottyah a day ago ago

        Thank you for a reasonable comment. I know internet people love to comment on how "dumb" things are, but we're seeing a growing group of funded, motivated, and intelligent people working towards a common goal. It's at least something to be curious about, I wish the comments were more oriented towards in-depth discussions on the actual current blockers.

    • sourcegrift a day ago ago

      Anybody who spends 5 minutes on reddit outside of pornographic or cuckoldry subs knows that this is not a serious idea

  • mirzap 2 days ago ago

    Doubling the five-hour rate limits is merely a marketing stunt if the weekly rates are not also doubled. It simply means that you can reach the weekly limits in three days instead of five.

    • swalsh 2 days ago ago

      I have never come close to my weekly limit, but have hit my hourly limit frequently.

      • codazoda 2 days ago ago

        Same. I hit limits after 45 minutes. I'm on a measly Pro plan. I'm usually building small, open source projects, often from scratch. I only work on these projects in a 2-hour window in the morning. This is my "free time" development. I hope this change helps, because I was days away from switching back to Codex, though I like Claude Code a bit better these days.

        I also hope that the fact I had OpenClaw in my sandbox once is not why I hit these limits so damn fast. I don't use it anymore and I've tried to rid my sandbox of anything "openclaw" but it is in my git history in various places on various projects. Claude doesn't seem to be transparent about this limitation.

        • bryanhogan 20 hours ago ago

          You should definitely try:

          - Codex

          - OpenCode Go

          - Ollama Cloud

          All are very useful, still a subscription, but with higher usage limits.

          Specific model providers also offer subscriptions, like Z.ai for GLM.

          Using DeepSeek, Kimi etc. through OpenRouter or from them directly is also great, here you pay per token but it's still more usage overall.

        • piyh 2 days ago ago

          Are you using haiku for most tasks? I'm in the Google ecosystem so I'm curious how it is on the other side.

          • codazoda 2 days ago ago

            Nope, I use Opus 4.7, mostly. Sometimes Sonnet 4.6 if I’m trying to use less tokens.

      • mirzap 2 days ago ago

        For me it's the opposite. I almost never hit the hourly limit, but I hit the weekly limit in about 5 days.

        • nickthegreek 2 days ago ago

          Would be more meaningful if everyone said what plan they are on, as there are 3 different ones that users could be discussing.

          • jizzywizzy a day ago ago

            Along with how many 5-hour windows they use in a day.

            If you're using it 24/7 then yes, I'm sure the weekly limit is more of a concern.

            If you're just using it during working hours - ie. you only use two 5-hour windows per day - then you probably, like me, struggle to hit the weekly limit even if you do max out some 5-hour windows.
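            A rough sketch of that difference (assuming, hypothetically, that a user can max out every available 5-hour window; the actual limit accounting is opaque):

```python
# How many 5-hour rate-limit windows fit in a week under two usage
# patterns. Per-window consumption is assumed equal, for illustration.

HOURS_PER_WEEK = 7 * 24
WINDOW_HOURS = 5

# Agents running around the clock: windows back-to-back.
windows_24_7 = HOURS_PER_WEEK // WINDOW_HOURS  # 33 windows

# Working hours only: two 5-hour windows per day.
windows_work_hours = 2 * 7  # 14 windows

# The 24/7 user can max out ~2.4x as many windows, so the weekly
# cap binds for them long before the working-hours user feels it.
print(windows_24_7, windows_work_hours)
print(round(windows_24_7 / windows_work_hours, 1))  # 2.4
```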

          • replygirl 2 days ago ago

            last week with claude i saturated a team premium seat at day 6 of its cycle, and a max 20x seat at day 4, plus ~$150 extra usage spend, with a 60hr work week where i am not even primarily an IC, as well as a codex 20x plan at day 3 with a personal project

          • noisy_boy a day ago ago

            Hit weekly limits all the time with Pro. Too cheap to go for Max.

          • mirzap 2 days ago ago

            I'm on $200 Max plan

        • extr 2 days ago ago

          What does your usage look like day to day? Are you using a low level amount all day long? I'm with the others here, I've never hit the weekly limit ever, only the hourly, and I consider myself a heavy user.

          • mirzap 2 days ago ago

            I dedicate a significant amount of time to defining the precise actions that agents should perform (PRD/ADR). I break down the feature sets into Milestones and slices (tasks). These tasks are small, well-defined, and scoped. I have a prompt template that the “architect” agent prepares whenever I want to initiate a new feature. This ensures that the prompt structure remains consistent and standardized over time. The generated prompt is then pasted to the “orchestrator,” which performs context discovery (using Repoprompt) and finalizes the plan then proceeds to launch subagents to do the work.

            Based on the size and complexity of the task, as well as any inter-task dependencies, the orchestrator deploys one or more subagents (sometimes 5 or 6 subagents) to work on these mini tasks. Once all tasks are completed, the orchestrator initiates verification and launches a review workflow. This workflow uses the original prompt, acceptance criteria, repository internal guidelines, and relevant skills to conduct a thorough review of the agents’ work.

            Typically, there are one or two review iterations, during which the review agent identifies any issues. Sometimes, I may also notice issues and have to "steer" the orchestrator. The time required for a slice to complete ranges from 30 minutes to 4 or 5 hours, depending on its size, complexity, and the number of subtasks it contains.

            Only if I run about 3 such orchestrations in parallel do I reach the hourly limit.
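            The shape of that loop, as a toy sketch (all function names here are hypothetical stubs; the real workflow drives Claude subagents via Repoprompt, not Python functions):

```python
# Toy model of the orchestrator -> subagents -> review loop described
# above. run_subagent and run_review stand in for real agent calls.
from concurrent.futures import ThreadPoolExecutor

def run_subagent(task: str) -> str:
    # Placeholder for a real subagent invocation on one small,
    # well-scoped subtask (e.g. "E1-P1").
    return f"done: {task}"

def run_review(results: list[str]) -> list[str]:
    # Placeholder review pass: returns a list of issues (empty = pass).
    return []

def orchestrate(subtasks: list[str], max_review_rounds: int = 2) -> list[str]:
    # Fan subtasks out to parallel subagents (sometimes 5 or 6 at once).
    with ThreadPoolExecutor(max_workers=6) as pool:
        results = list(pool.map(run_subagent, subtasks))
    # Then run one or two review iterations until clean.
    for _ in range(max_review_rounds):
        issues = run_review(results)
        if not issues:
            break
        # In the real workflow, issues are fed back to subagents here.
    return results

print(orchestrate(["E1-P1", "E1-P2", "E1-P3"]))
```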

            • calgoo 2 days ago ago

              I have found that it uses a lot more tokens if I give it a very detailed todo and loop over every task 1 by 1. I now keep it to phases with detailed tasks underneath and use /loop over the phases and it uses a lot less. I also manage the context windows and tend to clear it often to keep it under around 200k (or less depending on project size)

              • mirzap 2 days ago ago

                Yeah, I do that too. Essentially, the system I described begins working on a task that is small enough and clearly defined. Each “slice” in a milestone usually has 5-10 subtasks (for instance, Slice E1 has P1...P6 subtasks). The orchestrator then receives the prompt to implement E1-P1.

            • jLaForest a day ago ago

              It sounds like you are describing oh my open agent

              • mirzap a day ago ago

                I use Repoprompt's workflows for this. They are pretty good.

      • culopatin 2 days ago ago

        That’s because the week ends before you can use them; you’re stuck waiting for your hourly resets. Now the week essentially got longer with the same limit

      • vidarh 2 days ago ago

        I hit my weekly limit in 3 days this week. I regularly do in 5. With the top MAX sub.

        • scottyah a day ago ago

          Wow, then you are most likely doing something very wrong.

          • vidarh a day ago ago

            No, I'm just using it a lot. It's productive enough that I've found it worthwhile tacking on subs for GLM 5.1 and Kimi as well (GLM is fantastic, Kimi is good when it works but temperamental)

      • headcanon 2 days ago ago

        same, I struggle to use more than half of my weekly, even if I max out my 5-hour windows regularly during the day.

    • druskacik 2 days ago ago

      For me personally, I have the basic Claude Code subscription that I use to unwind on some evenings or on weekends, to code a bit for 1-2 hours. I have like 3-5 sessions with it every week.

      The 5h windows are frustrating because I can blow through them quickly if I have a more complex task. I haven't yet hit the weekly limit. I'd say there are many cases similar to mine.

    • Salgat a day ago ago

      I disagree. I routinely hit the 5-hour limit on Pro with Opus 4.7 just trying to have it do one design task or a comprehensive code review on a large PR, and the worst part is, the overhead of bringing all that context back into another 5-hour window blows through 30%+ of my 5-hour usage limit.

    • dwaltrip a day ago ago

      I don’t think I’ve hit either limit a single time in the past 5 months after upgrading to the $100 plan.

      On heavy weeks I probably am using it consistently for at least 6+ hours a day.

      Although, I’m pretty rigorous about always keeping my sessions under 200-250k tokens.

      • airstrike a day ago ago

        I've maxed out weekly limits for 2 $200 accounts before

    • 9wzYQbTYsAIc 2 days ago ago

      Exactly, the weekly limits are the real limiting factor. If you really push it, you can easily hit the weekly $200/mo Max limit in a day.

      • solenoid0937 2 days ago ago

        The 5-hour limits were the painful ones. If you're hitting your weekly limit you've outgrown the sub and should use extra billing

        • 9wzYQbTYsAIc 20 hours ago ago

          Or start switching to open-weights, local LLMs for basic development. Would rather invest in my own hardware than Anthropic’s, tbh.

    • sidrag22 2 days ago ago

      I've found with opus 4.6 which im still stubbornly using i can burn about 10% of the weekly within a 5 hour window with my workflow.

      Mentally i think about the weekly usage in terms of usage per day so about 14% per day which results in me not using that much early in the week so i can kinda "burn freely" later on. which leads me to a spot where usually on the final two days im sorta thinking about how can i expend that usage ive "saved".

      the 5 hour windows make this harder, sometimes the final day of the week im trying to get that 10% in every 5 hour window of my waking hours and i HATE that, i wanna work when i am most productive, not around some ridiculous window of time, i dont wanna think "I am gonna be utilizing claude the most around 11am so i should send a dumb message to haiku to get my 5 hour window started at 7:30am so i can have it roll over at 12:30."

      So im happy about this change sure. But it is 100% them creating a problem and pretending having some relief from that problem is them doing their users a favor. I understand they are doing it to lower peak hours usage and all that, I still despise it.
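      the budgeting above works out to simple arithmetic (a toy sketch; the percentages are this thread's hypothetical mental model, not anything Anthropic publishes):

```python
# The "14% per day" mental model: spread a weekly allowance of 100%
# evenly over 7 days, and track what's left to "burn freely" later.
WEEKLY_BUDGET = 100.0
DAYS = 7
daily_target = WEEKLY_BUDGET / DAYS  # ~14.3% per day

def remaining_after(days_elapsed: int, pct_used_per_day: float) -> float:
    # Budget left after spending a flat percentage each day.
    return WEEKLY_BUDGET - days_elapsed * pct_used_per_day

print(round(daily_target, 1))    # 14.3
# Burning ~10% per 5-hour window, one window a day, for 5 days
# leaves half the budget for the final two days:
print(remaining_after(5, 10.0))  # 50.0
```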

      • alwillis a day ago ago

        People are wasting tokens by using Opus for everything.

        Using Advisor [1], you can use Sonnet most of the time; Sonnet can hand off work it can't handle to Opus. When Opus is done, you automatically go back to Sonnet.

        [1]: https://www.mindstudio.ai/blog/claude-code-advisor-strategy-...

        • sidrag22 17 hours ago ago

          I think the main reason that workflow has not worked for me is because im using an ide version of claude code, which means my main agent isn't a crafted agent and is "stock" sonnet or "stock" opus. I'll likely swap to the cli version soon enough and see if that remedies it (this isn't laziness on my part, i instead learned opencode workflows first because it applies more broadly, the only limitation is usage of a claude subscription within it).

          So with the stock sonnet i get the chatty confidently wrong sonnet instead of a strict crafted agent. Stock Opus is a lot more reasonable, and hands off simple tasks to crafted sonnet agents with the chatty and more strict workflows, so i guess im literally doing the opposite(closer to what that old article describes).

        • port11 a day ago ago

          I rarely use Opus for planning (in the Pro plan). Spec a feature in Sonnet, hand it to Haiku, come back for review. That’s a 5-hour window gone, sometimes 2.

          I hit my weekly limit around day 4, with 2 maxed out windows per day (and sometimes a bit of usage at night).

          I completely understand why people would use Opus for everything, it’s much more thorough and effective. Sonnet as well, but on Pro it’s gonna be Haiku all the time.

          • sidrag22 17 hours ago ago

            my workflow allows for about 10 windows being maxed out each week (if this thread's claim is true, that is now 5 windows), i always use Opus for planning and just have strict rules for delegation when it's actually crafting the code.

            I have a pretty nailed down .claude/ where the goal is single sources of truth, so agent md files all reference the relevant files for what domain they are working within with that domain's conventions and structure etc, i think keeping this stuff up to date is massive compounding context savings, as well as just better for performance because it keeps all agents context windows free of noise by helping them only load in what is actually needed.

            I've never really messed with haiku for anything besides absolute low end repetitive tasks, its usually an agent i have crafted when i want to ask it to generate a bunch of seed data or generic questions for tests or something similar. My assumption is that it would just be terrible and even though its super cheap, it is still inevitably bringing the final results back to the better models and if thats not valuable tokens then im wasting the haiku tokens and the passoff to the better models with work that will be repeated anyway.

      • solenoid0937 2 days ago ago

        > Mentally i think about the weekly usage in terms of usage per day so about 14% per day

        20%, there are 5 work days in a week, not 7.

        • sidrag22 a day ago ago

          weird distinction to make when replying to someone talking about their own personal usage of the weekly limit that is a 7 day window of time.

    • _giorgio_ a day ago ago

      It's not, because I've never hit my weekly limits thanks to the very restrictive 5-hour limits. Let's see if I really hit my weekly limits now.

      However you see it, it's an improvement for the consumer.

    • varispeed 2 days ago ago

      Who cares about rate limits if they serve your prompt using a dumbed-down model?

  • htrp 2 days ago ago

    >Higher usage limits

    >The following three changes—all effective today—are aimed at improving the experience of using Claude for our most dedicated customers.

    >First, we’re doubling Claude Code’s five-hour rate limits for Pro, Max, Team, and seat-based Enterprise plans.

    >Second, we’re removing the peak hours limit reduction on Claude Code for Pro and Max accounts.

    >Third, we’re raising our API rate limits considerably for Claude Opus models,

    Looks like Elon's finally giving up on XAI and just selling the compute

    • peder 2 days ago ago

      > Looks like Elon's finally giving up on XAI and just selling the compute

      I don't think that's certain yet, but I do think that the open-source models like Gemma and Qwen are getting so good so fast that even Anthropic has real risk around the long-term value of their models and tooling.

      Basically, if I'm Anthropic or xAI, I try to get revenue whenever and wherever possible and see what sticks. There's no value in playing for monopolistic control when everything is so volatile.

      • swalsh 2 days ago ago

        There's always money in the gigawatt datacenter

    • petercooper 2 days ago ago

      I don't know if it relates to the same data centers, but this also comes hours after several still recent Grok models were deprecated at short notice. Grok 4.1 Fast is the cheapest way to do research on X (cheaper than the X API!) and it's gone on May 15: https://docs.x.ai/developers/models - freeing up compute to sell?

      • swalsh 2 days ago ago

        Fuck, I loved grok 4.1, it was a really capable model for the money.

        I'd run agents consuming hundreds of millions of tokens for less than a hundred dollars.

      • Geee 2 days ago ago

        Unlikely, because xAI had a huge amount of overcapacity.

    • JustSkyfall 2 days ago ago

      Probably a good idea in all honesty. xAI is a deeply unserious lab

      • throwa356262 2 days ago ago

        From a technical standpoint xAI is basically Gemini team B, who were given A+ salaries to join the company.

        But even then, I suspect their hands were tied in some areas because Elon had some expectations from his AI.

        • fancyfredbot 2 days ago ago

          Did Google outbid Elon for team A? Or does the A team just not like Elon?

          • throwa356262 2 days ago ago

            It's an internal joke, since very few high-profile DeepMind engineers accepted his offer despite some serious cash being thrown at them.

            Meta engineers on the other hand, couldn't wait to jump ship. But that only reinforces the B team theory.

            • lostmsu a day ago ago

              LLaMA was pretty good at the time

      • cyanydeez 2 days ago ago

        There's only so much determinism you can create when you try not to filter (read CENSOR) your LLM.

    • kingstnap 2 days ago ago

      The details are secret. It very well could be wasted GPU time, but Anthropic could have made them a killer offer as well.

      I'm just speculating, but a particularly killer offer Elon wouldn't be able to refuse would be if Anthropic agreed to give them some training data / technology.

      • swalsh 2 days ago ago

        Billions in revenue just before your IPO isn't a bad deal either.

        • fancyfredbot 2 days ago ago

          The icing on the cake for Elon is that it strengthens the competition to OpenAI.

          Or is that actually his main motivation? Hard to know. Either way it's a win-win-win for him.

          • throwa356262 2 days ago ago

            That's certainly one way one could spin this.

            I guess losing a ton of money then trying to get some of it back makes you a genius...

            • scottyah a day ago ago

              Yeah real geniuses go down with the ship and never change what they set out to do

            • fancyfredbot a day ago ago

              Elon has many, many faults, but losing money doesn't appear to be one of them. He's literally the richest person alive!

    • vagab0nd a day ago ago

      Giving Musk the benefit of the doubt, here's a thought experiment: It doesn't seem like any of the big labs in the US can keep a lead for more than 3 months. The Chinese models are closing in. Even if xAI comes up with the best model, so what?

      On the other hand, power and compute are limited. Ridiculous as orbital compute sounds, land/power on earth is not easily scalable. There are too many limiting factors, chief among which in the US is regulation. But in space, if you make one satellite work, you just get more resources and launch more. This also leads naturally to Tesla's plan for a chip fab.

      So if you squint, Musk might not be that crazy.

    • spikels 2 days ago ago

      No I don't ever give up. I would have to be dead or completely incapacitated.

      -Elon

      https://x.com/XFreeze/status/2012390928221094335

    • AlexCoventry 2 days ago ago

      I don't think this is giving up. He's getting inside information on how Claude works, and a huge stream of Claude usage data. This will all inform future grok development, IMO.

    • hn1986 a day ago ago

      question is, will they buy cursor?

    • croes 2 days ago ago

      Or he just got leverage on a competitor

  • losvedir a day ago ago

    > 300 megawatts of new capacity (over 220,000 NVIDIA GPUs)

    The scale is just mindboggling here. Are there any blog posts or anything discussing what kind of infrastructure is used for even just the inference side (nevermind the training) for SotA models like Opus? I would have thought it might be secret, but given that you can actually run the models yourself on AWS Bedrock doesn't that give an indication?

    • epistasis a day ago ago

      I know you're probably talking about the compute infrastructure, but I think the electricity infrastructure side is interesting too. Data centers are doing things in dumb ways because the need for operational expansion speed is greater than the need to save dollars:

      > It’s regulation with the utilities. There are ramp rates, there are all of these things that you’re supposed to do to not screw up the grid. Data centers have been in gross violation of that. When you think about what’s wrong with data centers, they have load volatility, which we just talked about, then they decide to power it with behind-the-meter natural gas generators. These natural gas generators, their shaft is supposed to last for seven years. It’s lasting 10 months because of all the cycling.

      https://www.volts.wtf/p/doing-data-centers-the-not-dumb-way

      On the compute infrastructure, there are standard NVIDIA reference designs like this:

      https://www.nvidia.com/en-us/technologies/enterprise-referen...

      I haven't bothered to look but I'd guess Mellanox GPU-to-GPU networks, and massive custom code for splitting tensors across GPUs, and for shuttling activations across GPU nodes.

    • airspresso a day ago ago

      > but given that you can actually run the models yourself on AWS Bedrock

      That's not exactly how it works. Anthropic are hosting their models in AWS Bedrock as a managed service. Customers call those LLMs just like calling any other API. There's no visibility into what kind of AWS infrastructure is serving that API request.

    • cavisne a day ago ago
    • kristjansson a day ago ago

      All evidence is that the final training runs use thousands to low tens of thousands of GPUs, and that a single instance of the resulting model runs (or could run) well within a rack (i.e. an NVL72).

      The massive scale is all massively parallel: test-time compute for users, test-time compute for RL rollouts (and probably increasingly environments for those rollouts), other synthetic data generation, research experiments, …

    • sroussey a day ago ago

      > 300 megawatts of new capacity (over 220,000 NVIDIA GPUs)

      That’s just for the SpaceX part (over provisioning for grok, lol).

      The Amazon and Google deals are each over an order of magnitude larger! Pretty wild indeed!

    • giwook a day ago ago

      How many instances of Doom can it run though?

  • gck1 2 days ago ago

    Limits were the last straw that made me cancel my subscription and make my workflow completely model agnostic with pi.

      While this is good news, I'm not coming back. Anthropic just lost me with too many wrongs in too short a time period.

    Opus has been replaced with GPT 5.5, DeepSeek, Kimi, Qwen and they all allow me to use my own, single harness and switch models easily if any of them start treating me the same.

    • farfatched a day ago ago

      Same, though I'm reconsidering, in light of the recent bugs (which can happen to any provider) and the increased limits. I guess that's at least 3x more Opus for my usecase.

    • sergiotapia a day ago ago

      I wouldn't make any grandstanding declarations like this, honestly. The models themselves are all hot-swappable with minimal effort. The AI labs, American or Chinese, don't really have a moat. Today Anthropic is bad and OpenAI is good. Last month it was the other way around. Next month it may be Google.

      The only certainty is that you can swap models quickly and painlessly.

  • nl a day ago ago

    Say what you like about Sam Altman, but given how Anthropic is scrambling to sign capacity deals for compute, we can sure say he was right about the capacity build-out needed.

    • stingraycharles a day ago ago

      That’s correct, but from what I understand his move was also strategic: to choke the market.

      Having said that, Anthropic’s position is fully understandable, as Sam took a very large risk here, and OpenAI’s future is anything but certain.

    • sigmar a day ago ago

      Scrambling? It seems to me xAI built too much capacity (for what they can use in 2026). Does that mean OpenAI built the right amount? I don't see how one AI company being willing to sell compute proves that. We don't even know the terms/pricing.

      • nl a day ago ago

        > Scrambling?

        Yes.

        To quote:

        > Anthropic CEO Dario Amodei said his company tried to plan for 10-fold growth. But revenue and usage increased 80-fold in the first quarter on an annualized basis, which he says explains why it’s been so hard to keep up with demand.

        > “That is the reason we have had difficulties with compute,” Amodei said Wednesday at his company’s developer conference in San Francisco. Amodei added that the company is “working as quickly as possible to provide more” capacity and will “pass that compute on to you as soon as we can.”

        https://www.cnbc.com/2026/05/06/anthropic-ceo-dario-amodei-s...

        I think "scrambling" is a fair characterization of the CEO saying "we have had difficulties with compute" and "working as quickly as possible to provide more"

        They've also signed new compute deals with Google and AWS recently.

    • lmm a day ago ago

      Or the bubble he was pumping hasn't popped yet. We won't be able to say how much of this capacity was actually "needed" until 10 years in the future, if ever.

      • nl a day ago ago

        The point is that Anthropic is already a decent way into eating through all that capacity, and it's based on real revenue.

        • lmm a day ago ago

          Some entities are paying for it, sure. I'm still not convinced that's because it's "needed".

          • nl 21 hours ago ago

            True. No one "needs" the internet or computers, right?

            • lmm 10 hours ago ago

              People are getting real stuff done with the internet. But there were also a whole lot of overhyped companies that rightly crashed back in '99.

              Once we've gone through the AI equivalent of the dot-com crash, will Anthropic still be scrambling for more capacity, or will they have more than they can profitably use, like the dark fiber we were left with last time?

              • nl 5 hours ago ago

                Depends when it happens. I'm sure they'll take up whatever capacity is available.

                At the moment compute providers are charging more for outdated H100 capacity now than when the H100s were new. That capacity is going to the smaller labs, not the frontier labs.

                That hardware has already been depreciated financially, so even if all those small labs disappeared it's not sending compute providers bankrupt - they can just cut prices, and so long as they can charge more than electricity and maintenance they'll just keep them running.

  • minimaxir 2 days ago ago

    > First, we’re doubling Claude Code’s five-hour rate limits for Pro, Max, Team, and seat-based Enterprise plans.

    The fine-print omission appears to be that weekly limits are not doubled. The progressive 5-hour rate limit shrinking was indeed an efficiency blocker that finally convinced me to cancel, but only being able to get 4 full sessions a week as opposed to 8 doesn't compel me to resubscribe.

    • dw_arthur 2 days ago ago

      For my hobbyist purposes Deepseek v4 Flash has replaced Claude Code because I was also sick of hitting 5 hour limits with Claude. Right now, the only thing I miss from Claude is multi-modal image support. I can work around no image support since I can use v4 Flash all day and spend around $1. I am aware Deepseek is currently discounting their API at 75% off so I may try out another provider once the discount is gone at the end of the month.

      At this point it feels like, if you properly scope your work, open-weight LLMs are adequate.

      • mostafas a day ago ago

        The 75% discount only applies to Deepseek v4 Pro. Flash will stay the same price after the discount ends. It's remarkably cheap for what it delivers.

        https://api-docs.deepseek.com/quick_start/pricing

      • farfatched a day ago ago

        Ouch, I wasn't aware they were discounting so much. There goes my subscription escape plan.

    • scottyah a day ago ago

      The datacenter isn't operational yet, they don't magically get more processing instantly after signing a deal.

  • Philpax 2 days ago ago
    • quinncom 2 days ago ago

      One of the reasons I refuse to use xAI’s models is because of the outsized negative environmental impacts of the methane gas turbines.

      Now I have to avoid Claude too.

      • bottlepalm 2 days ago ago

        If you can make up an inconsequential, arbitrary rationalization not to use a service, then I’m sure you can make up the opposite to convince yourself to use it.

        That’s what virtue signaling is, I guess - the action you’re taking is pointless; the only point is to tell everyone you’re taking it, to feed the narrative forward.

        The entire economy runs off gas turbines, yet this is the thing you boycott?

        • quinncom 2 days ago ago

          Obviously I’m virtue signaling, and I hope I’m instilling a feeling of shame in people who support businesses that contribute to climate change.

          But more than that, the emissions generated by the Colossus data centers are far worse than typical combined-cycle gas plants or data centers that buy renewable: these turbines emit NOx, fine particulates, carbon monoxide, and formaldehyde into a population-dense area.

          I thought people knew about this already. Post from last year: https://simonwillison.net/2025/Jun/12/xai-data-center/

        • Footnote7341 a day ago ago

          I'm using grok to help bring awareness to black and brown and latinx communities that live within 100 miles of Colossus 1!

        • data-ottawa 2 days ago ago

          Sorry, what?

          Deciding not to spend money with a company you don't like is not pointless. The point is that you're not participating in something that you judge to be wrong.

          The world is full of things I feel are wrong yet have near zero power to stop. That does not mean I should willingly support those things.

        • formvoltron 2 days ago ago

          Gas turbines are generally for peaking, not for base load.

          Hopefully Elon lets you into his glass bubble when the s*** hits the fan.

      • everfrustrated 2 days ago ago

        You realize natural gas is one of the more environmentally friendly methods of generating power? A lot of work went into moving to natural gas generation to reduce the environmental impact of electricity generation.

        This is nothing like burning coal.

        • danaw 17 hours ago ago

          As others have mentioned, portable generators are nowhere near as safe as power plants, and these were built right next to people's homes with no oversight or regulation.

        • dymk a day ago ago

          More greenhouse gases in the atmosphere, speeding up global climate change. Renewables or don’t do it.

        • stuaxo 2 days ago ago

          This natural gas burning is sited too close to where people live.

          • everfrustrated 2 days ago ago

            You do understand natural gas burns very clean? People burn it in their houses to cook with it!

            • throwaway473825 a day ago ago

              People burn coal to heat their houses. That doesn't mean it's healthy. Gas stoves are known to cause asthma.

            • dymk a day ago ago

              Many municipalities ban natural gas in new construction because it’s so unhealthy and unsafe compared to an induction or resistive electric range. No, it doesn’t boil water faster than electric either.

        • jLaForest a day ago ago

          While burning methane is cleaner, extracting it is a massive source of uncontrolled pollution emissions - made worse by the fact that methane is roughly 28x worse than CO2 as a greenhouse gas over a 100-year horizon (and ~80x over 20 years). "Clean methane" is another greenwashing myth to encourage people to keep consuming as much as possible.

          • scottyah a day ago ago

            A lot of it is just captured during oil extraction, whereas before it simply wasn't captured at all.

    • thrownthatway 2 days ago ago

      Why?

      • chainwax 2 days ago ago

        I think he's referring to the fact that Colossus is powered by fossil fuels.

        • kfrzcode 2 days ago ago

          literally the entire economy is powered by fossil fuels

          • HarHarVeryFunny 2 days ago ago

            As far as electricity goes, the US is currently around 60% fossil fuels and 40% carbon-free generation (renewables plus nuclear).

  • cbg0 2 days ago ago

    They're doubling the five hour limits, but no mention about the weekly limit. So overall it's the same maximum usage, right?

    • adriand 2 days ago ago

      I think so, but that's also really great because I frequently run into the five hour caps, but very rarely use my entire weekly allotment. There are lots of situations where I do things like write the plan for all the work that has to get done, and then set a reminder to execute the plan after I get home, when I'm done making dinner (because e.g. my five hour cap ends at 6pm). Higher caps for the five hour period is a lot more convenient.

      • novaleaf 2 days ago ago

        I (and many others) am the opposite: I run out of quota in 4-5 days, but generally have no issues with the 5-hour cap. ($200 sub)

        • solenoid0937 2 days ago ago

          Like 90% of people I know never hit their weekly but they hit their hourly. I'd bet your case is way rarer.

    • farfatched a day ago ago

      If this logic applied, then there would be no purpose in them having the 5-hour limit at all.

      • cbg0 a day ago ago

        The purpose is to control the total amount of requests they need to handle in a given timeframe. If everyone could use up their whole weekly limit in 5 hours, many would do so, thus pushing the GPU/TPU clusters to or above their capacity limits.
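
        A minimal sketch of the two-tier scheme being described - a short fixed window that smooths load, plus a weekly cap - with made-up numbers, not Anthropic's actual limits:

```python
import time

class TwoTierBudget:
    """Sketch of the scheme described above: a short fixed window smooths
    load even when the weekly cap is far from exhausted. The numbers are
    made up for illustration, not Anthropic's actual limits."""

    def __init__(self, window_tokens, window_secs, weekly_tokens, now=time.time):
        self.window_tokens = window_tokens
        self.window_secs = window_secs
        self.weekly_left = weekly_tokens
        self.now = now
        self.window_start = now()
        self.window_left = window_tokens

    def try_spend(self, tokens):
        t = self.now()
        if t - self.window_start >= self.window_secs:
            # Fixed windows reset wholesale, like the 5-hour sessions.
            self.window_start = t
            self.window_left = self.window_tokens
        if tokens > self.window_left or tokens > self.weekly_left:
            return False  # hitting either cap blocks the request
        self.window_left -= tokens
        self.weekly_left -= tokens
        return True
```

        Doubling `window_tokens` while leaving `weekly_tokens` alone is exactly the announced change: bursts get bigger, the total does not.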

    • joncik91 2 days ago ago

      Some get the reset, some don't it seems :(

  • antipaul 2 days ago ago

    "All of [SpaceX]'s compute capacity at Colossus 1"

    SpaceX/xAI also has Colossus 2, with double or more the GPUs

    Seems xAI will still be around

    • empath75 19 hours ago ago

      That is one take, but here is how I interpret it. They spent a lot of money training a model which isn't doing enough inference to justify continuing to use those GPUs, and now they are buying even more GPUs to build an even bigger model that also won't be very popular.

  • danaw a day ago ago

    this feels more like something designed to bolster the spacex ipo than anything else

    300MW is peanuts compared to their multiple 50GW+ deals, to the point where you start to wonder why a mere 300MW makes enough difference in their capacity that they can raise limits this much... also, why couldn't their many existing multi-billion-dollar deals allow them to expand capacity?

    when you take this into account, then you read their statement about orbital compute it starts to smell quite fishy

    • sailingparrot a day ago ago

      The difference is that the 300 MW are real; the 50GW are printed on paper and don’t exist yet.

      There aren’t that many 300MW+ datacenters in the world. Relative to the capacity Anthropic has online, it’s a lot - probably in the 20% range.

  • skeledrew 2 days ago ago

    Oh. Just as I'm in the process of migrating to Pi+Qwen (local). This was probably going to be my last month on the Pro sub as I'm seriously fed up with the limits and degradation that started weeks after I signed up. Let's see how this shakes out.

    • flumpcakes a day ago ago

      How does Pi+Qwen (local) compare to Anthropic's offerings? Surely you're not getting the same breadth and quality of output using Qwen? How is the performance?

      • skeledrew a day ago ago

        So far I've only really set things up and done some benchmarking on several local models (Qwen 1.7b, 4b, 9b & 35b a3b), using a set of capability prompts created and evaluated by Claude, plus HumanEval and MBPP (haven't completed the latter two). The 1.7b got 6/8 correct on the capability set at ~14.7 tok/s, up to the 35b at 8/8 and ~4.5 tok/s; I can share full results if interested. I also set up llama-swap so I can dynamically select between them. I still need to decide which of my projects I'll really test them on, with the awareness that I'll have to be even more involved.

      • dymk a day ago ago

        It’s a toy compared to Opus or Sonnet. Obviously a 5-trillion-parameter model running on $$$$ hardware is going to outperform a local model.

    • z3ratul163071 a day ago ago

      we all are. thank god for alibaba, seems all that crap we bought from aliexpress served some indirect purpose.

  • boramalper 2 days ago ago

    I wonder if it's just Elon realising that xAI can't beat OpenAI and thus deciding to give all his compute capacity to Anthropic instead.

    Certainly an interesting day for xAI.

    • bpodgursky 2 days ago ago

      Building datacenters plays to his strengths. It's a good partnership if he can stomach it.

  • int32_64 2 days ago ago

    What's the current status of the 'biggest computer wins' vs. specialized proprietary research/data in the AI arms race? People had such high hopes for xAI because of the monster machine Elon built. Or has xAI just turned over too much staff too quickly?

  • tanh 2 days ago ago

    Wouldn't trust them not to take a copy and use it to distill. Wonder what security there is

  • exabrial a day ago ago

    This is where I see the economy of AI going:

    * Inference becomes cheap
      - specialty accelerators hit the market and the race to the bottom begins

    * Training remains expensive
      - this works out for Anthropic/OpenAI; they go into the business of training

    * Models become rental units or purchasable assets that you run on inference hardware
      - rent or own the inference hardware

    * Or you pay someone to do all of the above for you, at a premium

    • kcb a day ago ago

      There's no magic bullet for inference on cheap accelerators. Any accelerator will still require large amounts of high bandwidth memory.

      • exabrial a day ago ago

        The way to do it _today_ requires enormous amounts of HBM! But we've never purpose-built inference accelerators - actually a fairly "trivial" problem - because until now we've never had the need.

        Groq (acqui-hired by NVidia) came up with a different processor architecture: metric shit-tons of SRAM attached to a modest, deterministic single-core processor. No HBM needed on the card, and 32x faster than today's best GPUs at inference!

        These LPUs are pretty useless for training though, which is useful for companies training models! Training is expensive, inference is cheap (someday, not now).

        There's also a Canadian company that _literally burned the model as a silicon mask_ on a chip. It's unbelievably (1000x) fast, but not flexible of course: https://chatjimmy.ai

        • kcb a day ago ago

          The point is metric shit-tons of SRAM is still large amounts of expensive memory.
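
          The arithmetic both sides are gesturing at: single-stream decode has to stream every active weight once per generated token, so tokens/sec is bounded by memory bandwidth over model size - regardless of whether the bytes sit in HBM or SRAM. A rough sketch with illustrative figures:

```python
def decode_tokens_per_sec(active_params_b, bytes_per_param, bandwidth_gb_s):
    # Memory-bandwidth bound on single-stream decode: each generated
    # token requires streaming all active weights through the chip once.
    model_bytes = active_params_b * 1e9 * bytes_per_param
    return bandwidth_gb_s * 1e9 / model_bytes

# Assumed figures: a 70B dense model at 8-bit on ~3350 GB/s of HBM
rate = decode_tokens_per_sec(70, 1, 3350)  # roughly 48 tokens/sec
```

          Whatever the memory technology, you still pay for enough bytes to hold the model and enough bandwidth to stream it - which is why cheap capacity and fast capacity pull in opposite directions.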

      • CWwdcdk7h a day ago ago

        Strictly speaking, there is that one startup that compiles entire models into a huge ASIC - with the trade-off that the hardware becomes outdated when a new model version is released 2-3 months later.

  • y42 2 days ago ago

    I want to believe. A couple of weeks ago I fell into this "trap" when they offered a similar thing: I subscribed to the Pro plan, had fun for a couple of weeks, and then entered the frustration phase. I love the product, but I hate the ups and downs. My rant made it to the HN front page - which I'm not happy about. I want the stuff I build to be what's seen on the front page.

  • ilia-a a day ago ago

    Interesting that the 5-hour limits are raised but, if I understand the announcement correctly, the weekly limit is not. So all this means is that you can burn through your weekly limit faster and be locked out entirely, or have to buy tokens.

  • dagi3d a day ago ago

    Does that mean this data center was way overprovisioned, or that Grok is barely used and they could potentially kill it and just use Claude?

  • athrow 2 days ago ago

    Anthropic taketh and Anthropic giveth.

    • HarHarVeryFunny 2 days ago ago

      Exactly.

      Today they say this, then tomorrow they'll silently reduce limits and argue with anyone who calls them on it.

      • solenoid0937 2 days ago ago

        This is obviously because of the new compute deal. I don't see them going back unless prosumer demand outgrows compute again.

  • mandeepj 2 days ago ago

    "use all of the compute capacity at their Colossus 1 data center"

    So, they handed out all of their data center to Anthropic; Grok wasn't using it much?

    • jizzywizzy a day ago ago

      Moved to Colossus 2. Though I guess you could still frame it as 'don't they need Colossus 1 AND 2' if you want...

  • everfrustrated 2 days ago ago

    For those who haven't been following the build out.

    xAI added about 500MW of Nvidia GPU capacity around April, and will add another 500MW before the end of the year, for a total of about 2GW.

  • Geee 2 days ago ago

    For context, xAI GPU utilization is at 11% and they're also expanding.[0] Renting one datacenter to Anthropic doesn't mean that they would be shutting xAI / Grok down.

    [0] https://wccftech.com/xai-using-just-11-percent-gpus-while-me...

    • redox99 2 days ago ago

      11% is the MFU (model FLOPs utilization), not the fraction of GPUs that are running. Big misunderstanding.

  • amacbride 2 days ago ago

    As a bonus, it looks like they reset limits a few minutes ago -- I went from 53% of my weekly allotment to 0%.

    • readitalready 2 days ago ago

      Not for me. 2 Claude Max 20x accounts here both at high usage on weekly allotments.

  • Frannky a day ago ago

    A couple of days ago I shared why they weren't doing what Google does and offering OSS models - but damn, hosting Anthropic models after all the badmouthing. Next news: OpenAI models live on Colossus 2.

  • exabrial a day ago ago

    > Within the month

    To me this is the mind-bending piece. It's not like a datacenter is plug-and-play, with a well-written spec and an internationally standardized interface.

  • maelito a day ago ago

    These energy figures are gigantic. It's getting absurd.

  • breakingcups a day ago ago

    I could have used this news 2 days ago. I've been trying out Claude Code for a few days and kept running into the limit, so I wanted to upgrade to Max. In the upgrade-flow they hit me with an identity verification through Persona. No problem, I thought, I'll just cancel the upgrade. Nope, all access to Claude Code on the old plan was now also blocked and can't be unblocked without completing Identity Verification, which I'll never do. What a bad experience.

    On the plus-side, it told me how much cheaper Deepseek is and that it's on parity for reverse engineering work.

  • grim_io a day ago ago

    Not surprising, considering the recent news that xAI only utilized 11% of their GPUs.

  • tintor a day ago ago

    So, when you use Claude Code from now on, you will be poisoning people in South Memphis with xAI's unpermitted gas generators.

  • stuaxo 2 days ago ago

    Oh is this the polluting gas powered data centre Elon made, that's making local residents unhealthy ?

    This might be a good time to drop Claude.

  • logicalappeals a day ago ago

    Anthropic looking to garner some good will after the recent issues. I’ll gladly take the higher rate limits

  • espeed a day ago ago

    How do you select your data center like you can for AWS and Google Cloud?

  • chillfox a day ago ago

    Well, that's super disappointing :(

    I have got xAI blocked in OpenRouter as I do not want to support any business controlled by Musk.

  • nethunters 2 days ago ago

    Hopefully this filters through to Copilot's recent rate-limits

  • Jackson__ a day ago ago

    Oh, what's that? The most "ethical" AI company on the planet making deals with literal democracy-undermining fascists?

    I'm starting to think the problem with "ethical" AI was always that no company could ever act ethically in the long term. They are and always will be a cancer to society and AI will only serve to amplify this further.

  • zelon88 a day ago ago

    > We’re very intentional about where we’ll add capacity—partnering with democratic countries whose legal and regulatory frameworks support investments of this scale, and where the supply chain on which our compute depends—hardware, networking, and facilities—will be secure.

    *Buys compute from actual fascist Elon Musk in a failing democracy during the death throes of late-stage capitalism.

  • kristianp a day ago ago

    What GPUs does Colossus run? Old H100s?

  • Aeolun a day ago ago

    They say usage limits on the 5h increased, but I don’t see a significant difference on the x20 plan.

  • lairv 2 days ago ago

    For a space that supposedly had "no moat", the number of players still competing for frontier models seems to be shrinking pretty fast

    • swader999 2 days ago ago

      What's going to be the hit on our atmosphere when the data centers re-enter? I guess it won't matter, as the AI will have replaced the humans for the GDP and tax base by then.

  • 2001zhaozhao 2 days ago ago

    I mean, as someone who has the Max 20x plan and uses it only outside work (so I could not hit anywhere close to the weekly limit at all), I'll gladly take the 5-hour limit doubling.

    My first impression of this post was "what the hell are they thinking?", but actually it seems like a decent move by them.

    They basically made it so that normal users can better utilize their plan while not benefitting the backgroundagentmaxxers and stealth openclaw abusers in the ranks of their subscription audience. Making their plan more attractive to the people they actually want to sell to.

    Hopefully this leads to a loosening of harness restrictions later.

  • Marciplan 2 days ago ago

    If Anthropic and SpaceX and OpenAI are all going public this year then this is a clever move to stick it to OpenAI. However, I'm kinda sus of my Claude subscription now

  • iamleppert 2 days ago ago

    Hopefully they will work on response time. I've been noticing it taking 5+ minutes for each turn, for not complicated requests. Seems to vary based on time of day too.

  • deafpolygon a day ago ago

    What does SpaceX get out of this deal?

  • docmars a day ago ago

    Insert gaping soyjak face.

  • gigatexal a day ago ago

    I would gladly take a worse experience over having my favored LLM vendor partner with an Elon company.

  • LNSY a day ago ago

    Too little, too late. I'd rather have a consistent, dumber model than sometimes excellent but often miserable Claude.

    Staying with Claude is like going back to the restaurant where you got food poisoning: you kinda get what you deserve next time you get sick.

  • hirvi74 2 days ago ago

    I would have preferred an increase in the weekly limits instead of the 5-hour limits.

  • Rover222 2 days ago ago

    Reading the comments here, it again surprises me what an anti-Elon bubble most folks are in. They are renting out spare Colossus 1 capacity; Colossus 2 is still coming online. Orbital data centers really are the plan in the next few years. xAI is still behind, but not a disaster considering how late they entered (and Elon’s unfortunate fixation on anime characters).

    SpaceX is uniquely positioned to crush the rest of the world combined when it comes to orbital data centers.

    • HarHarVeryFunny 2 days ago ago

      > SpaceX is extremely uniquely positioned to crush the rest of the world combined in order to orbital data centers

      Sure, as long as your data center is 3x4m - the size of a Starlink satellite (think Spinal Tap Stonehenge). Anything bigger than that (i.e. actual data-center sized) is going to require some assembly.

      I've heard TeslaBot is good at folding shirts, and serving drinks (at least while teleoperated) - perhaps it can help?

      • everfrustrated 2 days ago ago

        Orbital DCs are blocked on Starship because they _aren't_ doing Starlink-sized satellites.

        • HarHarVeryFunny 2 days ago ago

          You're not going to fit a data center in a Starship either, unless you're talking a Tiny Corp Exabox "data center in a shipping container" sized one. Even something that small (1MW) would still need 4x the solar capacity of the ISS, and therefore likely some assembly required. Then you've got satellite-to-satellite latency...

          In any case, it appears that Musk can't even generate enough AI demand to utilize his own ground-based data center. Maybe he can make "data centers in space" part of his Mars colonization plan. Maybe have Tesla Bots driving around in Cybertrucks too?
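
          The "4x the ISS" figure can be sanity-checked from the solar constant (the efficiency and loss numbers below are assumptions):

```python
SOLAR_CONSTANT = 1361.0  # W/m^2, solar irradiance above the atmosphere

def array_area_m2(load_w, cell_efficiency=0.22, system_losses=0.85):
    # Assumes continuous sunlight (e.g. a dawn-dusk sun-synchronous orbit),
    # so no batteries are sized for eclipse.
    return load_w / (SOLAR_CONSTANT * cell_efficiency * system_losses)

area = array_area_m2(1e6)  # ~3,900 m^2 of panels for a 1 MW IT load
```

          For comparison, the ISS generates roughly 240 kW at peak, which is where the ~4x figure comes from - and none of that accounts for the radiators the load also needs.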

          • Rover222 a day ago ago

            Okay well you’re clearly not understanding the basic concept of the orbital architecture.

    • hellohello2 2 days ago ago

      I struggle to understand how orbital data centers can make sense. Is it mainly for continuous solar energy? Surely this can't be enough to offset the costs of launching?

      • Geee 2 days ago ago

        See the recent Dwarkesh interview with Elon: https://www.youtube.com/watch?v=BYXbuik3dgA

      • Rover222 a day ago ago

        Continuous 5x solar power (relative to on earth), no earthbound construction red tape or protestors, and yeah it can only possibly pencil out with Starship launching routinely. No other rocket system could even come close to making it work.

    • namnnumbr 2 days ago ago

      AFAIK "orbital data centers" are a bunch of nonsense.

      1. GPUs create heat, and there's no efficient way to get rid of it in space (a vacuum is an insulator; you can only radiate).

      2. Die-shrink makes modern processors and memory more and more susceptible to radiation; shielding is possible, but adds cost + mass (which adds cost).
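
      Point 1 can be put in numbers with the Stefan-Boltzmann law - in vacuum, radiation is the only way out for heat. A sketch (the temperature and emissivity are assumed values):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_area_m2(heat_w, temp_k, emissivity=0.9):
    # P = eps * sigma * A * T^4, ignoring absorbed sunlight and Earth IR
    return heat_w / (emissivity * SIGMA * temp_k**4)

area = radiator_area_m2(1e6, 330)  # ~1,650 m^2 to reject 1 MW at 330 K
```

      Running the radiators hotter shrinks them fast (T^4 scaling), but the chips want cool coolant - that's the bind.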

  • lossolo 2 days ago ago

    xAI (part of SpaceX) using just 11 percent of GPUs[1]

    1. https://wccftech.com/xai-using-just-11-percent-gpus-while-me...

  • 4b11b4 2 days ago ago

    I mean... seems like a no-brainer

  • stavros 2 days ago ago

    > First, we’re doubling Claude Code’s five-hour rate limits for Pro, Max, Team, and seat-based Enterprise plans.

    Ok I guess, this was a bit of a hassle, but you're not increasing my weekly allowance, you're just not annoying me as often.

    > Second, we’re removing the peak hours limit reduction on Claude Code for Pro and Max accounts.

    It wasn't a limit reduction (as in, I didn't have a lower 5-hour limit), it was "tokens are more expensive" and it ate my weekly limits faster. This should never have been instituted to begin with.

    > Third, we’re raising our API rate limits considerably for Claude Opus models, as shown in the table below:

    Meh.

    This is why I don't care for all the "it's a subscription, you're free to not use it!" arguments here. It's not an all-you-can-eat subscription with some generous fair use limits, it's a "X tokens per month for $Y", and they keep lowering the X unilaterally and in secret.

    • solenoid0937 2 days ago ago

      People are so cynical on HN. Just move to API billing if not getting enough subsidized compute is that big a deal for you?

      • stavros a day ago ago

        Is that what you do when you prepay for a year to get a discount and the supplier just says "oh I'll just give you half of what you paid for"? You "just move to pay again for the rest"?

        • solenoid0937 a day ago ago

          Sorry, were you told you'd be given a specific number of tokens when you subscribed?

          • stavros a day ago ago

            Yes? Weren't you? Did you think you were buying a token lottery, where you'd have a billion tokens one day and zero the next?

            • solenoid0937 a day ago ago

              How many tokens exactly did they guarantee you when you signed up? I don't recall ever seeing a hard number. It was always pretty clear it was flex pricing.

              • stavros a day ago ago

                They guaranteed as many as they were offering when I signed up. I tried it for a month, it worked for me, I signed up for a year. Then they reduced the limits.

                If you think that's fine, I have access to an all-you-can eat buffet to sell you for only $2000 a year, it's a steal.

  • sourcegrift a day ago ago

    Anthropic aligning with the guy who got Trump elected means they are dead to me.

    I'm posting immediately after cancelling my claude subscriptions.

  • AlexCoventry 2 days ago ago

    So now Elon Musk gets to read all of our Claude conversations?? :-(

  • hparadiz 2 days ago ago

    Give them whatever they need. Time to go to the moon.

  • swalsh 2 days ago ago

    Models are a commodity. Let's say Elon actually figures out building datacenters in space, or continues to be a leader in building earth-based datacenters - it's probably better business not to have yourself as your only customer. Dogfood, and open it to all.

    • driverdan a day ago ago

      > Elon actually figures out

      Elon doesn't figure out anything. He pays people to do it and then tries to take the credit.

    • mplewis 2 days ago ago

      The first is impossible and the second isn't happening and won't happen.

      • croes 2 days ago ago

        I wouldn’t say impossible but not effective

    • nextstep 2 days ago ago

      the leader of building earth-based datacenters lol

      what are we even talking about

  • mark_l_watson 2 days ago ago

    The politics and economics of Musk throwing some support towards Anthropic are interesting (sama is probably pissed).

    But, if you will pardon a little rant: I hate the idea of subscription inference plans and also 'dumping' by subsidizing non-profitable products. Inferencing should be pay as you go and dumping illegal.