> Reality check: ChatGPT is still right behind Claude on the app store charts and has a first-mover advantage on AI.
"This is less newsworthy than you think". Real sizzling copy, Axios.
And exactly what good is a first-mover advantage when you squander it?
Also very strong LLM-ism. Particularly for ChatGPT.
It seems like the news is also coming out that Claude was used in planning the Venezuela and Iran operations[0].
Whatever your thoughts on legality or justification, both have been staggeringly effective at their initial goals, in terms of intel and complex operation planning.
Despite Trump ragging on Anthropic just hours before, it was still considered necessary for the operation.
Quite telling. I don't think the Pentagon will be satisfied with just Grok and ChatGPT. Which means OpenAI might really be done for, with no moat beyond having the least ethics (and Grok covers that anyway).
[0] https://www.wsj.com/livecoverage/iran-strikes-2026/card/u-s-...
OpenAI will likely end up swallowed by Microsoft anyway.
Looks like ChatGPT wrote this article though
Yeh I couldn’t tell if they were being ironic. I don’t know Axios. Stopped reading anyway.
If all of the AI providers just went bankrupt it would be a fairly solid 1 step back, 2 steps forward for humanity.
We weren’t ready for it. And degrading world conditions and crazy leadership make it scary to be a civilian in a world with highly capable AI. One that will soon contain AI-powered terrestrial robots akin to something out of Terminator (<2 years).
Besides, without AI we’d probably invest in net positive short term objectives that redistribute wealth, like clean tech.
(Extremely unpopular opinion because at least half or more of HN are probably working in AI)
Somewhere along the way we forgot that technology was supposed to serve us, not the shareholders.
I’m still not convinced even computing was a particularly good idea. It has enabled prosperity in some areas, like aiding drug research. It has also massively corroded the social fabric.
The productivity gains from computing were never reflected in wages. In the latter half of last century you could buy a home and support a family on one income. Now, you need two average incomes to even start to think about having a family. And then you surrender your children to be raised by someone else because you have to go to work.
Are computers serving us, or are we serving them?
> It has enabled prosperity in some areas, like aiding drug research.
At first I scoffed at your idea that computing itself may not have been a good advancement.
And then I saw your example of where computers have helped, and even there I wonder: did the lives saved and the quality-of-life improvements from accelerated drug research outnumber the lives lost and the quality of life worsened by hundreds of millions of people's work now being stuck in a chair behind a monitor and keyboard all day?
Working at a computer isn't a worse quality of life than any other job. It's actually pretty nice versus being a plumber or something. No sewage in the workplace.
We are hackers. We can make technology that serves us. But why don't we?
Working in "AI" was much more fun before the current hype cycle...
It's definitely turned into another Eternal September of everyone and their brother pretending they've worked in AI for all of human history. I would welcome another long winter
I upvoted you, and mostly agree with you, but: as much as I love using strong small local models sometimes a commercial 'frontier' model is very useful (for me).
I deleted my free OpenAI account (I paid for it until a year ago) and just started a $20/month Anthropic account. My one-year prepaid Gemini account will expire in two months and I will decide then to keep one of Anthropic or Gemini.
Once again: I agree that super-spending on hyperscaler data centers is a net negative for humanity. For me it is not a matter of price: I am happy paying $20/month and only using energy-guzzling models occasionally when I really need them. Sort of like recycling to lessen our burden on the environment: try to minimize AI energy and resource use, but still get work done.
EDIT: we can also use LLMs more efficiently: build software composed of small well tested libraries. It is more energy efficient to write and debug little 200 line libraries than soaking up large projects in your context. Also, working on small composable libraries works better with smaller open models like qwen3.5:35b.
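As a concrete sketch of the "small well tested library" style (the library name and function here are hypothetical, just to illustrate the shape): the whole unit, implementation plus tests, fits in a few dozen lines, so it can be handed to a small model in one shot instead of dragging a large project into the context.

```python
# slugify.py - a tiny, self-contained library: one function, no external
# dependencies, small enough to fit in an LLM context together with its tests.
import re

def slugify(text: str, sep: str = "-") -> str:
    """Lowercase the text, strip non-alphanumerics, join words with `sep`."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return sep.join(words)

# Inline tests keep the whole unit reviewable (and regenerable) at once.
if __name__ == "__main__":
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  Multiple   spaces ") == "multiple-spaces"
    assert slugify("Under_scores and-dashes", sep="_") == "under_scores_and_dashes"
    print("ok")
```

A program is then composed from a handful of libraries like this, each debugged in isolation, rather than one large codebase the model has to ingest whole.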
> we can also use LLMs more efficiently: build software composed of small well tested libraries. It is more energy efficient to write and debug little 200 line libraries than soaking up large projects in your context.
So, NPM? In reality, AI is making this LESS likely to happen. It's easier to write a small utility function with AI than to find and use a library these days.
This might be preferable to the ever expanding tangled dependency graph and associated supply chain risk. OTOH, perhaps said graph can be reasonably wrangled with LLMs and enable safe small library re-use?
It's not that we are all fans of AI, it's that it's career suicide to ignore AI. If AI were gone for everybody that might be a net positive, but if it exists, it needs to get into the hands of as many as possible else wealth and power will concentrate even greater than it is now.
For what it's worth, I work in AI and I really hate it and want it to fail :)
[dupe] https://news.ycombinator.com/item?id=47202032