From the actual report[1]:
>>> Despite persistent material uncertainty around the global macroeconomic outlook, risky asset valuations have increased and credit spreads have compressed. Measures of risk premia across many risky asset classes have tightened further since the last FPC meeting in June 2025. On a number of measures, equity market valuations appear stretched, particularly for technology companies focused on Artificial Intelligence (AI). This, when combined with increasing concentration within market indices, leaves equity markets particularly exposed should expectations around the impact of AI become less optimistic.
[1] https://www.bankofengland.co.uk/financial-policy-committee-r...
Actually, the quoted 'sudden correction' is not referring specifically to AI, but to the market in general.
Yes, but I think they have noted AI/tech companies are particularly exposed/stretched, even though second-order effects are likely to impact the whole market.
"The market is propped up by these companies for sure. It's propping up the New York Stock Exchange, other exchanges. And so if those companies were just all of a sudden dip, even a little bit, you'll see the effects, feel the effects of that in the stock market."
"You don't necessarily have to care about AI or the future of OpenAI or the future of Nvidia even, to care about this story because this has reached into so many other markets. The financial markets, equity markets, debt markets, real estate markets, because data centers are real places that need to be built. Also your energy bills might be higher if you live next to a data center., So this extends far beyond now at this point, the AI sector."
"A wave of deals and partnerships are escalating concerns that the trillion-dollar AI boom is being propped up by interconnected business transactions."
"Never before has so much money been spent so rapidly on a technology that, for all its potential, remains largely unproven as an avenue for profit-making."
"The recent wave of deals and partnerships involving the two are escalating concerns that an increasingly complex and interconnected web of business transactions is artificially propping up the trillion-dollar AI boom. At stake is virtually every corner of the economy, with the hype and buildout of AI infrastructure rippling across markets, from debt and equity to real estate and energy."
"Actually the quoted 'sudden correction' is not referring specifically to AI, but the market in general"
As I read that quote, it states that valuations "particularly" for AI companies "appear stretched".
This suggests the "correction" will apply to those valuations in particular.
The report later refers specifically to these so-called "technology" companies:
"5: Equity market valuations had increased since Q2, to near all-time highs, partly driven by strong Q2 earnings of US _technology firms_. The price appreciation of the largest _technology firms_ this year had increased the concentration within US equity indices to record levels. The market share of the top 5 members of the S&P 500, at close to 30%, was higher than at any point in the past 50 years."
"6: Equity valuations appeared stretched, particularly in backward-looking metrics in the US. For example, the earnings yield implied by the Cyclically-Adjusted Price-to-Earnings (CAPE) ratio was close to the lowest level in 25 years - comparable to the peak of the dot com bubble. ... Some _technology companies_ were trading at valuation ratios which implied high future earnings growth, and concentrations within US equity indices meant that any _AI-led_ price adjustment would have a high level of pass-through into the returns for investors exposed to the aggregate index."
Given their "stretched" valuations according to this report, any "sudden correction" would apply to these so-called "technology" companies
Aside from the instances where the report refers to overvalued "tecchnology" companies particularly, the parent comment is correct that the report does not refer to "AI" specifically
AI is a risk. The thing we know is going to bite us in the butt is our continued massive sovereign debt burden and the lack of any political will whatsoever to either increase taxes or reduce spending. The dollar is not going to do well this century, and creditors' confidence is already starting to decline.
In fact, the further we go into debt, the more we are implicitly betting our society on an AI Hail Mary.
I see this sentiment a lot, but the two are not equivalent. The US must reduce spending if it wants to protect the dollar. Tax increases may also help.
The relationship between tax rates, GDP, government revenue, the market value of new US debt, and the value of the dollar is complicated and depends on uncertain estimates and models of the economy. Increasing taxes can reduce GDP, which needs to grow to outpace the debt; there is an optimal tax rate, and more doesn't always help. Decreasing spending is a more straightforward relationship: no new debt, no new dollars.
If the US reduces the debt, it removes pressure to monetize and removes market expectation that we will monetize, which directly boosts the dollar. I also think that "rich people are scamming us" is a politically more advantageous message than "old people are scamming us".
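To make the "optimal tax rate" point concrete, here is a toy sketch in Python; the hump-shaped revenue curve is an illustrative assumption, not a calibrated model of any real economy:

```python
# Toy Laffer-curve sketch -- purely illustrative, not a calibrated model.
# Assumption: the taxable base shrinks linearly as the rate rises, so
# revenue = rate * base * (1 - rate) peaks at an interior optimum.

def revenue(rate: float, base: float = 100.0) -> float:
    """Revenue collected at a given tax rate on a shrinking base."""
    return rate * base * (1.0 - rate)

for pct in (10, 30, 50, 70, 90):
    print(f"rate {pct:2d}% -> revenue {revenue(pct / 100):5.1f}")

# Output peaks at 50% in this toy model. Past the peak, raising the
# rate lowers revenue -- the "more doesn't always help" point. Where
# the real-world peak sits is the contested empirical question.
```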
The most important thing is eliminating the annual deficit. That sends more of a signal about the future of the country and its currency than the total amount of debt does.
How it gets done is separate from that. Given that the only demographic that can comfortably weather a recession is also starting to collect social security, paid for by younger generations who would be meaningfully affected by a recession, "old people are scamming us" may actually be an effective message.
Social Security is not really relevant to the deficit, that's just a thing some politicians say because they want to dismantle the pie to take their piece. Every time someone points to SS as the source of our fiscal woes, we should be immediately skeptical.
I don't live in a coastal state, but when I do consulting work, typically at charity rates alongside my standard full-time job, I have to pay 24% federal tax, 15.3% FICA, and 7.85% state tax. I am already taxed at 47.15% whenever I want to help anyone. That's before paying for the required tax structures and the consulting needed to do all the invoicing legally. God himself only wanted 10%, so it seems a government playing God is awfully expensive.
You can't raise taxes any further before I'm done, and I don't think I'm alone; businesses and consultants are already crushed by taxes. I have to bill $40K to hopefully take home $20K; at which point, is it even worth my time? But if I don't consult because it isn't worth it, are small businesses suddenly going to afford an agency or a dedicated software developer? Of course not, so their growth is handicapped, and I wonder what the tax-wise effects of that are.
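For concreteness, the arithmetic behind those figures, using only the rates quoted above (a back-of-envelope sketch that ignores deductions, the deductible half of self-employment tax, and bracket boundaries):

```python
# Back-of-envelope check of the rates quoted above. Ignores deductions,
# the deductible half of self-employment tax, and bracket boundaries.
federal, fica, state = 0.24, 0.153, 0.0785
combined = federal + fica + state
print(f"combined marginal rate: {combined:.2%}")  # 47.15%

billed = 40_000
print(f"take-home on ${billed:,}: ${billed * (1 - combined):,.0f}")  # ~$21,140
```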
You're talking about your marginal rate, and we simply are far to the left on the Laffer curve; raising taxes will raise revenue. I'm not unsympathetic; my marginal rate is close to the same. But I think people's claims that they will stop working are generally more bark than bite, and the evidence largely backs that up.
If you don't want a tax-based solution, I do hope you are agitating for SS and Medicare cuts.
>I think people's claims that they will stop working are generally more bark than bite
They stop paying taxes and work off the books instead, but you don't announce that publicly for obvious reasons.
The incentive to do this increases with tax pressure. The willingness of people to pay for tax-free work increases equally, because you'll pay less.
There's also an asymmetry between what the government gains from a tax hike and how oppressive the hike becomes, and that trade-off grows less favorable as tax rates go up.
The type of people who will work off the books to avoid 50% taxes will generally also work off the books to avoid 25% taxes, so increasing the tax rate does not have a large effect on the size of the hidden economy.
> we simply are far to the left on the Laffer curve; raising taxes will raise revenue
I don't believe this, actually. I think that we will raise more revenue, yes, by squeezing more from the Fortune 500; but you will absolutely crush small business and consultancy work further. It's kind of like how an 80% tax rate on everyone making over $100K would do a fantastic job of raising revenue, but it's fundamentally stupid and would kill all future golden geese.
(On that note, I see this comment a lot about how we had huge tax rates, 91% in the 1950s; but this is misleading. The effective tax rate for those earners was only 41%, due to the sheer number of exemptions, according to modern analysis. We have never had an actual effective 91% tax rate, or anywhere close to it. Those rates were theater, never reality.)
Well, if we include property tax, sales tax, SALT deduction cap changes, compliance costs, regulatory burdens, state and local taxes... higher.
On that note, you have no evidence that economists focus solely on tax rates on the curve independently of the economy at large. By definition, the curve is determined from external factors and economic measurements, none of which currently resemble 2012. If the economy crashed and there was 20% unemployment, do you still think they'd stand behind the same curve?
Just a reminder that professional macro-economists are paid to justify political decisions. That's the job. Find data that can arguably make this policy (made for other reasons) make sense to the voters, who have a much worse understanding of economics.
As always, the question with economists is "why aren't you rich?". You would get much better answers about macro-economic counterfactuals by going to a macro-trading firm like Bridgewater and asking the employees "what do you think would happen if..."
Putting aside the fact that that is not really true about bias in the economics profession: I have good friends who are ex-Bridgewater who would agree with me... and listen to what Ray Dalio says about our fiscal trajectory.
Wanted 10% but offered nothing real in return. At least you get some services from your taxes, like unlawful detention/extradition of suspicious people.
If you're at a 24% marginal rate, then you're at least approaching the point where you stop paying Social Security taxes. Sounds like you just need to work a little more to keep 12% more of your money. It's funny how making more money reduces your tax rate. You just don't make enough to benefit.
If they're married and paying a 24% federal tax rate on any of their income, they almost certainly aren't paying any Social Security taxes on their consulting income. That would mean their adjusted gross income is in the $200-400k range for their full-time day job, which exceeds the Social Security cap ($176k) by a good margin.
They'd still have to pay for Medicare, but it knocks 12.4% off their estimated taxes for consulting.
If they're single, then the math is different. The 24% bracket for single people starts at just over $100k and runs to about $200k, so they may have to pay those taxes. It's always frustrating when people whine about taxes while giving insufficient information to evaluate their complaint.
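A rough sketch of the wage-base interaction described above, under the stated 2025 figures; the 92.35% net-earnings adjustment and the 0.9% additional Medicare tax are deliberately ignored for simplicity:

```python
# Sketch of the Social Security wage-base interaction described above.
# Assumptions: 2025 wage base of $176,100; self-employment rates of
# 12.4% (Social Security) and 2.9% (Medicare). Ignores the 92.35%
# net-earnings adjustment and the 0.9% additional Medicare tax.
WAGE_BASE = 176_100
SS_RATE, MEDICARE_RATE = 0.124, 0.029

def se_tax(w2_wages: float, consulting: float) -> float:
    """Rough self-employment tax on consulting income stacked on W-2 wages."""
    ss_taxable = max(0.0, min(WAGE_BASE - w2_wages, consulting))
    return ss_taxable * SS_RATE + consulting * MEDICARE_RATE

print(se_tax(w2_wages=250_000, consulting=40_000))  # 1160.0 (Medicare only)
print(se_tax(w2_wages=120_000, consulting=40_000))  # 6120.0 (SS portion bites)
```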
So where should all the entities that want to hold the debt (the Social Security Administration, mutual funds, pension funds, etc.) go instead? Riskier assets is what you're saying, right? Is that a great idea?
I'm not giving investment advice, just commenting that our current fiscal trajectory has become completely unsustainable & dangerous and very few people seem to be seriously discussing it.
Probably the closest US bond equivalent would be debt from well-run Asian countries. I would avoid fixed-income dollar denominated assets.
In what way? As a sovereign currency issuer, the US can't ever be made to default, or can it?
What definition of unsustainable fits?
What event could cause public debt growth to reach some kind of insurmountable maximum?
It's not like private debt, where running out of money is the end of the road. There is no such limit for a sovereign currency issuer. The complete settlement of outstanding public debt could be executed tomorrow without collecting another penny in taxes. I wouldn't recommend it, but it could be done.
Leveraging your power as "sovereign currency issuer" means monetizing the debt, aka inflating away the debt, which is disastrous both for purchasing power and for creditor confidence.
Please, Stephanie Kelton didn't discover some secret hack to get money for free. I would recommend learning traditional macro before getting on the MMT train.
All investors should choose gold over the dollar because paper money is always debased. Organizations like Apple, Microsoft, and Google bought government bonds 10 years ago when the price of gold was $1100 and have watched their investments erode while gold has increased to $4000.
The surface way it's wrong is that investors could have invested in Nvidia 10 years ago instead of gold. Because they didn't, their investments "eroded" even more.
The deeper way it's wrong is that people who say this almost always have the unstated premise that gold is "real" money, that every price should be measured against it. That premise is false.
When gold was allowed to float in terms of the US dollar, it went up to $200, then dropped down to $100. When it dropped to $100, the dollar didn't become worth twice as much. Or, to use a more recent example, there has not been a factor of 4 inflation over the last 10 years. So gold is not a fixed measuring stick, against which all other things are measured.
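Checking the arithmetic with only the prices quoted above shows the size of the move being attributed to "debasement":

```python
# Annualized change implied by the gold prices quoted above
# ($1,100 -> $4,000 over roughly 10 years).
start, end, years = 1_100, 4_000, 10
cagr = (end / start) ** (1 / years) - 1
print(f"multiple: {end / start:.2f}x, annualized: {cagr:.1%}")
# multiple: 3.64x, annualized: 13.8% -- nothing like measured CPI over
# the same stretch, so the move reflects gold, not uniform debasement.
```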
There is only one solution to the global debt crisis, and that's inflating the currency. They did it after WWII and they will have to do it now. There is no other option. They can do it sneakily, through fake measures of inflation that keep a lid on cost-of-living adjustments, but ultimately they soak bondholders and the standard of living.
You see it everywhere in things they can't inflate: the price of houses and gold most obviously, but also in commodities that can't expand production quickly. The solution, of course, is to buy assets.
Monetizing a debt of this magnitude would be disastrous, but agreed, this appears to be the default path we are on, given that we are consistently above the inflation mandate yet still lowering rates.
It's no longer the early 20th century; there are other competitive & well-run jurisdictions for creditors to dump their money in if they lose faith in the US.
> It's no longer the early 20th century; there are other competitive & well-run jurisdictions for creditors to dump their money in if they lose faith in the US.
Where, pray tell, are these competitive and well-run jurisdictions?
China has capital controls, so that probably won't work. The EU might work if they ever get their sh*t together and centralise their bonds and markets; otherwise, no.
Like, I too believe that the US is on an unsustainable path, but I just don't see where all that money is gonna go (specifically referring to the foreign investment in the US companies/markets here).
I think there are many smaller jurisdictions that are getting their shit together and might absorb demand: Southeast Asia, Singapore obviously (but small), the Gulf, and some subsets of the EU, particularly Eastern Europe.
Plus, even worse-run, higher-yield jurisdictions become more appealing as the US fails.
It's more about volume than anything else. You or I could invest elsewhere but where do all the foreign holders of Treasuries and US stocks put their money?
Yes, I understand your point very well. My point is twofold: 1) many small and stable jurisdictions can absorb excess capital even if no single stable player is as large as the US; 2) it makes higher-yield, less stable jurisdictions more appealing on the margin. Ultimately, capital will flow away from the US if our fiscal stability is increasingly in question.
The cost to service the debt is about 60% of what it was in the 1980s. All those bonds are long since paid off. This is a meme. Should we adjust to something more sustainable? Yes. Is the "burden" too high to bear? No, it's just not.
If you know enough to know that the cost to service was higher in the 1980s as a % of government revenue (not sure about 60%; we're actually pretty close, AFAIU), then you should understand enough to know why that is not nearly the full picture.
Approximately half the S&P500 is in the Magnificent Seven. It doesn't matter what they sell, there is just too much money there. Calling this situation an 'AI risk' is disingenuous, or at best blinkered.
Everyone outside of the American empire knows that the jig is up. When Uncle Sam has his money printing press on full blast, the American people don't feel the full effect, but everyone in the global majority, where there are no dollar printing machines, gets to see too many dollars chasing the same goods, a.k.a. inflation.
The day when the American people elect a fiscally prudent government, work hard, pay their taxes, and get that deficit to a manageable number is never going to happen. But that is not a problem; the situation is out of America's hands now.
It was the 2022 sanctions on Russia that made the BRICS alliance take note. Freezing their foreign reserves was not well received. Hence we now have China trading in its own currency, with its trading partners happy with that.
Soon we will have a situation where there is no 'exorbitant privilege' (reserve currency status, which can only ever end in massive deficits); instead, the various BRICS currencies will be anchored to valuable commodities such as rare earth metals, gold, and everything else that is 'proof of work' and important to the future. So that means no more 'petro-dollar'; the store of value won't be hydrocarbons.
This sounds better than going back to a gold standard. As I see it, the problem with the gold standard is that you kind of know already who has all the gold and we don't want them to be the masters of the universe, because it will be the same bankers.
As for an AI 'Hail Mary', I do hope so. The money printed by Uncle Sam to end up in the Magnificent Seven means that it will be relatively easy to write this money off.
> It was the 2022 sanctions on Russia that made the BRICS alliance take note.
IMO, it was the barriers imposed on the trade of oil, mostly from Iran and Syria. Not really Russia, because they adapted quickly. The countries in the group's name all had alternatives at that time.
Either way, the BRICS trading system wasn't a serious thing until this year. And what really kicked it off was Trump.
For me the question is: who is going to subscribe that hasn't already? And that is before we consider the next-gen hardware that can run this stuff locally.
But from what I see of the economy around me here, people just don't have the spare funds for LLM luxuries. It feels like 15+ years of wage deflation and company streamlining have removed what little spare spending power people had here. Not to mention the inflation we have seen in the eurozone.
Even if the bet is now an 'all in' on AGI, I see that more as an existential threat than an economic golden egg bailout.
I think there will be an increase in subscribers as people get more used to them. But there are probably also people like me who just dropped 2k on a new system to self-host my own, customise the pipeline, and integrate it into my house without sending data offsite.
This is fair. We're now evaluating open-source LLMs to develop our in-house solutions, adding them to our products and services. As soon as they released the models, the moat was, depending on the context, somewhat gone.
We're testing different models depending on the business case. Our initial tests using 3B, 7B, and 8B models are working fine. We're not using the big ones since our use cases don't demand them.
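For the curious, a minimal sketch of what such an evaluation loop can look like, assuming the Hugging Face transformers library; the model ids are placeholders, not real checkpoints:

```python
# Minimal sketch of comparing small open-weight models on one business
# prompt. The model ids below are placeholders -- swap in whichever
# 3B-8B checkpoints you are actually evaluating.
from transformers import pipeline

CANDIDATES = [
    "your-org/small-3b-instruct",   # hypothetical 3B checkpoint
    "your-org/medium-8b-instruct",  # hypothetical 8B checkpoint
]
PROMPT = "Summarize this support ticket in two sentences: ..."

for model_id in CANDIDATES:
    generator = pipeline("text-generation", model=model_id)
    result = generator(PROMPT, max_new_tokens=128, do_sample=False)
    print(model_id, "->", result[0]["generated_text"])
```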
For non-Brits: the Bank of England is the UK's central bank and is a lot like the US Fed. Its comments carry a lot of weight and do impact government policy.
Not enough central banks were making comments about the sub-prime bubble that led to the 2008 crisis. Getting warnings about a possible AI bubble from a central bank is both significant and, given its function of safeguarding a country's monetary and financial stability, the prudent thing to do.
I like that the Bank of England spells out the "sudden correction" this time.
In 1996 Fed Chair Alan Greenspan warned about "irrational exuberance"; in 1999 he warned Congress about "the possibility that the recent performance of the equity markets will have difficulty in being sustained". The crash came in 2000.
The warning seems to have gone unnoticed. AMD just behaves exactly like Juniper in 1999.
The mistake central banks made in 2007-2009* was keeping monetary policy far too tight for far too long, for no real discernible reason.
Offering commentary on which particular sectors they feel are a 'bubble' is outside their purview and not particularly productive, IMO; the state is not very good at picking winners.
Why is that obvious? Even with effectively complete stagnation and just existing technology + limited RLVR, I can see how this could be trillion-dollars level useful.
The monetization behind AI is on shaky ground. Nobody is actually making any money off of it, and when they propose how to make money, we all get very scared.
It's either world-ending, hard-to-believe conjecture, like the death of scarcity, or it's... ads. Ads. You know, the thing we're already doing?
So, it's not looking great. Maybe we will find monetization strategies, but they're certainly not present now, even by the largest players with the most to lose.
It is the financial risk that is obvious. The big players are struggling to show meaningful revenue from the investment. Because the investment is so high, the revenue numbers need to be equally high, and growing fast. The 'correction' is when (ok, if) the markets realise that the returns aren't there. The worldwide risk is that AI-led growth has been a large chunk of the US stock market growth. If it 'corrects' US growth disappears overnight and takes everyone down with it. It is not an issue about the usefulness of AI, but the returns on investment and the market shocks caused by such large sums of money sloshing around one market.
I think we have only scratched the surface of what we can do with the existing technology. A much more present risk, IMO, is that if we stagnate, it is almost certain that the value of the tech will not be able to be enclosed/captured by its creators.
Imho it will take off in animation/illustration as soon as Adobe (or some competitor) figures out how to make good tooling for artists. Not for idiot wantrepreneurs who want to dump fully-generated slop onto Amazon, but so that a person can draw rough pencil sketches, storyboards, and reference character sheets and get back proper illustrations. Basically, don't replace the penciler but replace the inker and the colourist (and, in animation, the in-betweener).
That's more of a UI problem than a limitation of diffusion tech.
That's a customer who'll pay, and it might be worth a lot. But a trillion dollars per year?
There's a free add-on for the (also free) Krita that did pretty much that when I tried it last year.
The glaring issue with it back then was that, unlike an LLM, which can be understanding of what you try to explain and a bit more consistent, the diffusion model's ability to read and understand your prompt wasn't really there yet; you were more shotgunning keywords and hoping the seed lottery gave you something nice.
But recent image generation models are significantly more stable in their output. Something like Qwen-Image will care a lot more about your prompt and not entirely redraw the scene into something else just because you change the seed.
Meaning the UI experiments already exist, but the models are still a bit away from maturity.
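The "seed lottery" is easy to reproduce with the diffusers library; a minimal sketch, assuming a Stable Diffusion 1.5 checkpoint and a CUDA GPU:

```python
# Same prompt, three seeds: older checkpoints may redraw the whole
# scene each time, which is the "seed lottery" described above.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "pencil sketch of a lighthouse, inked and coloured"
for seed in (1, 2, 3):
    generator = torch.Generator("cuda").manual_seed(seed)
    pipe(prompt, generator=generator).images[0].save(f"lighthouse_{seed}.png")
```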
On the other hand, looking at how models are actually evolving, I'm not entirely convinced we'll need particularly many classically trained artists in roles where they draw static images with some AI acceleration. I expect people to talk to an LLM interface that can take the dumbest of instructions and carefully adjust a picture, sound, music, or an entire two-hour movie. The artist would benefit more from knowing the terminology and the granular abilities of the system than from being able to hold a pencil.
The entertainment and media industry is worth trillions on an annual basis; if AI can eat a fraction of that, in addition to some other work roles, it will easily be worth the current valuations.
Complete stagnation would mean that hundreds of billions earmarked for datacenter and chip production in the next few years would have to be cancelled.
The promise of this future demand is what is driving the inflation of the stock market, with investors happy to ignore the deep losses accruing to every AI software player...for now. Pulling the plug on the capacity-building deals is effectively an admission that demand was overestimated, and the market will tank accordingly.
It says it all about the current market mania that Nvidia (which sells most of the future chip capacity) is valued at $4 trillion, more than all publicly traded pharmaceutical companies (which have decades of predictable future cash flows) combined.
The existing technology can’t even replace customer support systems, which seems like the lowest bar for a role that’s perfectly well suited to LLMs. How are you justifying the trillion dollar value?
I disagree that customer support is the lowest bar for LLMs. Companies have been trying to reduce customer support spend for decades, and yet it still exists. Why? Because the types of questions and types of callers that fall onto the remaining customer support requests are not easy to automate. Either the question itself is a complex edge case that requires human intervention, or the person calling wants to talk to a human and good documentation was not going to change their action.
I think with a bit of engineering, the existing tech can replace customer support systems, especially as the boomers are going away. But I realize this is an uphill battle on HN.
> I think with a bit of engineering, the existing tech can replace customer support systems
That's the lowest of the low, and even you accept it doesn't work (yet). How can LLMs be worth 50% of the last years of GDP growth if it's that bad? Do you think customer support represents 50% of newly created value? I bet it isn't even 0.5%.
But the point is the tech obviously isn't there yet. LLMs are still too prone to giving falsehoods, and in that case a raw text search of the support DB would be more useful anyway.
Maybe if companies would wire up their "oh, a customer is complaining, try to talk them out of canceling their account, offer them a mild discount in exchange for locking into a year contract" API to the LLM? Okay, but that's not a trillion-dollar service.
Where is all the productivity? Everyone says they became a 100x employee thanks to LLMs, yet not one company has seen any out-of-the-ordinary growth or profit besides AI-hyped companies.
What if the amount of slop generated counteracts the amount of productivity gained? For every line of code it writes, it also writes some BS paragraph in a business plan, a report, etc.
I can't think of any tech with this kind of crazy yearly investment in infrastructure with no success stories.
Maybe it's because I find writing easy, but I find the text generation broadly useless except for scamming. The search capabilities are interesting but the falsehoods that come from LLM questions undermine it.
The programming and visual art capabilities are the most impressive to me... but where are the companies making killings on those? Where's the animation studio cranking out Pixar-quality movies as weekly episodes?
The animation stuff is about to happen but not there yet.
I work in the industry and I know that ad agencies are already moving onto AI gen for social ads.
For VFX and films the tech is not there yet, since OpenAI believes they can build the next TikTok on AI (a proposition being tested now), and Google is just being Google: building amazing tools but with little understanding (so far) of how to deploy them on the market.
Still, Google is likely ahead in building tools that are actually being used (Nano Banana and Veo 3), while the Chinese open-source labs are delivering impressive stuff that you can run locally or, increasingly, on a rented H100 in the cloud.
> But it's not trillion-dollars useful, and it probably won't be.
The market disagrees.
But if you are sure of this, please show your positions. Then we can see how deeply you believe it.
My guess is you’re short the most AI-exposed companies if you think they’re overvalued? Hedged maybe? You’ve found a clever way to invest in bankruptcy law firms that handle tech liquidations?
You’ve just made a comment that “wow, things are going up!” That’s not spotting a bubble; that’s my non-technical uncle commenting at a dinner party, “wow, this bitcoin thing sure is crazy, huh?”
Talk is cheap. You learn what someone really believes by what they put their money in. If you really believe we’re in a bubble, truly believe it based on your deep understanding of the market, then you surely have invested that way.
I truly believe we are in a bubble. I truly believe that AI will exist on the other side of that bubble, just as internet companies and banks existed on the other side of the dotcom crash and the housing crisis.
I don't know how to invest to avoid this bubble. My money is where my mouth is. My investments are conservative and long-term. Most in equity index funds, some bonds, Vanguard mutual funds, a few hand-picked stocks.
No interest in shorting the market or trying to time the crash. I would say I'm 90% confident a correction of 25% or more will happen in the next 12 months. No idea where my money might be safe. Palantir? Northrop Grumman?
Surely you can spot a bubble if you see that it is rapidly expanding and ultimately unsustainable. Being able to predict when it finally pops would be equivalent to winning the lottery, and people would make a lot of money from that; but ultimately no one can reliably predict when a bubble will pop, which doesn't mean they weren't bubbles.
One can be skeptical about the overall value of various technologies while also being conservative about specific bets in specific timeframes against them.
I think you’re making my point without realizing it.
If you are skeptical but also not willing to place a bet, you shouldn’t say “AI is overvalued” because you don’t actually believe it. You should say, “I think it might be overvalued, but I’m not really sure? And I don’t have enough experience in markets or confidence to make a bet on it, so I will go with everyone else’s sentiment and make the ‘safe’ bet of being long the market. But like… something feels weird to me about how much money is being poured into this? But I can’t say for sure whether it is overvalued or not.”
Not at all. I may think $TECH is overvalued but some companies may well make it out the other side, some aspects of the $TECH may play out (or not), and the bubble may pop in 1 year or 5. So the sensible process may be to invest in broader indexes and let things play out at the more micro level (that may not be possible to invest in anyway).
I certainly had unease about the dot-com market and should have shifted more investments to the conservative side. But I made the "‘safe’ bet of being long the market" even after things started going south.
FWIW, I do think AI is overvalued for the relatively near term. But I'm not sure what to do about that other than being fairly conservatively invested which makes sense for me at this point anyway.
I generally buy index funds, but I put some into AMD a while back as the "less-AI part of tech". I will probably get out of that as they've been sucked into that vortex, and shift more into global indexes instead of CAN/USA.
I'll leave shorting to the pros. The whole "double-your-money-or-infinite-losses" aspect of shorting is not a game I'm into.
I used to scoff at the idea of the AI bubble (or any recently called-for tech bubble) being like the 90s, given the way technology and the internet are now so integrated into our lives, but the way he spelled it out, it does seem similar.
I used to believe in AGI but the more AI has advanced the more I’ve come to realize that there’s no magic level of intelligence that can cure cancer and figure out warp drives. You need data, which requires experimentation, which requires labor and resources of which there is a finite supply. If you had AGI tomorrow and asked it to cure cancer, it would just ask for more experimental data and resources. Isn’t that what the greatest minds in cancer research would say as well? Why do we think that just being more rational or being able to compute better than humans would be sufficient to solve the problem?
It’s very possible that human beings today are already doing the most intelligent things they can given the data and resources they have available. This whole idea that there’s a magic property called intelligence that can solve every problem when it reaches a sufficient level, regardless of what data and resources it has to work with, increasingly just seems like the fantasy of people who think they’re very intelligent.
What’s your point? I’m saying there’s no level of smartness that can cure cancer; the bottleneck is data and experimentation, not a shortage of smartness/intelligence.
Eliezer’s short story “That Alien Message” provides a convincing argument that humans are cognitively limited, not data-limited, through the device of a fictional world where people think faster: https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien...
> Yes. There is. The theoretical limit is that every time you see 1 additional bit, it cannot be expected to eliminate more than half of the remaining hypotheses (half the remaining probability mass, rather). And that a redundant message, cannot convey more information than the compressed version of itself. Nor can a bit convey any information about a quantity, with which it has correlation exactly zero, across the probable worlds you imagine.
> But nothing I've depicted this human civilization doing, even begins to approach the theoretical limits set by the formalism of Solomonoff induction.
This is also a commonplace in behavioral economics; the whole foundation of the field is that people in general don't think hard enough to fully exploit the information available to them, because they don't have the time or the energy.
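For reference, the bound being paraphrased in that quote is the standard mutual-information inequality, sketched below with H the hypothesis variable and B one observed bit:

```latex
% One observed bit B cannot, in expectation, remove more than one bit
% of entropy about the hypothesis H, i.e. it cannot eliminate more
% than half of the remaining probability mass:
\[
  H(\mathcal{H}) - H(\mathcal{H} \mid B) = I(\mathcal{H}; B) \le H(B) \le 1~\text{bit}
\]
```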
——
Of course, that doesn't mean that great intelligence could figure out warp drives. Maybe warp drives are actually physically impossible! https://en.wikipedia.org/wiki/Warp_drive says:
> A warp drive or a drive enabling space warp is a fictional superluminal (faster than the speed of light) spacecraft propulsion system in many science fiction works, most notably Star Trek,[1] and a subject of ongoing real-life physics research. (...)
> The creation of such a bubble requires exotic matter—substances with negative energy density (a violation of the Weak Energy Condition). Casimir effect experiments have hinted at the existence of negative energy in quantum fields, but practical production at the required scale remains speculative.
——
Cancer, however, is clearly curable, and indeed often cured nowadays. It wouldn't be terribly surprising if we already had enough data to figure out how to solve it the rest of the time. We already have complete genomes for many species, AlphaFold has solved the protein-folding problem, research oncology studies routinely sequence tumors nowadays, and IHEC says they already have "comprehensive sets of reference epigenomes", so with enough computational power, or more efficient simulation algorithms, we could probably simulate an entire human body much faster than real time with enough fidelity to simulate cancer, thus enabling us to test candidate drug molecules against a particular cancer instantly.
Also, of course, once you can build reliable nanobots, you can just program them to kill a particular kind of cancer cell, then inject them.
Understanding this does not require believing that "intelligence that can solve every problem when it reaches a sufficient level, regardless of what data and resources it has to work with", which I think is a strawman you have made up. It doesn't even require believing that sufficient intelligence can solve every problem if it has sufficient data and resources to work with. It only requires understanding that being able to do the same thing regular humans do, but much faster, would be sufficient to cure cancer.
——
There does seem to be an open question about how general intelligence is. We know that there isn't much difference in intelligence between people; 90+% of the human population can learn to write a computer program, make a pit-fired pot from clay, haggle in a bazaar, paint a realistic portrait, speak Chinese, fix a broken pipe, interrogate a suspect and notice when he contradicts himself, fletch an arrow, make a convincing argument in courts, program a VCR, write poetry, solve a Rubik's cube, make a béchamel sauce, weave a cloth, sing a five-minute lullaby, sew a seam, or machine a screw thread on a lathe. (They might not be able to learn all of them, because it depends on what they spend time on.)
And, as far as we know, no other animal species can do any of those things: not chimpanzees, not dolphins, not octopodes, not African grey parrots. And most of them aren't instinctive activities even in humans—many didn't exist 1000 years ago, and some didn't exist even 100 years ago.
So humans clearly have some fairly flexible facility that these other species lack. "Intelligence" is the usual name for that facility.
But it's not perfectly general. For example, it involves some degree of ability to imagine three-dimensional space. Some of the humans can also reason about four- or five-dimensional spaces, but this is a much slower and more difficult process, far out of proportion to the underlying mathematical difficulty of the problem. And it's plausible that this is beyond the cognitive ability of large parts of the population. And maybe there are other problems that some other sort of intelligence would find easy, but which the humans don't even notice because it's incomprehensible to them.
Regarding "Alien Message", I don't find that story particularly convincing. I think it's muddled and contrived.
The basic issue is that we have to deduce stuff about the world we live in, using resources from the world we live in. In the story, the data bandwidth is contrived to be insanely smaller than the compute bandwidth, but that's not realistic. In reality, we are surrounded by chaotic physical systems that operate on raw hardware. They are, in fact, quite fast, and probably impossible to simulate efficiently. For instance, we can obviously never build a computer that can simulate the behavior of its own circuitry, using said circuitry, faster than it operates. But I think there's a lot of physical systems that are just like that.
Being data-limited means that we get data slower than we can analyze and process it. It is certainly possible to improve our ability to analyze data, but I don't think we can assume that the best physically realizable intelligence would overcome data limitation, nor that it would be cost-effective in the first place, compared to simply gathering more data and experimenting more.
You seem to be agreeing with the story's thesis, rather than disagreeing. The story claims that we get an enormous amount of data from which we could compute much more than we do. You, too, are claiming that we get an enormous amount of data from which we could compute much more than we do. If that's true, then we aren't limited by our data, which is what I meant by "data-limited"—although you seem to mean the opposite, "we get data slower than we can analyze and process it", in which we are limited not by the data but by the processing. This tends to rebut the claim above, "If you had AGI tomorrow and asked it to cure cancer, it would just ask for more experimental data and resources."
It may very well be true that you could cure cancer even faster or more cheaply with more experimental data, but that's irrelevant to the claim that more experimental data is necessary.
It may also be the case that there's no "shortcut" to simulating a human body well enough to test drugs against a simulated tumor faster than real time—that is, that you need to have enough memory to track every simulated atom. (The success of AlphaFold suggests that this is not the case, as does the ability of humans to survive things like electric shocks, but let's be conservative.) But a human body only contains on the order of 10²⁴ atoms, so you can just build a computer with 10²⁸ words of memory, and processing power to match. It might be millions of times larger than a human body, but that's okay; there's plenty of mass out there to turn into computronium. It doesn't make it physically unrealizable.
> Regarding "Alien Message", I don't find that story particularly convincing. I think it's muddled and contrived.
Well, yes, it's from Eliezer Yudkowsky. The kind of people who generally find him persuasive will do so. Those who don't find him convincing, or even find him somewhat of a crank like the other self-proclaimed "rationalists", will do so too. "Muddled" is correct: he lacks rigour in everything, but certainly brings the word count.
It's not a strawman, it's a thought experiment: if the premise of AGI is that a superintelligence could do all these amazing things, what could it do today if it existed but only had its superintelligence? My suggestion is that even something a billion times more intelligent than a human being might not be able to cure cancer with the information it has available today. Yes, it could build simulations and throw a lot of computing power at these problems, but is the bottleneck intelligence, or the computing power to run the algorithms and simulations? You're conflating the two; no one disagrees that one billion times more computing power could solve big problems. The disagreement is whether one billion times more intelligence has any meaningful value, which was the point of isolating that variable in my thought experiment.
Generally, I agree, but it also depends on perspective. Intelligence exists on many levels and manifests differently across species. From a monkey's standpoint, if they were capable of such reflection, they might perceive themselves as the most capable creatures in their environment. Yet humans possess cognitive abilities that go far beyond that: abstract reasoning, cumulative culture, large-scale cooperation, etc.
A chimpanzee can use tools and solve problems, but it will never construct a factory, design an iPhone, or build even a simple wooden house. Humans can, because our intelligence operates at a qualitatively different level.
As humans, we can easily visualize and reason about 2D and 3D spaces, it's natural because our sensory systems evolved to navigate a 3D world. But can we truly conceive of a million dimensions, let alone visualize them? We can describe them mathematically, but not intuitively grasp them. Our brains are not built for that kind of complexity.
Now imagine a form of intelligence that can directly perceive and reason about such high dimensional structures. Entirely new kinds of understanding and capabilities might emerge. If a being could fully comprehend the underlying rules of the universe, it might not need to perform physical experiments at all, it could simply simulate outcomes internally.
Of course that's speculative, but it just illustrates how deeply intelligence is shaped and limited by its biological foundation.
> If a being could fully comprehend the underlying rules of the universe, it might not need to perform physical experiments at all, it could simply simulate outcomes internally.
It likely couldn't, though, that's the problem.
At a basic level, whatever abstract system you can think of, there must be an optimal physical implementation of that system: the fastest physically realizable implementation of it. If that physical implementation were to exist in reality, no intelligence could reliably predict its behavior, because that would imply access to a faster implementation, which cannot exist.
The issue is that most physical systems are arguably the optimal implementation of whatever it is that they do. They aren't implementations of simple abstract ideas like adders or matrix multipliers, they're chaotic systems that follow no specifications. They just do what they do. How do you approximate chaotic systems which, for all you know, may depend on any minute details? On what basis do we think it is likely that there exists a computer circuit that can simulate their outcomes before they happen? It's magical thinking.
Note that intelligence has to simulate outcomes, because it has to control them. It has to prove to itself that its actions will help achieve its goals. Evolution doesn't have this limitation: it's not an agent, it doesn't have goals, it doesn't simulate outcomes, stuff just happens. In that sense it's likely that certain things can evolve that cannot be intelligently designed (as in designed, constructed and then controlled). It's quite possible intelligence itself falls in that category and we can't create and control AGI, and AGI can't improve itself and control the outcome either, and so on.
I agree that computational irreducibility and chaos impose hard limits on prediction. Even if an intelligence understood every law of physics, it might still be unable to simulate reality faster than reality itself, since the physical world is effectively its own optimal computation.
I guess where my speculation comes in is that "simulation" doesn’t necessarily have to mean perfect 1:1 physical emulation. Maybe a higher intelligence could model useful abstractions/approximations, simplified but still predictive frameworks that are accurate enough for control and reasoning even in chaotic domains.
After all, humans already do this in a primitive way, we can't simulate every particle of the atmosphere, but we can predict weather patterns statistically. So perhaps the difference between us and a much higher intelligence wouldn't be breaking physics, but rather having much deeper and more general abstractions that capture reality's essential structure better.
In that sense, it's not "magical thinking", I just acknowledge that our cognitive compression algorithms (our abstractions) are extremely limited. A mind that could discover higher order abstractions might not outrun physics, but it could reason about reality in qualitatively new ways.
I think I see what you’re getting at, but the difference between apes and humans isn’t that we can reason in 3D. If someone could actually articulate the intellectual breakthrough that makes humans smarter than apes, then maybe I would accept there’s some intellectual ability AI could achieve that we don’t have, but I don’t see how it could be higher dimensional reasoning.
> A chimpanzee can use tools and solve problems, but it will never construct a factory, design an iPhone, or build even a simple wooden house. Humans can, because our intelligence operates at a qualitatively different level.
Humans existed in the world for hundreds of thousands of years before they did any of those things, with the exception of the wooden hut, which took less time than that. But even that wasn't instant.
Your example doesn't entirely contradict the argument that it takes time and experimentation as well, and that intellect isn't the only limiting factor.
My point wasn't so much about how fast humans achieved these things, but about what's possible at all given a certain cognitive architecture. Chimpanzees could live for another million years and still wouldn't build a factory, not because they don't have enough time, but because they lack the cognitive and cultural mechanisms to accumulate and transmit abstract knowledge.
So while I completely agree that intelligence alone isn't the only factor, it's the whole foundation.
> Chimpanzees could live for another million years and still wouldn't build a factory, not because they don't have enough time, but because they lack the cognitive and cultural mechanisms
And, if you had AGI tomorrow and asked it to figure out FTL warp drives, it would just explain to you how it's not going to happen. It is impossible, the end. In fact the request is fantasy, nigh nonsensical and self-contradictory.
Isn’t that what the greatest minds in physics would say as well? Yes, yes it is.
No debate will be entered into on this topic by me today.
Actually, no, it isn't. They say it isn't necessarily possible, but not self-contradictory as far as we know. It's good that you aren't going to debate this.
"(...) if you had AGI tomorrow and asked it to figure out FTL warp drives, it would just explain to you how it's not going to happen. It is impossible, the end. In fact the request is fantasy, nigh nonsensical and self-contradictory."
"Isn’t that what the greatest minds in physics would say as well? Yes, yes it is."
That is not in fact what the greatest minds in physics would say. Your meta-knowledge of physics has failed you here, resulting in you posting embarrassing misinformation. I'm just having to correct it to prevent you from misleading anyone else.
You failed to realise that I'm not debating you, I'm berating you. Some people see statements like "not debating" as a personal challenge, a reason to get aggressive. Let's be clear: they are not nice people, and you don't want to be a troll like them.
Yes, I can see that you're just trolling, not debating. I appreciate the fact that you aren't debating, because I don't want to have to correct more of your misinformation. I don't think your berating is productive either, although it does demonstrate that—as you said—you are not a nice person.
There need to be breakthrough papers, or hardware that can expand context size in an exponential way, or a new model that can address long-term learning.
Humans. There are arrangements of atoms that, if constructed and activated, act perfectly like human intelligence. Because they are human intelligence.
Human intelligence must be deterministic; any other conclusion is equivalent to the claim that there is some sort of "soul", for lack of a better term. If human intelligence is deterministic, then it can be written in software.
Thus, if we continue to strive to design/create/invent such, it is inevitable that eventually it must happen. Failures to date can be attributed to various factors, but the gist is that we haven't yet identified the principles of intelligent software.
My guess is that we need less than 5 million years further development time even in a worst-case scenario. With luck and proper investment, we can get it down well below the 1 million year mark.
"Human intelligence must be deterministic, any other conclusion is equivalent to the claim that there is some sort of "soul" for lack of better term. "
Determinism is a metaphysical concept like mathematical platonism or ghosts.
> Thus, if we continue to strive to design/create/invent such, it is inevitable that eventually it must happen.
~200 years of industrial revolution and we've already fucked up beyond the point of no return; I don't think we'll have the resources to continue on this trajectory for a million years. We might very well be accelerating towards a brick wall; there is absolutely no guarantee we'll hit AGI before hitting the wall.
>We might very well be accelerating towards a brick wall; there is absolutely no guarantee we'll hit AGI before hitting the wall
We've already set the course for human extinction; we're about 6-8 generations away from absolute human extinction, and we became functionally extinct 10-15 years ago. Still, if we had another 5 million years, I'm one hundred percent certain we could crack AGI.
> Human intelligence must be deterministic; any other conclusion is equivalent to the claim that there is some sort of "soul", for lack of a better term.
No, not all processes follow deterministic Newtonian mechanics. They can also be random, unpredictable at times. Are there random processes in the human brain? Yes, there are random quantum processes in every atom, and there are atoms in the brain.
Yes, this is no less materialistic: humans are still proof that either you believe in souls or some such, or that human-level intelligence can be made from material atoms. But it's not deterministic.
But also, LLMs are not anywhere close to becoming human level intelligence.
Current valuations are based on the belief genuine AGI is around the corner. It’s not. LLMs are an interesting technology with many use cases, but they can’t reason in the usual sense of the word and are a dead end for the type of AGI needed to justify current investments.
You can almost always go to archive.is or one of the other mirrors and paste in the original link. It will get you past the paywall and also give you a link that will get others past the paywall. It seems to be a monkey-see, monkey-do part of the Hacker News microculture that if a link is paywalled, a commenter will throw up the archive link.
This is how capitalism does things. No one wants to overinvest, but no one wants to be left behind, and everyone is sure that either there's not gonna be a pop or they can sell before it pops.
It has been educational to see how quickly the financier class has moved when they saw an opportunity to abandon labor entirely, though. That's worth remembering when they talk about how this system is the best one for everyone.
They basically want to be like the Spacers in Asimov's robot novels: a handful of supremely wealthy people living in vast domains where every single one of their needs and wants is provided by machines. There is literally no lower (human) class in this society.
This is what's making me laugh a bit about Ford's brazen "we're firing all the white-collar workers" nonsense. Ok, go for it. Who are you going to get to buy an $80,000 F-150?
I feel like a lot of people aren't fully examining what AGI would mean for labor. As of right now, labor exists separate from capital, which is to say the economy is made of workers, stuff, and money. Workers get stuff, put labor into it, and turn it into more valuable stuff; capital owns that stuff, sells it to other workers (usually), and gives its workers some portion of the increase in value. AGI would mean that capital is labor: the stuff can go get more stuff and refine it. Capital won't make stuff to sell; they'll just make stuff they want, and stuff to go get and make the stuff they want.
It will, of course, be wildly bad for political stability, but a lot of people think they've found some sort of catch 182 in AGI when labor has no money to buy stuff. They think "that'll shut the whole economy down", but what would really happen is that instead of building a machine that makes boots, hiring someone to run it, selling boots, and using the money to buy a yacht, capital will just build a machine that makes yachts, and another machine that kills anyone who interferes with the yacht machine. An economy made of workers, stuff, and money will become an economy made just of stuff: workers will be replaced by stuff, and money was only ever useful as a way to induce people to work.
To whom does one sell when they've deleted their workforce? Seeing company after company add to the ranks of the unemployed shows they have no forward-thinking economists advising them. Further, AI, for all of its positive potential, is NOT going to be free... or even "cheap" once the investors dry up.
Isn't it a self-fulfilling prophecy at that point? I have been hearing so many "it's going to crash, sell" calls from all sorts of sources since mid-August...
From the actual report[1]
>>> Despite persistent material uncertainty around the global macroeconomic outlook, risky asset valuations have increased and credit spreads have compressed. Measures of risk premia across many risky asset classes have tightened further since the last FPC meeting in June 2025. On a number of measures, equity market valuations appear stretched, particularly for technology companies focused on Artificial Intelligence (AI). This, when combined with increasing concentration within market indices, leaves equity markets particularly exposed should expectations around the impact of AI become less optimistic.
Actually, the quoted 'sudden correction' is not referring specifically to AI, but the market in general
[1] https://www.bankofengland.co.uk/financial-policy-committee-r...
Yes but I think they have noted ai/tech companies are particularly exposed/stretched despite second order effects likely to impact the whole market.
"The market is propped up by these companies for sure. It's propping up the New York Stock Exchange, other exchanges. And so if those companies were just all of a sudden dip, even a little bit, you'll see the effects, feel the effects of that in the stock market."
"You don't necessarily have to care about AI or the future of OpenAI or the future of Nvidia even, to care about this story because this has reached into so many other markets. The financial markets, equity markets, debt markets, real estate markets, because data centers are real places that need to be built. Also your energy bills might be higher if you live next to a data center., So this extends far beyond now at this point, the AI sector."
https://www.bloomberg.com/news/articles/2025-10-08/the-circu...
"A wave of deals and partnerships are escalating concerns that the trillion-dollar AI boom is being propped up by interconnected business transactions."
"Never before has so much money been spent so rapidly on a technology that, for all its potential, remains largely unproven as an avenue for profit-making."
"The recent wave of deals and partnerships involving the two are escalating concerns that an increasingly complex and interconnected web of business transactions is artificially propping up the trillion-dollar AI boom. At stake is virtually every corner of the economy, with the hype and buildout of AI infrastructure rippling across markets, from debt and equity to real estate and energy."
https://www.bloomberg.com/news/features/2025-10-07/openai-s-...
"Actually the quoted 'sudden correction' is not referring specifically to AI, but the market in general"
As I read that quote it states that valuations "particularly" for AI companies "appear stretched"
This suggests the "correction" will apply to those valuations in particular
The report later refers specifically to these so-called "technology" companies
"5: Equity market valuations had increased since Q2, to near all-time highs, partly driven by strong Q2 earnings of US _technology firms_. The price appreciation of the largest _technology firms_ this year had increased the concentration within US equity indices to record levels. The market share of the top 5 members of the S&P 500, at close to 30%, was higher than at any point in the past 50 years."
"6: Equity valuations appeared stretched, particularly in backward-looking metrics in the US. For example, the earnings yield implied by the Cyclically-Adjusted Price-to-Earnings (CAPE) ratio was close to the lowest level in 25 years - comparable to the peak of the dot com bubble. ... Some _technology companies_ were trading at valuation ratios which implied high future earnings growth, and concentrations within US equity indices meant that any _AI-led_ price adjustment would have a high level of pass-through into the returns for investors exposed to the aggregate index."
Given their "stretched" valuations according to this report, any "sudden correction" would apply to these so-called "technology" companies
Aside from the instances where the report refers to overvalued "technology" companies in particular, the parent comment is correct that the report does not refer to "AI" specifically
Thus, ...
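For the CAPE metric quoted above, the implied earnings yield is just the reciprocal of the ratio. A minimal sketch in Python, with a hypothetical CAPE value since the excerpt doesn't give one:

    # Earnings yield implied by the Cyclically-Adjusted P/E (CAPE) ratio.
    # The CAPE value here is a hypothetical illustration, not a figure
    # from the report.
    cape = 40.0                   # hypothetical cyclically-adjusted P/E
    earnings_yield = 1.0 / cape   # implied earnings yield
    print(f"CAPE {cape:.0f} -> earnings yield {earnings_yield:.1%}")  # 2.5%

A low earnings yield is just the flip side of a high multiple, which is why the report can describe a near-25-year-low yield as "stretched".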
AI is a risk. The thing we know is going to bite us in the butt is our continued massive sovereign debt burden and the lack of any political will whatsoever to either increase taxes or reduce spending. The dollar is not going to do well this century, and creditors' confidence is already starting to decline.
In fact, the further we go into debt, the more we are implicitly betting our society on an AI Hail Mary.
> either increase taxes or reduce spending
I see this sentiment a lot; they are not equivalent. The US must reduce spending if it wants to protect the dollar. Tax increases may also help.
The relationship between tax rates, GDP, government revenue, the market value of new US debt, and the value of the dollar is complicated and depends on uncertain estimates and models of the economy. Increasing taxes can reduce GDP, which needs to grow to outpace the debt; there is an optimal tax rate, and more doesn't always help. Decreasing spending is a more straightforward relationship: no new debt, no new dollars.
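As a toy illustration of that optimal-rate point (purely a sketch: the functional form and the 0.8 elasticity below are made-up assumptions, not estimates of the real economy):

    # Toy Laffer curve: revenue = rate * taxable base, where the base
    # shrinks as the rate rises. The elasticity is invented for
    # illustration only.
    def revenue(rate: float, base0: float = 100.0, elasticity: float = 0.8) -> float:
        base = base0 * (1.0 - rate) ** elasticity  # base erodes with the rate
        return rate * base

    rates = [i / 100 for i in range(101)]
    best = max(rates, key=revenue)
    print(f"revenue-maximising rate in this toy model: {best:.0%}")  # ~56%

Below the peak, raising rates raises revenue; above it, the shrinking base dominates. Where the real economy sits on that curve is the empirical dispute in the replies below.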
If the US reduces the debt, it removes pressure to monetize and removes market expectation that we will monetize, which directly boosts the dollar. I also think that "rich people are scamming us" is a politically more advantageous message than "old people are scamming us".
The most important thing is eliminating the annual deficit. That sends more of a signal about the future of the country and its currency than the total amount of debt.
How it gets done is separate from that. Given that the only demographic that can comfortably weather a recession is also starting to collect social security, paid for by younger generations who would be meaningfully affected by a recession, "old people are scamming us" may actually be an effective message.
Social Security is not really relevant to the deficit, that's just a thing some politicians say because they want to dismantle the pie to take their piece. Every time someone points to SS as the source of our fiscal woes, we should be immediately skeptical.
> Tax increases may also help.
I don't live in a coastal state, but when I do consulting work, typically at charity rates alongside my standard full-time job, I have to pay 24% federal tax, 15.3% FICA, and 7.85% state tax. I am already taxed at 47.15% whenever I want to help anyone. That's before the required tax structures and the consulting needed to do all the invoicing legally. God himself only wanted 10%, so it seems a government playing God is awfully expensive.
You can't raise taxes any further before I'm done, and I don't think I'm alone; businesses and consultants are already crushed by taxes. I have to bill $40K to hopefully take home $20K; at that point, is it even worth my time? But if I don't consult because it isn't worth it, are small businesses suddenly going to afford an agency or a dedicated software developer? Of course not, so their growth is handicapped, and I wonder what the tax-wise effects of that are.
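For what it's worth, the arithmetic in that comment checks out if you simply stack the three rates. A sketch of the commenter's own math; in practice half of self-employment tax is deductible, so the true marginal burden is somewhat lower:

    # Stacked marginal rates as quoted in the comment above.
    federal, fica, state = 0.24, 0.153, 0.0785
    combined = federal + fica + state
    billed = 40_000
    take_home = billed * (1 - combined)
    print(f"combined marginal rate: {combined:.2%}")        # 47.15%
    print(f"take-home on ${billed:,}: ${take_home:,.0f}")   # ~$21,140, roughly the $20K described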
You're talking about your marginal rate, and we are simply far to the left on the Laffer curve; raising taxes will raise revenue. I'm not unsympathetic, my marginal rate is close to the same, but I think people's claims that they will stop working are generally more bark than bite, and the evidence largely backs that up.
If you don't want a tax-based solution, I do hope you are agitating for SS and medicare cuts.
>I think people's claims that they will stop working are generally more bark than bite
They stop paying taxes and work off the books instead but you don't announce that publicly for obvious reasons.
The incentive to do this increases with tax pressure. The willingness of people to pay for tax-free work equally increases because you'll pay less.
There's also an increasing asymmetry of what the government gains from a tax hike versus how oppressive it becomes that becomes unfavorable as tax rates go up.
The type of people who will work off book to avoid 50% taxes will generally also work off book to avoid 25% taxes, so increasing the tax rate does not have a large effect on the hidden economy ratio.
okay, i hope you are pushing for social security and medicare cuts then. hard choices are hard for a reason.
> we simply are far to the left on the laffer curve, raising taxes will raise revenue
I don't believe this, actually. I think that we will raise more revenue, yes, by squeezing more from the Fortune 500; but you will absolutely crush small business and consultancy work further. It's kind of like how an 80% tax rate on everyone making over $100K would do a fantastic job of raising revenue, but it's fundamentally stupid and would kill all future golden geese.
(On that note, I see this comment a lot about how we had huge tax rates, 91% in the 1950s; but this is misleading. The effective tax rate for those earners was only 41%, due to the sheer number of exemptions, according to modern analysis. We have never had an actual effective 91% tax rate, or anywhere close to it. Those rates were theater, never reality.)
pretty much all modern economists disagree with you https://kentclarkcenter.org/surveys/laffer-curve/
... in 2012?
are effective tax rates higher or lower than in 2012?
Well, if we include property tax, sales tax, SALT deduction cap changes, compliance costs, regulatory burdens, state and local taxes... higher.
On that note, you have no evidence that economists focus solely on tax rates on the curve independently of the economy at large. By definition, the curve is determined from external factors and economic measurements, none of which currently resemble 2012. If the economy crashed and there was 20% unemployment, do you still think they'd stand behind the same curve?
okay, believe what you want. i just hope you are pushing for SS+medicare cuts
Just a reminder that professional macro-economists are paid to justify political decisions. That's the job. Find data that can arguably make this policy (made for other reasons) make sense to the voters, who have a much worse understanding of economics.
As always, the question with economists is "why aren't you rich?". You would get much better answers about macro-economic counterfactuals by going to a macro-trading firm like Bridgewater and asking the employees "what do you think would happen if..."
putting aside the fact that that is not really true about bias in the economics profession, I have good friends who are ex-Bridgewater who would agree with me... and listen to what Ray Dalio says about our fiscal trajectory.
>> God himself only wanted 10%
Wanted 10% but offered nothing real in return. At least you get some services from your taxes, like unlawful detention/extradition of suspicious people.
It sort of depends on how much time you are putting in when you bill $40k, no?
You say you consult at charity rates and then point to taxation as the sole reason it isn't worth your time...
If you're at a 24% marginal rate then you're at least approaching the point you stop paying Social Security taxes. Sounds like you just need to work a little more to keep 12% more of your money. It's funny how making more money reduces your tax rate. You just don't make enough to benefit.
If they're married and paying a 24% federal tax rate on any of their income, they almost certainly aren't paying any Social Security taxes on their consulting income. That would mean their adjusted gross income is in the $200-400k range for their full-time day job, which exceeds the Social Security cap ($176k) by a good margin.
They'd still have to pay for Medicare, but it knocks 12.4% off their estimated taxes for consulting.
If they're single, then the math is different: the 24% bracket starts at just over $100k and runs to about $200k, so they may have to pay those taxes. It's always frustrating when people whine about taxes while giving insufficient information to evaluate their complaint.
Also, the FICA cap goes off gross income and the 24% rate is based off AGI, just to muddle the numbers even more.
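A minimal sketch of the cap mechanics being described (the $176,100 wage base matches the thread's 2025 figure; this ignores the 92.35% self-employment adjustment and the additional Medicare surtax, so treat it as illustrative only):

    # Payroll tax on self-employment income, given W-2 wages that may
    # already use up the Social Security wage base.
    SS_WAGE_BASE = 176_100   # 2025 figure cited in the thread
    OASDI_RATE = 0.124       # Social Security portion (employee + employer sides)
    MEDICARE_RATE = 0.029    # Medicare portion, uncapped

    def se_payroll_tax(day_job_wages: float, consulting_income: float) -> float:
        ss_room = max(0.0, SS_WAGE_BASE - day_job_wages)  # cap left after wages
        ss_taxable = min(consulting_income, ss_room)
        return ss_taxable * OASDI_RATE + consulting_income * MEDICARE_RATE

    print(se_payroll_tax(200_000, 40_000))  # wages above cap: Medicare only, 1160.0
    print(se_payroll_tax(100_000, 40_000))  # under the cap: 4960 + 1160 = 6120.0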
>> sovereign debt burden
So all the entities that want to hold the debt (the Social Security administration, mutual funds, pension funds, etc.), where should they go instead? Riskier assets, is what you're saying, right? Is that a great idea?
I'm not giving investment advice, just commenting that our current fiscal trajectory has become completely unsustainable & dangerous and very few people seem to be seriously discussing it.
Probably the closest US bond equivalent would be debt from well-run Asian countries. I would avoid fixed-income dollar denominated assets.
>> completely unsustainable
in what way? as a sovereign currency issuer, the US can't ever be made to default, or can it?
What definition of unsustainable fits?
What event could cause public debt growth to reach some kind of insurmountable maximum?
It's not like private debt, when you run out of money, that is the end of the road. There is no such limit for a sovereign currency issuer. The complete settlement of outstanding public debt could be executed tomorrow without collecting another penny in taxes. I wouldn't recommend it, but it could be done.
Leveraging your power as "sovereign currency issuer" means monetizing the debt, aka inflating away the debt, which is disastrous in terms of what it does to purchasing power but also in terms of creditor confidence.
Please, Stephanie Kelton didn't discover some secret hack to get money for free - I would recommend learning traditional macro before going on the MMT train.
All investors should choose gold over the dollar because paper money is always debased. Organizations like Apple, Microsoft, and Google bought government bonds 10 years ago when the price of gold was $1100 and have watched their investments erode while gold has increased to $4000.
Do you also give forward looking investment advice, or strictly limit to looking what would have worked 10 years ago?
I see this kind of idea a lot, and it's wrong.
The surface way it's wrong is that investors could have invested in Nvidia 10 years ago instead of gold. Because they didn't, their investments "eroded" even more.
The deeper way it's wrong is that people who say this almost always have the unstated premise that gold is "real" money, that every price should be measured against it. That premise is false.
When gold was allowed to float in terms of the US dollar, it went up to $200, then dropped down to $100. When it dropped to $100, the dollar didn't become worth twice as much. Or, to use a more recent example, there has not been a factor of 4 inflation over the last 10 years. So gold is not a fixed measuring stick, against which all other things are measured.
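Putting the measuring-stick point in numbers, using the gold prices quoted upthread (the ~35% cumulative CPI figure is an approximation from BLS CPI-U data, an assumption here rather than something from the thread):

    # If gold were the fixed yardstick, the dollar "lost" 1 - 1100/4000
    # of its value over the decade; CPI tells a very different story.
    gold_2015, gold_2025 = 1_100, 4_000
    gold_implied_loss = 1 - gold_2015 / gold_2025
    cumulative_cpi = 0.35  # assumed ~35% CPI-U rise over the decade
    cpi_implied_loss = cumulative_cpi / (1 + cumulative_cpi)
    print(f"gold-implied dollar loss: {gold_implied_loss:.0%}")  # ~72%
    print(f"CPI-implied dollar loss:  {cpi_implied_loss:.0%}")   # ~26%

The gap between the two is the point: gold itself moved, so it can't be treated as the unit everything else is measured in.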
There is only one solution to the global debt crisis, and that's inflating the currency. They did it after WW2 and they will have to do it now. There is no other option. They can do it sneakily through fake measures of inflation, keeping a lid on cost-of-living adjustments, but ultimately they soak bondholders and the standard of living.
You see it everywhere in things they can't inflate: the price of houses and gold most obviously, but also in commodities that can't expand production quickly. The solution is to buy assets, of course.
Monetizing a debt of this magnitude would be disastrous, but agreed this appears to be the path we are going on by default - given that we are consistently above the inflation mandate yet still lowering rates.
It's no longer the early 20th century; there are other competitive and well-run jurisdictions for creditors to dump their money in if they lose faith in the US.
> It's no longer the early 20th century; there are other competitive and well-run jurisdictions for creditors to dump their money in if they lose faith in the US.
Where, pray tell are these competitive and well-run jurisdictions?
China has capital controls so that probably won't work. The EU might work if they ever get their sh*t together and centralise their bonds and markets, otherwise no.
Like, I too believe that the US is on an unsustainable path, but I just don't see where all that money is gonna go (specifically referring to the foreign investment in the US companies/markets here).
I think there are many smaller jurisdictions that are getting their shit together and might absorb demand - southeast Asia, Singapore obviously (but small), the Gulf. Some subsets of the EU, particularly Eastern Europe.
Plus, even worse-run higher yield jurisdictions become more appealing as the US fails.
It's more about volume than anything else. You or I could invest elsewhere but where do all the foreign holders of Treasuries and US stocks put their money?
Yes, I understand your point very well. My point is two-fold, 1. that many small and stable jurisdictions can absorb excess capital even if there isn't a single stable player as large as the US, 2. it makes higher yield less stable jurisdictions more appealing on the margin. Ultimately, capital will flow away from the US if our fiscal stability is increasingly in question.
Switzerland?
If you avoid their banks, then maybe.
Still not big enough, though. I feel like eurobonds or renminbi bonds are the only options, but neither works, for various reasons.
"Will have to do it now"? There was already a huge amount of money printed after COVID.
> massive sovereign debt burden
The cost to service the debt is about 60% of what it was in the 1980s. All those bonds are long since paid off. This is a meme. Should we adjust to something more sustainable? Yes. Is the "burden" too high to bear? No, it's just not.
If you know enough to know that the cost to service was higher in the 1980s as a % of government revenue (not sure about 60%; we're actually pretty close, AFAIU), then you should understand enough to know why that is not nearly the full picture.
Roughly a third of the S&P 500 is in the Magnificent Seven. It doesn't matter what they sell, there is just too much money there. Calling this situation an 'AI risk' is disingenuous, or at best blinkered.
Everyone outside of the American empire knows that the jig is up. When Uncle Sam has his money printing press on full blast, the American people don't feel the full effect, but everyone in the global majority, where there are no dollar printing machines, gets to see too many dollars chasing the same goods, a.k.a. inflation.
The day when the American people elect a fiscally prudent government, when Americans work hard, pay their taxes and get that deficit to a manageable number, is never going to happen. But that is not a problem; the situation is out of America's hands now.
It was the 2022 sanctions on Russia that made the BRICS alliance take note. Freezing their foreign reserves was not well received. Hence we now have China trading in their own currency with their trading partners happy with that.
Soon we will have a situation where there is no 'exorbitant privilege' (reserve currency, which can only ever end up with massive deficits), instead the various BRICS currencies will be anchored to valuable commodities such as rare earth metals, gold and everything else that is 'proof of work' and important to the future. So that means no more 'petro-dollar', the store of value won't be hydrocarbons.
This sounds better than going back to a gold standard. As I see it, the problem with the gold standard is that you kind of know already who has all the gold and we don't want them to be the masters of the universe, because it will be the same bankers.
As for an AI 'Hail Mary', I do hope so. The money printed by Uncle Sam to end up in the Magnificent Seven means that it will be relatively easy to write this money off.
> It was the 2022 sanctions on Russia that made the BRICS alliance take note.
IMO, it was the barriers imposed on the trade of oil, mostly from Iran and Syria. Not really Russia, because they adapted quickly. The countries in the group's name all had alternatives at that time.
Either way, the BRICS trading system wasn't a serious thing until this year. And what really kicked it off was Trump.
I saw this comment the other day on Reddit, and I think it sums up the current state pretty well.
> Last month my parents decided to invest their extra cash into “AI” by paying a broker to buy “the AI stocks” they keep hearing about on the news.
For me the question is who is going to subscribe who hasn't already. And that is before we consider the next-gen hardware that can run this stuff locally.
But from what I see of the economy around me here, people just don't have the spare funds for LLM luxuries. It feels like 15+ years of wage deflation and company streamlining have removed what little spare spending power people had here. Not forgetting the inflation we have seen in the euro zone.
Even if the bet is now an 'all in' on AGI, I see that more as an existential threat than an economic golden egg bailout.
"...who is going to subscribe who hasnt already."
I think there will be an increase in subscribers as people get more used to them. But there's probably also people like me who just dropped $2k on a new system to self-host my own, to customise the pipeline and integrate it into my house without sending data offsite.
> But from what I see of the economy around me here, people just don't have the spare funds for LLM luxuries.
If you have to pick between Disney+ for your kids and a chatbot subscription, it's a pretty easy choice.
More and more people are making choices like that.
for the chatbot, right? can never tell on the orange site...
IMO they can raise prices 10x and developers will happily continue paying.
This is fair. We're now evaluating open-source LLMs to develop our in-house solutions, adding them to our products and services. As soon as they released the models, the moat was, depending on the context, somewhat gone.
Which models have you found most valuable? Are they still worse than the proprietary ones?
We're testing different models depending on the business case. Our initial tests using 3, 7, and 8B models are working fine. We're not using the big ones since our use cases don't demand them.
Like Qwen, or Tulu3, or what?
Testing Llama, DeepSeek, and Mistral atm.
Awesome, thanks!
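For anyone curious what that kind of evaluation looks like, here's a minimal sketch using Hugging Face transformers. The model id is one plausible choice at the 7B scale, not necessarily what the poster used, and a 7B model in fp16 needs roughly 14+ GB of GPU memory:

    # Minimal local text-generation test with an open-weight model.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="mistralai/Mistral-7B-Instruct-v0.3",  # assumed model choice
        device_map="auto",                           # spread across available hardware
    )
    out = generator(
        "Classify this support ticket as billing, technical, or other: "
        "'I was charged twice this month.'",
        max_new_tokens=32,
    )
    print(out[0]["generated_text"])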
Also reported in the Guardian.
https://www.theguardian.com/business/2025/oct/08/bank-of-eng...
For non-Brits: the Bank of England is the UK's central bank, a lot like the US Fed. Its comments carry a lot of weight and do impact government policy.
Not enough central banks were making comments about the sub-prime bubble that led to the 2008 crisis. A warning about a possible AI bubble from a central bank is both significant and, given its mandate for monetary and financial stability, the prudent thing to do.
I like that the Bank of England spells out the "sudden correction" this time.
In 1996 Fed Chair Alan Greenspan warned about irrational exuberance, in 1999 he warned Congress about "the possibility that the recent performance of the equity markets will have difficulty in being sustained". The crash came in 2000.
The warning seems to have gone unnoticed. AMD just behaves exactly like Juniper in 1999.
The mistake central banks made in 2007-2009* was keeping monetary policy far too tight for far too long, for no real discernible reason.
Offering commentary on which particular sectors they feel are a 'bubble' is outside their purview and not particularly productive IMO; the state is not very good at picking winners.
*edited to 2007
Sorry you think the government wasn't pumping the 2006 economy enough?
2006 was too early, fair enough. But we were way too tight by late 2007 at least. We should never have let AD fall as much as it did.
Seems obvious.
AI is useful. But it's not trillion-dollars useful, and it probably won't be.
Why is that obvious? Even with effectively complete stagnation and just existing technology plus limited RLVR, I can see how this could be trillion-dollar-level useful.
The monetization behind AI is on shaky ground. Nobody is actually making any money off of it, and when they propose how to make money, we all get very scared.
It's either world-ending, hard-to-believe conjecture, like the death of scarcity, or it's... ads. Ads. You know, the thing we're already doing?
So, it's not looking great. Maybe we will find monetization strategies, but they're certainly not present now, even by the largest players with the most to lose.
It is the financial risk that is obvious. The big players are struggling to show meaningful revenue from the investment. Because the investment is so high, the revenue numbers need to be equally high, and growing fast. The 'correction' is when (ok, if) the markets realise that the returns aren't there. The worldwide risk is that AI-led growth has been a large chunk of the US stock market growth. If it 'corrects' US growth disappears overnight and takes everyone down with it. It is not an issue about the usefulness of AI, but the returns on investment and the market shocks caused by such large sums of money sloshing around one market.
I think we have only scratched the surface of what we can do with the existing technology. A much more present risk from stagnation IMO is that if we stagnate, it is almost certain that the value of the tech will not be able to be enclosed/captured by its creators.
IMHO it will take off in animation/illustration as soon as Adobe (or some competitor) figures out how to make good tooling for artists. Not for idiot wantrepreneurs who want to dump fully-generated slop onto Amazon, but so that a person can draw rough pencil sketches and storyboards and reference character sheets and get back proper illustrations. Basically, don't replace the penciler but replace the inker and the colourist (and, in animation, the in-betweener).
That's more of a UI problem than a limitation in Diffusion tech.
That's a customer who'll pay, it might be worth a lot. But a $trillion per year?
There's a free add-on for Krita that did pretty much that when I tried it last year.
The glaring issue back then was that, unlike an LLM that can understand what you try to explain and be a bit more consistent, the diffusion model's ability to read and understand your prompt wasn't really there yet; you were more shotgunning keywords and hoping the seed lottery gave you something nice.
But recent image generation models are significantly better in stable output. Something like qwen image will care a lot more about your prompt and not entirely redraw the scene into something else just because you change the seed.
Meaning that the UI experiments already exist but the models are still a bit away from maturity.
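The "seed lottery" is easy to see with the diffusers library: the same prompt redraws into a different scene whenever the seed changes, and is reproducible when the seed is fixed. A sketch, with the model id as an assumption:

    # Same prompt, three seeds: older models often redraw the whole
    # scene per seed; fixing the generator makes a run reproducible.
    import torch
    from diffusers import AutoPipelineForText2Image

    pipe = AutoPipelineForText2Image.from_pretrained(
        "stabilityai/sdxl-turbo", torch_dtype=torch.float16
    ).to("cuda")

    prompt = "inked and coloured illustration of a lighthouse, from a rough pencil sketch"
    for seed in (1, 2, 42):
        g = torch.Generator("cuda").manual_seed(seed)
        image = pipe(prompt, num_inference_steps=1, guidance_scale=0.0,
                     generator=g).images[0]
        image.save(f"lighthouse_seed{seed}.png")  # each seed: a different draw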
On the other hand, when looking at how models are actually evolving I'm not entirely convinced we'll need particularly many classically trained artists in roles where they draw static images with some AI acceleration. I expect people to talk to an LLM interface that can take the dumbest of instructions and carefully adjust a picture, sound, music or an entire two hour movie. Where the artist would benefit more by knowing the terminology and the granular abilities of the system than by being able to hold a pencil.
The entertainment and media industry is worth trillions on an annual basis, if AI can eat a fraction of that in addition to some other work-roles it will easily be worth the current valuations.
> big players are struggling to show meaningful revenue from the investment
ChatGPT's $10B per year is not insignificant, though.
It is when compared with their capex, and where is that revenue coming from? It’s predominantly coming from other AI hopefuls incinerating capital.
Complete stagnation would mean that hundreds of billions earmarked for datacenter and chip production in the next few years would have to be cancelled.
The promise of this future demand is what is driving the inflation of the stock market, with investors happy to ignore the deep losses accruing to every AI software player...for now. Pulling the plug on the capacity-building deals is effectively an admission that demand was overestimated, and the market will tank accordingly.
It says it all about current market mania that Nvidia (who sells most of the future chip capacity) is valued at $4 trillion, more than every publicly traded pharmaceutical company (who have decades of predictable future cash flows) combined.
The existing technology can’t even replace customer support systems, which seems like the lowest bar for a role that’s perfectly well suited to LLMs. How are you justifying the trillion dollar value?
I disagree that customer support is the lowest bar for LLMs. Companies have been trying to reduce customer support spend for decades, and yet it still exists. Why? Because the types of questions and types of callers that fall onto the remaining customer support requests are not easy to automate. Either the question itself is a complex edge-case that requires human intervention, or the person calling wants to talk to a human and good documentation was not going to change their action.
I think with a bit of engineering, the existing tech can replace customer support systems - especially as the boomers are going away. But I realize this is an uphill battle on HN
> I think with a bit of engineering, the existing tech can replace customer support system
That's the lowest of the low, and even you accept it doesn't work (yet). How can LLMs be worth 50% of recent GDP growth if it's that bad? Do you think customer support represents 50% of newly created value? I bet it isn't even 0.5%.
But the point is the tech obviously isn't there yet. LLMs are still too prone to giving falsehoods and in that case a raw text-search of the support DB would be more useful anyways.
Maybe if companies would wire up their "oh a customer is complaining try and talk them out of canceling their account offer them a mild discount in exchange for locking in for a year contract" API to the LLM? Okay, but that's not a trillion-dollar service.
Because it primarily replaces existing value rather creating new value worth $1T.
What creates more value - 1 developer or 1 developer working at 10x pace?
Where is all the productivity? Everyone says they became a 100x employee thanks to LLMs, yet not one company has seen any out-of-the-ordinary growth or profit besides AI-hyped companies.
What if the amount of slop generated counteracts the amount of productivity gained? For every line of code it writes, it also writes some BS paragraph in a business plan, a report, &c.
How are you evaluating the phrase "yet not one company?"
I can't think of any tech with this kind of crazy yearly investment in infrastructure with no success stories.
Maybe it's because I find writing easy, but I find the text generation broadly useless except for scamming. The search capabilities are interesting but the falsehoods that come from LLM questions undermine it.
The programming and visual art capabilities are most impressive to me... but where's the companies making killings on those? Where's the animation studio cranking out Pixar-quality movies as weekly episodes?
The animation stuff is about to happen but not there yet.
I work in the industry and I know that ad agencies are already moving onto AI gen for social ads.
For VFX and films the tech is not there yet, since OpenAI believes they can build the next TikTok on AI (a proposition being tested now) and Google is just being Google - building amazing tools but with little understanding (so far) of how to deploy them on the market.
Still Google is likely ahead in building tools that are being used (Nano Banana and Veo 3) while the Chinese open source labs are delivering impressive stuff that you run locally or increasingly on a rented H100 on the cloud.
You can easily google "generative AI success stories" and read about them.
There are always a few comments that make it seem like LLMs have done nothing valuable despite massive levels of adoption.
I realize this is a cheap shot but
> You can easily google "generative AI success stories" and read about them.
notice you suggested asking Google and not chatgpt.
I don't understand why you think this is important to mention.
Search engines are better at certain tasks than others.
If I said should FLY to Spain is it a cheap shot against sailing because I didn't mention it?
> But it's not trillion-dollars useful, and it probably won't be.
The market disagrees.
But if you are sure of this, please show your positions. Then we can see how deeply you believe it.
My guess is you’re short the most AI-exposed companies if you think they’re overvalued? Hedged maybe? You’ve found a clever way to invest in bankruptcy law firms that handle tech liquidations?
Have you ever heard that "the market can stay irrational longer than you can stay solvent"?
The thing about bubbles is, you can often easily spot them, but can't so easily say when they'll pop.
No. Then you haven’t spotted a bubble.
You’ve just made a comment that “wow, things are going up!” That’s not spotting bubble, that’s my non-technical uncle commenting at a dinner party, “wow this bitcoin thing sure is crazy huh?”
Talk is cheap. You learn what someone really believes by what they put their money in. If you really believe we’re in a bubble, truly believe it based on your deep understanding of the market, then you surely have invested that way.
If not, it’s just idle talk.
I truly believe we are in a bubble. I truly believe that AI will exist on the other side of that bubble, just as internet companies and banks existed on the other side of the dotcom crash and the housing crisis.
I don't know how to invest to avoid this bubble. My money is where my mouth is. My investments are conservative and long-term. Most in equity index funds, some bonds, Vanguard mutual funds, a few hand-picked stocks.
No interest in shorting the market or trying to time the crash. I would say I 90% believe a correction of 25% or more will happen in the next 12 months. No idea where my money might be safe. Palantir? Northrop Grumman?
That’s silly. I can spot a crashing plane (engines on fire, wings torn half off) without being able to predict where and exactly when it’ll crash.
We can spot a bubble without being able to predict when it’ll pop.
Surely you can spot a bubble if you see that it is rapidly expanding and ultimately unsustainable. Being able to predict when it finally pops would be equivalent to winning a lottery and people would be able to make a lot of money from that, but ultimately no-one can reliably predict when a bubble will pop - doesn't mean that they weren't bubbles.
One can be skeptical about the overall value of various technologies while also being conservative about specific bets in specific timeframes against them.
I think you’re making my point without realizing it.
If you are skeptical but also not willing to place a bet, you shouldn’t say “AI is overvalued” because you don’t actually believe it. You should say, “I think it might be overvalued, but I’m not really sure? And I don’t have enough experience in markets or confidence to make a bet on it, so I will go with everyone else’s sentiment and make the ‘safe’ bet of being long the market. But like… something feels weird to me about how much money is being poured into this? But I can’t say for sure whether it is overvalued or not.”
Those are two wildly different things.
Not at all. I may think $TECH is overvalued but some companies may well make it out the other side, some aspects of the $TECH may play out (or not), and the bubble may pop in 1 year or 5. So the sensible process may be to invest in broader indexes and let things play out at the more micro level (that may not be possible to invest in anyway).
I certainly had unease about the dot-com market and should have shifted more investments to the conservative side. But I made the "‘safe’ bet of being long the market" even after things started going south.
FWIW, I do think AI is overvalued for the relatively near term. But I'm not sure what to do about that other than being fairly conservatively invested which makes sense for me at this point anyway.
I generally buy index funds but I put some into AMD a while back as the "less-AI-part-of-tech". Will probably get out of that as they've been sucked into that vortex and shift more into global indexes instead of CAN/USA.
I'll leave shorting to the pros. The whole "double-your-money-or-infinite-losses" aspect of shorting is not a game I'm into.
Scott Galloway had a podcast episode about this topic just over a week ago. https://www.youtube.com/watch?v=Oeepx2ZLrCA
I used to scoff at the idea of the AI-bubble (or any recently called-for tech bubble) being like the 90s given the way technology/the internet is now so integrated into our lives, but the way he spelled it out it does seem similar.
Have there been other stock categories/industries receiving similar flags in the past?
A lot of investment is banking on AGI. There's no sign AGI is going to happen this decade.
What's a sign it's going to happen ever?
I used to believe in AGI but the more AI has advanced the more I’ve come to realize that there’s no magic level of intelligence that can cure cancer and figure out warp drives. You need data, which requires experimentation, which requires labor and resources of which there is a finite supply. If you had AGI tomorrow and asked it to cure cancer, it would just ask for more experimental data and resources. Isn’t that what the greatest minds in cancer research would say as well? Why do we think that just being more rational or being able to compute better than humans would be sufficient to solve the problem?
It’s very possible that human beings today are already doing the most intelligent things they can given the data and resources they have available. This whole idea that there’s a magic property called intelligence that can solve every problem when it reaches a sufficient level, regardless of what data and resources it has to work with, increasingly just seems like the fantasy of people who think they’re very intelligent.
AGI isn't a synonym for smarter-than-human.
What’s your point? I’m saying there’s no level of smartness that can cure cancer, the bottleneck is data and experimentation not a shortage of smartness/intelligence
And I'm saying that AGI doesn't imply a level of smartness at all.
Eliezer's short story "That Alien Message" provides a convincing argument that humans are cognitively limited, not data-limited, through the device of a fictional world where people think faster: https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien...
> Yes. There is. The theoretical limit is that every time you see 1 additional bit, it cannot be expected to eliminate more than half of the remaining hypotheses (half the remaining probability mass, rather). And that a redundant message, cannot convey more information than the compressed version of itself. Nor can a bit convey any information about a quantity, with which it has correlation exactly zero, across the probable worlds you imagine.
> But nothing I've depicted this human civilization doing, even begins to approach the theoretical limits set by the formalism of Solomonoff induction.
This is also a commonplace in behavioral economics; the whole foundation of the field is that people in general don't think hard enough to fully exploit the information available to them, because they don't have the time or the energy.
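One standard way to formalize the quoted one-bit limit (a sketch using textbook information-theoretic identities, not anything from the essay itself): the expected Bayesian update from observing a single bit is bounded by that bit's entropy,

    % Expected information gain from observing one bit B about hypotheses H:
    % the average KL divergence between posterior and prior equals the
    % mutual information, which cannot exceed the entropy of one bit.
    \mathbb{E}_{b \sim B}\Big[ D_{\mathrm{KL}}\big( p(H \mid B=b) \,\big\|\, p(H) \big) \Big]
      \;=\; I(H;B) \;\le\; H(B) \;\le\; 1 \text{ bit},

so on average one observed bit can at most halve the remaining probability mass over hypotheses, which is exactly the bound the quote states.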
——
Of course, that doesn't mean that great intelligence could figure out warp drives. Maybe warp drives are actually physically impossible! https://en.wikipedia.org/wiki/Warp_drive says:
> A warp drive or a drive enabling space warp is a fictional superluminal (faster than the speed of light) spacecraft propulsion system in many science fiction works, most notably Star Trek,[1] and a subject of ongoing real-life physics research. (...)
> The creation of such a bubble requires exotic matter—substances with negative energy density (a violation of the Weak Energy Condition). Casimir effect experiments have hinted at the existence of negative energy in quantum fields, but practical production at the required scale remains speculative.
——
Cancer, however, is clearly curable, and indeed often cured nowadays. It wouldn't be terribly surprising if we already had enough data to figure out how to solve it the rest of the time. We already have complete genomes for many species, AlphaFold has solved the protein-folding problem, research oncology studies routinely sequence tumors nowadays, and IHEC says they already have "comprehensive sets of reference epigenomes", so with enough computational power, or more efficient simulation algorithms, we could probably simulate an entire human body much faster than real time with enough fidelity to simulate cancer, thus enabling us to test candidate drug molecules against a particular cancer instantly.
Also, of course, once you can build reliable nanobots, you can just program them to kill a particular kind of cancer cell, then inject them.
Understanding this does not require believing that "intelligence that can solve every problem when it reaches a sufficient level, regardless of what data and resources it has to work with", which I think is a strawman you have made up. It doesn't even require believing that sufficient intelligence can solve every problem if it has sufficient data and resources to work with. It only requires understanding that being able to do the same thing regular humans do, but much faster, would be sufficient to cure cancer.
——
There does seem to be an open question about how general intelligence is. We know that there isn't much difference in intelligence between people; 90+% of the human population can learn to write a computer program, make a pit-fired pot from clay, haggle in a bazaar, paint a realistic portrait, speak Chinese, fix a broken pipe, interrogate a suspect and notice when he contradicts himself, fletch an arrow, make a convincing argument in courts, program a VCR, write poetry, solve a Rubik's cube, make a béchamel sauce, weave a cloth, sing a five-minute lullaby, sew a seam, or machine a screw thread on a lathe. (They might not be able to learn all of them, because it depends on what they spend time on.)
And, as far as we know, no other animal species can do any of those things: not chimpanzees, not dolphins, not octopodes, not African grey parrots. And most of them aren't instinctive activities even in humans—many didn't exist 1000 years ago, and some didn't exist even 100 years ago.
So humans clearly have some fairly flexible facility that these other species lack. "Intelligence" is the usual name for that facility.
But it's not perfectly general. For example, it involves some degree of ability to imagine three-dimensional space. Some of the humans can also reason about four- or five-dimensional spaces, but this is a much slower and more difficult process, far out of proportion to the underlying mathematical difficulty of the problem. And it's plausible that this is beyond the cognitive ability of large parts of the population. And maybe there are other problems that some other sort of intelligence would find easy, but which the humans don't even notice because it's incomprehensible to them.
Regarding "Alien Message", I don't find that story particularly convincing. I think it's muddled and contrived.
The basic issue is that we have to deduce stuff about the world we live in, using resources from the world we live in. In the story, the data bandwidth is contrived to be insanely smaller than the compute bandwidth, but that's not realistic. In reality, we are surrounded by chaotic physical systems that operate on raw hardware. They are, in fact, quite fast, and probably impossible to simulate efficiently. For instance, we can obviously never build a computer that can simulate the behavior of its own circuitry, using said circuitry, faster than it operates. But I think there's a lot of physical systems that are just like that.
Being data-limited means that we get data slower than we can analyze and process it. It is certainly possible to improve our ability to analyze data, but I don't think we can assume that the best physically realizable intelligence would overcome data limitation, nor that it would be cost-effective in the first place, compared to simply gathering more data and experimenting more.
You seem to be agreeing with the story's thesis, rather than disagreeing. The story claims that we get an enormous amount of data from which we could compute much more than we do. You, too, are claiming that we get an enormous amount of data from which we could compute much more than we do. If that's true, then we aren't limited by our data, which is what I meant by "data-limited"—although you seem to mean the opposite, "we get data slower than we can analyze and process it", in which we are limited not by the data but by the processing. This tends to rebut the claim above, "If you had AGI tomorrow and asked it to cure cancer, it would just ask for more experimental data and resources."
It may very well be true that you could cure cancer even faster or more cheaply with more experimental data, but that's irrelevant to the claim that more experimental data is necessary.
It may also be the case that there's no "shortcut" to simulating a human body well enough to test drugs against a simulated tumor faster than real time—that is, that you need to have enough memory to track every simulated atom. (The success of AlphaFold suggests that this is not the case, as does the ability of humans to survive things like electric shocks, but let's be conservative.) But a human body only contains on the order of 10²⁴ atoms, so you can just build a computer with 10²⁸ words of memory, and processing power to match. It might be millions of times larger than a human body, but that's okay; there's plenty of mass out there to turn into computronium. It doesn't make it physically unrealizable.
Relatedly, you may be interested in seeing Mr. Rogers confronting the paperclip maximizer: https://www.youtube.com/watch?v=T-zJ1spML5c
> Regarding "Alien Message", I don't find that story particularly convincing. I think it's muddled and contrived.
Well, yes, it's from Eliezer Yudkowsky. The kind of people who generally find him persuasive will do so. Those who find him unconvincing, or even somewhat of a crank, like the other self-proclaimed "rationalists", will do so too. "Muddled" is correct; he lacks rigour in everything, but certainly brings the word count.
It's not a strawman, it's a thought experiment: if the premise of AGI is that a superintelligence could do all these amazing things, what could it do today if it existed but only had its superintelligence? My suggestion is that even something a billion times more intelligent than a human being might not be able to cure cancer with the information it has available today. Yes it could build simulations and throw a lot of computing power at these problems, but is the bottleneck intelligence or computing power to run the algorithms and simulations? You're conflating the two, no one disagrees that one billion times more computing power could solve big problems, the disagreement is whether one billion times more intelligence has any meaningful value which was the point of isolating that variable in my thought experiment.
Generally, I agree, but it also depends on perspective. Intelligence exists on many levels and manifests differently across species. From a monkey's standpoint, if they were capable of such reflection, they might perceive themselves as the most capable creatures in their environment. Yet humans possess cognitive abilities that go far beyond that: abstract reasoning, cumulative culture, large-scale cooperation, etc.
A chimpanzee can use tools and solve problems, but it will never construct a factory, design an iPhone, or build even a simple wooden house. Humans can, because our intelligence operates at a qualitatively different level.
As humans, we can easily visualize and reason about 2D and 3D spaces, it's natural because our sensory systems evolved to navigate a 3D world. But can we truly conceive of a million dimensions, let alone visualize them? We can describe them mathematically, but not intuitively grasp them. Our brains are not built for that kind of complexity.
Now imagine a form of intelligence that can directly perceive and reason about such high dimensional structures. Entirely new kinds of understanding and capabilities might emerge. If a being could fully comprehend the underlying rules of the universe, it might not need to perform physical experiments at all, it could simply simulate outcomes internally.
Of course that's speculative, but it just illustrates how deeply intelligence is shaped and limited by its biological foundation.
> If a being could fully comprehend the underlying rules of the universe, it might not need to perform physical experiments at all, it could simply simulate outcomes internally.
It likely couldn't, though, that's the problem.
At a basic level, whatever abstract system you can think of, there must be an optimal physical implementation of that system, the fastest physically realizable implementation of it. If that physical implementation were to exist in reality, no intelligence could reliably predict its behavior, because that would imply that they have access to a faster implementation, which cannot exist.
The issue is that most physical systems are arguably the optimal implementation of whatever it is that they do. They aren't implementations of simple abstract ideas like adders or matrix multipliers, they're chaotic systems that follow no specifications. They just do what they do. How do you approximate chaotic systems which, for all you know, may depend on any minute details? On what basis do we think it is likely that there exists a computer circuit that can simulate their outcomes before they happen? It's magical thinking.
Note that intelligence has to simulate outcomes, because it has to control them. It has to prove to itself that its actions will help achieve its goals. Evolution doesn't have this limitation: it's not an agent, it doesn't have goals, it doesn't simulate outcomes, stuff just happens. In that sense it's likely that certain things can evolve that cannot be intelligently designed (as in designed, constructed and then controlled). It's quite possible intelligence itself falls in that category and we can't create and control AGI, and AGI can't improve itself and control the outcome either, and so on.
I agree that computational irreducibility and chaos impose hard limits on prediction. Even if an intelligence understood every law of physics, it might still be unable to simulate reality faster than reality itself, since the physical world is effectively its own optimal computation.
I guess where my speculation comes in is that "simulation" doesn’t necessarily have to mean perfect 1:1 physical emulation. Maybe a higher intelligence could model useful abstractions/approximations, simplified but still predictive frameworks that are accurate enough for control and reasoning even in chaotic domains.
After all, humans already do this in a primitive way, we can't simulate every particle of the atmosphere, but we can predict weather patterns statistically. So perhaps the difference between us and a much higher intelligence wouldn't be breaking physics, but rather having much deeper and more general abstractions that capture reality's essential structure better.
In that sense, it's not "magical thinking", I just acknowledge that our cognitive compression algorithms (our abstractions) are extremely limited. A mind that could discover higher order abstractions might not outrun physics, but it could reason about reality in qualitatively new ways.
I think I see what you’re getting at, but the difference between apes and humans isn’t that we can reason in 3D. If someone could actually articulate the intellectual breakthrough that makes humans smarter than apes, then maybe I would accept there’s some intellectual ability AI could achieve that we don’t have, but I don’t see how it could be higher dimensional reasoning.
> A chimpanzee can use tools and solve problems, but it will never construct a factory, design an iPhone, or build even a simple wooden house. Humans can, because our intelligence operates at a qualitatively different level.
Humans existed in the world for hundreds of thousands of years before they did any of those things, with the exception of the wooden hut, which took less time than that, but also wasn't instant.
Your example doesn't entirely contradict the argument that it takes time and experimentation as well, that intellect isn't the only limiting factor.
My point wasn't so much about how fast humans achieved these things, but about what's possible at all given a certain cognitive architecture. Chimpanzees could live for another million years and still wouldn't build a factory, not because they don't have enough time, but because they lack the cognitive and cultural mechanisms to accumulate and transmit abstract knowledge.
So I completely agree that intelligence alone isn't the only factor, but it is the whole foundation.
> Chimpanzees could live for another million years and still wouldn't build a factory, not because they don't have enough time, but because they lack the cognitive and cultural mechanisms
Given a million years, that could change.
Agreed.
And, if you had AGI tomorrow and asked it to figure out FTL warp drives, it would just explain to you how it's not going to happen. It is impossible, the end. In fact the request is fantasy, nigh nonsensical and self-contradictory.
Isn’t that what the greatest minds in physics would say as well? Yes, yes it is.
No debate will be entered into on this topic by me today.
Actually, no, it isn't. They say it isn't necessarily possible, but not self-contradictory as far as we know. It's good that you aren't going to debate this.
https://en.wikipedia.org/wiki/Alcubierre_drive
You failed reading comprehension.
You think I'm the one who's failing here?
You said:
"(...) if you had AGI tomorrow and asked it to figure out FTL warp drives, it would just explain to you how it's not going to happen. It is impossible, the end. In fact the request is fantasy, nigh nonsensical and self-contradictory."
"Isn’t that what the greatest minds in physics would say as well? Yes, yes it is."
That is not in fact what the greatest minds in physics would say. Your meta-knowledge of physics has failed you here, resulting in you posting embarrassing misinformation. I'm just having to correct it to prevent you from misleading anyone else.
You failed to realise that I'm not debating you, I'm berating you. Some people see statements like "not debating" as a personal challenge, a reason to get aggressive. Lets be clear: they are not nice people, and you don't want to be trolls like them.
Yes, I can see that you're just trolling, not debating. I appreciate the fact that you aren't debating, because I don't want to have to correct more of your misinformation. I don't think your berating is productive either, although it does demonstrate that—as you said—you are not a nice person.
There need to be breakthrough papers or hardware that can expand context size exponentially, or a new model architecture that can address long-term learning.
Humans. There are arrangements of atoms that if constructed and activated, act perfectly like human intelligence. Because they are human intelligence.
Human intelligence must be deterministic; any other conclusion is equivalent to the claim that there is some sort of "soul", for lack of a better term. If human intelligence is deterministic, then it can be written in software.
Thus, if we continue to strive to design/create/invent such, it is inevitable that eventually it must happen. Failures to date can be attributed to various factors, but the gist is that we haven't yet identified the principles of intelligent software.
My guess is that we need less than 5 million years further development time even in a worst-case scenario. With luck and proper investment, we can get it down well below the 1 million year mark.
"Human intelligence must be deterministic, any other conclusion is equivalent to the claim that there is some sort of "soul" for lack of better term. "
Determinism is a metaphysical concept like mathematical platonism or ghosts.
> Thus, if we continue to strive to design/create/invent such, it is inevitable that eventually it must happen.
~200 years of industrial revolution and we already fucked up beyond the point of no return, I don't think we'll have resources to continue on this trajectory for 1m years. We might very well be accelerating towards a brick wall, there is absolutely no guarantee we'll hit AGI before hitting the wall
>We might very well be accelerating towards a brick wall, there is absolutely no guarantee we'll hit AGI before hitting the wall
We've already set the course for human extinction, we're about 6-8 generations away from absolute human extinction. We became functionally extinct 10-15 years ago. Still, if we had another 5 million years, I'm one hundred percent certain we could crack AGI.
> Human intelligence must be deterministic, any other conclusion is equivalent to the claim that there is some sort of "soul" for lack of better term.
No, not all processes follow deterministic Newtonian mechanics. It could also be random, unpredictable at times. Are there random processes in the human brain? Yes, there are random quantum processes in every atom, and there are atoms in the brain.
Yes, this is no less materialistic: humans are still proof that either you believe in souls or some such, or that human-level intelligence can be made from material atoms. But it's not deterministic.
But also, LLMs are not anywhere close to becoming human level intelligence.
>It could also be random, unpredictable at times.
It isn't. But if it were, we could also write that into the algorithm (see the sketch below).
>But also, LLMs are not anywhere close to becoming human level intelligence.
They're no farther than about 5 million years distant.
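(As an aside, the "write randomness into the algorithm" point is easy to make concrete. Below is a minimal sketch, assuming nothing beyond Python's standard library: a seeded PRNG gives a fully deterministic replay, while os.urandom draws on the OS entropy pool, which on most platforms mixes in hardware noise.)

```python
# Minimal sketch: software can be deterministic or genuinely nondeterministic.
import os
import random

# Deterministic path: a seeded PRNG replays the exact same sequence every run.
rng = random.Random(42)
print([round(rng.random(), 6) for _ in range(3)])  # identical on every run

# Nondeterministic path: os.urandom reads the OS entropy pool, which on most
# platforms is mixed with hardware noise, so this value differs run to run.
print(int.from_bytes(os.urandom(4), "big"))
```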
> if deterministic, then can be done in software.
You just need a few Dyson spheres and someone omniscient to give you all the parameter values. Easy peasy.
Just like cracking any encryption: you just brute force all possible passwords. Perfectly deterministic decryption method.
</s>
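(The sarcasm lands because "deterministic" is not the same thing as "tractable". A back-of-the-envelope sketch; the guesses-per-second rate is an assumed, roughly exascale figure for illustration, not a benchmark:)

```python
# How long a "perfectly deterministic" brute-force of a 128-bit key would take.
# The guess rate below is an assumption (roughly exascale), purely illustrative.
keyspace = 2 ** 128                       # possible 128-bit keys
guesses_per_second = 10 ** 18             # assumed rate
seconds = keyspace / guesses_per_second
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:.2e} years")               # ~1.1e13 years, ~800x the age of the universe
```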
That's what people have said about technologies in every decade, Sam
Current valuations are based on the belief that genuine AGI is around the corner. It’s not. LLMs are an interesting technology with many use cases, but they can’t reason in the usual sense of the word, and they are a dead end for the type of AGI needed to justify current investments.
It’s going to be a gruesome train wreck.
Any non-paywalled links, please?
https://archive.ph/BNUzu
I don't see a paywall
You can almost always go to archive.is or one of the other mirrors and paste in the original link. It will get you past the paywall and also give you a link that will get others past it. It seems to be a monkey-see, monkey-do part of the Hacker News microculture that if a link is paywalled, a commenter will throw up the archive link.
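(If you want to automate that habit, it's a few lines. A minimal sketch; the /newest/<url> redirect endpoint is an assumption about how archive.today's mirrors currently expose their latest snapshot:)

```python
# Minimal sketch: build an archive.ph lookup URL for a paywalled article.
# Assumes the /newest/<url> endpoint still redirects to the latest snapshot.
from urllib.parse import quote

def archive_link(url: str) -> str:
    """Return an archive.ph URL that should redirect to the newest snapshot."""
    return "https://archive.ph/newest/" + quote(url, safe=":/")

print(archive_link("https://www.example.com/paywalled-article"))
# -> https://archive.ph/newest/https://www.example.com/paywalled-article
```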
Wow, position 140 on HN after about 3 hours. Brutal lol
Top ten on https://news.ycombinator.com/active right now.
I use https://hckrnews.com/ so I can see the stories in chronological order. Makes the "front page" effect basically disappear.
This is how capitalism does things: no one wants to overinvest, but no one wants to be left behind, and everyone is sure that either there's not gonna be a pop or that they can sell before it pops.
It has been educational to see how quickly the financier class moved when they saw an opportunity to abandon labor entirely, though. That's worth remembering when they talk about how this system is the best one for everyone.
Leaving large portions of the population jobless surely can't be good for business and political stability.
They basically want to be like the Spacers in Asimov's robot novels: a handful of supremely wealthy people living in vast domains where every single one of their needs and wants is provided for by machines. There is literally no lower (human) class in this society.
This is what's making me laugh a bit about Ford's brazen "we're firing all the white-collar workers" nonsense. Ok, go for it. Who are you going to get to buy a $80,000 F-150?
Chevy workers?
I feel like a lot of people aren't fully examining what AGI would mean for labor. As of right now, labor exists separate from capital, which is to say the economy is made of workers, stuff, and money. Workers get stuff, put labor into it, and turn it into more valuable stuff; capital owns that stuff, so it sells it to other workers (usually) and gives its workers some portion of the increase in value. AGI would mean that capital is labor. The stuff can go get more stuff and refine it. Capital won't make stuff to sell; it will just make stuff it wants, and stuff to go get and make the stuff it wants.

It will, of course, be wildly bad for political stability, but I feel like a lot of people think they've found some sort of catch-22 in AGI when labor has no money to buy stuff. They think "that'll shut the whole economy down", but what would really happen is that instead of building a machine that makes boots, hiring someone to run it, selling boots, and using the money to buy a yacht, they'll just build a machine that makes yachts and another machine that kills anyone who interferes with the yacht machine. An economy made of workers, stuff, and money will become an economy made just of stuff, as workers are replaced by stuff, and money was only ever useful as a way to induce people to work.
Zero labor cost is the dream!
To whom does one sell when they've deleted their workforce? Seeing company after company add to the ranks of the unemployed shows they have no forward-thinking economists advising them. Further, AI, for all of its positive potential, is NOT going to be free... or even "cheap" once the investors dry up.
I never said it was a smart dream. It all seems somewhat (at best) shortsighted to me.
I'm pretty sure they all see it as someone else's problem to solve.
Yeah... always someone else's problem to solve. Just like a pyramid scheme.
Isn't it a self-fulfilling prophecy at that point? I have been hearing so many "it's going to crash, sell" calls from all sorts of sources since mid-August...