258 comments

  • dwohnitmok 2 days ago ago

    This is an extremely confusing snippet from the interview for Patel to put as the title.

    Amodei does not mean that things are plateauing (i.e. that the exponential will no longer hold), but rather uses "end" in a sense closer to "endgame": we are getting to the point where all benchmarks pegged to human ability will be saturated and AI systems will be better than any human at any cognitive task.

    Amodei lays this out here:

    > [with regards to] the “country of geniuses in a data center”. My picture for that, if you made me guess, is one to two years, maybe one to three years. It’s really hard to tell. I have a strong view—99%, 95%—that all this will happen in 10 years. I think that’s just a super safe bet. I have a hunch—this is more like a 50/50 thing—that it’s going to be more like one to two [years], maybe more like one to three.

    This is why Amodei opens with

    > What has been the most surprising thing is the lack of public recognition of how close we are to the end of the exponential. To me, it is absolutely wild that you have people — within the bubble and outside the bubble — talking about the same tired, old hot-button political issues, when we are near the end of the exponential.

    Whether you agree with him is of course a different matter altogether, but a clearer phrasing would probably be "We are near the endgame."

    • adrian_b a day ago ago

      Regarding "AI systems will be better than any human at any cognitive task", while I believe that this may become possible in a distant future, with systems having a quite different structure, I do not see any evidence of such a thing becoming possible during the next decade or two.

      Nothing that I have seen described here on HN or elsewhere, by the most enthusiastic users of AI, who claim that their own productivity has been multiplied, demonstrates performance in cognitive tasks even remotely comparable with that of a competent human, much less better performance.

      All that I see is that the AI systems outperform humans at various tasks only because they had access in their training data to much more information than most humans are allowed to access, because humans do not have enough money to obtain such access, both because of the various copyright paywalls and because of the actual cost of storage and retrieval systems.

      Using an AI agent may be faster than being given access to the training data and running conventional search tools over it, but the speed may be illusory: when I search something and have access to the original sources, I can validate the results faster and with much more certainty than when I have to ponder whether what the AI has provided is correct, e.g. whether a program it produced really does what I requested and is bug-free (in comparison with having access to the programs it was trained on and choosing myself what to copy and paste).

      I hope that paid access to AI tools gives better results, but the AI replies that popular search engines like Google and Bing force upon their users have made Internet searches much worse, not better: their answers always contain something other than what I want, and that is the best case, when the answers are not plainly wrong.

      • generallyjosh 3 hours ago ago

        One of the first skills I made for Claude was a research skill.

        I give it a question (narrow or really broad), and the model does a bunch of web searches using subagents, to try and get a comprehensive answer using current results

        The important part is, when the model answers, I have it cite its sources, using direct links. So, I can directly confirm the accuracy and quality of any info it finds

        It's been super helpful. I can give it super broad questions like "Here's the architecture and environment details I'm planning for a new project. Can you see if there's any known issues with this setup". Then, it'll give me direct links + summaries to any relevant pages.

        Saves a ton of time manually searching through the haystack, and so far, the latest models are pretty good about not missing important things (and catching plenty of things I missed)

    • rramadass a day ago ago

      Nicely articulated and thanks for pointing this out.

      It is a 2+ hour video, and hence a summary of the main themes is welcome.

  • bakibab 2 days ago ago

    One of my friends and I started building a PaaS for a niche tech stack, believing that we could use Claude for all sorts of code generation activities. We thought, if Anthropic and OpenAI are claiming that most of the code is written by LLMs in new product launches, we could start using it too.

    Unsurprisingly, we were able to build a demo platform within a few days. But when we started building the actual platform, we realized that the code generated by Claude is hard to extend, and a lot of replanning and reworking needs to be done every time you try to add a major feature.

    This brought our confidence level down. We still want to believe that Claude will help in generating code. But I no longer believe that Claude will be able to write complex software on its own.

    Now we are treating Claude as a junior person on the team and give it well-defined, specific tasks to complete.

    • nsoonhui 2 days ago ago

      From my experience, the biggest difference between AI and a junior programmer is that AI can churn out code very fast, but you need to do the testing and verify the fix. A junior, on the other hand, is very slow at writing code but can do the verification and testing on their own.

      Usually the verification and testing is the most time-consuming part.

      For context, I am working on a graphical application like AutoCAD.

      • generallyjosh 3 hours ago ago

        I'm finding the latest models are pretty good at debugging, if you give them the tools to debug properly

        If they can run a tool from the terminal, see all the output in text format, and have a clear 'success' criterion, then they're usually able to figure out the issue and fix it (often with spaghetti code patching, but it does at least fix the bug)

        I think the testing/verification part is going to keep getting better, as we figure out better tools the AI can use here (ex, parsing the accessibility tree in a web UI to click around in it and verify)

      • bigstrat2003 2 days ago ago

        And the junior learns when you teach him stuff. This is a huge advantage that humans have which LLMs do not have at all right now.

        • argee 2 days ago ago

          I looked into AI scribes when they were new, finding them interesting, and spoke to many doctors. Across the board, the preference was for a human scribe, the reason being that they actually take away cognitive load by learning to work with you over time, to the point where eventually your scribing problems are wholly solved by having them around and you need not think about it.

          AI scribes have their place since many doctors and nurses can’t afford a human scribe, but as of now they don’t *replace* people. They’re a tool that still needs wielding, and can’t be held accountable for anything.

        • samrus 2 days ago ago

          Live learning has actually been a pretty interesting idea in ML for a long time, and I don't know why it doesn't get more effort put into it. Probably cost. But it'd be really cool to have an LLM that gets fine-tuned on your data and RL'd from your HF every time you ask it to do something and give it feedback

          • r_lee 2 days ago ago

            because it's not easy to identify exactly when to r/w memory accordingly, especially when you'd need to have an LLM decide when and if to do that

            and to scale it in a way where you don't need a whole custom model loaded for 1 user (financially unviable)

            just my immediate thoughts, could be wrong though.

      • hackable_sand a day ago ago

        That's the biggest difference?

        Seriously?

        Between a human and a malformed homunculus of piddling intelligence?

    • epolanski 2 days ago ago

      I don't think this is so much a problem with the tools as with your approach.

      We have successfully put Claude to work in huge, multi-thousand-PR projects.

      But this meant that:

      1. Solid architectural and design decisions had already been made, after much trial and error

      2. They were further refined and refactored

      3. Countless hours have been spent in documenting, writing proper skills and architectural and best practice documents

      Only then did Claude start paying off, and even then it's an iterative process where you need to understand why it tries to hack its way out, what to check, what to supervise.

      Seriously, if you think you can just have Claude create some project..

      Just fork an existing one that does some larger % of what you need and spend most of the initial time scaffolding it to be ai friendly.

      Also, you need to invest in harnessing: giving the LLM tools and ways to not go off the rails.

      Strongly typed languages, plenty of compilation and diagnostics tools, access to debuggers or browser MCPs, etc.

      It's not impossible, but you need to approach it as an experiment, not by drinking the Kool-Aid.

      • samrus 2 days ago ago

        See, that's the thing. A human is slower but doesn't need all this handholding.

        The idea of AI being able to "code" is that it is able to do all this planning and architectural work. It can't. But it's sold as though it is. That's where the bubble is.

        • jakubtomanik a day ago ago

          Because when a human joins the team, they already have an internal repository of skills. They may need to update them on the job or create new ones, but they never start fresh. An LLM, on the other hand, starts clean; they are literally blank slates, and it's your job to equip them with the right skills and knowledge. As programmers we must transition from being coders to being trainers/managers if we want to still have premium paid jobs in this brave new world.

          • samrus a day ago ago

            My counter-argument is that that manual training, while beneficial, won't lead to the scaling factors being thrown around. It won't lead to the single-person unicorn that keeps being talked about excitedly.

            For that, the model needs to learn all this architecture and structure itself from the huge repositories of human knowledge like the internet

            Until then, reality will be below expectations, and the bubble will head towards popping

          • toldnotmywrath a day ago ago

            There are no premium paid jobs for prompting in a brave new world.

        • wan23 21 hours ago ago

          AI can plan and do architectural work - just not amazingly well. Treat it as an intern or a new grad at best. Though this capability has been increasing pretty rapidly, so who knows where we'll be in a few years.

      • mythrwy 2 days ago ago

        I guess I'd rather just complete one tiny part at a time with Claude and understand the output than do all that. It seems like less effort and infrastructure. And a lot more certain in outcome.

      • Madmallard 2 days ago ago

        Sounds like the amount of work you put into that is not worth the pay-off.

        • epolanski a day ago ago

          I have the opinion it was well worth it for many reasons.

          Not only can the agents complete trivial tasks on their own, leaving us just the reviewing (and often just focusing on the harnessing), but the new setup is very good for onboarding technical and non-technical staff: you can ask any question about the product, its architecture, or its decisions.

          Everything's documented/harnessed/E2Ed, etc.

          Doing all of this work has much improved the codebase in general; proper tests, documentation, and design documents do make a difference per se, and that further compounds with LLMs.

          Which is my point in any case: if you start a new project just by prompting trivialities, it will go off the rails and create soup. But if you work on an established and well-scaffolded project, the chances of going off the rails and creating soup are very small.

          And thus my conclusion: just fork existing projects that already do many of the things you need (plenty of them from compilers to native applications to anything really), focus on the scaffolding and understanding the project, then start iterating by adding features, examples and keeping the hygiene high.

      • cxvwK 2 days ago ago

        OK, you have a go and show us how it's done.

        These posts are tiresome. Show us something. Go on.

    • jstummbillig a day ago ago

      > But I no longer believe that Claude will be able to write complex software on its own.

      "on its own" is doing a lot of work here. Dario went into the differences in this very podcast: "Most code is written by agents" is not the same as "most code is written without or independent of human input".

      I suspect that is how different outcomes can be explained (even without having to assume that Anthropic/OpenAI engineers are outright lying.)

    • ares623 2 days ago ago

      Is it worth starting from scratch and adding a "make it easily extensible" to the initial prompts? Maybe with the recently released models it'll do an even better job. Just keep rebuilding from scratch every time a new model version is released.

  • crossbody 2 days ago ago

    The concept of the "end of the exponential" sounds like a tech version of Fukuyama's much mocked "End of History". Amodei seems to think we’ll solve all the "useful" problems and then hit a ceiling of utility.

    But if you’ve read David Deutsch’s The Beginning of Infinity, Amodei’s view looks like a mistake. Knowledge creation is unbounded. Solving diseases/coding shouldn't result in a plateau, but rather unlock totally new, "better" problems we can't even conceive of yet.

    It's the beginning of Infinity, no end in sight!

    • joshjob42 a day ago ago

      I really don't see how that is true.

      For instance, once you develop atomically precise manufacturing a la Drexler, have a complete model of biology, etc., and drive solar panel efficiency to very near the ~68% theoretical upper bound for a raw panel with infinitely many junctions, then there isn't really anywhere to go that matters for humans. Material production would be solved, anything you could desire would be manufacturable in minutes to hours, and a km^2 of solar panels could power 10-20k people's post-scarcity lives.

      You eventually reach the upper bounds on compute efficiency and human-upload model efficiency -- unknown, but given estimates of the upper bound for something like rod logic (~1e-34 J·s/op), reasonable bounds on op speed (100 MHz), and low estimates for functional uploading (1e16 flops), you get something in the zone of 0.1 nW/upload, or several trillion individuals on 1 m^2 of solar panel in space. When you put a simulated Banks Orbital around every star in the Milky Way in a grand sim running on a system of solar panels in space, where the entire simulated galaxy has a 15 ms ping to any other point in the simulated galaxy, what exactly is this infinite stream of learning? You've pushed technology to the limits of physical law, subject to the constraint of being made of atoms.
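
      Spelling out that arithmetic, treating the ~1e-34 J·s/op as an action per op (so energy per op = action × clock rate); a rough sketch, where the ~1361 W/m^2 solar constant in space is the only number not already given above:

        action_per_op = 1e-34                            # J*s per op, rod-logic-style estimate
        op_rate = 100e6                                  # 100 MHz op speed
        energy_per_op = action_per_op * op_rate          # ~1e-26 J per op
        upload_flops = 1e16                              # low estimate for one functional upload
        power_per_upload = upload_flops * energy_per_op  # ~1e-10 W, i.e. ~0.1 nW
        solar_flux = 1361 * 0.68                         # W/m^2 in space at the ~68% bound
        uploads_per_m2 = solar_flux / power_per_upload   # ~9e12, i.e. "several trillion"
        print(power_per_upload, uploads_per_m2)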

      Are you envisioning that we'd eventually be doing computation using the entirety of a neutron star or (if they can exist) a quark star? Even then, you eventually hit a wall where physics constrains you from making significant further gains.

      There is an ultimate end to the s-curve of technology.

      • crossbody a day ago ago

        I see your point, however, consider this: to a farmer in 1900, our modern food system is already "end of history" post-scarcity sci-fi. Back then, one farmer fed ~4 people. Today, thanks to automation, GMO and fertilizers, one farmer feeds ~170. We effectively solved the "calorie problem" for the developed world.

        But the economy didn't flatline just because we hit THAT manufacturing ceiling. Value simply migrated from manufacturing (growing wheat, assembling cars) to services (Michelin dining, DoorDash, TikTok influencers). Radio did not turn out to be the last useful invention it was predicted to be. Knowledge generation has sped up dramatically.

        Your point is fair regarding hardware - eventually you do run out of stars or hit the Landauer limit. But this is exactly Deutsch’s distinction between resources (finite) and knowledge (infinite). Even in a bounded physical system, the "software" (the art, explanations, and social structures) isn't bounded by the clock speed. We don't need infinite atoms to have infinite creativity and knowledge

    • skybrian 2 days ago ago

      Haven't watched the video, but the end of exponential growth isn't the end of growth. It means the percentage growth per year decreases. The Internet also went through an exponential growth phase at the beginning.

      • crossbody 2 days ago ago

        You're describing a standard S-curve (logistic growth), which is definitely what happens to parameter counts or user adoption (like The Internet). But Amodei is applying this to scientific discovery itself. He’s effectively saying the "S-curve of Science" flatlines because we figure out everything that matters (curing aging, mental health, etc.). My whole point was that science doesn't have a top to the S-curve - it’s an infinite ladder (as per Deutsch).

      • trhway 2 days ago ago

        >the end of exponential growth

        we're on the verge of getting to the Moon and Mars in more than rare tourist numbers and with notable payloads. Add to that advancements in robotics, which will change things here on Earth as well as in space. The growth is only starting.

        >The Internet also went through an exponential growth phase at the beginning.

        If we consider the general Internet as all the connected devices, I think the exponential growth is still on, as shown for example by ARM CPU shipments:

          2002: Passed 1 billion cumulative chips shipped.
          2011: Surpassed 1 billion units shipped in a single year.
          2015: Running at ~12 billion units per year.
          2020 (Q4): Record 6.7 billion chips shipped in one quarter (842 chips per second).
          2020: Total cumulative shipments crossed 150 billion.
          2024 (FY): Nearly 29 billion ARM chips shipped in 12 months.
          2025: Total cumulative shipments exceeded 250 billion.
        • skybrian 2 days ago ago

          I was thinking about user traffic, but sure, it depends what you look at.

        • refulgentis 2 days ago ago

          > we're on the verge of getting to Moon and Mars in more than rare tourist numbers

          Cross-country full-self driving, too

    • kjkjadksj 2 days ago ago

      Not infinity. Only the path to make steady returns in a few short years. Take disease research. Pharmaceutical companies are not interested in curing disease. They would like to treat disease. That means recurring revenue. They would like to focus on the diseases with the most patients to maximize the market for their product. This is why a dozen-plus pharma companies are pursuing GLP-1 while cutting internal R&D jobs and offshoring everything not specifically bolted to this country by the FDA to India.

      This is what depressed me as an early career scientist. Money to do the work to advance our species is not being distributed. Only money to generate more money for a sliver of the ownership class is distributed.

      The incentives are broken. We aren’t getting Star Trek in our future. We are getting CHOAM.

      • inglor_cz a day ago ago

        "Pharmaceutical companies are not interested in curing disease."

        In practice, quite a lot of new drugs are curative. Gene therapy, for example, usually fixes the underlying problem once and for all. Even monoclonal antibodies are rarely of the type that needs to be used for the rest of your life.

        If you succeed in putting someone's cancer into remission, that patient has to be monitored for the rest of their life, but they usually don't consume any expensive drugs anymore. The expenses are more on the necessary personnel side.

        There is this unpleasant fact that most chronic diseases worsen in the last 2 decades of our lives, when our systems are already seriously dysregulated by aging. Hard to fix anything reliably in a house that is already halfway down.

        • kjkjadksj 17 hours ago ago

          How many gene therapies are approved vs treatments?

      • weregiraffe a day ago ago

        >Pharmaceutical companies are not interested in curing disease. They would like to treat disease

        This is nonsense. Pharma are never in a position where they can choose between curing and treating. 90% of clinical trials fail. Pharma is throwing things at the wall and picking whatever sticks.

        • kjkjadksj 17 hours ago ago

          Then explain the herd mentality, if they were truly all trying all possibilities. No, same old, same old. Pharma is not removed from the usual incentives of capitalism. FWIW the line about treatments, not cures, is pretty much a direct quote from a product manager at a major pharma company I heard speak at an internal presentation. Straight from the horse's mouth.

  • supergilbert 2 days ago ago

    I find myself coding a lot with Claude Code.. but then it's very hard to quantify the productivity boost. The first 80% seems magical, the last 20% is painful. I have to basically get the mental model of the codebase in my head no matter what.

    • viking123 2 days ago ago

      I have the issue that I run into some bug that it just cannot fix. Bear in mind I am developing an online game. And then I have to get into the weeds myself, which feels like such a gargantuan effort after having used the LLM that I just want to close the IDE and go do something else. Yes, I have used Opus 4.6 and Codex 5.3 and they just cannot solve some issues no matter how I twist it. Might be the language, and the fact that it is a game with a custom engine and not a React app.

      I talked with my coworker today and asked which model he uses. He said Opus 4.6, but that he doesn't use any AI stuff much anymore since he felt it keeps him from learning and building the mental model, which I tend to agree with a bit.

      • 2 days ago ago
        [deleted]
      • robkop 2 days ago ago

        I get this at least once a week. And then once you have to dig in and understand the full mental model it’s not really giving you any uplift anyway.

        I will say that doing this for enough months has made me much quicker at picking up the mental model and at scoping how much I need to absorb. It seems possible that with another year you'd become very rapid at this.

    • bluegatty 2 days ago ago

      " I have to basically get the mental model of the codebase in my head no matter what."

      This is a key insight, I'm unable to get around this.

      It's the thing I need to have before I let go, and I want to make sure it's easy to grasp again, i.e. clear in the docs.

      Basically, you have to have a pretty good feel for the system architecture, the mental model for key things, even the project structure.

    • svnt 2 days ago ago

      It's the intermittent reward model, and the reward hits, but the reward might be hallucinated in the fuller sense.

    • epolanski 2 days ago ago

      Provide better context to LLMs: more documentation, more skills, better Claude files, more ways to harness (tests, compilers, access to artifacts etc).

      • kubb a day ago ago

        The codebase itself was supposed to be the context.

        • epolanski a day ago ago

          Yes, and a codebase with good documentation is better than one without.

          • kubb a day ago ago

            Why? Isn’t documentation just approximation of the code and therefore less informative for inference than the code itself?

            I understand that the code doesn’t contain the architectural intent, but if the LLM writing it can’t provide that then it will never replace the architect.

            • epolanski a day ago ago

              I'm not sure what you're trying to get at.

              Of course an LLM can make a thorough design analysis and extract architectural patterns.

              But it doesn't have infinite memory and context.

              On top of that, it may recognize patterns, but not their intent and scope.

              Documentation is gold for humans and LLMs. But LLMs have been the very first major moment pushing this field, which has had very little to no engineering practice around documentation and specs, to actually focus on them.

              • kubb a day ago ago

                It's about the mental model of the codebase, mentioned by the GP.

                Somehow my experience is that no matter how much documentation or context there is, eventually the model will do the wrong thing because it won't be able to figure out something that makes sense in context of the design direction, even if it's painstakingly documented. So eventually the hardest work - that of understanding everything down to the smallest detail - will have to be done anyway.

                And if all it was missing was more documentation... then the agent should have been able to generate that as the first step. But somehow it can't do it in a way that helps it succeed at the task.

    • tasuki 2 days ago ago

      > I have to basically get the mental model of the codebase in my head no matter what.

      Ah yes, I feel this too! And that's much harder with someone else's code than with my own.

      I unleashed Google's Jules on my toy project recently. I try to review the changes, amend the commits to get rid of the worst, and generally try to supervise the process. But still, it feels like the project is no longer mine.

      Yes, Jules implemented in 10 minutes what would've taken me a week (trigonometry to determine the right focal point and length given my scene). And I guess it is the right trigonometry, because it works. But I fear going near it.

      • dosinga 2 days ago ago

        ah, but you can always just ask the LLM questions about how it works. it's much easier to understand complex code these days than before. and also much easier to not take the time to do it and just race to the next feature

        • tasuki a day ago ago

          Indeed. But Jules is not really questions-based (it likes to achieve stuff!) and the free version of Codeium is terrible and does not understand a thing. I think I'll have to get into agentic coding, but I've been avoiding it for the time being (I rather like my computer and don't want it to execute completely random things).

          Plus, I like the model of Jules running in a completely isolated way: I don't have to care about it messing up my computer, and I can spin up as many simultaneous Juleses as I like without a fear of interference.

    • co_king_3 2 days ago ago

      This is my experience, which is why I stopped altogether.

      I think I'm better off developing a broad knowledge of design patterns and learning the codebases I work with in intricate, painstaking detail as opposed to trying to "go fast" with LLMs.

      • GoatInGrey 2 days ago ago

        It's the evergreen tradeoff between the short and long terms. Do I get the nugget of information I need right now but lose it in a month, or do I spend the time and energy that leads to deeper understanding and years-long retention of the knowledge?

        There is something about our biology that makes us learn better when we struggle. There are many concepts on this dynamic: generation effect, testing effect, spacing effect, desirable difficulties, productive failure...it all converges on the same phenomenon where the easier it is to learn, the worse we learn.

        Take K-12, for instance. As computing technology is further and further integrated into education, cognitive performance decreases in a near-linear relationship. Gen Z is famously the first generation to perform worse than previous generations on every cognitive measure, for as long as such measures have been recorded, going back to the 19th century. An uncomfortable truth emerging from studies on electronics usage in schools is that it isn't just the phones driving this. It's more so the Duolingo effect of software overall emulating the sensation of learning without actually changing the brain state. Because the software that actually challenges you is not as engaging or enjoyable.

        How you learn, and your ability to parse, infer, and derive meaning from large bodies of information, is increasingly a differentiator in both the personal and professional worlds. It's even more so the case when many of your peers are now learning through LLM-generated summaries averaging just 300 words, perhaps skimming outputs around 1,000 words in length for "important information". The immediate benefits are obvious, but the cost of outsourcing that cognitive work gets lost in the convenience.

        Because remember, this isn't just about your ability to recall specific regex, follow a syntax convention, or how much code you ship in an hour. Your brain needs exercise, and deep learning is one of the most reliable ways to get it. Doubly true if you're not even writing your own class names.

        What I am speaking to is not far away or hypothetical, either. Because as of 2023, one in four young adults in the United States is functionally illiterate.

        https://www.the74million.org/article/many-young-adults-barel...

        • zozbot234 2 days ago ago

          Effective learning and memorizing actually happens at the narrow edge of struggling: it's neither "too easy" nor "too hard and painful". SRS systems do a very good job of tuning this: by the time a question comes back to you it will feel difficult, but you'll be able to recall the information and answer it with some effort. It's a matter of recognizing this feeling and acknowledging it as "the right kind of effort" as opposed to a hopeless task.

          If you ask the AI "please quiz me about the proper understanding of issues x y z and tell me if I got it all right. iterate for anything I get seriously wrong, then provide a summary at the end and generate SRS cards for me to train on" it will generally do a remarkably good job at that.
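
          Roughly the kind of scheduling rule these systems use (a drastically simplified, SM-2-ish sketch with made-up constants): the gap stretches while you keep recalling and snaps back when you don't, which is what keeps each review near that edge of effortful-but-doable.

            def next_interval(days, ease, recalled):
                # simplified SM-2-style update: grow the gap on success, reset on failure
                if not recalled:
                    return 1.0, max(1.3, ease - 0.2)   # see it again soon, mark the card harder
                return days * ease, ease               # recalled: push the next review further out

            days, ease = 1.0, 2.5
            for recalled in [True, True, False, True, True]:
                days, ease = next_interval(days, ease, recalled)
                print(round(days, 1), round(ease, 2))  # intervals: 2.5, 6.2, back to 1.0, 2.3, 5.3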

        • r0b05 a day ago ago

          I agree with all of this. The brain needs exercise, just like the body.

      • le-mark 2 days ago ago

        I agree, and to address this I've tried using them to understand large codebases, but I haven't worked out how to prompt this effectively yet. Has anyone gone this route?

  • lancebeet 2 days ago ago

    Is "the end of the exponential" an established expression? There's no singularity in an exponential so the expression doesn't make sense to me. To me, it sounds like "the end of the exponential part", meaning it's a sigmoid, but that's obviously not what he means.

    • liamconnell 2 days ago ago

      I’m guessing that Amodei meant it as a humorous inside joke.

      It’s also shorthand for “the end of massive R&D capex” and “the transition to market capture”. The final stage, what McKinsey types call “harvesting”, is probably not on Amodei’s radar. Based on what I’ve seen of his public personality, he would see it as too philistine and will hand it off to another custodial exec.

      • r0b05 a day ago ago

        Cool insight!

    • incrudible 2 days ago ago

      Why should it be obvious that this is not what he means? I struggle to think how he could mean anything else.

      • lancebeet a day ago ago

        Well, he says

        >To me, it is absolutely wild that you have people — within the bubble and outside the bubble — talking about the same tired, old hot-button political issues, when we are near the end of the exponential.

        My interpretation is "It's pointless to discuss the old political issues, because they're not going to be relevant once AGI is achieved". So if he does believe in a plateau, it either contradicts his other prediction (that AGI will be reached in a year or two), or he believes it will plateau after AGI is already reached, which makes it kind of a pointless statement. The important thing w.r.t. all our problems being solved would be the advent of AGI, not the plateau.

        • tylervigen a day ago ago

          I think he believes in a plateau on the y axis instead of the x axis… which is AGI.

          • basket_horse a day ago ago

            I took the “end” to mean the part of the exponential where it quickly trends towards infinity. So let’s say the x axis is time (by which you get more training data and more compute) and the y axis is model ability. So far, if we think we are in the beginning of the exponential, adding data/compute looks almost linear to the untrained eye in terms of model capability. But once you hit a threshold, where he thinks the model will start to generalize, a small amount of data/compute will result in a massive increase in model ability.

            • tylervigen 16 hours ago ago

              Exactly. If you “plateau” on the y axis you increase model capability to infinity in no time.

    • 2 days ago ago
      [deleted]
  • polotics 2 days ago ago

    Referring to a curve with a derivative everywhere equal to its value as something that has an end gives the game away: pure fanciful nominalization with no grounding in any kind of concrete modelling of any constraints.

    IMHO this is really silly: we already know that IQ is useful as a metric in the 0 to about 130 range. For any value above that, the delta fails to provide predictive power on real-world metrics. Just this simple fact makes the verbiage here moot. Also, let's consider the wattage involved...

  • atomic128 2 days ago ago

    Anthropic's interests are not aligned with the interests of the human species.

    Quoting the Anthropic safety guy who just exited, making a bizarre and financially detrimental move: "the world is in peril" (https://www.forbes.com/sites/conormurray/2026/02/09/anthropi...)

    There are people in the AI industry who are urgently warning you. Myself and my colleagues, for example: https://www.theregister.com/2026/01/11/industry_insiders_see...

    Regulation will not stop this. It's time to build and deploy weapons if you want your species to survive. See earlier discussion here: https://news.ycombinator.com/item?id=46964545

    • alan-stark a day ago ago

      Can you elaborate on the "mode of peril"? Is it:

      (a) Top labs quietly signing deals for military deployment of frontier models in unmanned strike weapons?

      (b) Top labs agreeing to license LLMs for social engineering/propaganda ops?

      (c) Models that vastly exceed human intelligence and have capacity to pursue own agenda (i.e. runaway intelligence)?

      (d) Something else?

      It looks like dangers of AGI are overblown (perhaps partially due to grant funding and ability to get political traction/investment/competitive advantage), while (a) and (b) are severely underdiscussed. Would love to get other perspectives.

    • rishabhaiover 2 days ago ago

      Calm down, hysteria doesn't serve any side well.

      • atomic128 2 days ago ago

        Geoffrey Hinton's assessment of the situation may sound hysterical to you but we have come to believe that he is largely correct (https://en.wikipedia.org/wiki/Geoffrey_Hinton).

        Hinton understands the dire nature of the threat but overestimates the value of regulation in a world where the threatening technology is under development world-wide. We think regulation is basically impotent and large-scale information weapons are more viable as a solution.

        This is not the time to be a docile onlooker and we urge you to take action.

        • UltraSane 2 days ago ago

          LLMs will have to improve drastically before I take these hysterical warnings seriously.

      • rramadass a day ago ago
      • hackable_sand a day ago ago

        Stop gaslighting people

  • sidewndr46 2 days ago ago

    I am always reminded of this article when the topic of 'the exponential' comes up:

    https://www.julian.ac/blog/2025/09/27/failing-to-understand-...

    • moregrist 2 days ago ago

      This is written with the idea that the exponential part keeps going forever.

      It never does. The progress curve always looks sigmoidal.

      - The beginning looks like a hockey stick, and people get excited. The assumption is that the growth party will never stop.

      - You start to hit something that inherently limits the exponential growth, and growth starts to be linear. It still kinda looks exponential, and the people who want the party to keep going will keep the hype up.

      - Eventually you saturate something and the curve turns over. At this point it’s obvious to all but the most dedicated party-goers.

      I don’t know where we are on the LLM curve, but I would guess we’re in the linear part. Which might keep going for a while. Or maybe it turns over this year. No one knows. But the party won’t go on forever; it never does.
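
      To make the hockey-stick point concrete, here's a toy comparison of a logistic (S-shaped) curve against a pure exponential with the same early growth rate (the constants are arbitrary):

        import math

        r, K, x0 = 1.0, 1000.0, 1.0   # growth rate, ceiling, starting value (arbitrary)

        def exponential(t):
            return x0 * math.exp(r * t)

        def logistic(t):
            return K / (1 + (K / x0 - 1) * math.exp(-r * t))

        for t in range(0, 13, 2):
            print(t, round(exponential(t), 1), round(logistic(t), 1))

        # early on the two are nearly indistinguishable (the hockey stick),
        # then the logistic goes roughly linear, and finally it saturates at K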

      I think Cal Newport’s piece [0] is far more realistic:

      > But for now, I want to emphasize a broader point: I’m hoping 2026 will be the year we stop caring about what people believe AI might do, and instead start reacting to its real, present capabilities.

      [0] Discussed here: https://news.ycombinator.com/item?id=46505735

    • integricho 2 days ago ago

      Written by an Anthropic employee; I mean, how seriously can you take that piece? Pure hype

      • neom 2 days ago ago

        I think it's coupled differential equations where each growth factor amplifies the others. I posted about it in 2024 - https://b.h4x.zip/ce/ - and sent it around a bit, but everyone thought I was nuts. Look at that post from 2025 and think about what was happening IRL under the graph's line, then go look at where METR is today. I'm not trying to brag, I don't work for Anthropic, but I do think I'm probably right.

        • munksbeer 11 hours ago ago

          Access blocked, suspicious site.

          • neom 8 hours ago ago

            Firewall blocking it? Weird, dns is all fine.

      • sidewndr46 2 days ago ago

        I only take it partially seriously. I view it as a serious presentation that is misinformed. What I find unique is that people have become so interested in "the exponential" it's almost become like an axiom, or even a near religious belief in AI. It is a subtle admission that while current AI capabilities are impressive, it requires additional years of exponential growth for AI to reach the fantastic claims some people are making.

        All glory to the exponential!

    • PollardsRho 2 days ago ago

      > Given consistent trends of exponential performance improvements over many years and across many industries, it would be extremely surprising if these improvements suddenly stopped.

      This is the part I find very strange. Let's table the problems with METR [1], just noting that benchmarking AI is extremely hard and METR's methodology is not gospel just because METR's "sole purpose is to study AI capabilities". (That is not a good way to evaluate research!)

      Taking whatever idealized metric you want, at some point it has to level off. That's almost trivially true: everyone should agree that unrestricted exponential growth forever is impossible, if only for the eventual heat death of the universe. That makes the question when, and not if. When do external forces dominate whatever positive feedback loops were causing the original growth? In AI, those positive feedback loops include increased funding, increased research attention and human capital, increased focus on AI-friendly hardware, and many others, including perhaps some small element of AI itself assisting the research process that could become more relevant in the future.

      These positive feedback loops have happened many times, and they often do experience quite sharp level-offs as some external factor kicks in. Commercial aircraft speeds experienced a very sharp increase until they leveled off. Many companies grow very rapidly at first and then level off. Pandemics grow exponentially at first before revealing their logistic behavior. Scientific progress often follows a similar trajectory: a promising field emerges, significant increased attention brings a bevy of discoveries, and as the low-hanging fruit is picked the cost of additional breakthroughs surges and whatever fundamental limitations the approach has reveal themselves.

      It's not "extremely surprising" that COVID did not infect a trillion people, even though there are some extremely sharp exponentials you can find looking at the first spread in new areas. It isn't extremely surprising that I don't book flights at Mach 3, or that Moore's Law was not an ironclad law of the universe.

      Does that mean the entire field will stop making any sort of progress? Of course not. But any analysis that fundamentally boils down to taking a (deeply flawed) graph and drawing a line through it and simplifying the whole field of AI research to "line go up" is not going to give you well-founded predictions for the future.

      A much more fruitful line of analysis, in my view, is to focus on the actual conditions and build a reasonable model of AI progress that includes current data while building in estimations of sigmoidal behavior. Does training scaling continue forever? Probably not, given the problems with e.g., GPT-4.5 and the limited amount of quality non-synthetic training data. It's reasonable to expect synthetic training data to work better over time, and it's also reasonable to expect the next generation of hardware to also enable an additional couple orders of magnitude. Beyond that, especially if the money runs out, it seems like scaling will hit a pretty hard wall barring exceptional progress. Is inference hardware going to get better enough that drastically increased token outputs and parallelism won't matter? Probably not, but you can definitely forecast continued hardware improvements to some degree. What might a new architectural paradigm be for AI, and would that have significant improvements over current methodology? To what degree is existing AI deployment increasing the amount of useful data for AI training? What parts of the AI improvement cycle rely on real-world tasks that might fundamentally limit progress?

      That's what the discussion should be, not reposting METR for the millionth time and saying "line go up" the way people do about Bitcoin.

      [1] https://www.transformernews.ai/p/against-the-metr-graph-codi...

      • neom 2 days ago ago

        "everyone should agree that unrestricted exponential growth forever is impossible, if only for the eventual heat death of the universe." - why is this a good/useful framing?

        • PollardsRho 2 days ago ago

          All models are wrong; some are useful. Cognizance of that is even more critical for a model like exponential growth that often leads to extremely poor predictions quickly if uncritically extrapolated.

          I think "are the failures of a simple linear regression on the METR graph relevant" is a much better framing than "does seeing a line if you squint extrapolate forever." As I said, I'd much rather frame the discussion around the actual material conditions of AI progress, but if you are going to be drawing lines I'd at least want to start by acknowledging that no such model will be perfect.

    • bendbro a day ago ago

      tl;dr The best ~AI's~ LLM's slop asymptote is 10 hours.

      Restated, if you let the best LLM chomp on a task for 10 hours, the output becomes slop.

      * These tasks are of the type that you spend 1% of your SWE career working on.

      * Each task is primed with an essay length prompt.

      * You must play needle in the haystack for bugs in 10 hours worth of AI generated slop.

      My experience trying AI coding at work, and my observations of AI evangelists, make me believe AI coding is exclusively the purview of people who are willing to handhold an AI at half pace to achieve the same result while working on software which amounts to greenfield/toy problems.

      The danger of LLMs to thought work is enormously overstated and intentionally overhyped. AI : StackOverflow :: StackOverflow : graybeard in basement

      It would be cool if AI kills all thought work, but what will actually happen is an undersupply of SWEs and a second golden age of SWE salaries in like 15y.

      https://github.com/METR/public-tasks/tree/main

    • co_king_3 2 days ago ago

      Are they failing to understand, or are they manipulating their audience?

      • viking123 2 days ago ago

        They are manipulating and fear mongering gullible people, it's their new business model.

        • neom 2 days ago ago

          I've been friends with one of the Anthropic founders for over 15 years now, and I just find this line of thinking so sad. They are not manipulative, fear-mongering people; they're actually very decent people who you might consider listening to.

          • bigstrat2003 2 days ago ago

            If that were true, they wouldn't publish hype results that then turn out to be completely unsubstantiated. Remember the "agents built a web browser"? I can't personally judge your friend as I don't know him. But the company is consistently lying about how good their product is in order to hype it up.

            • neom 2 days ago ago

              I don't talk to said friend about their work, so I genuinely have no insight here, but if I were a betting man, I'd bet what they have internally is considerably different from what is currently available in their consumer product.

              • viking123 a day ago ago

                The stuff they have internally might be slightly better than what they have now lmao. You have to be super dense to believe otherwise.

                Also I don't need the Anthropic ghouls telling me what I can or can not ask their stupid bot. At least Elon doesn't play this sad censorship game where you cannot say "boob" to it without it locking down.

          • viking123 a day ago ago

            Yeah, and my dad works at Nintendo. If they want us to listen, they really need to stop releasing all the bullshit and exaggerating what their chatbot does. And stop freaking whining about "MUHHH CHINA". Those ghouls stole almost all the books in the world; I hope China steals everything from them and keeps releasing the free models.

            • neom a day ago ago

              Well, if your dad isn't one of the founders of Nintendo, your point is moot. Given I was on the founding team of DigitalOcean as head of strategy until the IPO, and one of the founders of Anthropic is a former tech journalist who covered my startups, maybe my friend is a founder of Anthropic?! Sorry your dad didn't work anywhere cool tho. :(

              • viking123 a day ago ago

                Who cares about these journalists. My point is that Amodei is a complete ghoul who loves fear-mongering normies and being racist towards the Chinese, while HIS company stole all the damn books in the world. I hope the Chinese steal all their data and keep doing public models. This guy can't even figure out a solution for his balding head, let alone make an AGI lmao. But let Anthropic keep tricking midwits along. Elon has 100x the backbone that these fraudsters have btw.

                For God's sake these guys are selling the doubling of human life span to some desperate elderly investors. Really going for people's deepest fears there. Oh yes just invest in us so you can get double the life span and don't have to die!

                • neom a day ago ago

                  Alright well I can tell you're grumpy about this so how about we agree to disagree? I don't know Dario so I couldn't say, but I do trust Jack a lot. That aside: this is the 3rd time I've heard the racist towards Chinese thing, what exactly is that all about if you'd be willing to save me a google?

  • almostdeadguy 2 days ago ago

    Is no one disturbed by this? At the rate this seems to be happening, it's going to cause massive disruptions to society and endanger a lot of people.

    • lkbm 2 days ago ago

      Uh...there's constant talk from people being disturbed by it. One of the Democratic candidates in 2020 had his platform based around this, and I can assure you that it's not gotten less attention since ChatGPT came out.

    • coffeefirst 2 days ago ago

      That’s the point.

      AI marketing is dystopian. They describe a world where most people are suddenly starving and homeless, and just when you start to think "hey, this sounds like the conditions that create something like a French Revolution, but where the Bastille is a data center", they pivot to BUY MY PRODUCT SO YOU DON'T GET LEFT BEHIND.

      It’s advertising straight through the amygdala.

      I have no idea if they actually believe this. But it’s repulsive behavior.

      • KellyCriterion a day ago ago

        EXCELLENT analogy! :-) ++1

        >French Revolution but where Bastille is a data center”<

      • almostdeadguy 2 days ago ago

        I'm legitimately terrified by these people, and seriously worried now that this is not just hype and that they truly don't care about what will happen. And that they may use these models to insulate themselves from the consequences when that time comes.

        The fact that Nick Land has taken hold as a philosopher in some circles in Silicon Valley truly scares me: https://www.compactmag.com/article/the-faith-of-nick-land/

        • gom_jabbar 2 days ago ago

          Nick Land is arguably the most influential philosopher in SV (at least over the past 3 years). Marc Andreessen's acknowledgment of Land in his 2023 The Techno-Optimist Manifesto has brought his underground influence more to the surface.

          Land's explicit anti-humanism can be repulsive to some on first encounter, but some of his ideas -- e.g. about the identity of capitalism and AI, the autonomization of capital, the technological singularity as capitalism's inherent teleology -- are interesting and can provide a very unique perspective.

          It's also important to note that he tends to resonate more with creative types. Historically, these were mostly artists. Today, they are also founders (who are psychometrically similar to artists at the population level).

          • almostdeadguy 2 days ago ago

            Congrats, guy who shows up to every post about Nick Land. Fuck your fascist hero.

            • coffeefirst 2 days ago ago

              Jesus Fucking Christ I don’t know what I was expecting when I looked this up but it definitely was not open rejection of freedom/western civilization.

    • co_king_3 2 days ago ago

      If you're disturbed by this your comment gets flagged and removed by Anthropic astroturfers.

  • holtkam2 2 days ago ago

    No matter how fast and accurately your AI apps can spit out code (or PowerPoints, or excel spreadsheets, or business plans, etc) you will still need humans to understand how stuff works. If it’s truly business critical software, you can’t get around the fact that humans need to deeply understand how and why it works, in case something goes wrong and they need to explain to the CEO what happened.

    Even in a world where the software is 100% written by AI in 1 millisecond by a country of geniuses in a data center, humans still need to have their hands firmly on the wheel if they don't want to risk their business's well-being. That means taking the time to understand what the AI put together. That will be the bottleneck regardless of how fast and smart AI is. Because unless the CEO wants to be held accountable for what the AI builds and deploys, humans will need to be there to take responsibility for its output.

    • nemo1618 2 days ago ago

      > humans still need to have their hands firmly on the wheel if they don't want to risk their business's well-being

      What happens when businesses run by AIs outperform businesses run by humans?

      • entech a day ago ago

      The humans will still own the business (unless you are proposing some alternative version of AI ownership), so in effect there will always be a human who is concerned about their business's well-being.

        I doubt that we would get into a world where a company would be allowed to run without human involvement (AI directors and AI management) as you will have nobody to hold accountable.

        • KellyCriterion a day ago ago

        Well, wasn't this what all these blockchain DAO entities were supposed to be for? :D

          • gom_jabbar a day ago ago

            Yes, I was just about to bring this up as well. One could argue that they were simply too early. It will be interesting to watch things like ERC-8004.

  • readitalready 2 days ago ago

    LLMs alone aren't the way to AGI. Perhaps something involving a merge of diffusion or other models that are based on more sensory elements, like images, time, and motion, but LLMs alone aren't going to get us there.

    The end of the exponential means the start of other models.

    • rishabhaiover 2 days ago ago

      > LLMs alone aren't the way to AGI

      Pretraining + RL works, there is no clear evidence that it doesn't scale further.

      • readitalready 2 days ago ago

        Pretraining + RL itself is the scaling limit. If you feed it the entire dataset before 1905, LLMs aren't going to come up with general relativity. It has no concept of physics, or time even.

        AGI happens when you DON'T need to scale pretraining + RL.

        • acuozzo 2 days ago ago

          > If you feed it the entire dataset before 1905, LLMs aren't going to come up with general relativity.

          Link?

          • Jensson 11 hours ago ago

            You don't need a source for that, an LLM with such little data is barely able to form proper sentences.

        • rishabhaiover 2 days ago ago

          AGI maybe not, but it is reaching disruption level intelligence in the SWE domain.

  • dude250711 2 days ago ago

    I think there is a parallel universe where tools like Claude Code actually truly work as advertised but I am not allowed into it...

    Yet news and opinions from that world somehow seep through into my reality...

    • co_king_3 2 days ago ago

      They probably have the ability to give executives higher quality/more expensive models in the backend.

  • thadk 2 days ago ago

    "We're not perfectly good at preventing some of these other [model] companies from using our models internally." — well maybe this says something about how Opus 4.5 and Opus 4.6 have the same SWE bench score.

    • refulgentis 2 days ago ago

      Also explains why GLM 4.x+ and 5 always think they're Claude. Gives me a smile, but not cool.

  • AIorNot 2 days ago ago

    It's Dario's job to hype the product, and he hypes the product to get the billions they need - a bit more engineering-focused than Altman, but no fundamental difference.

    A large language model like GPT runs in what you'd call a forward pass. You give it tokens, it pushes them through a giant neural network once, and it predicts the next token. No weights change. Just matrix multiplications and nonlinearities. So at inference time, it does not "learn" in the training sense.
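
    To make "no weights change" concrete, here's a toy PyTorch sketch with a single linear layer standing in for the whole network (all sizes made up):

      import torch

      model = torch.nn.Linear(512, 50_000)    # stand-in for a full transformer (hidden -> vocab)
      model.eval()

      context = torch.randn(1, 512)           # stand-in for the embedded prompt
      weights_before = model.weight.clone()

      with torch.no_grad():                   # pure forward pass: no gradients, no updates
          logits = model(context)
          next_token = logits.argmax(dim=-1)  # greedy next-token prediction

      assert torch.equal(weights_before, model.weight)  # nothing was "learned" at inference time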

    We need some kind of new architecture to get to the next-gen wow stuff, e.g. differentiable memory systems: instead of modifying weights, the model writes to a structured memory that is itself part of the computation graph. More dynamic or modular architectures, not bigger scaling and spending all our money on data centers.

    Anybody in the ML community have an answer for this? (Besides better RL and RLHF and world models.)

    • kubb a day ago ago

      Yes, 100% this. But it won’t get funded. Everything will get eaten up by the impressive LLM dead end.

    • senordevnyc 17 hours ago ago

      They talk about this at length in the interview

  • GorbachevyChase 2 days ago ago

    Does anyone know who Dwarkesh's patron is that boosted him in podcast world? He isn't otherwise highly distinguished and admittedly does his show prep with AI, which sometimes shows in his questions. I feel like there are a very large number of tech podcasts, but there's some marketing effect around this guy that I just don't understand.

    • DarkCrusader2 2 days ago ago

      Yeah I also don't understand how he is able to get such high profile guests. His interview with Jeff Dean and Noam Shazeer last year[1] is so hilariously bad. Jeff and Noam kept trying to give really insightful answers on how they see AI development shaping in coming years and he was just steering the conversation to shallow and silly tabloid gossip (why don't you "just" let AI improve the next version in a loop so we can quickly have singularity, Jeff Dean AI running in a DC, evil Jeff Dean AI escaping containment and on and on). It was just embarrassing. The interview would have been so much better with just Jeff and Noam without him.

      [1] https://www.youtube.com/watch?v=v0gjI__RyCY

    • kshahkshah 2 days ago ago

      He introduced me and many others to Sarah Paine. I think that helped launch him. Maybe that was already after he had an audience though.

      The funny thing is his questions to her were terrible. But she rescued it anyway.

      That said, I think he has improved markedly as an interviewer.

      • ekelsen 2 days ago ago

        Same thing was true of his interview with Tony Blair. It was such a night and day difference between the two. Tony's skill, knowledge and polish saved the interview and made it enjoyable despite the interviewer.

    • diego_sandoval 2 days ago ago

      In my opinion, he asks the right questions and lets the guests speak, which is something that can't be said about the rest of tech podcasts.

      For example, at some point I grew very tired of the superficiality of the questions that Lex Fridman asks his very technical guests. He seems more interested in turning the conversation into a philosophy freshman's essay about technology than in talking about technology itself.

      Hearing the Dwarkesh podcast was a breath of fresh air in that regard.

      • 2 days ago ago
        [deleted]
    • observationist 2 days ago ago

      He knew people, caught a wave, and was roommates with Dylan Patel of SemiAnalysis. They networked, got to meet the right people, developed a web of contacts and sources, and the rest is history. Treat your friends well, and it often comes back multiplied.

      The marketing effect was them catching the wave at the right time, and they're just surfing the hell out of it.

      • AIorNot 2 days ago ago

        Yeah, no offense against Dwarkesh, and good luck to him, but he's just a kid. It would have been cool to have someone with some real ML chops or industry knowledge and good communication skills to host these talks. Anyway, I am impressed by the scope of his guests, but I do think he doesn't have the experience to push back, beyond fawning over them or steering the topic toward less insightful conversation.

        He kinda reminds me of Alex O'Connor: same age group, very smart, but inexperienced with the heavy hitters.

      • moralestapia 2 days ago ago

        [flagged]

        • observationist 2 days ago ago

          It's one of the most popular "inside baseball" blogs in AI. Dylan Patel covers the people, tech, hardware, business analytics, and has amazing insight and access to people. "Blog" isn't quite right, but if you subscribe, you get a ton of useful analysis and reporting and writing.

          https://semianalysis.com/about/

        • mnky9800n 2 days ago ago

          Max Mustermann, he is everywhere. Everyone knows Max Mustermann. How can you not?

        • seydor 2 days ago ago

          An AI-famous podcast guest.

          Maybe they practiced interviews as roommates

    • sosodev 2 days ago ago

      Isn't it just the usual feedback loop that happens with popular podcasters? They have connections and get a few highly popular guests on. As long as their demeanor is agreeable and they keep the conversation interesting, other high-profile guests will agree to come on, and thus they've created a successful show.

    • jstummbillig 2 days ago ago

      If you think there are as or more interesting podcasts out there, feel free to name them.

      • NitpickLawyer 2 days ago ago

        > more interesting podcasts out there

        For deep dives into AI stuff, Google DeepMind's podcast with Hannah Fry is very good (but obviously limited to Google stuff). I also like Lex for his tech/AI podcasts; he's a much better interviewer IMO. Dwarkesh talks way too much and injects too many of his own "insights" for my taste. I'm listening to a podcast to hear what the guests have to say, not the host.

        For more lightweight, news-ish podcasts that I listen to while walking/driving/riding the train, in no particular order: AI & I (up-to-date trends, relevant guests), The AI Daily Brief (formerly The AI Breakdown; this is more to keep in touch with what's been released in the past month), and any other random stuff that YouTube pops up for me from listening to these 4 regularly.

        • FiberBundle a day ago ago

          Lex as in Lex Fridman? I'm baffled that anyone would say that Lex Fridman is a better interviewer than Dwarkesh. Fridman is the one who continuously rambles some incoherent nonsense and completely lacks the intelligence and knowledge to ask reasonable questions.

        • boredtofears 2 days ago ago

          I can't think of an interviewer who interjects their viewpoint more, or tries harder to get their guest to acknowledge and agree with their typically shallow analysis, than Lex. The only redeeming quality of his podcast is the guests he gets. I don't think Dwarkesh is great, but he's leagues better.

          • fatherwavelet a day ago ago

            I just don't understand this view on Lex Fridman at all.

            Fridman is quite good at letting the guest speak. The whole show is exceptionally good at keeping a conversation moving.

            I think there are technical haters on Lex but that is stupid because Lex is in sales. He is selling a podcast. From a sales perspective, Lex is incredibly good.

            It is like saying the chef is only a good cook because of the quality of the ingredients. Yes, exactly. The chef isn't a farmer growing their own organic vegetables for the dishes. The art is in the choice and ability to source quality ingredients and then bring it all together as a full dish.

            A podcast is not a lecture or audio book.

            • boredtofears a day ago ago

              I guess you're right: getting your podcast big enough that it becomes a necessary checkbox for book/media tours is a skill. You're correct that he brings absolutely nothing to the podcast, but he interrupts plenty, usually with superficial pet theories about the "oneness of the universe" or "how all we need is love, actually". He never seems well prepared for his guest beyond a ChatGPT summary, never gets any kind of interesting answer out of a guest that they weren't already going to give, and brings absolutely zero criticality to anything in the interview.

              A podcast with guests is an interview. Interviewing is a skill. The difference between a good and bad interviewer is night and day.

      • 2 days ago ago
        [deleted]
      • moralestapia 2 days ago ago

        [flagged]

        • dang 2 days ago ago

          Please don't do this here.

    • werahsg 2 days ago ago

      It seems that AI people have moved on from Lex Fridman to Dwarkesh. A couple of years ago the YouTube algorithm spammed Fridman in response to basically anything; now it's Dwarkesh. Maybe they need a new face periodically.

      The IPO hype is in full swing.

      • reducesuffering 2 days ago ago

        It's because Fridman is a bad interviewer. He gets great guests and then adds literally nothing: no pushback at all, no critical digging.

    • Uhhrrr 2 days ago ago

      The thing which distinguished him was getting good guests, before the hype hit. And he generally asks good questions and then shuts up while his guests talk.

    • small_model 2 days ago ago

      Dwarkesh is not a very good interviewer; he kept asking the same question even though Dario patiently answered it about six times. He wasted an hour on "Why don't you spend 5 trillion on compute build-out?" He should have moved on, but he didn't seem to grasp what Dario was saying: either not listening or not sharp enough.

    • 2 days ago ago
      [deleted]
    • qassiov 2 days ago ago

      He knew Bryan Caplan, and interviewing him early on kickstarted things, I believe.

    • big_toast 2 days ago ago

      There was a small network of AI intellectualism (and rationality) that became highly relevant when AI took off post-ChatGPT. It feels adjacent to Tyler Cowen's network + tpot + HN/LessWrong. (I can't remember if Tyler specifically gave him a fast grant, but his first few interviews were GMU-centric.)

      I personally liked that he stayed away from navel-gazing in politics when the blogosphere/podcasts went pretty heavy into that.

      It did very well on Twitter with a large number of high-follower-count tech people, and soon-to-be high-follower-count people (basically AI employees). He followed the zeitgeist's general wisdom well (bat signal, work in public, you-can-just-do-things, move-to-the-arena, you-are-the-average-of-the-five-people-you-spend-the-most-time-with, high-horsepower). And he's just executed very well. Other people have interviewed similar guests and generally gotten lower-signal content. This Moxie Marlinspike interview is great, though - https://www.youtube.com/watch?v=cPRi7mAGp7I .

    • sciolizer 2 days ago ago

      His Sarah Paine episodes have way more listeners than his normal fare. I doubt that's the whole story, but that's surely part of it.

    • Recursing 2 days ago ago

      You could look at his early guests and see what many of them have in common

    • moralestapia 2 days ago ago

      Same with that "MIT" interviewer who wasn't even at MIT.

      And that girl Altoff ...

      Literal nobodies suddenly interviewing Elon Musk, etc... within weeks.

      Things rarely go "viral" on their own these days, everything is controlled, even who gets the stage, how the message is delivered, etc... as you have noticed.

      With regards to who's behind it, well, we might never know. However, as arcane as it might sound, gradient descent can take you close to the answer, or at least point you towards it.

      I like this recent meme of Christof from The Truman Show saying things like "now tell them that there's aliens" or crap like that.

      • TMWNN 2 days ago ago

        > Same with that "MIT" interviewer who wasn't even at MIT.

        Lex Fridman is a research scientist at MIT. <https://web.mit.edu/directory/?id=lexfridman&d=mit.edu>

        • orochimaaru 2 days ago ago

          The question is why? He is mostly focused on his podcast and lives in Texas.

          I doubt there are any notable research contributions from him. His actual PhD is from Drexel, not MIT.

          • TMWNN 2 days ago ago
            • comfysocks a day ago ago

              Lex’s position at MIT would make sense for a grad student or perhaps someone early in their career as an academic. But Lex is neither a student nor faculty member at MIT. So what’s he doing? This type of thing is usually unpaid or low paying for non-faculty.

              Lex got his PhD at Drexel over a decade ago. If he had pursued an academic career, he would most likely be an associate professor by now. Working as a researcher at a lab at a university that you aren’t a faculty member of is basically “failure to launch” at this stage.

              But Lex is a successful podcaster. His dad is a successful academic and scientist (at Drexel.) Lex is not that, but he plays one on the internet.

            • orochimaaru 2 days ago ago

              His paper on Tesla was widely panned as being not academically rigorous and more of an advertisement.

              The rest are at least 6 years old.

              So what is he doing as a research scientist? Don't get me wrong: I like his podcast. I think he gets good guests. But he's not doing research at any level.

            • aipatselarom 2 days ago ago

              Whatever you do please DO NOT look up these links on the Internet Archive.

              Not just that but I would also suggest to stop using the Internet Archive in general, as it is obviously not a reliable source of truth like Wikipedia or many news outlets with specialized people that spend a non-trivial amount of their time carefully checking all of this information.

        • aipatselarom 2 days ago ago

          I checked the guy's Wikipedia page and the opening paragraph says he's linked to MIT like five times, lmao.

          Very normal stuff.

          • TMWNN 2 days ago ago

            A lot of people believe that Fridman is not affiliated with MIT even though the university says he is. <https://lex.mit.edu/> It's a recurring thing on the Talk page for the Wikipedia article.

            • NitpickLawyer 2 days ago ago

              > A lot of people

              Nah, that's just reddit. At this point it's safer to take anything that's popular on reddit as either outright wrong or so heavily out of context that it's not relevant.

              • TMWNN 2 days ago ago

                Oh, sure, I learned a long time ago that Reddit is a very reliable anti-indicator. But given that HN isn't nearly as bad (but there are moments), it's still strange that people would just repeat something about someone else that they could disprove for themselves in 30 seconds.

    • squidbeak 2 days ago ago

      Similar wonderings occurred to me at the point in the video where he struggled to understand Amodei's explanation of the economics, which was pretty straightforward. Unless he was just being deliberately arsey.

    • pllbnk 2 days ago ago

      I never knew about him until a few months ago when he started appearing in my YouTube recommendations, and naturally I thought the same thing because a 'nobody' like him (not in a derogatory sense) started doing interviews with the top AI bros. And the interviews are terribly boring because they feel like a cheap PR campaign. You could sit Lex Fridman instead of Dwarkesh Patel and it would feel exactly the same.

    • co_king_3 2 days ago ago

      Who's he doing the interview with?

      • schmidtleonard 2 days ago ago

        Exactly, it's the Lex Fridman gambit: a reputation for asking safe questions to powerful people tends to snowball because "safe, popular interview platform" is something they are all looking to self-promote on.

        If you want to see the mask slip, watch Lex's interview with Zelensky.

        • HDThoreaun 2 days ago ago

          Fridman's Zelensky interview was terrible, but I think that has more to do with him being a Russian nationalist than with him being a bad podcast host.

    • alephnerd 2 days ago ago

      > who Dwarkesh’s patron is that boosted him in podcast world

      The Indian consumer market.

      Unlike people in China, Indians use Western social media platforms, so Indian tastes and trends are becoming increasingly common on the internet.

      This is also why you see entirely different trends on TikTok (banned in India, allowed elsewhere), Western Social Media (banned in China, allowed elsewhere), and Chinese social media (only used by Chinese and the diaspora).

      What Ben Thompson predicted with his "Four Internets" theory 6 years ago has started playing out [0].

      Over the next decade, more Indian media like Dwarkesh will leak into Western social media.

      [0] - https://stratechery.com/2020/india-jio-and-the-four-internet...

      • atonse 2 days ago ago

        > The Indian consumer market.

        You've said this a couple of times in this thread now. Do you have any evidence that most of his audience is in India, to support the claim that his ethnicity matters?

      • rramadass a day ago ago

        > The Indian consumer market.

        Nope.

        Dwarkesh is smart enough to have patrons and savvy enough to network and market himself with his choice of domains and guests.

  • knivets 2 days ago ago

    The closer the bubble gets to popping, the more desperate these people sound.

    > 100% of today’s SWE tasks are done by the models.

    Maybe that’s why the software is so shitty nowadays.

    • cxvwK 2 days ago ago

      Correct. Trust me, if they felt really confident that the thing they are working on would upend society, these jokers would go full steam ahead and not tell you anything.

      This experiment is going to fail. I only hope SWEs finally grab their balls and accept the social contract has been fundamentally broken and that they should not treat their employers so kindly next time.

    • JohnnyMarcone 2 days ago ago

      > 100% of today’s SWE tasks are done by the models.

      I do think he was overstating the current state of the models by a bit, but this is taken out of context. He is not saying this is where the models are at today.

      He gives a spectrum [18:30] of the models taking over the SWE jobs:

      - Model writes 90% of code (today)

      - Model writes 100% of code

      - Model does 90% of today's SWE tasks (end-to-end)

      - Model does 100% of today's SWE tasks

      - The SWE job creates new tasks that didn't exist before

      - Model does the new SWE tasks as well (90% reduction in demand for SWE)

    • ponector 2 days ago ago

      This, and the popular trend of laying off the whole QA department.

      • le-mark 2 days ago ago

        That's been a trope since long before AI. QA coverage has always been cyclical in my experience. In good times there is hiring and QA; in lean times, QA is the first to go.

  • reducesuffering 2 days ago ago

    It's difficult to overstate how wrong HN has been on AI since the founding of OpenAI, and how consistently right Dario and the AI X-riskers have been.

    • nemo1618 2 days ago ago

      I think it's a combination of a) reflexive dislike of any hyped-up tech, mainly due to the crypto era, and b) subconscious ego protection ("this can't be legit, otherwise everything I've built my identity around will be thrown into question").

      The best models already produce better code than a significant fraction of human programmers, while also being orders of magnitude faster and cheaper. And the trendlines are stark. Sure, maybe AI can't replace you today. Maybe it will hit that "wall" people are always forecasting, just before it gets good enough to threaten your job. But that's a rather uncomfortable proposition to bet a career on.

  • 2 days ago ago
    [deleted]
  • seydor 2 days ago ago

    We'll need a new word after 'genius'.

  • viking123 2 days ago ago

    I have said that Amodei is by far worse than Sam Altman. Altman wants money, but this guy wants the money AND to be your dad, censoring the shit out of the model and wagging his finger at you about what you can and cannot say. And lobbying for legislation to block competition. Also the constant "muh China" whining, while these guys stole all the books in the world.

    Every time I read something from Dario, it seems like he is grifting normies and other midwits with his "OHHH MY GOD, CLAUDE WAS WILLING TO KILL SOMEONE! MY GOD, IT WANTS TO BREAK OUT!" Then they have all their Claude constitution bullshit and other nonsense to fool idiots. Yeah, bro, the model with static weights is truly going to take over.

    He knows what he is doing; it's all marketing, and they have put a shit ton of money into it, if you have been following the media for the last few months.

    Btw, it wasn't many months ago that this guy was hawking the doubling of human lifespan to a group of boomer investors. Oh yeah, I wonder why he decided to bring it up there? Maybe because the audience is old and desperate, and scammers play on those weaknesses.

    Truly one of the more obnoxious people in the AI space, and frankly, by extension, Anthropic is scammy too. I'd rather pay Altman than give these guys a penny, and that says a lot.

    • cxvwK 2 days ago ago

      Agreed. I find it bizarre how people can't see through the act.

      I trust Altman more; at least he's not really pretending about who he is.

    • tedsanders 2 days ago ago

      Amodei isn't a grifter; the difference is that he really believes powerful AI is imminent.

      If you truly believe powerful AI is imminent, then it makes perfect sense to be worried about alignment failures. If a powerless 5 year old human mewls they're going to kill someone, we don't go ballistic because we know they have many years to grow up. But if a powerless 5 year old alien says they're going to kill someone, and in one year they'll be a powerful demigod, then it's quite logical to be extremely concerned about the currently harmless thoughts, because soon they could be quite harmful.

      I myself don't think powerful AI is 1-2 years away, but I do take Amodei and others as genuine, and I think what they're saying does make logical sense if you believe powerful AI is imminent.

      • micik a day ago ago

        How long has he believed it? I only watched the first couple of minutes of the interview before coming to my senses, but there was something about not having changed his outlook since 2017.

        Maybe if he can really (but really, really) keep believing for 10 more years, we can have this discussion again around that time.

      • viking123 a day ago ago

        Yeah, but I don't need that ghoul to be my dad and tell me what I can and cannot ask the bot.

      • hackable_sand a day ago ago

        The veil is shifting

        He will get more violent with his rhetoric

  • surgical_fire 2 days ago ago

    Eat meat, said the butcher

  • jaredcwhite 2 days ago ago

    "Nobody disagrees we'll achieve AGI this century."

    Citation needed please.

    • ponector 2 days ago ago

      It's easy to say, as almost no one of working age is going to live to the end of the century.

      It's also the same as saying that "unlimited energy from nuclear fusion is 20 years away."

    • dude250711 2 days ago ago

      Nobody in their little bubble.

  • deathanatos 2 days ago ago

    > Nobody at this point disagrees we’re going to achieve AGI this century.

    Nobody. Nobody disagrees, there is zero disagreement, there is no war in Ba Sing Se.

    > 100% of today’s SWE tasks are done by the models.

    Thank God, maybe I can go lie in the sun then instead of having to solve everyone's problems with ancient tech that I wonder why humanity is even still using.

    Oh, no? I'm still untying corporate Gordian knots?

    > There is no reason why a developer at a large enterprise should not be adopting Claude Code as quickly as an individual developer or developer at a startup.

    My company tried this, then quickly stopped: $$$

    • dang 2 days ago ago

      Can you please make your substantive points without snark? We're trying for a quite different kind of discussion here. This is in the site guidelines: https://news.ycombinator.com/newsguidelines.html.

      You may not owe AGI enthusiasts better, but you owe this community better if you're participating in it.

      • deathanatos 2 days ago ago

        (Since I cannot edit the post.)

        These posts are so tiring. The statement is an outright and blatant lie, because it's grift. The grifter wants to silence dissent by rendering it "non-existent", so that the grift can take the position of being a foregone conclusion. There is no dissent. The statement is outrageous, given the obvious amount of dissent in the comments, and the positive reaction of my fellow commenters to it. "AI built a browser from scratch." It did not. "AI built a compiler." It can't compile hello world. "AGI is coming & nobody disagrees." But the truth is still getting its shoes on while the lie has already spread across the world.

        It's doubly tiring since I (and, I suspect, many on this site) are having AI stuffed down our gullets by our respective management chains. Any honest evaluation of AI comes to the conclusion that it's nowhere near capable, routinely misses the mark, and verifying its answer probably takes more time than it saves. But I suspect many people are just skipping the verification step.

        & it's disappointing to see low-quality articles like this make it, time and again, and it feels like thoughtful discussion no longer moves minds these days.

        I'll try to express this without the snark going forward, though.

        • le-mark 2 days ago ago

          > Any honest evaluation of AI comes to the result that it's nowhere near capable, routinely misses the mark, and probably takes more time to verify its answer than it does to use.

          Before the era of code reviews as a best practice, I was surprised to learn that many developers I worked with knowingly checked in partially or completely broken features and code. That's what AI has been like when I've used it on large, complex applications so far.

        • rramadass a day ago ago

          Agreed. While there has been a lot of progress with AI, the full-throated bleating of its superlatives is tiresome and often verges on outright falsehoods.

          That said, Dario is orders of magnitude better than other AI tech ceos who are outright bullshitters/liars. He generally makes/raises good points which are worth thinking over.

    • stego-tech 2 days ago ago

      > Nobody disagrees, there is zero disagreement, there is no war in Ba Sing Se.

      This captures my chief irk over these sorts of "interviews" and AI boosterism quite nicely.

      Assume they're being 100% honest that they genuinely believe nobody disagrees with their statement. That leaves one of two possible outcomes:

      1) They have not ingested data from beyond their narrow echo chamber that could challenge their perceptions, revealing an irresponsible, nay, negligent amount of ignorance for people in positions of authority or power

      OR

      2) They do not see their opponents as people.

      Like, that's it. They're either ignorant or they view their opposition as subhuman. There is no gray area here, and it's why I get riled up when they're allowed to speak unchallenged at length like this. Genuinely good ideas don't need this much defense, and genuinely useful technologies don't need to be forced down throats.

      • semiinfinitely 2 days ago ago

        option 3: reject the premise that they're being 100% honest

        this third option seems like the most reasonable option here? the way you worded this makes it seem like there are only these two options to reach your absurd conclusion

        > like thats it

        > There is no gray area here

        re-examine your assumptions

        • stego-tech 2 days ago ago

          ...did you just skip the first part where I literally preface my argument with this line?

          > Assume they're being 100% honest that they genuinely believe nobody disagrees with their statement.

          That's the core assumption. It's meant to give them the complete benefit of the doubt, and to show that doing so means they are either ignorant or genuinely don't see their opponents as people.

          Obviously they're being dishonest little shits, but calling that out point-blank is hostile and results in blind dismissal of the toxicity of their position. Asking someone to complete the thought experiment ("They're behaving honestly, therefore...") is the entire exercise.

          • semiinfinitely 2 days ago ago

            Yeah, I read that part. That's the part that's wrong.

          • danaris 2 days ago ago

            To be honest, I kind of think it's a combination of both #2 and #3.

            They know they're lying. But they also believe, and they want everyone else to believe, that anyone who disagrees with them is subhuman, inconsequential.

      • palmotea 2 days ago ago

        > 2) They do not see their opponents as people.

        > Like, that's it. They're either ignorant or they view their opposition as subhuman.

        I'm going to go a bit off topic, but tech people often just inhale sci-fi, and I think we ought to reckon with the problems that creates, especially when tech people get into positions of power.

        Take Dune, for instance. Everyone knows Vladimir Harkonnen is a bad guy, but even the good-guy Atreides seem to spend their time fighting and assassinating, Paul's jihad kills 60 billion people, and Leto II is a totalitarian tyrant. It's all elite power-and-dominance shit; not even the protagonists are good people when you think about it. Regular people merit barely a mention and are just fodder.

        Often the people are cardboard, and it's the (fantasy) tech and the "world building" that are the focus.

        It doesn't seem like it'd be a good influence on someone's worldview, especially when not balanced sufficiently by other influences.

      • SpicyLemonZest 2 days ago ago

        I think you're being uncharitable towards option 2. When a physicist says "nobody disagrees that perpetual motion machines are impossible", are they saying that Jimbo who thinks he's built one in his garage is subhuman? Of course not. What they mean is that all experts who've seriously considered the issue agree, and they see so little substance in non-expert objections that it's not worth engaging.

      • co_king_3 2 days ago ago

        > They do not see their opponents as people.

        You hit the nail on the head.

        They go out of their way to call you an "AI bot" if you say something that contradicts their delusional world view.

        • 2 days ago ago
          [deleted]
    • piva00 2 days ago ago

      > My company tried this, then quickly stopped: $$$

      How much were devs spending for it to become a sticking point?

      I'm asking because I thought it'd be extremely expensive when it rolled out at the company I work for. We have dashboards tracking expenses averaged per dev in each org layer; the most expensive usage is about US$350/month/dev, and the average hovers around US$30-50.

      It's much cheaper than I expected.

    • Tenoke 2 days ago ago

      Nobody among people remotely worth listening to. There are always people deeply wrong about things, but "more than 70 years away" at this point is a pretty insane position, unless you have a great reason, like expecting Taiwan to get bombed tomorrow and progress to slow down.

      • cosmic_cheese 2 days ago ago

        Probabilities have increased, but it's still not a certainty. It may turn out that stumbling across LLMs as a mimicry of human intelligence was a fluke and the confluence of remaining discoveries and advancements required to produce real AGI won't fall into place for many, many years to come, especially if some major event (catastrophic world war, systematic environmental collapse, etc) occurs and brings the engine of technological progress to a crawl for 3-5 decades.

        • HDThoreaun 2 days ago ago

          "100% of AI researchers think we will have AGI this century" isnt the same as "100% of AI researchers think theres a 100% chance that we will have AGI this century"

      • ryandvm 2 days ago ago

        I think the only people that don't think we're going to see AGI within the next 70 years are people that believe consciousness involves "magic". That is, some sort of mystical or quantum component that is, by definition, out of our reach.

        The rest of us believe that the human brain is pretty much just a meat computer that differs from lower life forms mostly quantitatively. If that's the case, then there really isn't much reason to believe we can't do exactly what nature did and just keep scaling shit up until it's smart.

        • cosmic_cheese 2 days ago ago

          I don't think there's "magic" exactly, but I do believe that there's a high chance that the missing elements will be found in places that are non-intuitive and beyond the scope of current research focus.

          The reason is that this has generally been how major discoveries have worked. Science and technology as a whole advance more rapidly when R&D funding is higher across the board and funding profiles are less spiky and more even. Diminishing returns accumulate pretty quickly with intense focus.

        • shahzbha 2 days ago ago

          Sufficiently advanced science is no different than magic. Religion could be directionally correct, if off on the specifics.

          I think there’s a good bit of hubris in assuming we even have the capacity to understand everything. Not to say we can’t achieve AGI, but we’re listening to a salesman tell us what the future holds.

        • phainopepla2 2 days ago ago

          I'm not sure why you would characterize the possibility that consciousness relies on quantum mechanics as "magic". Quantum mechanics is very real.

        • fatata123 2 days ago ago

          [dead]

    • roxolotl 2 days ago ago

      While I fully agree with your sentiment, it's striking that Dario said "this century". He likely won't even be alive for about half of his prediction window, assuming he lives to 80. It's such a remarkably meaningless comment.

      • viking123 2 days ago ago

        He was hawking the doubling of human lifespan to some boomers a few months ago. The current AI is just religion in new clothes, mainly for people who see themselves as too smart to believe in God and heaven, so they believe in AI instead and project everything onto it.

    • yolo3000 2 days ago ago

      > We pay humans upwards of $50 trillion in wages because they’re useful, even though in principle it would be much easier to integrate AIs into the economy than it is to hire humans

    • gas9S9zw3P9c 2 days ago ago

      Can someone explain to me what AGI means? What is the concrete technical definition? How do we know it is achieved?

      • wrs 2 days ago ago

        Microsoft and OpenAI had to define it in their agreement, and settled on “AI systems that can generate at least $100 billion in profits”. Which tells you where those folks are coming from.

      • reducesuffering 2 days ago ago

        Dario defines it as:

        'By powerful AI [he dislikes the baggage of AGI, but means the same], I have in mind an AI model—likely similar to today’s LLMs in form, though it might be based on a different architecture, might involve several interacting models, and might be trained differently—with the following properties:

        In terms of pure intelligence, it is smarter than a Nobel Prize winner across most relevant fields – biology, programming, math, engineering, writing, etc. This means it can prove unsolved mathematical theorems, write extremely good novels, write difficult codebases from scratch, etc.

        In addition to just being a “smart thing you talk to”, it has all the “interfaces” available to a human working virtually, including text, audio, video, mouse and keyboard control, and internet access. It can engage in any actions, communications, or remote operations enabled by this interface, including taking actions on the internet, taking or giving directions to humans, ordering materials, directing experiments, watching videos, making videos, and so on. It does all of these tasks with, again, a skill exceeding that of the most capable humans in the world.

        It does not just passively answer questions; instead, it can be given tasks that take hours, days, or weeks to complete, and then goes off and does those tasks autonomously, in the way a smart employee would, asking for clarification as necessary.

        It does not have a physical embodiment (other than living on a computer screen), but it can control existing physical tools, robots, or laboratory equipment through a computer; in theory it could even design robots or equipment for itself to use.

        The resources used to train the model can be repurposed to run millions of instances of it (this matches projected cluster sizes by ~2027), and the model can absorb information and generate actions at roughly 10x-100x human speed. It may however be limited by the response time of the physical world or of software it interacts with.

        Each of these million copies can act independently on unrelated tasks, or if needed can all work together in the same way humans would collaborate, perhaps with different subpopulations fine-tuned to be especially good at particular tasks.

        We could summarize this as a “country of geniuses in a datacenter”.'

        https://darioamodei.com/essay/machines-of-loving-grace

        • RandomLensman 2 days ago ago

          A human working virtually has a lot fewer interfaces than a non-virtually working human. Will that matter if the interfaces are limited for the AGI?

        • hackable_sand a day ago ago

          What a sick person Dario is

      • z2 2 days ago ago

        I think we still have trouble defining the 'I' part of AGI, and the rest is predicated on that definition being objective and concrete first.

        • 2 days ago ago
          [deleted]
      • 2 days ago ago
        [deleted]
      • co_king_3 2 days ago ago

        It's Nirvana or Heaven for the AI cult.

        It's a constantly shifting goalpost. Really, it's just a big lie that says AI will do whatever you can imagine it would.

        • viking123 2 days ago ago

          It's basically just God for them; they project the solutions to their fears onto it. E.g., for the fear of death, religion has heaven; with AI, they believe it will multiply their lifespan with some magic.

        • aeve890 2 days ago ago

          >It's Nirvana or Heaven for the AI cult

          Nah, that would be ASI, artificial super intelligence.

    • anematode 2 days ago ago

      > 100% of today’s SWE tasks are done by the models.

      Meanwhile, Claude Code is implemented using a React-like framework and has 6000 open issues, many of which are utterly trivial to fix.

      • ponector 2 days ago ago

        He does not specify whether the tasks are done correctly. Merge your change request full of AI slop and close the task in Jira as done. Voila! Velocity increased to the moon! 6000 or 7000 open issues, who cares?

    • coffeefirst 2 days ago ago

      I’m honestly trying to understand the state of the art and unfortunately the industry is so grifty it’s hard to tell…

      Can I ask what happened with your Claude Code rollout?

  • co_king_3 2 days ago ago

    [flagged]

    • observationist 2 days ago ago

      Are you talking about Dario, or Dwarkesh?

      Dario is a true believer. He thinks he's right. There's no scam or deception happening there.

      The outcome, however, is the same as if he were, unfortunately.

      Dwarkesh is a wonderful, phenomenal human, and we're lucky to have him.

      • rvz 2 days ago ago

        > There's no scam or deception happening there.

        Oh sweet summer child.

        • observationist 2 days ago ago

          Oh, I get it, I just really think Dario believes his own spiel. The dude is convinced he's one of the smartest people on the planet and is single handedly guiding the course of civilization, and so on. Anthropic lacks a capable Diogenes figure, to keep them grounded and humble.

          • rvz 2 days ago ago

            Nope.

            He is selling fear to you so that you buy more tokens from him and Anthropic raises more money until it IPOs. As the open weight models get better, it threatens Anthropic. So he dials up the threats against open weight models.

            None of the AI providers trust anyone else to achieve "AGI" (whatever that means), under the excuse of "safety", which is why Anthropic "exists".

            I have never seen Anthropic release a single open weight model. Ever. Why is that?

        • uejfiweun 2 days ago ago

          Care to actually explain rather than post reddit-tier snark?

          • rvz 2 days ago ago

            It is purely psychological. He is selling fear through scaremongering about safety, when that is just a marketing pretext to get everyone to continue buying more tokens to use Claude.

            Why do you think he continues to scaremonger about open-weight models?

            • uejfiweun 2 days ago ago

              Hmm. So you're saying that he's intentionally overhyping the danger of ML models in a cynical play to eliminate open models through regulation or backlash, when in reality there is no danger. But if this is the case, why aren't Altman and Demis saying the same?

      • co_king_3 2 days ago ago

        ...

  • alephnerd 2 days ago ago

    [flagged]

    • dang 2 days ago ago

      We detached this flamewar subthread from https://news.ycombinator.com/item?id=47005949.

      • alephnerd 2 days ago ago

        I didn't mean for this to become a flame war (I think the comment was misinterpreted), but feel free to delete it!

    • DarkCrusader2 2 days ago ago

      I highly doubt that. "The algorithm" will surely adjust the recommendations per geography. I don't think most Westerners are getting T-Series recommendations in their feeds either.

      > He's an Indian Lex Friedman (and I mean that derogatorily)

      I might be reading this wrong, but sounds kinda racist to me?

      • preuceian 2 days ago ago

        The insult is that he's similar to Lex Fridman, not that he's Indian.

    • GorbachevyChase 2 days ago ago

      I could see a cynical media mogul seeing that market as a box that needs to be checked. I wonder who makes those decisions, though. Spotify?

    • Philpax 2 days ago ago

      [flagged]

      • phainopepla2 2 days ago ago

        Which part is racist exactly?

      • alephnerd 2 days ago ago

        How? I think both Lex Friedman and Dwarkesh Patel provide little substance and only sizzle.

      • co_king_3 2 days ago ago

        [flagged]

  • taco_emoji 2 days ago ago

    thanks for the autoplay audio crap

  • Davidzheng 2 days ago ago

    It's difficult for me to express this view, which I hold genuinely, without reading as lacking in humanity. However, I think it would be disastrous for humanity as a whole if we eliminate disease completely. To fight against it and to make progress in that fight is of course deeply human. And we are all affected emotionally and personally by disease of all forms. But if we win the fight against disease, I am almost sure that the human race will just end as a (long term) consequence.

    • nathan_douglas 2 days ago ago

      Could you elaborate? How do you see this playing out? Is this unique to disease or do you believe it's also true of other forms of suffering, e.g. poverty?

      • Davidzheng 2 days ago ago

        Well, I think anything which gives humans unbounded lifespans is probably going to end human civilization long term. So no, I don't think eliminating poverty is dangerous in a similar way.

        • nathan_douglas 2 days ago ago

          Because of resource exhaustion or a spiritual crisis or something else/something in addition?

  • theideaofcoffee 2 days ago ago

    > end of the exponential.

    Oh good, hopefully it'll model itself after the exponential rise of an animal population and collapse in on itself because it can no longer be sustained! Isn't that how things go in exponential systems with resource constraints? We can only hope that will be the outcome. That would be wonderful.