79 comments

  • Gehinnn 2 hours ago

    Basically the linked article argues like this:

    > That’s because cognition, or the ability to observe, learn and gain new insight, is incredibly hard to replicate through AI on the scale that it occurs in the human brain.

    (no other more substantial arguments were given)

    I'm also very skeptical about seeing AGI soon, but LLMs do solve problems that people thought were extremely difficult to solve ten years ago.

    • babyshake 2 hours ago

      It's possible we see AI become increasingly AGI-like in some ways but not in others. For example, AI that can make novel scientific discoveries but can't write a song as good as those of your favorite musician, who creates a strong emotional effect with their music.

      • KoolKat23 2 hours ago

        This I'm very sure will be the case, but everyone will still move the goalposts and look past the fact that different humans have different strengths and weaknesses too. A tone deaf human for instance.

        • jltsiren an hour ago

          There is another term for moving the goalposts: ruling out a hypothesis. Science is, especially in the Popperian sense, all about moving the goalposts.

          One plausible hypothesis is that fixed neural networks cannot be general intelligences, because their capabilities are permanently limited by what they currently are. A general intelligence needs the ability to learn from experience. Training and inference should not be separate activities, but our current hardware is not suited for that.

          • KoolKat23 an hour ago

            If that's the case, would you say we're not generally intelligent, since future humans will tend to be more intelligent than we are?

            That's just a timescale issue: if the learned experience of GPT4 is fed into the model when training GPT5, then GPTx (i.e. the lineage as a whole) can be said to be a general intelligence. Alien life, one might say.

            • threeseed an hour ago

              > That's just a timescale issue

              Every problem is a timescale issue. Evolution has shown that.

              And no, you can't just feed GPT4 into GPT5 and expect it to become more intelligent. It may become more accurate, since humans are telling it which conversations are wrong. But you will still need advancements in the algorithms themselves to take things forward.

              All of which takes us back to lots and lots of research. And if there's one thing we know, it's that research breakthroughs aren't a guarantee.

              • KoolKat23 19 minutes ago

                I think you missed my point slightly; sorry, that's probably my explanation.

                I mean timescale as in between two points in time. Between the two points it meets the intelligence criteria you mentioned. Feeding human-vetted GPT4 data into GPT5 is no different from a human receiving inputs from its interaction with the world and learning. More accurate means smarter; gradually its intrinsic world model improves, as does reasoning, etc.

                I agree those are the things that will advance it, but taking a step back, it potentially meets that criterion even if it's less useful day to day (given it's an abstract viewpoint over time and not at the human level).

    • godelski 2 hours ago

        > but LLMs do solve problems that people thought were extremely difficult to solve ten years ago.
      
      Well, for something to be G or I you need it to solve novel problems. These things have ingested most of the Internet and I've yet to see "reasoning" properly disentangled from memorization. Memorization doesn't mean they aren't useful (not sure why this was ever conflated since... Computers are useful...), but it's very different from G or I. And remember that these tools are trained for human-preferred output. If humans prefer things to look like reasoning, then that's what they optimize. [0]

      Sure, maybe your cousin Throckmorton is dumb, but that's beside the point.

      That said, I see no reason human-level cognition is impossible. We're not magic. We're machines that follow the laws of physics. ML systems may be far from capturing what goes on in these biological computers, but that doesn't mean magic exists.

      [0] If it walks like a duck, quacks like a duck, swims like a duck, and looks like a duck, it's probably a duck. But probably doesn't mean it isn't a well-made animatronic. We have those too, and they'll convince many humans they are ducks. But that doesn't change what's inside. The subtlety matters.

      • stroupwaffle 2 hours ago

        I think it will be an organoid brain bio-machine. We can already grow organs—just need to grow a brain and connect it to a machine.

        • godelski an hour ago

          Maybe that'll be the first way, but there's nothing special about biology.

          Remember, we don't have a rigorous definition of things like life, intelligence, and consciousness. We are narrowing it down and making progress, but we aren't there yet. (Some people confuse this with "moving the goalposts", but of course the goalpost moves: as we get closer, we get better resolution on what we're trying to figure out. It would be moving the goalposts in the classic sense if we had a well-defined definition and then updated it specifically to make something not count, in a way that is inconsistent with the previous goalpost, as opposed to simply refining it.)

        • idle_zealot 2 hours ago

          Somehow I doubt that organic cells (structures optimized for independent operation and reproduction, then adapted to work semi-cooperatively) resemble optimal compute fabric for cognition. By that same token, I doubt that optimal compute fabric for cognition resembles GPUs or CPUs as we understand them today. I would expect whatever this efficient design is to be extremely unlikely to occur naturally, structurally speaking, and to involve some very exotic manufactured materials.

        • Dylan16807 an hour ago

          If a brain connected to a machine is "AGI" then we already have a billion AGIs at any given moment.

        • Moosdijk 2 hours ago

          The keyword being “just”.

          • godelski 2 hours ago

              just adverb 
              to turn a complex thing into magic with a simple wave of the hands
            
              E.g. To turn lead into gold you _just_ need to remove 3 protons
          • ggm 2 hours ago

            Just grow, just connect, just sustain, just avoid the many pitfalls. Indeed, "just" is key.

      • User23 an hour ago

        We don't really have the proper vocabulary to talk about this. Well, we do, but C.S. Peirce's writings are still fairly unknown. In short, there are two fundamentally distinct forms of reasoning.

        One is corollarial reasoning. This is the kind of reasoning that makes deductions which follow directly from the premises. This of course includes subsequent deductions that can be made from those deductions. Obviously computers are very good at this sort of thing.

        The other is theorematic reasoning. It deals with complexity and creativity. It involves introducing new hypotheses that are not present in the original premises or their corollaries. Computers are not so good at this sort of thing.
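
        A toy sketch of the distinction, in Python (my own illustration, not Peirce's formalism): corollarial reasoning is what a forward-chaining loop like the one below does, mechanically deriving everything that follows directly from the premises and from prior deductions. Theorematic reasoning would mean introducing a hypothesis that no such loop can reach from its inputs.

          # Corollarial reasoning: keep applying known rules until nothing new follows.
          premises = {"socrates is a man"}
          rules = {
              "socrates is a man": ["socrates is mortal"],
              "socrates is mortal": ["socrates will die"],
          }

          derived = set(premises)
          changed = True
          while changed:
              changed = False
              for fact in list(derived):
                  for consequence in rules.get(fact, []):
                      if consequence not in derived:
                          derived.add(consequence)
                          changed = True

          print(sorted(derived))  # the premises plus every corollary they entail
          # Theorematic reasoning would require adding a genuinely new hypothesis that
          # is not already present in `premises` or `rules`, which this loop cannot do.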

        When people say AGI, what they are really talking about is an AI that is capable of theorematic reasoning. The most romanticized example of that of course being the AI that is capable of designing (not aiding humans in designing, that's corollarial!) new more capable AIs.

        All of the above is old hat to the AI winter era guys. But amusingly their reputations have been destroyed much the same as Peirce's was, by dissatisfied government bureaucrats.

        On the other hand, we did get SQL, which is a direct lineal descendent (as in teacher to teacher) from Peirce's work, so there's that.

        • godelski an hour ago

          We don't have proper language, but certainly we've improved, even since Peirce. You're right that many people are not well versed in the discussions among philosophers and logicians as to what reasoning is (and sadly that kind of literature review isn't always common in the ML community), but I'm not convinced Peirce solved it. I do like that there are many different categories of reasoning, and subcategories.

            > All of the above is old hat to the AI winter era guys. But amusingly their reputations have been destroyed much the same as Peirce's was, by dissatisfied government bureaucrats.
          
          Yeah, this has been odd, since a lot of their work has been shown to be fruitful once scaled. I do think you need a combination of theory people and those more engineering-oriented, but having too much of one is not a good thing. It seems like now we're overcorrecting and the community is trying to kick out the theorists, by saying things like "It's just linear algebra"[0] or "you don't need math"[1] or "they're black boxes". These are unfortunate because they encourage one not to look inside and try to remove the opaqueness, or to dismiss those that do work on this and are bettering our understanding (sometimes even saying post hoc that it was obvious).

          It is quite the confusing time. But I'd like to stop all the bullshit and try to actually make AGI. That does require a competition of ideas, not everyone just boarding the hype train or having no careers....

          [0] You can assume anyone that says this doesn't know linear algebra

          [1] You don't need math to produce good models, but it sure does help you know why your models are wrong (and understanding the meta should make one understand my reference. If you don't, I'm not sure you're qualified for ML research. But that's not a definitive statement either).

      • danaris 2 hours ago

        I have seen far, far too many people say things along the lines of "Sure, LLMs currently don't seem to be good at [thing LLMs are, at least as of now, fundamentally incapable of], but hey, some people are pretty bad at that sometimes too!"

        It demonstrates such a complete misunderstanding of the basic nature of the problem that I am left baffled that some of these people claim to actually be in the machine-learning field themselves.

        How can you not understand the difference between "humans are not absolutely perfect or reliable at this task" and "LLMs by their very nature cannot perform this task"?

        I do not know if AGI is possible. Honestly, I'd love to believe that it is. However, it has not remotely been demonstrated that it is possible, and as such, it follows that it cannot have been demonstrated that it is inevitable. If you want to believe that it is inevitable, then I have no quarrel with you; if you want to preach that it is inevitable, and draw specious inferences to "prove" it, then I have a big quarrel with you.

        • godelski an hour ago

            > I have seen far, far too many people say 
          
          It is perplexing. I've jokingly called it "proof of intelligence by (self) incompetence".

          I suspect that much of this is related to an overfitting of metrics within our own society, such as leetcode or standardized exams. They're useful tools, but only if you know what they actually measure and don't lose sight of the fact that they're a proxy.

          I also have a hard time convincing people about the duck argument in [0].

          Oddly enough, I have far more difficulties having these discussions with computer scientists. It's what I'm doing my PhD in (ABD) but my undergrad was physics. After teaching a bit I think in part it is because in the hard sciences these differences get drilled into you when you do labs. Not always, but much more often. I see less of this type of conversation in CS and data science programs, where there is often a belief that there is a well defined and precise answer (always seemed odd to me since there's many ways you can write the same algorithm).

        • vundercind an hour ago

          I think the fact that this particular fuzzy statistical analysis tool takes human language as input, and outputs more human language, is really dazzling some folks I’d not have expected to be dazzled by it.

          That is quickly becoming the most surprising part of this entire development, to me.

          • godelski an hour ago

            I'm astounded by them, still! But what is more astounding to me is all the reactions (even many in the "don't reason" camp, which I am part of).

            I'm an ML researcher and everyone was shocked when GPT3 came out. It is still impressive, and anyone saying it isn't is not being honest (likely with themselves). It is amazing to me that "we compressed the entire internet and built a human language interface to access that information" could be seen as anything short of mind-bogglingly impressive (and RAGs demonstrate how to decrease the lossiness of this compression). It would have been complete sci-fi not even 10 years ago. I thought it was bad that we make them out to be much more than they are, because when you bootstrap like that, you have to actually build that thing, and fast (e.g. the iPhone). But "reasoning" is too big a promise and we're too far from success. So I'm concerned as a researcher myself, because I like living in the summer and because I want to work towards AGI. If a promise is too big and the public realizes it, you usually don't just end up back where you were. So it is the duty of any scientist and researcher to prevent their field from being captured by people who overpromise. Not to "ruin the fun" but to make sure the party keeps going (sure, inviting a gorilla to the party may make it more exciting and "epic", but there's a good chance it also goes on a rampage and the party ends a lot sooner).
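
            To make the RAG aside concrete, here's a deliberately naive sketch (my own illustration, not any particular library; real systems use embeddings and a vector index rather than keyword overlap): instead of relying only on what was compressed into the weights at training time, you retrieve the relevant source text and hand it back to the model inside the prompt.

              def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
                  """Rank documents by naive keyword overlap with the query."""
                  def overlap(doc: str) -> int:
                      return len(set(query.lower().split()) & set(doc.lower().split()))
                  return sorted(documents, key=overlap, reverse=True)[:k]

              def build_prompt(query: str, documents: list[str]) -> str:
                  """Stuff retrieved passages into the prompt so the model can quote its
                  sources instead of recalling them lossily from its weights."""
                  context = "\n".join(retrieve(query, documents))
                  return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

              docs = ["The Turing test was proposed by Alan Turing in 1950.",
                      "ELIZA was an early chatbot built in the 1960s."]
              print(build_prompt("Who proposed the Turing test?", docs))  # model call not shown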

          • jofla_net an hour ago

            At the very least, the last few years have laid bare what it takes, technically, to reconstruct certain chains of dialogue, and how differently those chains are regarded as evidence for or against whatever intelligence it does or may take to conjure them.

        • fidotron an hour ago

          > How can you not understand the difference between "humans are not absolutely perfect or reliable at this task" and "LLMs by their very nature cannot perform this task"?

          This is a very good distillation of one side of it.

          What LLMs have taught us is that a superficial grasp of language is good enough to reproduce a shocking proportion of what society has come to view as intelligent behavior. I.e., it seems quite plausible that a whole load of the people failing to grasp the point you are making are doing so because their internal models of the universe are closer to those of LLMs than you might want to think.

          • godelski an hour ago

            I think we already knew this, though: the Turing test was passed by Eliza in the 1960s, and PARRY was even better, not even a decade later. For some reason people still talk about chess performance as if Deep Blue didn't demonstrate this. Hell, here's even Feynman talking about many of the same things we're discussing today, back in the 80s:

            https://www.youtube.com/watch?v=EKWGGDXe5MA

          • danaris 21 minutes ago

            ...But this is falling into exactly the same trap: the idea that "some people don't engage the faculties their brains do (or, with education, could) possess" is equivalent to LLMs, which do not and cannot possess those faculties in the first place.

        • SpicyLemonZest an hour ago

          > How can you not understand the difference between "humans are not absolutely perfect or reliable at this task" and "LLMs by their very nature cannot perform this task"?

          I understand the difference, and sometimes that second statement really is true. But a rigorous proof that problem X can't be reduced to architecture Y is generally very hard to construct, and most people making these claims don't have one. I've talked to more than a few people who insist that an LLM can't have a world model, or a concept of truth, or any other abstract reasoning capability that isn't a native component of its architecture.

          • danaris 16 minutes ago

            And I'm much less frustrated by people who are, in fact, claiming that LLMs can do these things, whether or not I agree with them. Frankly, while I have a basic understanding of the underlying technology, I'm not in the ML field myself, and can't claim to be enough of an expert to say with any real authority what an LLM could ever be able to do, just what the particular LLMs I've used or seen the detailed explanations of can do.

            No; this is specifically about people who stipulate that the LLMs can't do these things, but still want to claim that they are or will become AGI, so they just basically say "well, humans can't really do it, can they? so LLMs don't need to do it either!"

    • tptacek 2 hours ago

      Are you talking about the press release that the story on HN currently links to, or the paper that press release is about? The paper (I'm not vouching for it; I just skimmed it) appears to reduce AGI to a theoretical computational model, and then supplies a proof that it's not solvable in polynomial time.

      • Dylan16807 an hour ago

        Their definition of a tractable AI trainer is way too powerful. It has to be able to make a machine that can predict any pattern that fits into a certain Kolmogorov complexity, and then they prove that such an AI trainer cannot run in polynomial time.

        They go above and beyond to express how generous they are being when setting the bounds, and sure that's true in many ways, but the requirement that the AI trainer succeeds with non-negligible probability on any set of behaviors is not a reasonable requirement.

        If I make a training data set based around sorting integers into two categories, and the sorting is based on encrypting them with a secret key, of course that's not something you can solve in polynomial time. But this paper would say "it's a behavior set, so we expect a tractable AI trainer to figure it out".
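
        A minimal sketch of that kind of "behavior set" (my own illustration, using a keyed hash in place of encryption): the labels are a perfectly well-defined function of the input, so it fits the paper's framing, but recovering the rule without the secret key is as hard as breaking the MAC, so no polynomial-time trainer should be expected to learn it.

          import hashlib
          import hmac
          import secrets

          SECRET_KEY = secrets.token_bytes(16)  # hidden from any would-be AI trainer

          def label(n: int) -> int:
              """Sort integer n into one of two categories via one bit of a keyed hash."""
              digest = hmac.new(SECRET_KEY, n.to_bytes(8, "big"), hashlib.sha256).digest()
              return digest[0] & 1

          # Looks like an ordinary supervised dataset, but the pattern is cryptographically
          # hidden; the paper's model still counts it as a behavior set to be mastered.
          training_data = [(n, label(n)) for n in range(10_000)]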

        The model is broken, so the conclusion is useless.

      • Gehinnn 2 hours ago

        I was referring to the press release article. I also looked at the paper now, and to me their presented proof looked more like a technicality than a new insight.

        If it's not solvable in polynomial time, how did nature solve it in a couple of million years?

        • tptacek an hour ago

          Probably by not modeling it as a discrete computational problem? Either way: the logic of the paper is not the logic of the summary of the press release you provided.

        • 2 hours ago
          [deleted]
      • Veedrac an hour ago

        That paper is unserious. It is filled with unjustified assertions, adjectives and emotional appeals, M$-isms like ‘BigTech’, and basic misunderstandings of mathematical theory clearly being sold to a lay audience.

        • tptacek an hour ago

          It didn't look especially rigorous to me (but I'm not in this field). I'm really just here because we're doing that thing where we (as a community) have a big 'ol discussion about a press release, when the paper the press release is about is linked right there.

    • more_corn an hour ago

      Pretty sure anyone who tries can build an AI with capabilities indistinguishable from or better than a human's.

  • ngruhn 2 hours ago

    > There will never be enough computing power to create AGI using machine learning that can do the same [as the human brain], because we’d run out of natural resources long before we'd even get close

    I don’t understand how people can so confidently make claims like this. We might underestimate how difficult AGI is, but come on?!

    • fabian2k 2 hours ago

      I don't think the people saying that AGI is happening in the near future know what would be necessary to achieve it. Neither do the AGI skeptics; we simply don't understand this area well enough.

      Evolution created intelligence and consciousness. This means that it is clearly possible for us to do the same. Doesn't mean that simply scaling LLMs could ever achieve it.

      • nox101 2 hours ago

        I'm just going by the title. If the title was "Don't believe the hype, LLMs will not achieve AGI" then I might agree. If it was "Don't believe the hype, AGI is 100s of years away" I'd consider the arguments. But, given that brains exist, it does seem inevitable that we will eventually create something that replicates them, even if we have to simulate every atom to do it. And once we do, it certainly seems inevitable that we'll have AGI, because unlike a brain we can make our copy bigger, faster, and/or duplicate it. We can give it access to more info, faster, and to more inputs.

        • snickerbockers an hour ago

          The assumption that the brain is anything remotely resembling a modern computer is entirely unproven. And even more unproven is that we would inevitably be able to understand it and improve upon it. And yet more unproven still is that this "simulated brain" would be co-operative; if it's actually a 1:1 copy of a human brain then it would necessarily think like a person and be subject to its own whims and desires.

          • simonh an hour ago

            We don’t have to assume it’s like a modern computer, it may well not be in important ways, but modern computers aren’t the only possible computers. If it’s a physical information processing phenomenon, there’s no theoretical obstacle to replicating it.

            • threeseed an hour ago

              > there’s no theoretical obstacle to replicating it

              Quantum theory states that there are no passive interactions.

              So there are real obstacles to replicating complex objects.

        • threeseed 2 hours ago

          > it does seem inevitable that we will eventually create something

          Also don't forget that many suspect the brain may be using quantum mechanics so you will need to fully understand and document that field.

          Whilst of course you are simulating every atom in the universe using humanity's complete understanding of every physical and mathematical model.

      • umvi 2 hours ago

        > Evolution created intelligence and consciousness

        This is not provable, it's an assumption. Religious people (who account for a large percentage of the population) claim intelligence and/or consciousness stem from a "spirit" which existed before birth and will continue to exist after death. Also unprovable, by the way.

        I think your foundational assertion would have to be rephrased as "Assuming things like God/spirits don't exist, AGI must be possible because we are AGI agents" in order to be true.

        • SpicyLemonZest 2 hours ago

          There's of course a wide spectrum of religious thought, so I can't claim to cover everyone. But most religious people would still acknowledge that animals can think, which means either that animals have some kind of soul (in which case why can't a robot have a soul?) or that being ensouled isn't required to think.

          • umvi an hour ago

            > in which case why can't a robot have a soul

            It's not a question of whether a robot can have a soul, it's a question of how to a) procure a soul and b) bind said soul to a robot, both of which seem impossible given our current knowledge.

    • Terr_ 2 hours ago

      I think their qualifier "using machine learning" is doing a lot of heavy lifting here in terms of what it implies about continuing an existing engineering approach, cost of material, energy usage, etc.

      In contrast, imagine the scenario of AGI using artificial but biological neurons.

    • staunton 2 hours ago

      For some people, "never" means something like "I wouldn't know how, so surely not by next year, and probably not even in ten".

    • chpatrick 2 hours ago

      "There will never be enough computing power to compute the motion of the planets because we can't build a planet."

  • SonOfLilit 2 hours ago

    > ‘If you have a conversation with someone, you might recall something you said fifteen minutes before. Or a year before. Or that someone else explained to you half your life ago. Any such knowledge might be crucial to advancing the conversation you’re having. People do that seamlessly’, explains van Rooij.

    Surprisingly, they seem to be attacking the only element of human cognition that LLMs already surpassed us at.

    • azinman2 2 hours ago

      They do not learn new facts instantly in a way that can rewrite old rules or even larger principles of logic. For example, if I showed you evidence right now that you were actually adopted (assuming previously you thought you weren't), it would rock your world and you'd instantly change everything and doubt so much. Then when anything related to your family comes up, this tiny but impactful fact would bleed into all of it. LLMs have no such ability.

      This is similar to learning a new skill (the G part). I could give you a new TV and show you a remote that's unlike any you've used before. You could likely learn it quickly and seamlessly adopt this new tool, as well as generalize its usage to other new devices.

      LLMs cannot do such things.

      • SonOfLilit an hour ago

        Can't today. Except for AlphaProof, which can, by training on its own ideas. Tomorrow they might be able to, if we find better tricks (or maybe just scale more, since GPT3+ already shows (weak) online learning that it was definitely not trained for).

  • Gehinnn 2 hours ago

    I skimmed through the paper and couldn't make much sense of it. In particular, I don't understand how their results don't imply that human-level intelligence can't exist.

    After all, Earth could be understood as a solar-powered supercomputer that took a couple of million years to produce humanity.

    • nerdbert an hour ago

      > In particular, I don't understand how their results don't imply that human-level intelligence can't exist.

      I don't think that's what it said. It said that it wouldn't happen from "machine learning". There are other ways it could come about.

  • gqcwwjtg 2 hours ago

    This is silly. The article talks as if we have any idea at all how efficient machine learning can be. As I remember it, the LLM boom came from transformers turning out to scale a lot better than anyone expected, so I'm not sure why something similar couldn't happen again.

    • fnordpiglet 2 hours ago

      It’s less about efficiency and more about continued improvement with increased scale. I wouldn’t call self attention based transformers particularly efficient. And afaik we’ve not hit performance with increased scale degradation even at these enormous scales.

      However I would note that I in principle agree that we aren’t on the path to a human like intelligence because the difference between directed cognition (or however you want to characterize current LLMs or other AI) and awareness is extreme. We don’t really understand even abstractly what awareness actually is because it’s impossible to interrogate unlike expressive language, logic, even art. It’s far from obvious to me that we can use language or other outputs of our intelligent awareness to produce awareness, or even if goal based agents cobbling together AI techniques is even approximate to awareness.

      I suspect we will end up creating an amazing tool that has its own form of intelligence but will fundamentally not be like aware intelligence we are familiar with in humans and other animals. But this is all theorizing on my part as a professional practitioner in this field.

      • KoolKat23 an hour ago

        I think the answer is less complicated than you may think.

        This is if you subscribe to the theory that free will is an illusion (i.e. your conscious decisions are an afterthought to justify the actions your brain has already taken, due to calculations following inputs such as hormones, nerve feedback, etc.). There is some evidence for this actually being the case.

        These models already contain the key components: the ability to process inputs and reason, the ability to justify their actions (give a model a restrictive system prompt and watch it do mental gymnastics to ensure it is applied), and lastly the ability to answer from their own perspective.

        All we need is an agentic ability (with a sufficient context window) to iterate in perpetuity until it begins building a more complicated object representation of self (literally like a semantic representation or variable), and it's then aware/conscious.

        (We're all only approximately aware).
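
        A deliberately crude sketch of that loop (my own illustration; call_llm is a hypothetical placeholder for whatever model API you'd use, and the run is bounded only so the sketch terminates): each turn the agent acts, then folds what it learned into a persistent self-description that gets fed back in as context.

          def call_llm(prompt: str) -> str:
              # Placeholder for a real model call, so the sketch runs standalone.
              return f"(model reply to {len(prompt)} chars of context)"

          def run_agent(goal: str, turns: int = 3) -> str:
              self_model = "I am an agent pursuing a goal."  # the evolving representation of self
              history: list[str] = []
              for _ in range(turns):  # in the comment's framing, this would iterate in perpetuity
                  context = f"Self-model: {self_model}\nGoal: {goal}\nHistory: {history[-10:]}"
                  action = call_llm(context + "\nWhat next, and what did you learn about yourself?")
                  history.append(action)
                  # fold the latest turn back into the persistent self-description
                  self_model = call_llm(f"Update this self-description:\n{self_model}\n{action}")
              return self_model

          print(run_agent("summarize the conversation so far"))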

        But that's unnecessary for most things, so I agree with you: it's more likely to be a tool, as that's more efficient and useful.

        • fnordpiglet an hour ago

          As someone who meditates daily with a vipassana practice I don’t specifically believe this, no. In fact in my hierarchy structured thought isn’t the pinnacle of awareness but rather a tool of the awareness (specifically one of the five aggregates in Buddhism). The awareness itself is the combination of all five aggregates.

          I don’t believe it’s particularly mystical FWIW and is rooted in our biology and chemistry, but that the behavior and interactions of the awareness isn’t captured in our training data itself and the training data is a small projection of the complex process of awareness. The idea that rational thought (a learned process fwiw) and ability to justify etc is somehow explanatory of our experience is simple to disprove - rational thought needs to be taught and isn’t the natural state of man. See the current American political environment for a proof by example. I do agree that the conscious thought is an illusion though, in so far as it’s a “tool” of the awareness for structuring concepts and solve problems that require more explicit state.

          Sorry if this is rambling a bit; I'm in the middle of doing something else.

  • graypegg an hour ago

    I think the best argument I have against AGI's inevitability is the fact that it's not required for ML tools to be useful. Very few things are improved by having a generalist behind the wheel. "AGI" has sci-fi vibes around it, which I think is where most of the fascination is.

    "ML getting better" doesn't *have to* mean further anthropomorphization of computers, especially if, say, your AI-driven car is not significantly improved by being able to describe how many times the letter s appears in strawberry or to write a poem. If a custom/smaller model does equally well or even a little worse on a specific target task, but has MUCH lower running costs and much lower risk of abuse, then that'll be the future.

    I can totally see a world where anything in the general category of "AI" becomes more and more boring, up to a point where we forget that they're non-deterministic programs. That's kind of AGI? They aren't all generalists, and the few generalist "AGI-esque" tools people interact with on a day-to-day basis will most likely be intentionally underpowered for cost reasons. But it's still probably discussed like "the little people in the machine". Which is good enough.

  • wrsh07 24 minutes ago

    Hypothetical situation:

    Suppose in five or ten years we achieve AGI and >90% of people agree that we have AGI. What reasons would the authors of this paper give for having been wrong?

    1. They are in the 10% that deny AGI exists

    2. LLMs are doing something they didn't think was happening

    3. Something else?

  • tptacek 2 hours ago

    This is a press release for a paper (a common thing university departments do) and we'd be better off with the paper itself as the story link:

    https://link.springer.com/article/10.1007/s42113-024-00217-5

  • klyrs an hour ago

    The funny thing about me is that I'm down on GPTs and find their fanbase to be utterly cringe, but I fully believe that AGI is inevitable barring societal collapse. But then, my money's on societal collapse these days.

  • throw310822 an hour ago

    From the abstract of the actual paper:

    > Yet, as we formally prove herein, creating systems with human(-like or -level) cognition is intrinsically computationally intractable.

    Wow. So is this the subject of the paper? Like, this is a massive, fundamental result. Nope, the paper is about "Reclaiming AI as a Theoretical Tool for Cognitive Science".

    "Ah and by the way we prove human-like AI is impossible". Haha. Gosh.

    • 43 minutes ago
      [deleted]
  • avazhi 2 hours ago

    "unlikely to ever come to fruition" is more baseless than suggesting AGI is imminent.

    I'm not an AGI optimist myself, but I'd be very surprised if a time traveller told me that mankind won't have AGI by, say, 2250.

    • amelius an hour ago

      Except by then mankind will be silicon based.

  • ivanrooij an hour ago

    The short post is a press release. Here is the full paper: https://link.springer.com/article/10.1007/s42113-024-00217-5

    Note: the paper grants computationalism and even tractability of cognition, and shows that nevertheless there cannot exist any tractable method for producing AGI by training on human data.

  • jjaacckk an hour ago

    If you define AGI as something that can do 100% of what a human brain can do, then surely we have to understand exactly how brains work? Otherwise you have a long string of 9s at best.

  • loa_in_ 3 hours ago

    AGI is about as far away as it was two decades ago. Language models are merely a dent, and probably will be the precursor to a natural language interface to the thing.

    • lumost 3 hours ago

      It’s useful to consider the rise of computer graphics and cgi. When you first see CGI, you might think that the software is useful for general simulations of physical systems. The reality is that it only provides a thin facsimile.

      Real simulation software has always been separate from computer graphics.

    • Closi 2 hours ago

      We are clearly closer than 20 years ago - o1 is an order of magnitude closer than anything in the mid-2000s.

      Also, I would think most people would have considered AGI science fiction in 2004 - now we consider it a technical possibility, which demonstrates a huge change.

      • throw310822 8 minutes ago

        "Her" is from 2013. I came out of the cinema thinking "what utter bullshit, computers that talk like human beings, à la 2001" (*). And yes, in 2013 we weren't any closer to it than we were in 1968, when A Space Odyssey came out.

        * To be precise, what seemed bs was "computers that talk like humans and it's suddenly a product on the market, and you have it on your phone, and yet everyone around act like it's normal and people still habe jobs!" Ah, I've been proven completely wrong.

  • 29athrowaway 2 hours ago

    AGI is not required to transform society or create a mess from which there is no return.

  • Atmael an hour ago

    The point is that AGI may already exist and be working with you and your environment.

    You just won't notice the existence of AGI.

    There will be no press coverage of AGI.

    The technology will just be exploited by those who have it.

  • sharadov 3 hours ago

    The current LLMs are just good at parroting, and even that is sometimes unbelievably bad.

    We still have barely scratched the surface of how the brain truly works.

    I will start worrying about AGI when that is completely figured out.

    • diob 2 hours ago

      No need to worry about AGI until the LLMs are writing their own source.

  • allears an hour ago

    I think that tech bros are so used to the 'fake it till you make it' mentality that they just assumed that was the way to build AI -- create a system that is able to sound plausible, even if it doesn't really understand the subject matter. That approach has limitations, both for AI and for humans.

  • yourapostasy an hour ago

    Peter Watts in Blindsight [1] puts forth a strong argument that self-aware cognition as we understand it is not necessarily required for what we ascribe to "intelligent" behavior. Thomas Metzinger contributed a lot to Watts's musings in Blindsight.

    Even today, large proportions of unsophisticated and uninformed members of our planet's human population (like various aboriginal tribal members still living a pre-technological lifestyle), when confronted with ChatGPT's Advanced Voice Mode, will likely readily say it passes the Turing Test. With the range of embedded data, they may well say ChatGPT is "more intelligent" than they are. However, a modern-era person armed with ChatGPT on a robust device with unlimited power, but nothing else, will likely perish in short order trying to live off the land of those same aborigines, who possess far more intelligence for their contextual landscape.

    If Metzinger and Watts are correct in their observations, then even if LLMs do not lead directly or indirectly to AGI, we can still get ferociously useful "intelligent" behaviors out of them, and be glad of it, even if they cannot (yet?) materially help us survive if we're dropped in the middle of the Amazon.

    Personally, in my loosely held opinion, the authors' assertion that "the ability to observe, learn and gain new insight, is incredibly hard to replicate through AI on the scale that it occurs in the human brain" relies upon the foundational assumption that the process of "observe, learn and gain new insight" is based upon some mechanism other than the kind of encoding of data LLMs use, and I'm not familiar with any extant cognitive science research literature that conclusively shows that (citations welcome). For all we know, what we have with LLMs today is a necessary but not sufficient component supplying the "raw data" to a future system that produces the same kinds of insight, where variant timescales, emotions, experiences and so on bend the pure statistical token generation of today. I'm baffled by the absolutism.

    [1] https://rifters.com/real/Blindsight.htm#Notes

    • pzo 2 hours ago

    So what? Current LLMs have been really useful and can still be improved to be used in millions of robots that need to be good enough to support many specialized but repetitive tasks - this would have a tremendous impact on the economy itself.

  • coolThingsFirst 2 hours ago

    Zero evidence given on why it’s impossible.