51 comments

  • ks2048 2 days ago

    I'm not sure if there's anything interesting here, but I did notice the author was interviewed on the podcast Machine Learning Street Talk about this paper,

    https://www.youtube.com/watch?v=K18Gmp2oXIM&t=3s

  • getnormality a day ago

    In statistics, sample efficiency means you can precisely estimate a specified parameter like the mean with few samples. In AI, it seems to mean that the AI can learn how to do unspecified, very general stuff without much data. Like the underlying truth about the world and how to reach one's goals within it is just some giant parameter vector that we need to infer more or less efficiently from "sampled" sensory data.
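    The statistical sense of sample efficiency described above can be sketched in a few lines; this is a minimal illustration (not from the comment) of the standard result that the error of a mean estimate shrinks as 1/sqrt(n):

    ```python
    import math

    def sample_mean_se(population_sd: float, n: int) -> float:
        # Standard error of the sample mean: sd / sqrt(n).
        # Halving the error requires 4x the samples, which is why
        # "few samples" only buys you precision on narrow, specified
        # parameters -- not the giant parameter vector of "the world".
        return population_sd / math.sqrt(n)

    # Error bars for estimating a mean with population sd = 1:
    for n in (25, 100, 400):
        print(n, sample_mean_se(1.0, n))  # 0.2, 0.1, 0.05
    ```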

  • EliRivers 2 days ago

    Picture a machine endowed with human intellect. In its most simplistic form, that is Artificial General Intelligence (AGI)

    Artificial human intelligence. Not what I'd call general, but I guess so long as we make it clear that by "general" we don't actually mean general, fine. I'd really expect actual general intelligence to do a lot better than human, in ways we can't understand any more than ants can comprehend us.

    • UltraSane 2 days ago

      Humans are the best/only example of General Intelligence we have.

    • 2 days ago
      [deleted]
  • comeonbro 2 days ago

    > simp-maxxing

    Might want to write this out in full, lol. I thought this in particular was going to be a much more entertaining point.

    • zahlman 2 days ago

      To be fair, it is spelled with a single 'x' in the paper.

    • 2 days ago
      [deleted]
  • 2 days ago
    [deleted]
  • nis0s 2 days ago

    In my view, it fulfills the following criteria:

    1) Few-shot to zero-shot training for achieving a useful ability on a given new problem.

    2) Self-determining optimal paths to fine-tuning at inference time based on minimal instructions or examples.

    3) Having the capacity to self-correct, maybe by building or confirming heuristics.

    All of these describe, for example, an intern who is given a new, unseen task and can figure out the rest without handholding.

  • SeanLuke 2 days ago

    My answer: while 99% of the AI community was busy working on Weak AI, that is, developing systems that could perform tasks that humans can do notionally because of our Big Brains, a tiny fraction of people promoted Hard AI, that is, AI as a philosophical recreation of Lt. Commander Data.

    Hard AI has long had a well-deserved jet black reputation as a flaky field filled with armchair philosophers, hucksters, impresarios, and Loebner followers who don't understand the Turing Test. It eventually got so bad that the entire field decided to rebrand itself as "Artificial General Intelligence". But it's the same duck.

    • cogman10 2 days ago

      The only difference is the same hucksters are trying to sell the notion that LLMs are or will become AGI through some sort of magic trick or with just one more input.

    • adastra22 2 days ago

      “Strong AI” is the traditional term to compare with “Weak AI.”

      • SeanLuke 9 hours ago

        My bad. Of course it is. Had a brain fart there.

  • Emma_Schmidt a day ago

    [dead]

  • mwkaufma 2 days ago

    A term in search of a definition, clearly.

  • jonny_eh 2 days ago

    Please fix the title in HN to match the actual paper's superior title: "What the F*ck Is Artificial General Intelligence?"

    • dang 2 days ago

      We don't have an issue with profanity on HN but we do take out clickbait.

      Edit: ok you guys, I take the point and have put the original title back. More at https://news.ycombinator.com/item?id=45430354.

      • adastra22 2 days ago

        Replace it with “what the cuss”?

        • dang 2 days ago

          The word 'fuck' isn't the issue. The issue is that "What the fuck is AGI", as a title, doesn't add anything besides sensationalism to "What is AGI".

          • adastra22 2 days ago

            I don’t know. They typically read entirely differently to me, in the sense that what I would expect to see after clicking the link is different.

            I admit, though, that in this case “What is AGI?” better matches expectation to reality. Before I noticed the domain, “What the f*ck is AGI?” would have led me to expect more of a technical blog post with a playful presentation rather than the review article it actually is.

          • UltraSane 2 days ago

            It communicates that the paper will probably be a lot less "stuffy" than the typical fancy science PDF

            • dang 18 hours ago

              I agree with blooalien - that's a great point. To me it doesn't feel quite enough to overcome the baity/provocative effects, but since several commenters have made good points about this, we might as well put the original title back.

              I've kept "f*ck" in the title since that's in the original and arguably adds some subtlety in this case. Normally we'd replace it with the real word since we don't like bowdlerisms.

            • blooalien a day ago

              > "It communicates that the paper will probably be a lot less "stuffy" than the typical fancy science PDF"

              You pose an excellent point... I tend to agree.

  • mbgerring 2 days ago

    From what I can see, Artificial General Intelligence is a drug-fueled millenarian cult, and attempts to define it that don't consider this angle will fail.

  • jongjong 2 days ago

    It's been a moving goalpost, but I think the point where people will be forced to acknowledge it is when fully autonomous agents are outcompeting most humans in most areas.

    So long as half of people are employed or in business, these people will insist that it's not AGI yet.

    Until AI can fully replace you in your job, it's going to continue to feel like a tool.

    • slfnflctd 2 days ago

      Robotics is also a big one.

      Given a useful-enough general purpose body (with multiple appendage options), one of the most significant applications of whatever we end up calling AGI should be finally seeing most of our household chores properly roboticized.

      When I can actually give plain language descriptions of 'simple' manual tasks around the house to a machine the same way I would to, say, a human 4th grader, and not have to spend more time helping it get through the task than it would take me to do it myself, that is when I will feel we have turned the corner.

      I still am not at all convinced I will see this within the next few decades I probably have left.

      • adastra22 2 days ago

        Without denigrating the importance of robotics at all (it is important), I don’t see the connection.

      • lazide 2 days ago

        The military would pay 1000x what a household would for the same capability, and they are nowhere near the ability to do that. Which should tell you all you need to know.

    • ACCount37 2 days ago

      I wonder if all the grad students that struggle to find jobs now and all the cheap workers in India who were laid off are "feeling the AGI" then.

  • robotcookies 20 hours ago

    It is intelligence created by design rather than by natural selection.

    • heresie-dabord 18 hours ago

      The limitation of your definition is that any intelligence that is untrained will have a high rate of failure.

      So, an intelligence may have evolved in geological time or in laboratory time, but the ability of the intelligence to learn to think and solve problems is what will distinguish it from the high rate of general failure.

  • nativeit 2 days ago

    [flagged]

    • dang 2 days ago

      Please don't fulminate. This is in the site guidelines: https://news.ycombinator.com/newsguidelines.html.

    • emil-lp 2 days ago

      Stuart Russell said AGI is coming and that we will get 45 trillion dollars from them.

      That's what I'm waiting for.

      (He didn't specify when or how the money will get here, but I'm betting that I'll get my fair share.)

      • YesThatTom2 2 days ago

        I (and I’m being serious) assumed AGI would break into the world’s financial institutions and steal the 45 trillion.

      • tim333 a day ago

        Stuart was saying 15,000 tn dollars here https://youtu.be/z4M6vN31Vc0?t=1420

        Your cheque will be in the post shortly.

      • adastra22 2 days ago

        Hyperinflation?

    • zwnow 2 days ago

      [flagged]

      • dang 2 days ago

        "Please don't sneer, including at the rest of the community." It's reliably a marker of bad comments and worse threads.

        https://news.ycombinator.com/newsguidelines.html

        p.s. HN is pretty evenly divided on AI, and if one side has the advantage, it's probably the anti.

        • a day ago
          [deleted]
      • ronsor 2 days ago

        That's funny. I see half of everyone on HN being critical of AI, often unfairly so, but we only ever notice the people we disagree with.

        I'm guilty of this as well, otherwise I wouldn't be writing this.

      • eurekin 2 days ago

        I'm a big AI/ML enthusiast (published one paper!) and was always flabbergasted to see scientists go off the typical provable/testable lane and venture into philosophical and emotional territories.

  • realityfactchex 2 days ago

    It would mean actually reasoning, not just applying stats to look like reasoning.

    • drdeca 2 days ago

      What do you mean by “just applying stats”?