I'm not sure if there's anything interesting here, but I did notice the author was interviewed on the podcast Machine Learning Street Talk about this paper,
https://www.youtube.com/watch?v=K18Gmp2oXIM&t=3s
In statistics, sample efficiency means you can precisely estimate a specified parameter, like the mean, with few samples. In AI, it seems to mean that the AI can learn how to do unspecified, very general stuff without much data. As if the underlying truth about the world, and how to reach one's goals within it, were just some giant parameter vector that we need to infer more or less efficiently from "sampled" sensory data.
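To make the statistical sense concrete, here's a minimal Python sketch (the distribution and numbers are made up for illustration): when the parameter is specified in advance, the standard error of the sample mean shrinks like 1/sqrt(n), which is what "few samples suffice" means in that setting.

```python
# Minimal sketch of "sample efficiency" in the statistical sense:
# estimating one specified parameter (the mean) and watching the
# standard error shrink as 1/sqrt(n). Distribution is illustrative.
import random
import statistics


def estimate_mean(n_samples: int, seed: int = 0) -> tuple[float, float]:
    """Draw n_samples from a fixed Gaussian and return the sample
    mean plus its estimated standard error."""
    rng = random.Random(seed)
    samples = [rng.gauss(5.0, 2.0) for _ in range(n_samples)]
    mean = statistics.fmean(samples)
    stderr = statistics.stdev(samples) / (n_samples ** 0.5)
    return mean, stderr


for n in (10, 100, 1000):
    mean, stderr = estimate_mean(n)
    print(f"n={n:5d}  mean~{mean:.3f}  stderr~{stderr:.3f}")
```

Nothing like this precision guarantee exists for the AI usage, which is the point of the contrast.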
Picture a machine endowed with human intellect. In its most simplistic form, that is Artificial General Intelligence (AGI)
Artificial human intelligence. Not what I'd call general, but I guess so long as we make it clear that by "general" we don't actually mean general, fine. I'd really expect actual general intelligence to do a lot better than human, in ways we can't understand any more than ants can comprehend us.
Humans are the best/only example of General Intelligence we have.
> simp-maxxing
Might want to write this out in full lol I thought this in particular was going to be a much more entertaining point.
To be fair, it is spelled with a single 'x' in the paper.
Per my view, it fulfills the following criteria:
1) Few-shot to zero-shot training for achieving a useful ability on a given new problem.
2) Self-determining optimal paths to fine-tuning at inference time based on minimal instructions or examples.
3) Having the capacity to self-correct, maybe by building or confirming heuristics.
All of these describe, for example, an intern who is given a new, unseen task and can figure out the rest without handholding; a toy sketch of that loop follows below.
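For illustration only, here's how those three criteria might be stubbed out as a toy loop in Python. Every name here (Agent, solve, check) is hypothetical and the "verification" step is a stand-in; this is a sketch of the shape of the criteria, not a claim about how any real system works.

```python
# Toy sketch of the three criteria as an agent loop.
# All names are hypothetical; this illustrates the criteria,
# not any real system's API.
from dataclasses import dataclass, field


@dataclass
class Agent:
    heuristics: list[str] = field(default_factory=list)

    def solve(self, task: str, examples: list[str]) -> str:
        # (1) Few-shot to zero-shot: produce an attempt from
        # minimal examples, possibly none.
        attempt = f"attempt at {task!r} using {len(examples)} example(s)"
        # (2) Self-determined refinement at inference time: decide,
        # from the instructions alone, whether the current approach
        # needs restructuring before proceeding.
        if not self.heuristics:
            self.heuristics.append(f"decompose {task!r} into subtasks")
        # (3) Self-correction: check the attempt and update the
        # heuristics rather than waiting for external handholding.
        if not self.check(attempt):
            self.heuristics.append(f"avoid last failure mode on {task!r}")
            attempt = f"revised attempt at {task!r}"
        return attempt

    def check(self, attempt: str) -> bool:
        # Stand-in for real verification (tests, simulation, a critic).
        return "revised" in attempt


agent = Agent()
print(agent.solve("sort these invoices by due date", examples=[]))
print(agent.heuristics)
```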
My answer: while 99% of the AI community was busy working on Weak AI, that is, developing systems that could perform tasks that humans can do notionally because of our Big Brains, a tiny fraction of people promoted Hard AI, that is, AI as a philosophical recreation of Lt. Commander Data.
Hard AI has long had a well-deserved jet black reputation as a flaky field filled with armchair philosophers, hucksters, impresarios, and Loebner followers who don't understand the Turing Test. It eventually got so bad that the entire field decided to rebrand itself as "Artificial General Intelligence". But it's the same duck.
The only difference is the same hucksters are trying to sell the notion that LLMs are or will become AGI through some sort of magic trick or with just one more input.
“Strong AI” is the traditional term to compare with “Weak AI.”
My bad. Of course it is. Had a brain fart there.
A term in search of a definition, clearly.
Please fix the title in HN to match the actual paper's superior title: "What the F*ck Is Artificial General Intelligence?"
We don't have an issue with profanity on HN but we do take out clickbait.
Edit: ok you guys, I take the point and have put the original title back. More at https://news.ycombinator.com/item?id=45430354.
Replace it with “what the cuss”?
The word 'fuck' isn't the issue. The issue is that "What the fuck is AGI", as a title, doesn't add anything besides sensationalism to "What is AGI".
I don’t know. They typically read entirely differently to me, in the sense that what I would expect to see after clicking the link is different.
I admit, though, that in this case “What is AGI?” better matches expectation to reality. Before I noticed the domain, “What the f*ck is AGI?” would have led me to expect more of a technical blog post with a playful presentation rather than the review article it actually is.
It communicates that the paper will probably be a lot less "stuffy" than the typical fancy science PDF
I agree with blooalien - that's a great point. To me it doesn't feel quite enough to overcome the baity/provocative effects, but since several commenters have made good points about this, we might as well put the original title back.
I've kept "f*ck" in the title since that's in the original and arguably adds some subtlety in this case. Normally we'd replace it with the real word since we don't like bowdlerisms.
> "It communicates that the paper will probably be a lot less "stuffy" than the typical fancy science PDF"
You pose an excellent point... I tend to agree.
From what I can see, Artificial General Intelligence is a drug-fueled millenarian cult, and attempts to define it that don't consider this angle will fail.
This feels like we’re approaching consensus. https://news.ycombinator.com/item?id=45418763
It's been a moving goalpost but I think the point where people will be forced to acknowledge it is when fully autonomous agents are outcompeting most humans in most areas.
So long as half of people are employed or in business, these people will insist that it's not AGI yet.
Until AI can fully replace you in your job, it's going to continue to feel like a tool.
Robotics is also a big one.
Given a useful-enough general purpose body (with multiple appendage options), one of the most significant applications of whatever we end up calling AGI should be finally seeing most of our household chores properly roboticized.
When I can actually give plain language descriptions of 'simple' manual tasks around the house to a machine the same way I would to, say, a human 4th grader, and not have to spend more time helping it get through the task than it would take me to do it myself, that is when I will feel we have turned the corner.
I still am not at all convinced I will see this within the next few decades I probably have left.
Without denigrating the importance of robotics at all (it is important), I don’t see the connection.
The military would pay 1000x what a household would for the same capability, and they are nowhere near the ability to do that. Which should tell you all you need to know.
I wonder if all the grad students that struggle to find jobs now and all the cheap workers in India who were laid off are "feeling the AGI" then.
It is intelligence created by design rather than by natural selection.
The limitation of your definition is that any intelligence that is untrained will have a high rate of failure.
So, an intelligence may have evolved over geological time or over laboratory time, but its ability to learn to think and solve problems is what will distinguish it from that high rate of general failure.
[flagged]
Please don't fulminate. This is in the site guidelines: https://news.ycombinator.com/newsguidelines.html.
Stuart Russell said AGI is coming and that we will get 45 trillion dollars from them.
That's what I'm waiting for.
(He didn't specify when or how the money will get here, but I'm betting that I'll get my fair share.)
I (and I’m being serious) assumed AGI would break into the world’s financial institutions and steal the 45 trillion.
Stuart was saying 15,000 tn dollars here https://youtu.be/z4M6vN31Vc0?t=1420
Your cheque will be in the post shortly.
Hyperinflation?
[flagged]
"Please don't sneer, including at the rest of the community." It's reliably a marker of bad comments and worse threads.
https://news.ycombinator.com/newsguidelines.html
p.s. HN is pretty evenly divided on AI, and if one side has the advantage, it's probably the anti.
That's funny. I see half of everyone on HN being critical of AI, often unfairly so, but we only ever notice the people we disagree with.
I'm guilty of this as well, otherwise I wouldn't be writing this.
You may be the first person I've ever seen making this point, besides myself making it ad nauseam:
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
(or maybe I just haven't noticed the others because I agree with them)
Which is weird given that AI critique usually gets downvoted while the frontpage is full of "look what this new model can do" posts every day.
[flagged]
"Please don't post insinuations about astroturfing, shilling, bots, brigading, foreign agents and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data."
https://news.ycombinator.com/newsguidelines.html
https://hn.algolia.com/?sort=byDate&dateRange=all&type=comme...
It isn’t a dichotomy. It is possible for AI to be useful, not a scam, yet also overhyped by people who do not understand it.
I'm a big AI/ML enthusiast (published one paper!) and was always flabbergasted to see scientists go off the typical provable/testable lane and venture into philosophical and emotional territories.
It would mean actually reasoning, not just applying stats to look like reasoning.
What do you mean by “just applying stats”?