Follow the money: Much of the "predictions" and "everyone is sooo worried" talk originates with, or is amplified by, people who stand to profit from it, particularly by attracting investors and grants.
Plus, it's simply a more exciting future to imagine and discuss than "it'll be moderately useful in some specific applications, but most of our problems will still be around."
I don't think it's necessarily that LLMs by themselves will lead to AGI, but that LLMs proved what is possible with enough compute and money. In a few short years, we went from "bots struggling with CAPTCHAs" to "bots easily passing the Turing test and able to fool most people". With that came bazillions in investments, jumpstarting an AI/ML arms race that may one day take us to AGI.
People who want a messiah always see the messiah as imminent.
Of course, the problem here being that AGI would emphatically not be a/the messiah…
LLMs will not lead to AGI. Now, LLMs may be sufficiently capable to have us, eventually, realize we're not as generally intelligent or as capable as we think, but the realization of the absence of NGI (natural general intelligence) does not an AGI make.