"like it or not". The defeatist/inevitabilism of this kind of thing. We've heard it before with other things. Are we supposed to just accept climate impacts? Accept ongoing airborne pandemic impacts and contstant sickness? Stop using it. It doesn't make it better or paper over the negative aspects that well. It doesn't mean many will accept it.
And also the 'matter of fact'-ism of large statements like "it will still take away jobs and opportunities on a large scale" and then just carrying on as if there isn't any gravity to those points.
It isn't about the tools or using them, it's about the scale. The scale of impact is immense and we're not ready to handle it in a multitude of areas because of all the areas technology touches. Millions of jobs erased with no clear replacement? Value of creative work diminished, leading to more opportunities erased? The scale of 'bad' actors abusing the tools and impacting a whole bunch of spheres, from information dispersal to creative industries etc. Not even getting into environmental and land-use impacts on spaces with data centers and towns etc (again, it's the scale that gets ya). And for what? Removing a huge chunk of human activity & expression, for what?
I think there's many healthy reactions to the situations involving AI, but I am slightly concerned over how gamed AI contrarianism is at times.
A low-hanging example is social media engagement-farming accounts like "pictures AI could never create", when meanwhile it's entirely just stolen slop content for the sake of getting a paycheck out of said engagement.
Social media nonsense is one thing, but I feel we're going to increasingly see people's frustrations redirected and weaponized in more harmful ways. It's an easy hair trigger towards brigading.
I read a post in the last election cycle where somebody was horrified by the polarization of modern politics, but had a solution: Explain to everyone about the evils of high fructose corn syrup, and the people would join together and rise up to demand more comprehensive regulation, forming a nucleus of harmony that would cross issues and save the country.
There seems to be a similar narrative around AI, that the sheep will look around and realize how much it is lying to them, and combine to throw off the oppressor. I kind of wish I could recapture that kind of optimism.
> These same voices overuse the phrase “AI slop” to disparage the remarkable images, documents, videos, and code that AI models produce at the touch of a button.
oh i find them remarkable alright. how is it possible to be this out-of-touch?
The Ayn Rand quote ("Man cannot survive except through his mind. He comes on earth unarmed. His brain is his only weapon") neatly distills precisely what worries me the most about an AI dominated future: that those in control of our destiny seem to have swallowed her misanthropic philosophy that (to paraphrase Rand again) "he is not a social animal".
Man, in fact, cannot survive without society. You don't have to be a communist to realise this. Until now the stratification of society has had certain unavoidable limits - everyone has a finite lifespan, everyone has an upper bound of intelligence and physical ability - as well as self-imposed limits of regulation through states or unions. When kings and empires have come to dominate, revolutions have at least attempted to reform the social order, if not reset it. I fear that with AGI in the hands of the likes of Musk and Thiel we may soon be entering an age when men with Rand's worldview have the kind of power that makes them utterly untouchable, and any chance of building a just and democratic future becomes impossible.
It can be simultaneously true that AI is a world-changing technology, with drastic consequences for how we live our lives, and a massive financial bubble, built on extremely overconfident bets.
This exact scenario happened with the dotcom bubble. Inarguably Internet services drastically changed everyone's daily lives, yet the buildup of Internet infrastructure was based on excessively optimistic predictions, leading to a bubble.
the dotcom bubble (and the current AI bubble) were based on excessively optimistic bets, by people who know/knew NOTHING about those technologies. It was exuberance by greedy Wall Street players and laymen looking to make a buck.
Why wouldn't the tech CEOs take the billions of dollars thrown at them if they had the chance?
Having lived through the Dot Com Bubble/Bomb, the AI situation feels eerily similar.
The hype and over promotion of AI, as well as polluting the commons with slop are "unfortunate"; but the power of what it can do and how it can transform how we live and work is also undeniable.
> what it can do and how it can transform how we live and work is also undeniable.
I’m still not so sure on that part. Maybe, eventually? But it feels like we are still trying to find a problem for it to solve.
Has there been any actual, life-transformative use cases from an LLM outside of code generation? I can certainly sit here and say how impactful Claude Code has been for me, but I honestly can't say the same thing for the other users where I work. In fact, the quality of emails has gone down since we unleashed Copilot to the world, and so far no one has reported any real productivity gains.
> Has there been any actual, life transformative use cases from an LLM outside of code generation?
Content analysis and summarization is a big win in my view.
Having also been around during the emergence of the personal computer revolution I'm reminded of how having a home computer could be helpful for keeping recipes and balancing checkbooks -- it was the promise of "someday" that fueled optimism. Then the killer apps of spreadsheets, word processing, and desktop publishing sealed the deal.
Following that analogy we're at the Apple ][ stage -- it works and shows capabilities but there's likely so much more ahead.
When AI first passed the original Turing Test in spirit - producing text indistinguishable from a human - we didn’t declare machines intelligent. Instead, we raised the bar: now we ask if they can create music, art, or literature that feels human.
But if every time AI meets the challenge, we redefine the challenge, are we really measuring intelligence - or just defending human exceptionalism? At what point do we admit that creativity isn’t a mystical trait, but a process that can emerge from algorithms as well as neurons?
Here’s the real question: should we measure AI against the best humans can do - Einstein, Picasso, and Coltrane - standards most humans themselves can’t reach? Or should we measure success by how well AI enables the next Einstein, Picasso, and Coltrane?
I think we need to move to the era of Assisted Intelligence, a symbiotic relationship between AI and human intelligence.
> At what point do we admit that creativity isn’t a mystical trait, but a process that can emerge from algorithms as well as neurons?
I think anyone who already works in a creative field does acknowledge this. Creativity is, in fact, a process, and a skill that can be broken down into steps, taught to others, and practiced. Graham Wallas broke down the creative process all the way back in the 1920s and it boils down to making novel and valuable connections between existing ideas. What does an LLM do other than that exact process?
So we agree? I think one of the problems people have with creativity and AI is they associate creativity with something spiritual and so bristle at the thought of AI being creative because they believe machines have no soul.
I'm an optimist. I think what AI is going to do is show us what it truly means to be human.
> Computer scientist Louis Rosenberg
they have conveniently omitted that he's also CEO of "UNANIMOUS AI"
having seen this comment before reading the article i was entirely unsurprised at him quoting ayn rand lol
This reminds me of Amara's law: We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.
"like it or not". The defeatist/inevitabilism of this kind of thing. We've heard it before with other things. Are we supposed to just accept climate impacts? Accept ongoing airborne pandemic impacts and contstant sickness? Stop using it. It doesn't make it better or paper over the negative aspects that well. It doesn't mean many will accept it.
And also the 'matter of fact'-ism of large statements like "it will still take away jobs and opportunities on a large scale" and then just carrying on as if there isn't any gravity to those points.
It isn't about the tools or using them, it's about the scale. The scale of impact is immense and we're not ready to handle it in a mutitude of areas because of all the areas technology touches. Millions of jobs erased with no clear replacement? Value of creative work diminshed leading to more opportunities erased? Scale of 'bad' actors abusing the tools and impacting a whole bunch of spheres from information dispersal to creative industries etc. Not even getting into environmental and land-use impacts to spaces with data centers and towns etc (again, it's the scale that gets ya). And for what? Removing a huge chunk of human activity & expression, for what?
Raising concerns because AI slop can literally destroy your brand identity or reputation is called grieving, then. Thanks.
You can't ignore the problems of current AI agents by "rising above them". The people who question it aren't in denial, you guys are.
Can you explain further how AI slop can destroy your brand/reputation?
Dupe from yesterday: https://news.ycombinator.com/item?id=46120830
Just because it's slop doesn't mean it can't profoundly reshape society
[Cope Intensifies]
Copium from a bag holder.