If the tech industry leaders demonstrated any amount of responsibility, accountability, or care for the general well-being of people, then I think this would instead be an exciting time for tech innovation. The enthusiasm I felt decades ago is completely gone.
There was a recent talk from CCC that got a bit deeper into Peter Thiel's "Antichrist" ideology and where it comes from.
The bottom line of that talk seemed even worse: that for some tech leaders, the general well-being of the population, or "classical" arguments for progress such as improving living conditions and advancing mutual understanding, aren't even orthogonal to their goals but are explicit anti-goals, because a world with too much well-being and too little conflict would ostensibly lead to stagnation, loss of freedom and innovation, and the state Thiel termed the "Antichrist".
Too much conflict is bad as well because it carries the risk of complete destruction, so they'd aim for some kind of ideal level of conflict and suffering in the world, like some sort of twisted inflation target.
Imagine that in ten thousand years aliens actually invade Earth and try to enslave humanity. If we spend that time perfecting the art of war, we'll have some chance of survival. If we spend it just growing tomatoes, no fucking way. So yes, from the perspective of humanity as a whole, it makes sense to say that the perfect amount of war and suffering is non-zero.
> but aliens won't invade Earth
Native Americans thought the same.
Any alien race that is actually able to travel all the way to Earth with the intention of invasion, with enough resources to presumably make the trip back home with all the loot/slaves (why else would they invade?), will be so far beyond our human technical abilities that we stand basically zero chance of survival.
I think it's absurd to justify killing one another in wars because it might one day save us from some hypothetical invasion.
> will be so far beyond our human technical abilities that we stand basically 0 chance of survival.
Not if we also develop technical abilities before the invasion. Here though you could make an argument that it would be much more effective if we cooperated instead of fighting.
> I think it's absurd to justify killing one another in wars because it might one day save us from some hypothetical invasion.
Well, the modern Western moral system holds human life as the highest value. But even nowadays we still have moral systems that place the existence of a group far above the existence of an individual. Imagine that humanity as a whole is like an organism: you don't really care about individual cells in your body, do you? A vaccination hurts, but ultimately it leaves you stronger, right?
I'd say "let's keep killing each other in wars so that we can maybe defend ourselves in a hypothetical alien invasion" is indeed absurd, but the general idea that "wars allow humanity as a whole to practice self-defense" isn't. Europeans didn't die from Native American diseases because millions of Europeans had already died in the past from similar diseases, building up collective immunity.
I don't think you have any concept of the technologies involved here.
This isn't a case of who has the biggest gun. This is a case of someone has a gun that can wipe out your entire planet and you have a rock.
No, a planetary society cannot stop an invasion once they're in orbit. It simply is not possible, not with any technology we can even imagine.
You very much do not at all understand the argument you're making.
We had to destroy the village in order to save it.
If the Earth is one big tomato garden, maybe aliens will leave us alone. Also, Native Americans were very aware that other groups of humans existed beyond their horizon, and that some of those groups were hostile to them. We have zero reason to believe there is anyone out in space hostile to us. In fact, we have plenty of evidence there isn't anyone at all out there.
Super good point. Only two options with no middle ground, and we should definitely choose to hurt an unlimited number of people on the off chance of an alien invasion in the distant future, and the even longer chance that going out of our way to hurt people now will somehow actually matter against a spacefaring, almost-certainly-interstellar hostile force.
What nonsense.
> Imagine that in ten thousand years aliens actually invade Earth and try to enslave humanity.
Insert Mr. Bean highway meme
I'd also be excited about this technology if it had come before everything we've seen in the last 25 years. It's irresponsibly naive not to understand by now that technological advances are being used more against us than for us.
It’s been really disgusting watching what some people I used to look up to have devolved into.
I don't think the opposition from the public is because they don't see value in AI. Quite the opposite: every single non-tech person I know has used AI tools and can immediately see the value.
The 'backlash' seems to come from the fact that people, especially white-collar workers, are finally realizing what blue-collar folks have been feeling for quite some time: that an overwhelming majority of technology-driven productivity gains accrue to capital owners, not workers, and AI is the ultimate productivity tool.
It doesn't help that capital owners no longer feel it necessary to even pretend. Like when CEOs openly salivate at the prospect of firing all workers and replacing them with AI. When people see their electricity rates go up to subsidize billionaires building AI data centers. When they see their real wages falling continually while they are told how good the economy is.
If the gains from AI were shared even a little with the regular people, they might not have the deep sense of unease and sometimes open hostility that we are seeing now.
> If the gains from AI were shared even a little with the regular people, they might not have the deep sense of unease and sometimes open hostility that we are seeing now.
Additionally the AIs are trained on creations by many of those same regular people. They're not just seeing the profits funnelled upwards, some of those profits are being generated through their own works!
And before someone tries to argue "that's just how art etc. work" - sure, but the difference is quantity. If I get inspired by another artist, I can generate output at the speed of one artist. With current AI models, it's like a big company is training millions of artists on your style to pump out new pieces as fast as possible.
It's also worth noting that computers have been largely an equalizing force, since you don't need much capital at all to produce software, just a PC and know-how that's freely available online. You can generally build software cheaper alone or in a small group than as a large company.
AI is taking away and monopolizing the means of production, and making what few white collar workers remain rent their productivity. This is a completely different dynamic.
> Like when CEOs openly salivate at the prospect of firing all workers and replacing them with AI.
CEOs aren't stupid.
Some CEOs claim they are laying off people because AI has increased productivity and they don't need those people. But that's a lie. Not an outright lie, because lying to investors constitutes securities fraud, but their language is carefully chosen so that the overall message is misleading while they retain plausible deniability.
Here's an example: Amazon fired 30,000 workers, supposedly because of AI. And here's an excerpt from a video [1] by Nate Jones (a prolific AI commentator who worked at Amazon for five years):
> Amazon fired 30,000 people because they needed the money today in order to buy GPUs to desperately try to secure a place in the future of AI cloud. That is the actual story. Now, that story does not sound as nice for Amazon as a future forward story about how we're automating with AI.

[1] https://natesnewsletter.substack.com/p/amazon-just-laid-off-...

Well said. It’s like human assembly lines being replaced with robots. Humans did those jobs for a really long time. Then they were replaced by automation.
We’re on a slippery slope, because what happens when AI gets even better over the next 5 years? Robots took time to fully replace one person on an assembly line; it didn’t happen overnight.
Basic income seems likely to become a reality soon. Otherwise you’re going to have slums and the wealthy.
>Like when CEOs openly salivate at the prospect of firing all workers and replacing them with AI.
I saw a series of ads in a train station the other day for some company claiming to offer "AI employees" that had slogans like "our employees never complain about overtime", "our employees don't ask about vacations", etc. and was just shocked at the brazenness of it.
You will find many such Marie Antoinettes in certain social circles, m/v, but mostly m. Living in such a bubble tends to warp one's perspective, partly as self-justification. The people below become resources, accounted for like energy, materials, and other consumables. Most people don't notice it anymore, but it is still a telltale sign of how much a company values humans when it delegates the herding to a so-called Human Resources department.
The default rebuttal is that "Human Resources" is just a standard term. That it has become the standard term is exactly the point.
> certain social circles, m/v, but mostly m.
What does this mean? Best I can come up with is "male/vemale".
oops, sry, it is vemale indeed :)
Yeah. And the same people seem puzzled when they see random citizens hating on AI with passion and treating it as a threat.
It would be funny if the ads were made by a sole proprietor.
Wow, this is bubble talk. Who are you talking to?! I regularly engage with people who are totally ambivalent towards genAI at best and horrified by genAI at worst. The only people "immediately seeing the value" seem to be marketing grifters on LinkedIn.
"The regular people" …what??
https://archive.is/WWBO4
and who is doing the polarizing?
Honestly, I'm pretty sure that most media companies' main business now is creating problems that they can then report on. Whip up the crowd with fear and anxiety, then capitalize on it by feeding them a never-ending stream of low-key doom articles that they'll be compelled to read by their newfound neuroticism.
From the article, an OpenAI researcher apparently:
> “Every time I use Codex to solve some issue late at night or GPT helps me figure out a difficult strategic problem, I feel: what a relief. There are so few minds on Earth that are both intelligent and persistent enough to generate new insights and keep the torch of scientific civilization alive. Now you have potentially infinite minds to throw at infinite potential problems. Your computer friend that never takes the day off, never gets bored, never checks out and stops trying.”
Um, this person needs help? Serious mental issues, hello?! It really concerns me how many people are having breaks with reality, and I don't only mean the poor people who are sadly taking their own lives.
I think it's another example of the Gell-Mann amnesia effect: if you're an expert in something, the AI is often wrong and you're confused why people are saying it's great; if you're less skilled, it can be impressive.