> Here are the principles that guide our work.
> 1. Democratization. We will resist the potential of this technology to consolidate power in the hands of the few.
For example they could publish their models and research... instead of doing the opposite of what they claim is their very first principle.
I prefer distributed power over a democratic system, since the latter is often abused.
If large models become a 100x productivity multiplier, the companies that own them can charge crazy money for access, and rich/moneyed people will dominate the world in no time. Today many corporations are happy to pay $5k/mo/user. Everyday people and small companies can't afford that. I can't. We need to build an open ecosystem that at least shrinks the gap.
Learn to run your own models. Get yourselves at least a cheap GPU or even share one with friends. Join groups. There's a lot to do from data to fine-tuning.
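For anyone who wants a concrete starting point, here is a minimal sketch of running an open-weight model locally, assuming Python with the Hugging Face transformers library (plus accelerate for device_map) and a model small enough for a cheap consumer GPU; the model name is only an example, swap in whatever you can actually fit.

    # Minimal local-inference sketch (assumptions: transformers + accelerate installed,
    # and an open-weight model that fits your hardware; the model name is only an example).
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "Qwen/Qwen2.5-1.5B-Instruct"  # example small open-weight model

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        torch_dtype=torch.float16,  # half precision to fit on a cheap GPU
        device_map="auto",          # uses the GPU if present, otherwise CPU
    )

    prompt = "In two sentences, why does running models locally matter?"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

From there, fine-tuning on your own data with something like LoRA/PEFT is the usual next step, and it's doable on shared or rented hardware.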
The other day Anthropic cut off 100 users of a company without warning and stonewalled them: https://old.reddit.com/r/ClaudeAI/comments/1sspwz2/psa_anthr...
Or they could resist harvesting everyone's work for free to turn into their own revenue
This is one of many factors that precipitated the Soviet collapse.
Turn on the news and you know the language being spewed has no relation to reality. A society full of liars where people say the exact opposite of the truth. Now that LLMs can produce infinitely many words for free, trust in language is falling to all-time lows.
Eventually people just stop believing in words, the fundamental unit of human communication.
I can't recommend Adam Curtis' Hypernormalisation enough, now more than ever.
> What emerged instead was a fake version of the society. The Soviet Union became a society where everyone knew that what their leaders said was not real.
> Everybody had to play along and pretend that it was real, because no one could imagine any alternative. One Soviet writer called it "hypernormalisation."
Apologies for the naive question (because I haven't read the book). I grew up with the Evil Empire waiting to nuke me until Gorbachev provided a brief respite before the KGB returned. As I recall, they were presented as an enemy with almost but just barely not quite unlimited capacities. I still don't understand what happened in terms of global geopolitics in the last forty years.
Does the book suggest that the Soviet collapse was caused by, rather than delayed by, their Orwellian perversion of language?
Hypernormalisation yes!
Curtis is my favorite documentarian :)
All the others are great too but Hypernormalisation is the most relevant to this.
Watching that one, plus Yuri Bezmenov's masterclass and long interview, is life-changing
they are trying to co-opt and dilute the term "democratization".
Remember this one from OpenAI's principles?
> We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”
What do people think is the probability that OpenAI would ever actually do this?
> value-aligned
If the other project were equally aligned with the value OpenAI places on consolidating power and wealth onto Sam Altman, I don't see why OpenAI wouldn't do what they say.
I think you hit the bullseye here, but they were actually hoping that you would infer this to mean: "value-aligned with humankind."
Extremely high!
Those various caveats there — “value-aligned”, “safety-conscious”, “case-by-case agreements” — probably mean that no project ever will be “worthy” of OpenAI’s assistance.
In the unlikely event that an abiding project appears, then yeah, sure, it’s very probable that OpenAI would assist it :)
Nearly 100%.
Let me reframe this for you — “If we find a team substantially closer to AGI than us, we would seek to merge with them.“
Knowing sama, that's exactly what he would do. Except the story wouldn't end with OpenAI collaborating with a competitor who is better than them; OpenAI would collaborate with them to ensure they're destroyed from the inside out, so that only OpenAI can dominate eventually. "Eventual dominance" architecture, you know.
Simply put, I don't think OpenAI has principles, otherwise they'd still be open.
Do I even need to read their article?
No, I don't.
Change my mind. I just had King's Day in the Netherlands, so maybe I'm too alcohol-fueled, but I think I have the right amount of alcohol to call Sam Altman and the rest of the leadership out on it. They don't have principles, not good ones anyway.
Glaring lack of, "We will not participate in the creation or operation of 'kill bots'."
I can't believe it has to be said. Yet, here we are. Nice to haves include: "We will not participate in the use of AI for mass surveillance," and "We will not participate in the use of AI for (cyber-)warfare."
I'm not sure how intelligence and killing go together. The soldiers who are getting blown up by those suicide drones with just enough intelligence to recognize targets and chase them are infinitely more intelligent than said drones.
I'm kind of not sure how much intelligence factors into this compared to being faster (and cheaper) than your enemy.
I would even dare to explore the corollary - the people who sell the idea to the MIC that strong AI will turn their weapons into unstoppable killing machines are in fact far overselling the amount of improvement these systems can bring.
Outside of coding, that is probably the most lucrative application for AI. The theatre of war is forgiving of small collateral damage, and it’s like this technology was built for kill bots.
I don't think the MIC is that lucrative even in the US, compared to the private sector. Consider the F-35: there have been something like 1,500 planes, and at $100mm apiece the total comes out to $150B over two decades. And that's revenue; I doubt they have a huge margin on it, compared to software, where the costs are minimal.
Despite there being a war on, Lockheed Martin stock has performed close to the market. Boeing has repeatedly reported that its commercial division is doing much better than its defense one, and Boeing isn't doing so hot these days.
I remember reading that Boeing's commercial division
> I remember reading that Boeing's commercial division
This sentence appears to be unfinished.
It’s lucrative to companies that see how ridiculously wasteful the current military-industrial complex is. There can be competitors that provide similar effectiveness for a fraction of the cost.
I don't want to judge the US armed forces, so the following is hypothetical: I heard a rumor that some procurement guy shared that the US Army paid about $10k for a set of power tools for mechanics that was commercially available and together cost about $1k.
Now, I learned from ChatGPT that the US Army has about 100k mechanics. Assuming every one gets such a set and the extra $9k is pure profit, the guy making the sale gets about $900mm out of it (the arithmetic is sketched below): a staggering amount, but not much compared to what big companies rake in, and financially not very sophisticated.
Also I'd like to stress that the above scenario is conjecture - I have far too little knowledge of the actual specifics of the org to just make such an accusation openly.
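For what it's worth, the back-of-envelope arithmetic above does check out. A quick sketch, using only the numbers the commenter assumes (none of them verified procurement data):

    # All figures are the assumptions stated above, not verified data.
    mechanics = 100_000           # rough US Army mechanic headcount cited above
    price_paid = 10_000           # claimed price per tool set, in dollars
    commercial_price = 1_000      # claimed off-the-shelf price for the same set
    markup_per_set = price_paid - commercial_price   # 9,000
    total_markup = mechanics * markup_per_set        # 900,000,000
    print(f"Total markup if every mechanic gets one set: ${total_markup:,}")
    # -> Total markup if every mechanic gets one set: $900,000,000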
$1k for the tools, $5k for dealing with the paperwork, $2k for having to stock 4,000 sets for years because they need to have direct replacements forever in order to make them standard issue, $500 for needing a different version for the army, navy, air force, and coast guard, and $500 profit.
What a great opportunity to demonstrate the alignment problem between harmful AI, economic incentives, and standing up for one's principles in spite of having something to gain.
Don't you think the lucrative nature is more about how fascists don't care about how much money they throw at killing $BROWN_RACE
The technology OpenAI sells is actually not that good for kill bots; we have Boston Dynamics for that. I mean, to be real here, they're already better than human soldiers: deploying 100 of the doggies and letting them run loose could wipe out any fortified group.
Especially if you include things that are not normally acceptable such as suicide bombers, poison gas, etc.
Also, real modern warfare has shown that cheap drones dominate. So unless we get a kill-bot that can withstand explosives while staying lightweight and operable with a good kill/death ratio (drones are at 1.0 or less), kill-bots, costing orders of magnitude more than drones, would have to have a KD of around 100 just to break even.
Counterpoint: Killbots are vulnerable to smaller, cheaper bots deployed in defensive positions.
"Principles" of "Open" AI
* This will change anytime we want, whether you agree or not
* For employees following the current "principles": when we change them, if you are strict about principles, please leave; we will hire new people
Not even worth the HTML it's written in. OpenAI has demonstrated they don't have any strongly held principles, repeatedly.
> We envision a world with widespread flourishing at a level that is currently difficult to imagine
Help me imagine it, what are some examples of widespread flourishing we can look forward to?
SamA becoming more powerful and wealthy.
SamAs hypothetical friends - not sure if he has any _real_ friends as that requires trust, which is famously antithetical to interacting with SamA - becoming more powerful and wealthy.
Need any more examples of flourishing?
In a simplified world with 1 trillionaire and 99 people who have $1 each, the average person has ~$10 billion. The average person is flourishing!
Also, if you kill 10 people a year, you will have an accelerating rate of average wealth growth!
Just take away their jobs by replacing them with AI and have them kill themselves. So much cheaper :(
This is the key: every AI company makes a big deal about how AGI will be transformative, but we're just supposed to take it on faith that this transformation will be good.
absolute underpants gnomes reasoning
Yeah, considering we're somewhat far along the maturity curve with LLMs and diffusion, we can kind of extrapolate where this is going. Unless there's another game-changing breakthrough, I can't connect the dots between what we have now and what's being described here.
I'm not super optimistic personally, but isn't the optimistic outcome obvious? If AGI takes over, solves robotics (and doesn't kill us all), then we could see the elimination of all human labor for the purposes of meeting necessities.
Unless AI can solve the "people with access to capital would prefer to keep it rather than share it freely" problem, this doesn't actually lead to human flourishing. You need to pay for the labor-bot and you don't have any money to do so because there are no jobs available for you.
Given that AI doesn't seem to be solving the allocation problem even for things like distributing fairly inexpensive TB drugs to people who need them, I'm not holding my breath.
Do you think our economic system is prepared to cope with that?
I read "widespread flourishing" as referring to a scope of influence and "at a level that is difficult to imagine" as referring to an amount of accumulated wealth.
But surely the people who aren't committing to not use technology for autonomous death deserve a more charitable reading.
Maybe something like swarms of autonomous killer drones invading Greenland and Canada?
There are people that will call you a luddite for this.
"The world is going to be so much better! We're not really sure how, but, trust us, definitely better!"
And give us money, do not put any regulation in place, actually ignore when we break the law, and for sure, no taxes please!
The more specific the better
Trying not to succumb to cynicism, I can imagine the total elimination of food shortages by making farming completely robot-driven, AI-driven gene therapy ending most disease, robot-driven, scalable power generation through automated solar-panel building, installation, and maintenance, one robot at home that can replace all tradesmen, etc. There is a lot that could be possible if the promises pan out, and they don’t seem completely out of reach at this point.
The reason for food shortages is not a scarcity of food, and the same goes for shelter, clothing, and transportation.
We have enough technology and resources to make it a heaven for everyone except for unpreventable diseases or similar.
Yet we do not because we did not crack the human-alignment problem.
That’s the problem we need to solve, not the resource problem.
If we solve the resource problem before the human alignment problem, we will cause unimaginable suffering.
It will sort itself out over time. It will be painful. Probably not over our lifetime.
This is the kind of attitude that is the reason it won't happen
I’ll take it over whatever Luddite fucking nonsense the progressives have going on right now. I’m a progressive and will fight that short-term nonsense perspective vociferously. It’s progressive because it’s forward-looking, not some regressive bullshit like we have now.
The billionaires controlling this aren't going to save us no matter how much wealth and power they gain.
Billionaires are the ones making paradigm-shifting progress because they are the only ones who can afford it. I’m all for taxing billionaires, but if you don't understand how paradigm shifts have worked in the US for 80 years, you’re just going on gut feel, not evidence.
> It will be painful
you are choosing the pain of the masses over the pain of the wealthy.
Today more than one million people die annually of TB, a disease that is completely treatable with a fairly inexpensive medicine. We could pretty much eliminate TB from the entire world if we just distributed this medicine differently than we currently do. Yet still there will be thousands of deaths due to TB today.
Why would some fancy new gene therapy be different? We already know what happens when we have a treatment for a deadly disease. People without money still die from it.
> Power in the future can either be held by a small handful of companies using and controlling superintelligence, or it can be held in a decentralized way by people. We believe the latter is much better...
Superintelligence aside, power in the present is already held by a small handful of companies, at least in the west. The principles are pretty good, vacuous though they may be.
Having it in the hands of public companies or foundations seems preferable to me to having it in the hands of private companies or individuals.
This principle would be good if not for the irony of ClosedAI saying that.
Funny how true adherence to this would only come from open-sourcing the models.
And this won’t happen.
So another “do no evil” bla-bla which will ultimately be dropped
Those are my principles, and if you don't like them... well, I have others!
First thing I thought about. I have no reason to believe that it’s not accurate for most contemporary tech corps.
I think Sam's only principle is to make money. At all costs.
I'm not sure it's money so much as influence, which money greatly facilitates.
Why should we trust a for profit tech concern to do the right thing this time given the historical context available to us?
forget all context; believe capitalism will resolve it this time using $TECHNOLOGY
Capitalism, but also be sure to use the government to rescue the industry when things go bad
"To be clear when we floated the idea of 'federal loan guarantees' we were only asking the government subsidize building a bunch of cheap supply for us. We definitely believe in competitive markets." https://www.reuters.com/business/openai-does-not-want-govern...
I think in general many people at OpenAI believe in AI enabling a prosperous future for all: diseases cured, work becoming optional thanks to supply-side deflation from AI automating much of the economy, everyone in the world having access to essentially the best teachers, doctors, etc. for mere pennies.
However, I believe a lot of this is contingent on "things will be so prosperous we will figure out the hard stuff later". One major thing happening now is that the nature of AI enabling better AI means the improvements and advances concentrate the gains among fewer and fewer people. The AI boom has minted a handful of deca-billionaires, while millions lose their jobs or can't compete in this winner-takes-all world.
Of course universal basic income would be more feasible in a world enhanced by AI productivity, but in the meantime, the trend is "A few people get very very very rich and everyone else enters the lottery of circumstance over whether the chaos caused by AI will land them in a better or worse position."
What evidence do we have that this trend won't continue into this future of "universal prosperity"? Will current OpenAI employees and tech CEOs essentially become permanent dynasties, lording over empires of autonomous robots while the average person gets to share one? (Universal prosperity cannot change the amount of rare-earth minerals on the planet). Of course a space-faring asteroid mining future solves this, but not right away.
When a company writes down principles, you should be highly skeptical.
- Democratization. Why is it your prerogative, sam bro? In other words, what he means is: consolidate access so "We" can democratize. We choose who gets what.
- Empowerment. People are empowered by default. It's the totalitarians who curtail that empowerment. The fact that Sam thinks he has the power to "empower" people is arrogant at best. People are empowered already; you just build the tools and make them accessible at a reasonable price.
- Universal prosperity. This one pisses me off the most. Who TF made you the benevolent mayor of the universe? Are you running for president of the universe, and people ask: Hey Sam, what would you do as president of the universe? "I will bring universal prosperity"... yaaaay Sama for president. FFS!
- Adaptability. Yep, we'll kiss the ring of whoever is in power, until we get in power. Then we will adapt to your needs if needed.
You know who else has principles: Meta. (https://www.meta.com/about/company-info/?srsltid=AfmBOooT6i0...)
- Give people a voice. Read: ensure you control their voice. My take: who tf are you to give anyone a voice? Everyone HAS a voice.
- Build connection and community. Read: ensure that you control all the connections and communities so that you can steer elections and other important things. My take: people have been connecting already for thousands of years.
- Serve everyone Read: control who you serve. My take: Serve everyone, except for totalitarian regimes and people with ideas that are not aligned with ours.
etc. etc.
was expecting this page to be blank...
That would require more honesty than they are capable of. Putting out a bunch of platitudes that you don't actually care about is easy enough and will convince the naive who already want to believe it.
Why even put this out? This is weeks after the whole Department of War thing, days after the article about Sam Altman being a pathological liar (as if we needed one).
If the intention is to bury all that then I think it's going to have the exact opposite effect and make everyone remember.
Maybe the goal is to release something publicly every day; you can’t get away from OpenAI announcements these last two weeks. Competing for mindshare.
They plan to go public. That’s why
Interesting how this was released on the eve of the Musk v. Altman trial ...
For anyone who's a parent, their image model won't make ANY change to an image containing a child AT ALL, REGARDLESS OF REQUEST.
Consent, harassment between teens, etc. are the cited reasons, I guess.
I wonder how much money OpenAI has spent lobbying for alternative economic models research (or better—spent on such research themselves). Then compare that to how much they’ve spent lobbying to enable deploying this all as fast as possible. Those numbers will speak louder than any words on their principles.
Translation: We know that many of you don't trust us and that our image has degraded a lot recently, but we need you to trust us.
Show, don't tell.
Snatching this contract with the military was not a good sign of things to come from OpenAI or Sam Altman.
The documented pattern of lies and scheming is real.
I believe Groucho Marx once said: "I'm a man of principles. If you don't like them, I have others!"
Related:
Altman's 'beliefs' in his response to the Molotov cocktail
https://blog.samaltman.com/2279512 (https://news.ycombinator.com/item?id=47724921)
PRINCIPLES.md
I love the optimism in this document. I'm listening to the audiobook The Optimist right now, which is about Sam Altman, and it jibes with this document.
I can definitely see how it is going to create a lot more value to society.
Are we defining "society" as "Sam Altman and his closest investors" now?
What the hell are you talking about? More value to society? By allowing the US military to use their models? You must be a sadist.
Here is the translated version:
1. Democratization is centralization. We will resist the potential of this technology to consolidate power in the hands of the few, by consolidating it in the hands of us, who are not few but correct.
2. Empowerment is compliance. We believe AGI can empower everyone to achieve the goals we have determined are worth achieving.
3. Prosperity is scarcity. We want a future where everyone can have an excellent life, which will require new economic models because the old ones will no longer function, for reasons unrelated to us.
4. Resilience is dependence. AGI will introduce new risks, which only AGI can solve, which only we can build.
5. Adaptability is revisionism. We continue to believe the only way to meet the challenges of an unpredictable future is to be prepared to update our positions, our charter, our nonprofit status, our safety commitments, our board, our cofounders, and our prior statements, all of which were operative at the time and are now inoperative and were never said.
6. Please don't look at our financials. They are horrible and we are hoping to sucker people into an IPO before all of this implodes. The least your Grandma can do for us is give us 2% of her S&P 500 portfolio so we can exit before it goes to zero. This is AGI after all.
What is the true definition of "AGI" in this context?
Nobody knows. So, whatever OpenAI wants it to be. That’s like the dream, just morph it into whatever you want, constantly, to justify anything.
> …our principals…
Principals my hole.
OpenAI is a business based on stolen work run by a man that’s busy stealing scans of people’s eyes.
Actions speak louder than words.
This could have been one sentence long:
"Our guiding principle is to make as much money as possible at the expense of absolutely everything and everyone on the planet"
Of course, you could paste that into basically any corp's mission statement or values page and it wouldn't be out of place.
Newsflash: they're changing... again
Hilarious how the link leads to a 404: https://imgur.com/a/oOpoh1q
> AI has the potential to significantly improve many aspects of society.
...as well as the potential to significantly worsen many aspects of society
Says the org that's no longer a nonprofit.
Did The Onion take over InfoWars or OpenAI?
Why do companies put stuff like this out in 2026?
Like who is the intended audience and what purpose does this serve?
I can't imagine that this will have the same powerful effect that Google's 'don't be evil' stuff did all those years ago.
People are just too cynical and have enough experience being burned by big tech companies. You might think that I'm speaking from a place of age and experience, but I think this applies to everyone, young and old. We're all using these devices and services from the cradle now, it seems, and we've all been burned by them or know someone who has been: kids know the big tech rug pull just like they know the rug they crawl on while sucking on a pacifier.
So what's the point of this? Is the intended audience internal? Like, is it just for the people who work at OpenAI, to distract them from the stories they hear in the news about their company and the stuff they hear people say about it in social gatherings before they admit that they work for OpenAI?
In light of the two recent attacks on his domicile, maybe the reasoning is to narrow top of funnel of skeptics. I don't see how anyone in this day and age would buy it but then again, I've known folks who just live their lives in profound ignorance of politics...
> Like who is the intended audience and what purpose does this serve?
Green Party voters; Technophobic readers of The Guardian[1]; Account managers at image-washing nonprofits; and possibly an anti-Roko's Basilisk.
[1] As opposed to technophilic Guardian subscribers, myself included; just to be clear that I'm not dunking on the newspaper itself.
Wow that's a lot of hollow words that are backed by precisely fuck all.
Dystopian humblebrag gymnastics.
lol "principles" like Sam Altman has any ...
All the major AI shops are out trying to be the king of the jungle -- I don't think there can be a market, in the end, big enough for all of them to be worth $2T+.
I mean this is why alignment matters.
This was an impressive load of bullshit. I wonder if they asked ChatGPT to generate it.