This seems to have a healthy helping of AI editing help (if not fully generated by AI). The links don't quite go to the sources that they should and there's a lot of AI-isms.
Anyway, the cost calculations seem crazy high (and are pulled from an FT article). In particular, they are based on a calculation that assumes Sora videos take 10 minutes to generate (which seems simply wrong; I've personally had Sora videos return fully formed in well under 10 minutes), fully saturate 4 H200s at once (this seems wrong given batching; I would assume they batch a lot of tokens together per forward pass), and, crucially, that OpenAI is paying full spot, end-user pricing for an H200 (at $2 an hour). As an individual, I can rent an H200 for $2 an hour on e.g. vast.ai (and sometimes even cheaper than that!). There is absolutely no way OpenAI is paying anywhere near that rate.
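For what it's worth, the disputed FT-style estimate is easy to reproduce; a minimal sketch, assuming the exact inputs criticized above (10 minutes of generation, 4 fully saturated H200s, $2 per GPU-hour), none of which are confirmed figures:

```python
# Back-of-envelope reproduction of the FT-style per-video cost estimate.
# Every input here is a disputed assumption, not a confirmed OpenAI figure.
gen_minutes = 10        # assumed wall-clock generation time per video
num_gpus = 4            # assumed H200s fully saturated per job
gpu_hour_price = 2.00   # assumed retail spot price per H200-hour (USD)

cost_per_video = (gen_minutes / 60) * num_gpus * gpu_hour_price
print(f"${cost_per_video:.2f} per video")  # prints "$1.33 per video"
```

With real batching across requests and wholesale GPU pricing, the true marginal cost would be some fraction of this.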
I also have no idea where the Appfigures $2.1 million comes from. As far as I can tell it doesn't exist at all in the linked website.
I don't really trust the numbers here.
I haven’t really been following this, but my understanding is that they’re cancelling this program - I haven’t dug into the “why” too much, seems like something about the Disney deal, “focusing on other initiatives”… My thought was that it’s because they’re not making money on it. Why else would they shut down a revenue stream? If it’s decent they don’t even need to improve it, it would be mostly passive income.
Other than money, a really good reason to shut down Sora is that it was a horrible idea in the first place that went completely against OpenAI's mission to make AI benefit humanity and improve lives. Sora was like TikTok, an app already thought to waste time and ruin attention spans, except even worse, because there was no real information: everything inside was AI generated. More than that, it had a dual use: it allowed generating fake footage of protests and the like, which people then reuploaded to other platforms to mislead others. There is nothing about Sora I can think of that benefited humanity; it was only a net negative and a race to the bottom toward more extreme memes and desensitizing people to reality.
There are many ways for a project to no longer be worth the company's attention. E.g. it might be that total costs, factoring in ongoing engineering energy and money (which is quite different from just compute costs!), are too high. It might be that the political risk exposure from the product isn't worth the benefits it brings (Sora was always a lightning rod for criticism). It might be that the opportunity cost of the engineering and/or compute resources spent on a product is too high (very different from absolute cost).
All this is to say, even for very compute cheap things, companies shut down "mostly passive income" revenue streams all the time (see e.g. Google's graveyard of products). There are all sorts of other organizational costs associated with ongoing maintenance of a product.
Sorry, I wrote the wrong links for several sources.
1. Appfigures $2.1M = https://appfigures.com/reports/app-profile/338340235920
2. Watermark bypass = https://www.404media.co/sora-2-watermark-removers-flood-the-...
3. Goldman Sachs $410B = https://www.tomshardware.com/tech-industry/artificial-intell...
The title is misleading. It should be read as “Some $20/month users who made videos on Sora cost OpenAI $65 in compute.”
OpenAI is most definitely in a position to be profitable. They are spending less than a third of their revenue on compute (all infrastructure costs combined).
Yeah fair, the $65 is for someone cranking out 50 clips/month. Most users were probably doing 5-10, so more like $6.50-$13 in compute. That's fine at $20/month.
Doesn't change the bigger picture much though. OpenAI's at $25B annualized revenue and still projecting $14B in losses for 2026. Sora wasn't the only problem, just the most obvious one.
The major reason for that is that they are pushing hard on capital expenditure (one-time training costs), building their own silicon, and people costs (they have between 4,000 and 6,000 employees). If OpenAI really wanted to be profitable today, I am sure they could do it, but they would no longer be competitive.
In other words, we're NO LONGER looking at the scenario where their unit economics were out of whack and lopsided against profitability. The AI frontier behemoths of today (OpenAI, Anthropic, and Google) are very well positioned for strong profitability, unless Chinese AI companies start gaining more market share in North America.
Yet, I don't think the pace of competition is sustainable. Once there is stronger pressure on them for profitability, we'll see them slow their moonshots, cut costs and focus more on the core business. Maybe that is what we are starting to see now.
Look up the Osborne 1, the first "portable" (i.e. luggable) computer. They went out of business not only because they lost money on each unit, but because of how many they sold. Then they pre-announced their next model, which killed all demand for the existing one, and they were toast.
It's a fascinating story, but is it really related?
IIRC, they were making decent-enough profits with the Osborne 1 at the beginning. It was never intended to be a loss-leader.
It was only after the Osborne 2 was announced (way too early) that existing orders got cancelled, and inventory was sold at fire-sale prices in sheer desperation to generate any value from the well they'd accidentally poisoned.
(For those who don't know, the company imploded before the Osborne 2 was finished.)
IMO Yes it is related. Anthropic and OAI, etc, announced "AGI" and "super intelligence" way too early.
Before the SaaS model teams can release Grand Opus and Really-This-Time-It's-AGI-Promise! GPT, local models and optimized hardware will obsolete them.
The lag from designing manufacturing adjustments to shipped hardware is 6-7 years, so it's still 3-5 more years until the first generation of hardware influenced by the post-GPT-launch era. Right now we're just getting the early tweaks as models keep developing, providing feedback, in a sense, on where to take hardware.
No one will see it coming, because everyone fetishizes every little thing to do with the circus intentionally erected around OAI and company to keep them relevant. We'll have another "Attention Is All You Need" and ChatGPT-release moment, and SaaS models are hosed.
It's orders of magnitude more difficult than in the era of the Osborne 2, so the timelines are longer, but from inside the hardware industry I can say that is indeed the goal.
> IMO Yes it is related. Anthropic and OAI, etc, announced "AGI" and "super intelligence" way too early.
So this announcement is causing people to cancel orders of the existing product, and for OpenAI to sell products at a loss?
The economics work if you generate the video locally, using your own compute and a pretrained model provided for a fee. The compute bit is the expensive part. Local users could trade time for money. They just don't have a business or security model that allows them to distribute the model for people to use locally. Sure, you might need to wait all night for 10 seconds of video generated on your 4090, but you could do it, and folks might even pay for the privilege of using the pretrained model. Licensing for local compute might even pay back the cost of training the model with enough time and users.
This is the model that makes sense to me and I'm surprised nobody at OpenAI pursued it. Yeah a 4090 would take hours for 10 seconds of video, but people already do this. The SD/ComfyUI crowd runs overnight batch generations on consumer GPUs and doesn't care about latency.
Charge for model access, let users burn their own power. Basically Llama but for video (pun intended).
The reason it won't come from OpenAI is the deepfake thing. Distribute the weights and you lose all moderation. Sora already had a deepfake disaster WITH server-side controls. Without any? Good luck.
But yeah, for someone willing to go open-weights, there's a real business there.
Could OpenAI have released a local paid version, instead of shutting down Sora? Maybe. A lot of users have beefy machines.
You probably need $100K of hardware to run Sora.
Full-quality Sora probably does need serious hardware, yeah. A distilled version on a 4090, though? Maybe. danjl earlier in this thread made a solid case for just distributing the weights and letting people run locally; the SD/ComfyUI crowd already does this daily. OpenAI won't because of deepfakes: they already had a mess WITH server-side moderation. Open weights with zero moderation? Good luck with that PR.
> OpenAI won't because deepfakes.
Do you think someone would spend 5 or 6 figures on a license and hardware to create deepfakes?
People pay for OnlyFans accounts; why not this?
A $20/month Codex user probably costs thousands of dollars in compute (across multiple providers). I think they just want to weather it until compute is eventually cheap enough; it has to be, otherwise it'll always be unsustainable.
Or the alternative long con: lock in enterprise customers and raise prices. Seriously that is the golden goose, big institutions will just buy like lemmings even if they are already buying the redundant competing product.
That might be the calculus. Well, I'll be glad to leech off whoever's offering the most tokens, with the best models, for the cheapest price, and then they'll get some of my money and hopefully some bigger customers.
If they can wait it out a few more years imagine how many people will be fully dependent on it, and otherwise unskilled. Then they can charge whatever they want.
It's the AWS/Serverless model.
Do we have similar numbers for Claude code? Wonder how much it costs them?
Internally, we measured a regular developer consuming about $40/workday of compute within the $200/mo plan's limits.
Anthropic doesn't break out per-product numbers but they're at $19B annualized, Claude Code being a big chunk. The thing is text/code inference is just way cheaper. A Claude Code session runs maybe $0.01-0.05 in compute. A Sora clip is $1.30. Not even the same conversation.
Some friends and I did a "just for fun" calculation on what price AI should really have, using some of our business and infrastructure experience.
The three of us have a decent number of years in adjacent fields; still, this is more of a "trust me bro" comment. Anyway, we came to a subscription price of 120-150 USD/mo, and we did this 6 months ago, when the world wasn't yet the chaos it is right now. If those numbers had to be adjusted, a quick calculation would put them close to the 200 USD/mo mark, so there'd be a decent margin after taxes.
That said, if we are anywhere close to correct on this, I think increasing the price of the product by 10x would drastically reduce the number of users, which would then drastically reduce the hardware required.
And even if we are off by a factor of two, the resulting 5x price increase would have a similar effect.
My speculation is that it needs to be cheap because they need as much human-generated content as possible, as they are running out of data and the models have plateaued. We don't see models getting 10x smarter anymore; if anything, they are getting smaller or more specialised.
Of course, disruptive research might come along, but my guess is that this price is both an incentive and a requirement for this business not to break apart.
Your $120-150/month gut feeling is basically where the math lands: at $1.30/clip and 100 clips/month, you need $130 just to cover compute, so $150 with margin checks out. The problem is that at 10x the price you lose 90%+ of users, and the whole growth story is dead. Same thing that happened with ChatGPT Pro at $200/month, honestly: barely anyone upgraded.
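The per-subscriber arithmetic in this thread is trivial to sanity-check; a quick sketch, taking the ~$1.30-per-clip compute figure cited here as an assumption rather than a verified number:

```python
# Monthly compute cost per subscriber at different usage levels,
# assuming the ~$1.30-per-10s-clip figure from this thread (unverified).
cost_per_clip = 1.30

for clips_per_month in (10, 50, 100):
    monthly = cost_per_clip * clips_per_month
    print(f"{clips_per_month:>3} clips/mo -> ${monthly:.2f} in compute")
# 50 clips/mo gives the $65 headline figure; 100 clips/mo gives $130.
```

At 100 clips/month the compute alone eats a $120-150/mo price, which is why that price point only works with lighter users in the mix.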
I guess you didn't read my comment, because you are saying exactly what I said.
I’ll take a wild guess that DeepSeek, without access to magical funny money, can’t really operate at a massive loss.
If either US AI mega corp is at risk of failing I suspect they’ll receive generous bailouts.
Some places have affordable healthcare; we have AI slop.
> generating a 10-second AI video costs roughly 160 times more than generating an equivalent amount of text
Hold up, "equivalent" how? It can't be based on "cost" of generation, or else it would be a 1x factor, by definition. Perhaps "costs" in this case refer to the unprofitable gap between revenues and expenses?
> Table 2
Weird, so it looks like some person just arbitrarily decided that 1K GPT-4 text tokens "is equivalent to" 10s of Sora 2 video?
That doesn't seem very rigorous.
It's a well known fact that 1 Picture == 1000 words.
I've often used this in silly pseudo-proofs demonstrating that words have little to no value.
Given that a picture is worth 1000 words, a film (being a string of pictures) at 24fps is 129600 pictures in 90 minutes, and viewing a film might cost $15: a word can be rented for $0.000116 or at a rate of roughly 86 words per penny.
This also tracks well with paperback novels as 70k words would be a little over $8 and 100k words would be just under $12.
That said, I have nothing but the vaguest sense of what an average movie or book costs these days. Are movies $15? Does Walmart still have the $5 bin?
What about books? I know that the last time I was in a book store I was somewhat shocked by the prices but that was years ago.
Although, the local used-goods store probably still sells both media for $1 each. If that's the case, there's an easy frugality argument: the 90-minute movie is worth ~130k words, against most novels topping out under 100k.
30 pictures a second for reasonable video, haha
Just burn money.
Let me type and think
(I put it through Gemini for English translation.)
The 1080p, most expensive tier is 0.70 USD per second. Since Sora 2 runs at 30 FPS, each second of video costs roughly 2.3c per frame.
While a single 1920x1080 static image is 765 tokens, video models use spacetime compression. Instead of a raw 22,950 tokens per second (765 tokens x 30 frames), a second of 1080p video equates to roughly 10,000 'latent tokens' due to temporal redundancy. Adding 20 tokens per second of audio, we get roughly 10,020 tokens per second of output.
At $0.70 per second for ~10,020 tokens, the cost is approximately $0.00007 per token for Sora 2. 10 seconds of Sora 2 video would cost $7.00 for roughly 100,200 tokens.
In comparison, GPT-5.4-pro at 15 USD per 1M output tokens costs $0.000015 per token. To generate 100,200 tokens of text, it would cost only $1.50.
This puts Sora 2 at roughly 4.6x more expensive than GPT-5.4-pro per generated token. However, if we ignore video compression and treat every frame as a unique 1080p image (765 tokens each), Sora 2 becomes roughly 30x more expensive in terms of raw computational effort per frame.
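The per-token comparison above boils down to two divisions; a quick check of that arithmetic, taking the commenter's ~10,000 latent tokens per second as an assumption (it is not a published spec):

```python
# Sanity check of the per-token cost comparison above.
# latent_tokens_per_sec is the commenter's assumption, not a published figure.
sora_price_per_sec = 0.70            # USD/s, 1080p top tier
latent_tokens_per_sec = 10_000 + 20  # assumed video latents + audio tokens

sora_per_token = sora_price_per_sec / latent_tokens_per_sec
text_per_token = 15 / 1_000_000      # $15 per 1M output tokens
ratio = sora_per_token / text_per_token

print(f"Sora 2: ${sora_per_token:.7f}/token")  # ~ $0.0000699
print(f"Text:   ${text_per_token:.7f}/token")  # $0.0000150
print(f"Ratio:  {ratio:.2f}x")                 # ~ 4.66x
```

The "roughly 4.6x" figure holds under these assumptions; the gap only balloons if you count raw per-frame tokens instead of compressed latents.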
Well I guess you could say there is some amount of text that entertains you as much as a 10s Sora video. Judged in terms of time a fast reader might read 50 words in 10s and that is what, 100 tokens? If somebody wants to fudge that up by a factor of 10 (picture is worth a thousand words or something) you get where they are.
Now personally I am not entertained by motion-for-the-sake-of-motion Instagram reels; they actually make me queasy, despite my having a cast-iron stomach and having taught myself not to get sick in VR. So if that's 10s of entertainment, leave me out. I don't care if Tom Cruise is whaling on Brad Pitt, or the other way around for that matter, but boy do I want to see the body thetans burst out of Cruise's body when OT III goes horribly wrong.
My reaction to the article was funny. I mean, I saw that 160x figure and thought it was bogus, and of course the piece is all AI generated and poorly formatted to boot, but I did like the overall message. It reminds me of the early 2010s, when a lot of sites with photo-based content (including mine) were going out of business because the revenue wasn't enough to pay the hosting costs; a few newcomers like Instagram were survivors, and Google was obviously cleaning up with video on YouTube. From the viewpoint of business models for AI video, I think there are two questions:
(i) how many times can you get people to watch the same video? No matter how expensive it is, if you get enough views/ad impressions/other revenue, you are OK;
(ii) how does it compete with some other way of generating the video?
The picture that the $20 subscription costs $65 to serve doesn't sound too crazy to me. There might be somebody who can get 3x the value out of a 10s Sora video compared to somebody else, or they could get the cost down to a third.
Sounds like a great way for an AI company to kill off a competing AI company. You could probably do this "organically": take your $20/mo user, use that money directly to buy that user a subscription to the competitor's product, and serve them a wrapper.
Not sure if it would work, but it would at least have been a great plot for Silicon Valley if that show were still around.
I think the technical term is incinerator, but I digress.
Furnace suggests there's a goal in burning the money; incinerator suggests the goal is getting rid of the money.
Just in case anyone's curious about technicalities.
I think furnace implies the intent to create heat.
Perhaps it is cold there and they are out of firewood.
Steam engines have furnaces, so yeah, heat, but not _just_ heat. Like, OpenAI wouldn't claim they're just trying to heat up the environment by building data centers.