Prepping raw ingredients, one has to be careful not to contaminate the paper, or at least the thing weighing the paper down that may be covering the next step.
I cook a lot of food, and having hands free access to steps is a killer feature. I don't even need the AI, just the ability to pull up a recipe and scroll through it using the wrist controller they showed off would be a noticeable, albeit small, improvement to my life multiple times per week.
You could imagine some utility to something that actually worked if it allowed you to continue working / not have to clean a hand and get your phone out while cooking. (Not a ton of utility, but some). But if it stumbles over basic questions, I just can't see how it's better than opening a recipe and leaning your phone against the backsplash.
I mean that the recipes for this sauce on the internet have pear as an ingredient and the LLM also assumed this, but there was no pear present on the table, so the LLM didn't take the visual data into account and assumed the pear was there too. Which is funny, since that was the whole point of the presentation; querying an LLM with text or voice only is nothing new today.
Credit where it’s due: doing live demos is hard. Yesterday didn’t feel staged—it looked like the classic “last-minute tweak, unexpected break.” Most builders have been there. I certainly have (I once spent 6 hours at a hackathon and broke the Flask server keying in a last minute change on the steps of the stage before going on).
One of the demos was printing a thing out, but the processor was hopelessly too slow to perform the actual print job. So they hand unrolled all the code to get it down from something like a 30 minute print job to a 30 second print job.
I think at this point it should be expected that every publicly facing demo (and most internal ones) are staged.
The CEO of Nokia had to demo their latest handset one time on stage at whatever that big world cellphone expo is each year.
My biz partner and I wrote the demo that ran live on the handset (mostly a wrapper around a webview), but ran into issues getting it onto the servers for the final demo, so the whole thing was running off a janky old PC stuffed in a closet in my buddy's home office on his 2Mbit connection. With us sweating like pigs as we watched.
As much as I hate Meta, I have to admit that live demos are hard, and if they go wrong we should have a little more grace towards the folks that do them.
I would not want to live in a world where everything is pre-recorded/digitally altered.
The difference between this demo and the legendary demos of the past is that this time we are already being told AI is revolutionary tech. And THEN the demo fails.
It used to be the demo was the reveal of the revolutionary tech. Failure was forgivable. Meta's failure is just sad and kind of funny.
It's less about the failure and more about the person selling the product. We don't like him or his company, and that's why there is no sympathy for him, and he knows that.
When it went bad he could instantly smell blood in the water, his inner voice said, "they know I'm a fraud, they're going to love this, and I'm fucked". That is why it went the way it did.
If it was a more humble, honest, generous person, maybe Woz, we know he would handle it with a lot more grace, we know he is the kind of person who would be 100x less likely to be in this situation (because he understands tech) and we'd be much more forgiving.
Despite the Reddit post's title, I don't think there's any reason to believe the AI was a recording or otherwise cheated. (Why would they record two slightly different voice lines for adding the pear?) It just really thought he'd combined the base ingredients.
That's even worse because it would mean that it wasn't the scripted recording that failed, it means the AI itself sucks and can't tell that the bowl is empty and nothing was combined. Either this was the failure of a recorded demo that was faked to hide how bad the AI is, or it accurately demonstrated that the AI itself is a failure. Either way it's not a good look.
My layperson interpretation of this particular error was that the AI model probably came up with the initial recipe response in full, but when the audio of that response was cut off because the user interrupted it, the model wasn't given any context of where it was interrupted so it didn't understand that the user hadn't heard the first part of the recipe.
I assume the responses from that point onwards didn't take the video input into account, and the model just assumed the user had completed the first step based on the conversation history. I don't know how these 'live' AI sessions work, but based on the existing OpenAI/Gemini live chat products, it seems to me that most of the time the model will immediately comment on the video when the 'live' chat starts, but for the rest of the conversation it works using TTS+STT unless the user asks the AI to consider the visual input.
I guess if you have enough experience with these live AI sessions you can probably see why it's going wrong and steer it back in the right direction with more explicit instructions, but that wouldn't look very slick in a developer keynote. I think in reality this feature could still be pretty useful as long as you aren't expecting it to be as smooth as talking to a real person.
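If that hypothesis is right, the bug is easy to picture. Here's a minimal sketch of the failure mode, assuming a naive pipeline that logs the assistant's full scripted reply to the conversation history no matter where playback was interrupted (everything here is hypothetical, not Meta's actual stack):

    # Hypothetical sketch: a naive live-assistant loop that loses track of what
    # the user actually heard when they interrupt the audio playback.
    conversation = []  # history that gets fed back to the model on every turn

    def user_turn(text):
        conversation.append({"role": "user", "content": text})

    def assistant_turn(full_reply, heard_chars=None):
        # Audio playback of the reply would happen here; `heard_chars` models
        # the user cutting it off partway through.
        heard = full_reply if heard_chars is None else full_reply[:heard_chars]
        # Bug: the FULL reply is logged, so later turns assume it was all heard.
        conversation.append({"role": "assistant", "content": full_reply})
        return heard

    user_turn("How do I make the sauce?")
    assistant_turn(
        "First, combine the soy sauce, sesame oil and garlic in a bowl. "
        "Then grate a pear into the mixture...",
        heard_chars=10,  # presenter interrupts almost immediately
    )
    user_turn("What do I do first?")
    # A model conditioned on `conversation` now "knows" step 1 was delivered,
    # so it plausibly answers: "You've already combined the base ingredients..."

Logging only the portion that was actually played, or appending an explicit "(response interrupted)" marker, would presumably avoid this, but none of that is visible from the outside.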
It seems extremely likely that they took the context awareness out of the actual demo and had the AI respond to predefined states, and then even that failed.
The AI analyzing the situation is wayyy out of scope here
"unpredictable" and "doesn't work" are different things. As a user, I know it's not deterministic and I can live with "unpredictable" results as long as it still makes sense, but I won't buy something that works 50% of the time.
It was reading step 2 and he was trying to get it to do step 1.
He had not yet combined the ingredients. The way he kept repeating his phrasing it seems likely that “what do we do first” was a hardcoded cheat phrase to get it to say a specific line. Which it got wrong.
I have a friend who does magic shows. He sells his shows as magic and stand-up comedy. It's both live entertainment, okay, but he is the only person I've ever seen use that tagline. We went to see him perform once and everything became clear when he opened the night.
"This is supposed to be a magic show," he told us. "But if my tricks fail you can laugh at it and we'll just do stand-up comedy."
Zuck, for a modest and totally-reasonable fee, I will introduce you to my friend. You can add his tricks (wink wink) to your newly-assembled repertoire of human charisma.
Haha. I honestly don't know. Which makes him...a great entertainer at least? The show was a real good time though.
Take this with lots of salt but I read somewhere that circus shows "fail" at least one jump to help sell to the audience the risk the performers are taking. My friend did flub his opening trick with a cheeky see-I-told-you and we just laughed it off.
He incorporated the audience a lot that night so I thought the stand-up comedy claim was his insurance policy. In his hour-long set he flubbed maybe two or three tricks.
I bet they rehearsed a dozen times and never failed this badly live. Got to give them props for keeping the live demos. Apple has neutered its demos so much that they're now basically 2-hour-long commercials.
Live Apple demos were always held together with duct tape in the first place. That first "live" iPhone demo had a memorized sequence that Jobs needed to use to keep the whole phone OS from hard crashing.
During that first iPhone demo they also had a portable cell tower (cell on wheels) just off-stage to mimic a better signal strength than it was capable of. NYTimes write-up on the whole thing is worth the read [0].
They also force the developers to make it work, under threat of being fired, and in the case of Steve Jobs' ire, being yeeted into the sun along with their ancestors and descendants.
As much as it'll be "interesting" to see how models behave in real world examples (presumably similarly to how the demos went), I'm not convinced this is a premade recording like what seems to be implied.
I'm imagining this is an incomplete flow within a software prototype that may have jumped steps and lacks sufficient multi-modal capability to correct.
It could also be staged recordings.
But, I don't think it really matters. Models are easily capable of working with the setup and flow they have for the demo. It's real world accuracy, latency, convenience, and other factors that will impact actual users the most.
What's the reliability and latency needed for these to be a useful tool?
For example, I can't imagine many people wanting to use the gesture writing tools for most messages. It's cool, I like that it was developed, but I doubt it'll see substantial adoption with what's currently being pitched.
Yea the behavior of the AI read to me more like a hard-coded demo but still very much "live". I suspect him cutting it off was poorly timed, and that the timing issue could have been amplified by WiFi? Who knows. I wasn't there. I didn't build it.
This appears to be a classic vision fail on the VLM's part. Which is entirely unsurprising for anyone who has used open VLMs for anything except ""benchmarks"" in the past two god damn years. The field is in a truly embarrassing state, where they pride themselves on how it can solve equations off a blackboard, yet it couldn't even accurately read a d20 dice roll, among many other things. I've tried (and failed) to have VLMs accurately caption images for such a long time, yet anytime I check on the output it is blindingly clear that these models are awful at actually _seeing things_.
Having claude run the browser and then take a screenshot to debug gives similar results. It's why doing so is useless even though it would be so very nice if it worked.
Somewhere in the pipeline, they get lazy or ahead of themselves and just interpret what they want to in the picture they see. They want to interpret something as working and complete.
I can imagine it's related to the same issue with LLMs pretending tests work when they don't. They're RL trained for a goal state and sometimes pretending they reached the goal works.
It wasn't the wifi - just genAI doing what it does.
For tiny stuff, they are incredible auto-complete tools. But they are basically cover bands. They can do things that have been done to death already. They're good for what they're good for. I wouldn't have bet the farm on them.
I’m just excited that our industry is led by optimists and our culture enables our corporations to invest huge sums into taking us forward technologically.
Meta could have just done a stock buyback but instead they made a computer that can talk, see, solve problems and paint virtual things into the real world in front of your eyes!
> I’m just excited that our industry is led by optimists and our culture enables our corporations to invest huge sums into taking us forward technologically.
I am always baffled that people can be that naive.
It's a weird way to put it too, "our industry" and "our culture" enables "our corporations". They're not "our" corporations as a society, why should we be excited about their investments.
There's a cognitive dissonance between talking about capitalist entities that supposedly drive social and technological progress, and the repeated use of the collective "our" and "us". Corporations are not altruistic optimists aiming to better our lives.
He's the CEO of a multi-billion dollar corporation, promising technology that puts the livelihoods of millions of people at risk. He deserves every bit of scrutiny he gets.
Do you think his algorithms have kind of, you know, sown the seeds of hatred and rage through our society?
Do you think all the lies and misinformation his products help spread kind of...get people elected who take away the aid which millions of women and children rely on?
Not blaming him for it all, we all play our part, but the guy has definitely contributed negatively to society overall. He is smart enough to know this, but he cannot turn off the profit-making machine he created, so we all suffer for that.
The parent alluded to the dangers of AI; well, the algorithms that are making us hate each other and become paranoid are that AI.
Yes, the mocking, gleeful negativity really does make me concerned that this place is becoming Reddit. The fact that the highest upvoted post on this thread is just a link to Reddit isn't doing much to help me feel better. And I've been here for at least a decade, so I don't think this is the noob illusion.
But why even have a conversation at all? Who cares if Zuckerberg has a demo that goes awry? Does that satisfy your intellectual curiosity somehow? It certainly doesn't satisfy mine.
Most of the discussion here is about (i) what might have gone wrong, technically, and (ii) what this says about the ROI that Facebook and other US tech giants are getting on their AI projects.
I agree that one demo gone awry does not mean much in itself, but the comments here do rise above the level of Nelson Muntz.
A topic that's notable but has little to discuss can get a lot of upvotes. A lot of the best stuff on this site has exactly that: lots of upvotes, little commentary, because the thing itself is notable. That's definitely not what happens on Meta threads. They usually get lots of votes then lots of repetitive spam-like comments about how ethically bankrupt Zuck or Meta or social media or algorithms or whatever are. A new comment on the behavior would be interesting, but most of the comments are basically just spam. I could probably get an LLM to generate most of them with ease. Perhaps the worst part is, even if there is novel analysis it's buried under an avalanche of "Zuck will grind you to dust" or whatever that gets repeated over and over again.
For a while that was okay, this kind of stuff was just contained in those threads. But it's started leaking out everywhere. Just spam like comments tangentially related to the topic that just bash a big company. That's the lowering of SNR that I find grating.
Oh, no, is someone being mean to the big company? :(
This is absolutely notable, and everyone should be concerned about it. Not so much the potential fakery, but the extreme deficiency of the actual product, which has had the GDP of a small country squandered on it. Like, there is a problem here, and it will have real-world fallout once the wheels fall off.
You'll see the same folks spamming their hatred towards tesla/microsoft/meta/google over and over with zero substance other than sentimental blabbering.
It's a different world than it was ten years ago. Among the ways it's different are people are far more skeptical of billionaires, Big Tech, and capitalism generally. They're willing to cut them much less slack. This is one of the few ways that the world of today is better than the world of ten years ago.
I don't love your silly theory. It sounds like you're in denial, trying to cope with the fact that not everyone thinks LLMs are the greatest thing since flush toilets.
The idea that anti-AI posts on HN are PR hit jobs (paid for by..?) strikes me as a conspiracy theory.
The simple reality is that hype generates an anti-hype reaction. Excessive hype leads to excessive anti-hype. And right now AI is being so excessively hyped on a scale I’m not sure I’ve seen in all the years I’ve been working in tech.
You're seeing conspiracies where there are none. A group of people all acting the same way is not suspicious when their actions are the thing that groups them together.
Your theory that there's an invisible hand that makes everyone spontaneously act the same is nonsensical. It hasn't been observed in humans nor in the wider animal kingdom.
For the record I'm in favor of AI safety and regulating the use of AI. I don't know anything about the particular bills or groups in the Politico article though. But it's clear evidence that people with money are funding speech that pushes back against AI.
The funding and use of campaigns to amplify divisive issues is well known, but I'm not claiming this is a source of anti-AI funding. You may perhaps believe that AI does not count as a divisive issue and so there are no anti-AI campaigns through this funding model. I would find that surprising but I don't know of a source yet that has positively identified such a campaign and its sourcing. There were similar campaigns against American technological domination such as the anti-nuclear movement which received a lot of funding from the pro-nuclear Russian military during the cold war. And the anti-war movement which received a lot of funding from the pro-war Russian military during the Vietnam war. Similarly the US has funded "grass roots" movements abroad.
To be clear I'm not saying the anti-AI movement is similarly funded or organized. But it is clearly a movement (and its adherents acknowledge that) and it clearly has funding (including some from very wealthy donors). And they do all use similar stock phrases and examples and often [0] have very new accounts. Everything in the current paragraph is verifiable.
[0] by often, I mean higher than the base rate topic for HN. I don't mean more than 50% of the time or any other huge probability.
We are not bots, we just loathe historically bad-faith actors and especially with the current climate, we will take the opportunity of harmless schadenfreude where we can get it.
Oh please. This isn't like the old iPhone days where new features and amazing tech were revealed during live demos. Failure was acceptable then because what we were being shown was new and pushing the envelope.
Meta and friends have been selling us AI for a couple years now, shoving it everywhere they can and promising us it's going to revolutionize the workforce and world, replace jobs, etc. But it fails to put together a steak sauce recipe. The disconnect is why so many people are mocking this. It's not comparable.
So there was no AI. I know there’s a lot of confusion regarding the exact definition of AI these days, but I’m pretty sure we can this one time all agree that an “on rails” scenario ain’t it. Therefore, whatever it is that they were doing out there, they weren’t demoing their AI product. You could even say it wasn’t a live demo of the product.
Somebody said the cooking guy was some influencer person? I noticed that many non-tech people often resort to this excuse, even in situations where it makes absolutely no sense (e.g., on a desktop with only ethernet, or with mic/speakers connected via cable). It's almost like they just substitute "bad wifi" for "glitch".
It's colloquial in the younger generations to use the term Wifi to actually refer to a WAN connection to one's home or building, regardless of Physical Layer Transport.
I often ask clarifying questions to people; to me it's part of casual conversation, not an inquisition or anything (because my own behavior would be different if it was)
The vast majority of people say incoherent deflections instead of just saying “I don’t know”
I’m getting better at ignoring or playing along
It just happens in areas I least expect it
makes me sound like a high functioning autist, but I’m not convinced
Bad idea to rely on WiFi for an important demo in a crowded environment. It would have worked fine in testing but when the crowd arrives and they all start streaming etc, they bring hundreds more devices all competing for bandwidth.
Zuck should have known better and used Ethernet for this one!
It's because Zuck doesn't actually believe in anything. Zuck's values, politics, and business goals change with the wind so everything that stems from them feels empty, because it's missing the true drive.
In contrast, nothing Steve Jobs said felt empty, whether we agreed or disagreed with what he was saying it was clear that he was saying it because he believed it, not because it's what he thought you wanted to hear.
CEOs are paid to promote their company, yes, but that doesn't mean they must fake it. The other alternative is to actually believe what they're saying. I don't think Zuck does.
Felt like the best example of a true believer. I'd say a similar, but less clear, version would be Dario Amodei vs Sam Altman. I don't agree with either, but Dario comes across as a true believer who would be doing AI regardless of the current trends, whereas Sam comes across as a chancer who would be doing cryptocurrencies if that was still big, or social media if that was still the next big thing, evidenced by the fact that he did both of those but they didn't stick, so he moved on.
Jobs would have been doing consumer computing hardware whatever happened. Apple in the early days wasn't the success it is now, he was fired and went and started another company in the same space (NeXT).
This is what it has come to? This is artificial intelligence? Billions and billions of dollars spent to narrate a recipe? Something that can be written down on a piece of paper?
I have a copy of the classic "Joy Of Cooking" in the kitchen. It was a lot cheaper, works perfectly every time, and doesn't get ruined if (when) I spill foodstuffs on it.
To their peers, i.e. their golf billionaire buddies from the Fortune 500. They talk with each other and I strongly suspect propagate a whole set of alternative-reality ideas among themselves. Like this obsession with voice-activated and voice-controlled everything. Billionaire CEOs probably find it very convenient to pretend to multitask constantly and make voice recordings and commands while doing other CEO tasks or during endless meetings. After all, their human secretary can later verify the information without taking up their time. Meanwhile almost no one from my peer group or relatives uses voice activated anything really, no voice mails, no voice controls, no voice assistants. And I never see people on the streets doing that too.
> Meanwhile almost no one from my peer group or relatives uses voice activated anything really, no voice mails, no voice controls, no voice assistants. And I never see people on the streets doing that too.
Could also be that however your peer group uses things, isn't the only way that thing gets used?
For example, voice messages seems more popular than texting around me right now, at least in Europe and Asia, where people even respond to my texts over Whatsapp and Telegram with voice messages instead. I constantly see people on the street listening and sending voice messages too, in all age ranges.
I don't think any of those people would need an AI assistant to recite cooking recipes though, but "voice as interface" seems to be getting more popular as far as I can tell.
Why you wouldn't just transcribe your message (which most keyboards and messengers support) instead of sending minutes worth of meandering audio full of "uhm" is beyond me. I use voice all the time (assistants, LLM, etc.) but voice messages can die in a fire.
So, the obvious answer to me is that voice communications accurately include tone and inflection. But other than that, there are "edge cases" (I mean, they're more like "people") that make it more appealing, especially after Google made their keyboard transcription worse for the people who get the most use out of it (aforementioned "edge cases").
My dyslexic friend's experience with software transcriptions has changed recently. No longer can they say, "What time do I need to pick you up, question mark, I'm just leaving now, comma, so I might be a little late, period." and have it use the punctuation as specified. Now, it's LLM-powered and converts the speech without really letting the user choose the punctuation, except manually after it's been written out, which is difficult to impossible for both dyslexics and blind people.
(As a side note, if a person is an "edge case", it's actually that person's every-time case.)
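For what it's worth, the old behaviour being described (speaking the punctuation and having it written as symbols) is basically a post-processing pass over the transcript. A toy sketch, purely illustrative and not any particular keyboard's implementation:

    import re

    # Spoken punctuation tokens -> symbols, roughly the old dictation behaviour
    # described above (illustrative only).
    SPOKEN_PUNCTUATION = {
        "question mark": "?",
        "exclamation point": "!",
        "comma": ",",
        "period": ".",
    }

    def apply_spoken_punctuation(transcript):
        out = transcript
        for token, symbol in SPOKEN_PUNCTUATION.items():
            # Drop the space before the token and replace the token itself.
            out = re.sub(r"\s*\b" + re.escape(token) + r"\b", symbol, out,
                         flags=re.IGNORECASE)
        return out

    print(apply_spoken_punctuation(
        "What time do I need to pick you up question mark I'm just leaving "
        "now comma so I might be a little late period"))
    # -> What time do I need to pick you up? I'm just leaving now, so I might be a little late.

An LLM-based transcriber that decides punctuation for you removes exactly this kind of explicit control.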
They don’t want to spend 30 min explaining domain knowledge required to understand a certain super specific case.
Instead they show tech’s quality on a basic highest common denominator use case and allow people to extrapolate to their cases.
Similarly car ads show people going from home to a store (or to mountains). You’re not asking there “but what if I want to go to a cinema with the car”. If it can go to a store, it can go to a cinema, or any other obscure place, as long as there is a similar road getting there.
But those are things cars make sense for. When would I stand in my kitchen with a bunch of random ingredients strewn about the counter wondering what to make with them and conclude that an LLM would have a good answer? And what am I supposed to extrapolate from that example? I guess they were showing off that the system had good vision capabilities? Okay, but generative AIs are notoriously unreliable, unlike cars. Even if the demo had worked, it would tell me nothing about whether it would help me solve some random problem I could think up.
A better analogy would be the first cars being advertised as being usable as ballast for airships. Irrelevant and non-representative of a car's actual usefulness.
The sociopaths pushing this kinda crap don't live the same lives you or I do. They have people they pay to make decisions for them, or they pay people to do shit like buy their weekly groceries for them or whatever other stupid crap they're trying to sell as a usecase for these useless AI tools. That's why all these demos are stupid shit like "Buy me plane tickets for my trip", despite the fact that 99.9% of people need very specific criteria out of their plane tickets and it's more easily done with currently available tools anyways.
They literally think "What does a regular Joe need in their day-to-day?" and their out of touch answer is "I have all these ingredients but don't know what to cook" or whatever. It's obvious these people haven't spoken to anyone who isn't an ass-licking yesman in a looooong time.
Hey, that recipe is worth trillions of dollars of investment, the destruction of the natural environment and the displacement of huge numbers of talented and skilled people. Show some respect for our billionaire class.
> A Korean tasting dressing. It's 2025, anyone living in a modern country should probably be able to make something that tastes Korean with just a small amount of effort...
What an awful, condescending attitude. No, not "everyone living in a modern country" can make Korean food without a recipe. And tools that reduce the barrier for learning and acquiring new skills should be applauded.
Almost half of Americans cannot cook today. And the number 1 cited reason is a lack of time.
That said, I agree with the grandparent that this isn't really a "killer feature". Nor am I interested in the product. For so many reasons.
A real example that would have resonated was asking “what can I make with these ingredients?” No one is asking how to make a specific thing when they already know exactly what ingredients they need. If they knew what ingredients they needed, they probably already had the recipe. It feels out of touch at a basic level.
They just need an emotionless android without conscience, who does whatever is in the best interest of raking in money. They don't need technological excellence. Whether people at his company technologically succeed or fail, what matters is, that the company processes all the PII and feeds the algorithms. The rest is just for show.
I think such an emotionless android would have diligently prepared numerous backup scripts, sets of lenses, actors, demonstrations etc. to cover any failure contingency, since the cost of that is infinitesimal compared to even a slight change in their brand value.
It's funny because he spent so much money on hair and clothing stylists, jewellery, BJJ coaching, surfing lessons , really made an effort to come across as "cool" and the end result is...you cannot fake who you are, and your actions are what define you and make up your character, he is prime example of that. He cannot escape who he is.
I wouldn't say he's one of the best CEOs. He's been "successful" by
- selling an unhealthy addictive product
- burying research on its mental health impact on children
- engaging in anticompetitive behavior
Oh yeah, add stealing the original idea for facebook from the Winklevoss twins. I'll take being a loser if that's what it takes.
Yes he's rich and influential and blah blah blah blah blah and he's also AN ENORMOUS FUCKING DORK with the intellectual depth of a half-empty bottle of salad dressing. For all his money I'd rather be me than him.
Tangent: if you like cringey social awkwardness comedy (not my usual cup of tea, but in this case it's extraordinary, and hilarious), try "I Think You Should Leave".
How strong does a company's reality distortion field have to be for people to think your friends are going to want to come over to play with a new version of Windows?
I mean, why not "Let's all have wine and cheese and do root canals on each other!"?
I honestly was excited about Windows 95. Win98 was underwhelming, and WinME was a joke that I never bothered to install on my own machines. Win2K brought back some of the excitement, but not much.
Then Vista came out, and it was a total flop at first. Win7 fixed most of those mistakes, but the damage was done. Vista basically killed any chance Microsoft had at building excitement for an OS.
FWIW, I think the last macOS version that I was really looking forward to was High Sierra.
Here's one of my favorites, of Lars doing the Wave dance on stage to ad-lib over connectivity hiccups. For some reason it evoked a lot more empathy from me...
I was on the Wave team! Our servers didn't have enough capacity, we launched too soon. I was managing the developer-facing server for API testing, and I had to slowly let developers in to avoid overwhelming it.
Neat, thanks for sharing this tidbit of history. Hey, what did the team think of the decision to build it on GWT at the time? (From the outside, seemed like an enabling approach but a bit like building an engine and airframe all at once).
Hm, I didn't work on the frontend but I don't particularly remember griping. GWT had been around for ~5 years at that point, so it wasn't super new: https://en.wikipedia.org/wiki/Google_Web_Toolkit
I always personally found it a bit odd, as I preferred straight JS myself, but large companies have to pick some sort of framework for websites, and Google already used Java a fair bit.
It was fun! Now we still see Wave-iness in other products: Google Docs uses the Operational Transforms (OT) algorithm for collab editing (or at least it did, last I knew), and non-Google products like Notion, Quip, Slack, Loop from Microsoft, all have some overlap.
We struggled with having too many audiences for Wave - were we targeting consumer or enterprise? email or docs replacement? Too much at once.
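For anyone curious about the OT reference above: the core of it is a transform function that rewrites one user's operation against another user's concurrent operation so both replicas converge on the same text. A toy sketch for concurrent single-character inserts, nothing like Wave's real implementation:

    # Toy operational-transform demo: two users edit "HELO" at the same time.
    # An op is (position, character); transform() shifts an op so it can be
    # applied after a concurrent op has already been applied.

    def apply_insert(text, op):
        pos, ch = op
        return text[:pos] + ch + text[pos:]

    def transform(op, against):
        """Shift `op` so it applies correctly after `against` has been applied."""
        pos, ch = op
        against_pos, _ = against
        # If the other insert landed at or before our position, shift right.
        # (Ties at the same position would need a site-id tie-break; omitted here.)
        if pos >= against_pos:
            return (pos + 1, ch)
        return op

    doc = "HELO"
    alice = (4, "!")   # Alice appends "!" at the end
    bob = (3, "L")     # Bob fixes the typo by inserting "L"

    # Each replica applies its own op first, then the other op transformed against it.
    alice_doc = apply_insert(apply_insert(doc, alice), transform(bob, alice))
    bob_doc = apply_insert(apply_insert(doc, bob), transform(alice, bob))
    print(alice_doc, bob_doc)  # both print "HELLO!"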
This will also not change much, since they want you to use their centralized services for data collection, not local or onsite processing. So you will always have roundtrips and shared resources. For IO this is pretty unacceptable, people get annoyed by millisecond delays.
Nah, more like a 1D chess move. Investors will pay them to invest in AI, so invest in AI, make the stock go up, sell, and leave the dumb investors holding the bag.
2D chess if they're smart: start a new company that competes with the one they just sold to dumb investors. Jack Dorsey is particularly fond of this move.
Something I like about that scene is Seinfeld (the actor) clearly struggles not to smile at Kramer's delivery of the punch line even though he (the character) was supposed to be irate with Kramer.
If you're talking about the R&D provisions in the OBBBA, that only changes the schedule of the deduction (immediately vs over several years). R&D, like most business expenses, was always deductible. Whether it's prudent or not isn't a factor.
It's an ad network with an attached optional pair of glasses.
It's the platform Zuck always wanted to own but never had the vision beyond 'it's an ad platform with some consumer stuff in it'.
I am super impressed with the hardware (especially the neural band) but it just so happens that a very pricey car is being directly sold by an oil company as a trojan horse.
We all know what the car is for unfortunately.
I can't wait to see what Apple has in store now in terms of the hardware.
Someone would have to be dumb to give facebook access to collect data from everything they see and hear in their life combined with the ability to plaster ads over every available surface in their field of view. They'd have to be beyond stupid to pay for it.
Why the bleep do they still rely on wifi at conferences like this?? I always insist on a wired connection on its own, dedicated, presenter vlan. Is this running on wifi-only glasses or something? Is that the only medium they can present the tech on? Could they have shielded the room the guy's in?
This is like something right out of the show Silicon Valley. You couldn’t have scripted a more cringe-worthy demo.
It’s like they mashed up the AI and metaverse into a dumpster fire of aimless tech product gobbledygook. The AI bubble can’t pop soon enough so we can all just get back to normal programming.
This does not deter me from possibly buying one. The concept is pretty cool and appealing to those who want a distraction free lifestyle. Even if there's a screen in front of you at all times, at least you won't need to hold something in your hands to be able to operate it. That alone is a significant win.
My point is that the device that you keep in your pocket is one that you must take out with your hand and operate with your hand such that you cannot perform its primary functions without using hands. It's true that the Ray-Ban Meta Display can be controlled with hand gestures but doing so is apparently optional. Being able to control and consume content from a smart device while having your hands free to multitask is a big deal.
There are so many activities and professions where your hands get dirty and touching a smartphone without washing them would be a bad idea. An auto mechanic could use these glasses to look up information about things they see inside of an engine without having to clean the oil from their hands. A chef could respond to messages about their food delivery without having to drop what they're doing and go sanitize. Anytime I do dirty work outside, I can use this to access smart features without the risk of dirt filling my smartphone case, my smartwatch getting destroyed in a tight situation, or drenching either of them in sweat.
Furthermore, a phone (or a smart watch) is not meant to be used at face level, meaning folks typically look down to use them, and this can lead to extended periods of bad posture resulting in head, neck, and spine problems. My X-ray shows I have bone spurs on the vertebrae of my neck because I look down at screens too much (according to my chiropractor). A smart device that's designed to be used in a way that aligns with good posture habits is absolutely needed.
I hope smart glasses take off and I commend Meta for taking them this far.
None of those things actually have anything to do with being less distracted. If you have a screen in your face throwing all the notifications and bullshit at you that your phone normally does, it is going to be far more distracting. And humans are hilariously bad at multitasking. Taking your hands out of the equation doesn't magically make you better at it. Jesus we think people using their phones while walking or driving is bad, these things are gonna be a disaster.
If people are so addicted to their phone or smart watch that it's giving them back/neck problems, the solution isn't glasses. The solution is to be less addicted to your god damn devices.
Outside of a few niche use cases I don't think tech like this will be anything but a net negative.
What nobody seems to be chasing (I assume because screens are flashier) is smart headphones. Imagine you could navigate a webpage with hand gestures and have it read to you while you walk or do chores. In some ways, this is way easier than head-mounted graphical displays; portable audio is already a solved problem. The problem that still needs to be solved is good quality TTS on a portable device, but honestly that seems way more tractable than portable HUDs.
I do wonder what it will take to replace the default voice command handler. Will it be as locked down as possible or will Meta be happy with any adoption at all?
This is how you know AI will not live up to the hype. You have the highest paid people, at one of the richest companies in the world, building the AI and with unlimited access to the best models and the most talent.
And they still can't pull off a keynote.
So then... what does AI have to offer me? Because I would have thought, as Sam Altman put it, having an expert PhD level researcher in all subjects in my pocket could maybe help me pull off a tech demo. But if it can't help them, the people who actually made the thing, on their very high stakes public address where everything is on the line, then what's it supposed to do for the rest of us in our daily lives?
Because it seems more and more, AI is a tool that helps you stage your own very public humiliation.
When billionaires stop fantasizing about AI allowing them to rid themselves of the filthy peasant class which keeps feeling entitled to take even the smallest fraction of their income from them just because they're also doing all the actual work that makes that income possible.
What passes for AI is just good enough to keep the dream alive and even while its usefulness isn't manifesting in reality they still have a deluge of comforting promises to soothe themselves back to sleep with. Eventually all the sweet whispers of "AGI is right around the corner!" or "Replace your pesky employees soon!" will be drowned out by the realization that no amount of money or environmental collateral damage thrown at the problem will make them gods, but until then they just need all of your data, your water, and 10-15 more years.
There's a simple explanation that isn't 'prerecorded'. I'd be very happy to accuse Meta of faking a demo, but 1) that's just a weird way to fake a demo and 2) the effect has an easier explanation.
You ask the AI how to do something. The AI generates steps to do that thing. It has a concept of steps, so that when you go 'back' it goes back to the last step. As you ask how to do something, it finishes explaining the general idea and goes to the first step. You interrupt it. It assumes it went through the first step and won't let you go back.
The first step here was mixing some sauces. That's it. It's a dumb way to make a tool, but if I wanted to make one that will work for a demo, I'd do that. Have you ever tried any voice thing to guide you through something? Convincing Gemini that something it described didn't happen takes a direct explanation of 'X didn't happen' and doesn't work perfectly.
It still didn't work, it absolutely wasn't a wi-fi issue, and lmao, technology of the future from a $2T company, but it just doesn't seem rigged.
Step 0: You will be making Korean steak. Step 1: Mix those ingredients. Step 2: Now that you mixed those ingredients, do something else.
The system started doing Step 1, believed it was done, so it moved to Step 2, and when asked to go back, it kept going back to Step 2.
Step 1 being Step 0 and Step 1 combined also works.
Again, it's also a weird way to prerecord. If you're prerecording, you're prerecording all steps and practicing with them prerecorded. I can't imagine anyone going through a single rehearsal with prerecorded audio and not figuring out how to do this; we have the technology.
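To make that concrete, here is roughly the "list of steps with a cursor" behaviour being described, where the only bug needed to reproduce the demo is that reading a step out loud is treated as completing it (this is a guess at the flow, not anything Meta has published):

    # Hypothetical step-guided recipe flow: "done" is inferred from having
    # *spoken* a step, not from seeing that it actually happened.
    STEPS = [
        "First, combine the soy sauce, sesame oil and garlic in a bowl.",
        "You've already combined the base ingredients, so now grate a pear into the sauce.",
    ]

    class RecipeGuide:
        def __init__(self, steps):
            self.steps = steps
            self.done = 0  # how many steps the system BELIEVES are complete

        def read_step(self):
            text = self.steps[self.done]
            self.done += 1  # bug: speaking a step counts as completing it
            return text

        def what_do_i_do_first(self):
            # Answers relative to its own belief, not the actual scene.
            return self.steps[min(self.done, len(self.steps) - 1)]

    guide = RecipeGuide(STEPS)
    guide.read_step()                  # reads step 1; presenter interrupts, bowl still empty
    print(guide.what_do_i_do_first())  # -> step 2, "You've already combined..."
    print(guide.what_do_i_do_first())  # asking again just repeats the same wrong answer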
I'd figured it performed image recognition on the scene visible to it, then told the language model it could see various ingredients including some combined in a bowl.
The mocking, gleeful negativity here concerns me. I am worried that with some of these more polarized topics, the discussions on HN are becoming closer to those on Reddit. The fact that the highest upvoted post on this thread is just a link to Reddit isn't doing much to help me feel better. And I've been here for at least a decade, so I don't think this is the noob illusion.
I have no illusions about Zuckerberg. He's done some pretty bad stuff for humanity. But I think AI is pretty cool, and I'm glad he's pushing it forward, despite mishaps. People don't have to be black or white, and just because they did something bad in one domain doesn't make everything they touch permanently awful.
This thread [1] on Zuckerberg from 7+ years ago doesn't look too different. Top comments saying it's "pretty cringe-y", another posting an image from Reddit and some "Twitter bingo cards". The nature of the situation doesn't really offer much for deep analysis, but the discussion yesterday [2] on the product itself seems fine to me. You might disagree since people are more skeptical rather than being glad Meta is pushing it.
I don't think that's a fair comparison at all. The Cambridge Analytica stuff was probably some of the ethically worst stuff that Zuckerberg and Meta has ever done, and they absolutely deserved to get raked over the coals for it. AI glasses and VR are nowhere in the same ballpark. In fact, that the two discussions are tonally similar seems to support my argument.
The discussion yesterday was fine. If that was the only conversation we had, I wouldn't be worried.
I'm surprised that there isn't more. Everything that this person has touched has made life that much worse for humanity as a whole. He deserves every ounce of criticism and mockery, moreso because he makes himself out to be this savior figure. We should sneer at every attempt at theirs (and other's) awful AI because it's lighting this world on fire. The popping of the bubble cannot come soon enough.
You just don’t seem to understand. Mark wouldn’t hesitate to grind you down to the last atom in order to extract every last bit of value out of you. And you defend the guy because he gave you freebies, or something. I have no words.
The failures on stage were kind of endearing, to be honest, especially the one with Zuck. Plus the products seem really cool, I hope I'll be able to try them out soon.
Zuckerberg has negative charisma, it's painful to watch...
Jobs handled this so much better; while clearly he is pissed, he doesn't leave you cringing in mutual embarrassment, goes to show it isn't as easy as he makes it look!
Jobs was a clear communicator who emphasized user friendly products in aesthetically pleasing boxes. If Silicon Valley wasn't the most obtuse place on earth he wouldn't have stood out nearly as hard.
I think you're underestimating the effort necessary to simplify.
Most humans I have encountered, particularly the book smart ones, are absolutely horrible at this. It really takes a concerted desire to be disciplined and focused to do it well. Unsurprisingly Jobs spoke a lot about the idea of focus and being disciplined, as being the foundation for his success.
Endearing is great for trying to sell a heartfelt, homemade piece of art. It clashes when it's a trillion-dollar company trying to pretend this product can replace entire sectors of human labor.
It's well known Meta AI is shit. But I could probably make an app that can run this demo in an afternoon. The glasses part here is insane and I don't know why everyone is fixated on the tacky AI part. It's like if I invented the car and you complained that it's really hard to crank the windows down. Be happy it's even there!
Probably because we've seen goofy-looking AI glasses before.
Google Glass was released in 2013, the Snapchat Spectacles date back to 2016. Meta's glasses might be better (at first glance, I honestly can't tell), but they aren't some kind of revolutionary product we have never seen before.
The innovative part here is supposed to be the AI demo. That clearly flopped. So what's supposed to be left?
Right. I just wonder why nobody talks about the risks of bringing camera-based glasses to the masses. This is mass surveillance at its best. Without the camera, I would say it's a good phone replacement. But considering they try to make everyone use a camera on the glasses, it's clear they don't care.
One important thing to note: the demo didn't fail! (Or, at least not in the way people usually think of.)
> You've already combined the base ingredients, so now grate a pear to add to the sauce.
This is actually the correct Korean recipe for bulgogi steak sauce. The only missing piece here is that the pear has to be Pyrus pyrifolia [1], not the usual pear. In fact every single Korean watching the demo was complaining about this...
The demo was halted because when the AI was asked how to start, it kept assuming that some of the process was already complete and was jumping to the middle of the process (indicating that it was incorrectly analysing the scene).
That wasn't prerecorded, but it was rigged. They probably practiced a few times and it confused the AI. Still, it's no excuse. They've dropped Apollo-program-level money on this and it's still dumb as a rock.
I'm endlessly amazed that Meta has a ~2T market cap, yet they can't build products.
I don't think it was pre-recorded exactly, but I do think they built something for the demo that responded to specific spoken phrases with specific scripted responses.
I think that's why he kept saying exactly "what do I do first" and the computer responded with exactly the same (wrong) response each time. If this was a real model, it wouldn't have simply repeated the exact response and he probably would have tried to correct it directly ("actually I haven't combined anything yet, how can I get started").
It's because their main business (ads, tracking) makes infinite money so it doesn't matter what all the other parts of the business do, are, or if they work or not.
That's Google's main business too, they have infinite money plus 50% relative to meta, and they are still in the top two for AI
Google are well-known, like Meta, for making products that never achieve any kind of traction, and are cancelled soon after launch.
I don't know about anyone else, but I've never managed to get Gemini to actually do anything useful (and I'm a regular user of other AI tools). I don't know what metric it gets into the top 2 on, but I've found it almost completely useless.
I agree they aren't building great user products anymore but gemini is solid (maybe because it's more an engineering/data achievement than a ux thing? the user controls are basically a chat window).
I asked for deep research on a topic and it really helped my understanding, backed with a lot of sources.
Maybe it helps that their search is getting worse, so Gemini looks better in comparison. But nowadays even kagi seems worse.
In what ways does Kagi seem worse? Any specific examples?
Please share an example. Your 'almost completely useless' claim runs counter to any model benchmark you could choose.
I'm not the person you're responding to, but I feel I have a great example. Replacing the Google Assistant with Gemini has made my phone both slower and less accurate. More than once have I said "Hey Google, Play <SONG> by <ARTIST>" and had my phone chirp back that the song is available for streaming instead of just playing it. Once, I even had it claim it wasn't capable of playing music, I assume because that's true on other platforms.
Annoying to boot
Gemini just eclipsed ChatGPT to be #1 on the Apple app store for these kinds of apps. The 2.5 pro series is also good/SOTA at coding, but unfortunately poorly trained for the agentic workflows that have become predominant.
Haven't Google also famously faked a phone call with an AI some years ago for an event?
https://www.axios.com/2018/05/17/google-ai-demo-questions
> When you call a business, the person picking up the phone almost always identifies the business itself (and sometimes gives their own name as well). But that didn't happen when the Google assistant called these "real" businesses:
That's the whole argument?
No, because if you read the article you'd see that there's more, like the "business" not asking for customer information or the PR people being cagey when asked for details/confirmation.
At this point, honesty is an oasis in 2025, the year of scams and grifts. I'm just waiting for all the bubbles to pop.
It's been this way since natural internet user base growth dried up
True. We at least had a period where they weren't so blatant about it. But now it's robbery in broad daylight.
Well, it _IS_ a rock after all.
> confused the AI.
I will die on this hill. It isn’t AI. You can’t confuse it.
They "poisoned the context" which is clearly what they meant.
>>> confused the AI.
>> I will die on this hill. It isn’t AI. You can’t confuse it.
> They "poisoned the context" which is clearly what they meant.
The "demo" was clearly prescriptive and not genuinely interactive. One could easily make the argument that the kayfabe was more like an IVR[0] interaction.
0 - https://en.wikipedia.org/wiki/Interactive_voice_response
“The blue square is red.”
“The blue square is blue.”
“The blue square is green.”
The future is here.
Ok, you’ve piqued my interest. What’s required in order for something to be genuinely confused?
The ability to think. If it can't think in the first place, it can't get confused. Whether it's "real AI" or not depends on the semantics of what you consider AI to be:
* If you think it's something that resembles intelligence enough to be useful in the same way intelligence is and to seem to be intelligence, this is clearly it. The "plant based meats" of AI.
* If you think it means actual intelligence that was manufactured by a human, this is not that. It's shockingly impressive auto correct, and it's useful, but it's not actually thinking. This would be "artificially created intelligence"; in essence, real intelligence with an artificial origin. The lab grown meat of AI.
For the latter, I really think it needs reasoning ability that isn't based on language. Plenty of animals can think and reason without depending on language. Language is a great carrier of intelligence, which is why LLMs work so well, but language is not the foundation of intelligence.
That said, I think "confused" is a fine enough anthropomorphization. I refer to things like Bluetooth issues as the machine getting confused all the time. Back in the day, Netflix would often have problems with the wrong image showing for a show, and I called that "Netflix getting confused". We know it's not actually getting confused.
You're just moving the question from "confused" to "think", and I think you're also conflating "to be confused" (which was what I said) with "to feel confused" (which is a whole other thing.)
I guess my definition of 'to be confused' is something like 'to have a significant mismatch between your model and reality', so yeah, you could argue that a PID controller is "confused" by a workload with significant inertia. And 'to feel confused' would be 'to identify a significant mismatch between your model and reality', of which clearly a PID controller is not capable, but most multicellular life forms are.
If by "genuinely confused" we are talking about what is generally called the emotion "confusion", then you need meta-cognition to even recognize your thinking is confused in order to then experience it. LLMs can model reflection by using their own output as input, but otherwise lack meta-cognitive processes. Through reflection they can later recognize what they did previously was wrong, but they will never experience "what I am currently thinking seems wrong/does not fit".
However "confusion" can also mean "mistaking one thing for another" or simply "doing the wrong thing", which is something computer programs have been able to fuck up since forever.
Just try asking an AI about the Seahorse Emoji (insert close-yet-comically-incorrect emojis since they won't render here):
_________
ChatGPT: There is a seahorse emoji: — wait, no, that’s a lobster. Okay, for real now:
Here it is: � Seahorse — argh, it’s still not showing properly here.
Let’s try again:
(hedgehog) (sauropod) (T-rex) (otter) (seal) (lobster) …but no actual (squid) or (mermaid) matches…
Okay, deep breath — I checked the official Unicode list. And you’re right: The seahorse emoji does exist — it’s — no, okay really:
(hedgehog) (hippo) (peacock) Still nothing?
Wait — turns out, I was wrong, and you’re right to call me out.
There is a seahorse emoji:
— no!
_________
https://ibb.co/GfpWRXCz
Capacity for will, I would imagine the OP may respond. I'm not averse to the position.
At that point, why not just go full out with the fake demo, and play all responses from a soundboard?
They could learn a thing or two from Elon.
That was my thought — the memory might not have been properly cleared from the last rehearsal.
I found the use case honestly confusing though. This guy has a great kitchen, just made steak, and has all the relevant ingredients in house and laid out but no idea how to turn them into a sauce for his sandwich?
Yes. Even if the demo worked perfectly, it's hopelessly contrived. Just get text-to-speech to slowly read you the recipe.
> Just get text-to-speech to slowly read you the recipe.
Even this feels like overkill, when a person can just glance down at a piece of paper.
I don’t know about others, but I like to double check what I’m doing. Simply having a reference I can look at would be infinitely better than something talking to me, which would need to repeat itself.
A hardened epaper display I could wash under a sink tap for the kitchen, with a simple page forward/back voice interface would actually be pretty handy now that I think about it.
Paper gets lost, it gets wet, it gets blown around if not weighed down, and when weighed down it quickly gets covered with things.
When prepping raw ingredients, one has to be careful not to contaminate the paper, or at least the thing weighing the paper down, which may be covering the next step.
I cook a lot of food, and having hands free access to steps is a killer feature. I don't even need the AI, just the ability to pull up a recipe and scroll through it using the wrist controller they showed off would be a noticeable, albeit small, improvement to my life multiple times per week.
I just bought paper made from stone that doesn't get wet, so there's one problem solved..!
You could imagine some utility to something that actually worked if it allowed you to continue working / not have to clean a hand and get your phone out while cooking. (Not a ton of utility, but some). But if it stumbles over basic questions, I just can't see how it's better than opening a recipe and leaning your phone against the backsplash.
Or pick up a bottle of bulgogi sauce?
The Kotaku article on this had a really nice final zinger[0]:
> Oh, and here’s Jack Mancuso making a Korean-inspired steak sauce in 2023.
> https://www.instagram.com/reel/Cn248pLDoZY/?utm_source=ig_em...
0: https://kotaku.com/meta-ai-mark-zuckerberg-korean-steak-sauc...
And the whole pear in the recipe situation was also hilarious :)
> And the whole pear in the recipe situation was also hilarious
The fact that the pear was in the recipe, or that the AI didn’t handle that situation around the pear well?
Asian pears are a common ingredient in beef marinades/sauces in Korea. It adds sweetness and (iirc) helps tenderize the meat when in a marinade.
I mean that the recipes for this sauce on the internet have pear as an ingredient and the LLM also assumed this, but there was no pear present on the table, so the LLM didn't take the visual data into account and assumed a pear was there too. Which is funny, since that was the whole point of the presentation; querying an LLM with text or voice only is nothing new today.
Credit where it’s due: doing live demos is hard. Yesterday didn’t feel staged—it looked like the classic “last-minute tweak, unexpected break.” Most builders have been there. I certainly have (I once spent 6 hours at a hackathon and broke the Flask server keying in a last minute change on the steps of the stage before going on).
Live demos are especially hard when you're selling snake oil.
Ironically the original snake oil salesman's pitch involved slitting open a live rattlesnake and boiling it in front of a crowd.
https://www.npr.org/sections/codeswitch/2013/08/26/215761377...
Jesus dude
Yeah. Everyone wants to be like Steve but forgets that he usually had something amazing to show off.
Didn't Steve flip through 3 iPhones and hardcode the network UI to look like they had good signal?
One of the demos was printing a thing out, but the processor was hopelessly too slow to perform the actual print job. So they hand unrolled all the code to get it down from something like a 30 minute print job to a 30 second print job.
I think at this point it should be expected that every publicly facing demo (and most internal ones) are staged.
He faked shit all the time. He just faked it well and actually delivered later.
Every demo of not yet launched product will have something faked.
The CEO of Nokia had to demo their latest handset one time on stage at whatever that big world cellphone expo is each year.
My biz partner and I wrote the demo that ran live on the handset (mostly a wrapper around a webview), but ran into issues getting it onto the servers for the final demo, so the whole thing was running off a janky old PC stuffed in a closet in my buddy's home office on his 2Mbit connection. With us sweating like pigs as we watched.
If you ever write up a more detailed recollection of that, I would love to read it lol
I'd love to read it as well. More and more these days I miss that era of IT
As much as I hate Meta, I have to admit that live demos are hard, and if they go wrong we should have a little more grace towards the folks that do them.
I would not want to live in a world where everything is pre-recorded/digitally altered.
The difference between this demo and the legendary demos of the past is that this time we are already being told AI is revolutionary tech. And THEN the demo fails.
It used to be the demo was the reveal of the revolutionary tech. Failure was forgivable. Meta's failure is just sad and kind of funny.
When you have a likable presenter, the audience is cheering for you, even (especially?) when things go wrong.
It's less about the failure, and more about the person selling the product, we don't like him, or his company, and that's why there is no sympathy for him and he knows that.
When it went bad he could instantly smell blood in the water, his inner voice said, "they know I'm a fraud, they're going to love this, and I'm fucked". That is why it went the way it did.
If it was a more humble, honest, generous person, maybe Woz, we know he would handle it with a lot more grace, we know he is the kind of person who would be 100x less likely to be in this situation (because he understands tech) and we'd be much more forgiving.
Live demos being hard isn't an excuse for cheating.
Despite the Reddit post's title, I don't think there's any reason to believe the AI was a recording or otherwise cheated. (Why would they record two slightly different voice lines for adding the pear?) It just really thought he'd combined the base ingredients.
That's even worse because it would mean that it wasn't the scripted recording that failed, it means the AI itself sucks and can't tell that the bowl is empty and nothing was combined. Either this was the failure of a recorded demo that was faked to hide how bad the AI is, or it accurately demonstrated that the AI itself is a failure. Either way it's not a good look.
My layperson interpretation of this particular error was that the AI model probably came up with the initial recipe response in full, but when the audio of that response was cut off because the user interrupted it, the model wasn't given any context of where it was interrupted so it didn't understand that the user hadn't heard the first part of the recipe.
I assume the responses from that point onwards didn't take the video input into account, and the model just assumed the user had completed the first step based on the conversation history. I don't know how these 'live' AI sessions work, but based on the existing OpenAI/Gemini live chat products, it seems to me that most of the time the model will immediately comment on the video when the 'live' chat starts, but for the rest of the conversation it works using TTS+STT unless the user asks the AI to consider the visual input.
I guess if you have enough experience with these live AI sessions you can probably see why it's going wrong and steer it back in the right direction with more explicit instructions, but that wouldn't look very slick in a developer keynote. I think in reality this feature could still be pretty useful, as long as you aren't expecting it to be as smooth as talking to a real person.
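To make that concrete, here's a minimal sketch of the failure mode (assuming the usual role/content message-list convention; the demo's actual plumbing isn't public):

    # The assistant's full generated reply stays in the history even though
    # playback was cut off after the first second, and nothing records where
    # it was interrupted.
    history = [
        {"role": "user", "content": "How do I make the sauce?"},
        {"role": "assistant", "content": (
            "First, combine the soy sauce, sesame oil and gochujang in a bowl. "
            "Next, grate the pear into the bowl..."  # the user only heard a fragment
        )},
        {"role": "user", "content": "What do I do first?"},
    ]
    # The next completion is generated against `history`, so from the model's
    # point of view every step above has already been delivered (and presumably
    # done), making "You've already combined the base ingredients" a likely
    # reply -- no recording or rigging required.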
That feels plausible to me.
You can trigger this type of issue by interrupting ChatGPT in voice mode and then reading the transcript.
The model doesn’t know you interrupted it, so it continued on, assuming he had heard the steps.
It seems extremely likely that they took the context awareness out of the actual demo and had the AI respond to predefined states, and then even that failed.
The AI analyzing the situation is wayyy out of scope here
So MetaAI is basically the dumb cousin of Siri? I didn't expect to ever write that.
this isn't cheating. the models are unpredictable. This product is going out the door this month, there is no reason to cheat.
> the models are unpredictable. This product is going out the door this month
I see a problem.
"unpredictable" and "doesn't work" are different things. As a user, I know it's not deterministic and I can live with "unpredictable" results as long as it still makes sense, but I won't buy something that works 50% of the time.
An LLM repeating the exact same response feels very staged to me.
Yeah, I just watched it again and I’m mostly confused why the guy interrupted what sounded like a valid response.
I wonder if his audio was delayed? Or maybe the response wasn’t what they rehearsed and he was trying to get it on track?
It was reading step 2 and he was trying to get it to do step 1.
He had not yet combined the ingredients. The way he kept repeating his phrasing it seems likely that “what do we do first” was a hardcoded cheat phrase to get it to say a specific line. Which it got wrong.
Probably for a dumb config reason tbh.
> I’m mostly confused why the guy interrupted what sounded like a valid response
I thought they were demonstrating interruption handling.
Because it was repeating what it had already described rather than moving on to the first step
I think he was just trying to get it back on track instead of letting it go on about something that was completely off
Adrenaline makes people do interesting things
I have a friend who does magic shows. He sells his shows as magic and stand-up comedy. It's both live entertainment, okay, but he is the only person I've ever seen use that tagline. We went to see him perform once and everything became clear when he opened the night.
"This is supposed to be a magic show," he told us. "But if my tricks fail you can laugh at it and we'll just do stand-up comedy."
Zuck, for a modest and totally-reasonable fee, I will introduce you to my friend. You can add his tricks (wink wink) to your newly-assembled repertoire of human charisma.
If your friend isn't already aware of Tommy Cooper [1], he's in for a treat.
[1]: https://en.wikipedia.org/wiki/Tommy_Cooper
He was so funny, people laughed when he died!
So, wait, is he just a shitty magician and a funny guy, or does he fail on purpose?
Haha. I honestly don't know. Which makes him...a great entertainer at least? The show was a real good time though.
Take this with lots of salt but I read somewhere that circus shows "fail" at least one jump to help sell to the audience the risk the performers are taking. My friend did flub his opening trick with a cheeky see-I-told-you and we just laughed it off.
He incorporated the audience a lot that night so I thought the stand-up comedy claim was his insurance policy. In his hour-long set he flubbed maybe two or three tricks.
The AI, or this person's friend?
The friend.
I bet they rehearsed a dozen times and never failed this badly live. Got to give them props for keeping the live demos. Apple has neutered its demos so much they're now basically 2-hour-long commercials.
The new Apple presentations are much more information dense, and tailored to the main (online) audience. They’re clearly better.
More dense, but less trustworthy. I don't think they would have pushed Apple Intelligence the way they did if there had been a live demo.
Live Apple demos were always held together with duct tape in the first place. That first "live" iPhone demo had a memorized sequence that Jobs needed to use to keep the whole phone OS from hard crashing.
During that first iPhone demo they also had a portable cell tower (cell on wheels) just off-stage to mimic a better signal strength than it was capable of. NYTimes write-up on the whole thing is worth the read [0].
0.https://web.archive.org/web/20250310045704/https://www.nytim...
That _was_ worth it indeed--thanks :)
There was one demo where Steve Jobs told everyone to turn off their WiFi.
Even with that, live demos are still far better than hour-long canned ones.
They also force the developers to make it work, under threat of being fired, and in the case of Steve Jobs' ire, of being yeeted into the sun along with their ancestors and descendants.
They are boring infomercials now. The live audience used to keep it from feeling too prepackaged.
You gotta keep your infomercials engaging:
https://www.youtube.com/watch?v=DgJS2tQPGKQ
Microsoft really nailed the genre. (Although I learned just now while looking up the link that this one was an internal parody, never aired.)
and so boring. I would take Jobs presenting a live demo than any of this heavily-produced stuff.
As much as it'll be "interesting" to see how models behave in real world examples (presumably similarly to how the demos went), I'm not convinced this is a premade recording like what seems to be implied.
I'm imagining this is an incomplete flow within a software prototype that may have jumped steps and lacks sufficient multi-modal capability to correct.
It could also be staged recordings. But, I don't think it really matters. Models are easily capable of working with the setup and flow they have for the demo. It's real world accuracy, latency, convenience, and other factors that will impact actual users the most.
What's the reliability and latency needed for these to be a useful tool?
For example, I can't imagine many people wanting to use the gesture writing tools for most messages. It's cool, I like that it was developed, but I doubt it'll see substantial adoption with what's currently being pitched.
Yeah, the behavior of the AI read to me more like a hard-coded demo, but still very much "live". I suspect his cutting it off was poorly timed, and that timing issue could have been amplified by the WiFi? Who knows. I wasn't there. I didn't build it.
So the live demo failed?
I was more clarifying my own hypothesis on what level of "live" this demo was, more so than explaining away the failure mechanics.
This appears to be a classic vision fail on the VLM's part. Which is entirely unsurprising for anyone who has used open VLMs for anything except ""benchmarks"" in the past two god damn years. The field is in a truly embarrassing state, where they pride themselves how it can solve equations off a blackboard, yet couldn't even accurately read a d20 dice roll among many other things. I've tried (and failed) to have VLMs accurately caption images for such a long time, yet anytime I check on the output it is blindingly clear that these models are awful at actually _seeing things_.
5-10 years and Radiologists will be out of a Job, just you wait and see.
I don't think it was rigged.
Having claude run the browser and then take a screenshot to debug gives similar results. It's why doing so is useless even though it would be so very nice if it worked.
Somewhere in the pipeline, they get lazy or get ahead of themselves and just interpret what they want to see in the picture. They want to interpret something as working and complete.
I can imagine it's related to the same issue of LLMs pretending tests pass when they don't. They're RL-trained toward a goal state, and sometimes pretending they reached the goal works.
It wasn't the wifi - just genAI doing what it does.
For tiny stuff, they are incredible auto-complete tools. But they are basically cover bands. They can do things that have been done to death already. They're good for what they're good for. I wouldn't have bet the farm on them.
I have a lot of difficulty getting Claude to understand arrows in pictures.
I tried giving it flowcharts, and it fails hard.
So much negativity.
I’m just excited that our industry is led by optimists and our culture enables our corporations to invest huge sums into taking us forward technologically.
Meta could have just done a stock buyback but instead they made a computer that can talk, see, solve problems and paint virtual things into the real world in front of your eyes!
I commend them on attempting a live demo.
> I’m just excited that our industry is led by optimists and our culture enables our corporations to invest huge sums into taking us forward technologically.
I am always baffled that people can be that naive.
It's a weird way to put it too, "our industry" and "our culture" enables "our corporations". They're not "our" corporations as a society, why should we be excited about their investments.
There's a cognitive dissonance between talking about capitalist entities that supposedly drive social and technological progress, and the repeated use of the collective "our" and "us". Corporations are not altruistic optimists aiming to better our lives.
He's the CEO of a multi-billion dollar corporation, promising technology that puts the livelihoods of millions of people at risk. He deserves every bit of scrutiny he gets.
> that puts the livelihoods of millions of people at risk.
what livelihood are these glasses putting at risk?
I was referring to AI/LLMs in general.
Do you think his algorithms have kind of, you know, sown the seeds of hatred and rage through our society?
Do you think all the lies and misinformation his products help spread kind of... get people elected who take away the aid which millions of women and children rely on?
Not blaming him for it all, we all play our part, but the guy has definitely contributed negatively to society overall and if he is smart enough to know this, but he cannot turn off the profit making machine he created so we all suffer for that.
The parent alluded to the dangers of AI; well, the algorithms that are making us hate each other and become paranoid are that AI.
Yes, the mocking, gleeful negativity really does make me concerned that this place is becoming Reddit. The fact that the highest upvoted post on this thread is just a link to Reddit isn't doing much to help me feel better. And I've been here for at least a decade, so I don't think this is the noob illusion.
But, I mean… it’s just not good. There is no real way to spin this as anything better than embarrassing.
But why even have a conversation at all? Who cares if Zuckerberg has a demo that goes awry? Does that satisfy your intellectual curiosity somehow? It certainly doesn't satisfy mine.
Most of the discussion here is about (i) what might have gone wrong, technically, and (ii) what this says about the ROI that Facebook and other US tech giants are getting on their AI projects.
I agree that one demo gone awry does not mean much in itself, but the comments here do rise above the level of Nelson Muntz.
I mean, I think it is notable that a massive tech company is blowing tens to hundreds of billions on this complete nonsense.
A topic that's notable but has little to discuss can get a lot of upvotes. A lot of the best stuff on this site has exactly that: lots of upvotes, little commentary, because the thing itself is notable. That's definitely not what happens on Meta threads. They usually get lots of votes, then lots of repetitive spam-like comments about how ethically bankrupt Zuck or Meta or social media or algorithms or whatever are. A new observation about the behavior would be interesting, but most of the comments are basically just spam. I could probably get an LLM to generate most of them with ease. Perhaps the worst part is, even if there is novel analysis, it's buried under an avalanche of "Zuck will grind you to dust" or whatever, repeated over and over again.
For a while that was okay, this kind of stuff was just contained in those threads. But it's started leaking out everywhere. Just spam like comments tangentially related to the topic that just bash a big company. That's the lowering of SNR that I find grating.
Oh, no, is someone being mean to the big company? :(
This is absolutely notable, and everyone should be concerned about it. Not so much the potential fakery, but the extreme deficiency of the actual product, which has had the GDP of a small country squandered on it. Like, there is a problem here, and it will have real-world fallout once the wheels fall off.
I've been here for over a decade. It has become very reddit like in the past few years.
I want to get into YC just to use and browse Bookface instead.
The signal to noise ratio certainly became worse.
You'll see the same folks spamming their hatred towards tesla/microsoft/meta/google over and over with zero substance other than sentimental blabbering.
I"ve not seen that honestly. I think you are looking for it satisfy your internal narrative you've created.
You haven't seen people, on this site, hating on Tesla?
i haven't seen 'spamming'
[dead]
It's a different world than it was ten years ago. Among the ways it's different are people are far more skeptical of billionaires, Big Tech, and capitalism generally. They're willing to cut them much less slack. This is one of the few ways that the world of today is better than the world of ten years ago.
[flagged]
Quick question: if there is a paid anti-AI movement, where do I send my invoice? May as well not leave money on the table.
I love this
I don't love your silly theory. It sounds like you're in denial, trying to cope with the fact that not everyone thinks LLMs are the greatest thing since flush toilets.
The idea that anti-AI posts on HN are PR hit jobs (paid for by..?) strikes me as conspiracy theory.
The simple reality is that hype generates an anti-hype reaction. Excessive hype leads to excessive anti-hype. And right now AI is being so excessively hyped on a scale I’m not sure I’ve seen in all the years I’ve been working in tech.
That explanation doesn't make any sense because it explains none of the facts
When people get something shoved in their face day in day out some of them react negatively to it. Is the concept really so outlandish?
No, but have you seen what people are like when they react negatively? Their behavior is angry and aggressive and unpredictable.
What needs to be explained is a group of people predictably all acting exactly the same way.
You're seeing conspiracies where there are none. A group of people all acting the same way is not suspicious when their actions are the thing that groups them together.
Your theory that there's an invisible hand that makes everyone spontaneously act the same is nonsensical. It hasn't been observed in humans nor in the wider animal kingdom.
> the people funding the anti-AI movement
You can't be serious.
Well the movement exists and they have funding so somebody is funding it.
Unless you think they run entirely on the barter system.
What movement? What funding?
I posted some links in this thread: https://news.ycombinator.com/threads?id=ants_everywhere&next.... Not sure how to link to the specific comment id.
Some of the sites have funding and programming.
Politico also reports that "billionaires" (they name a couple) are funding "AI doomsayers" and have created registered lobbying groups: https://www.politico.com/news/2024/02/23/ai-safety-washingto...
For the record I'm in favor of AI safety and regulating the use of AI. I don't know anything about the particular bills or groups in the Politico article though. But it's clear evidence that people with money are funding speech that pushes back against AI.
The funding and use of campaigns to amplify divisive issues is well known, but I'm not claiming this is a source of anti-AI funding. You may perhaps believe that AI does not count as a divisive issue and so there are no anti-AI campaigns through this funding model. I would find that surprising but I don't know of a source yet that has positively identified such a campaign and its sourcing. There were similar campaigns against American technological domination such as the anti-nuclear movement which received a lot of funding from the pro-nuclear Russian military during the cold war. And the anti-war movement which received a lot of funding from the pro-war Russian military during the Vietnam war. Similarly the US has funded "grass roots" movements abroad.
To be clear I'm not saying the anti-AI movement is similarly funded or organized. But it is clearly a movement (and its adherents acknowledge that) and it clearly has funding (including some from very wealthy donors). And they do all use similar stock phrases and examples and often [0] have very new accounts. Everything in the current paragraph is verifiable.
[0] by often, I mean higher than the base rate topic for HN. I don't mean more than 50% of the time or any other huge probability.
Bots bots bots... tearing down our stars is good business for a variety of vested interests. Don't let the bastards grind you down.
We are not bots, we just loathe historically bad-faith actors and especially with the current climate, we will take the opportunity of harmless schadenfreude where we can get it.
Oh please. This isn't like the old iPhone days where new features and amazing tech were revealed during live demos. Failure was acceptable then because what we were being shown was new and pushing the envelope.
Meta and friends have been selling us AI for a couple years now, shoving it everywhere they can and promising us it's going to revolutionize the workforce and world, replace jobs, etc. But it fails to put together a steak sauce recipe. The disconnect is why so many people are mocking this. It's not comparable.
All you're doing here is associating your hopes and dreams with grifters and charlatans.
They should be mocked and called out, it might leave room for actual innovators who aren't glossy incompetents and bullshitters.
But that’s the thing… it’s not a live demo…
I see no evidence of that. It seems like they tried to put the AI “on rails” with predefined steps and things went wrong.
So there was no AI. I know there’s a lot of confusion regarding the exact definition of AI these days, but I’m pretty sure we can all agree, this one time, that an “on rails” scenario ain’t it. Therefore, whatever it is they were doing out there, they weren’t demoing their AI product. You could even say it wasn’t a live demo of the product.
You can put the AI on rails just by prompting it. The latest models are very steerable.
System prompt: “stick to steps 1-n. Step 1 is…”
I can say confidently because our company does this. And we have F500 customers in production.
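To illustrate, here's a hedged sketch of the "on rails" approach (illustrative only, not Meta's actual setup; the step text and names are made up):

    # The system prompt pins the assistant to a fixed script; the model only
    # decides how to phrase each step, not what the steps are.
    SYSTEM_PROMPT = """You are a hands-free cooking assistant.
    Walk the user through EXACTLY these steps, one at a time, in order:
    1. Combine soy sauce, sesame oil and gochujang in a bowl.
    2. Grate half a pear into the bowl and stir.
    3. Brush the sauce onto the sliced steak.
    Never skip ahead. Only advance when the user confirms the current step is done."""

    def build_messages(conversation):
        # The system prompt rides along with every request; note that the model,
        # not application code, still decides which step it thinks the user is on,
        # which is exactly where a demo like this can derail.
        return [{"role": "system", "content": SYSTEM_PROMPT}] + conversation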
And a Fortune 500 company has never done anything stupid.
I love how they randomly blame the WiFi network, like anyone is going to buy it.
Somebody said the cooking guy was some influencer person? I noticed that many non-tech people often resort to this excuse, even in situations where it makes absolutely no sense (e.g., on a desktop with only ethernet, or with mic/speakers connected via cable). It's almost like they just substitute "bad wifi" for "glitch".
It's colloquial in the younger generations to use the term Wifi to actually refer to a WAN connection to one's home or building, regardless of Physical Layer Transport.
I often ask clarifying questions to people, to me its part of casual conversation, not an inquisition or anything (because my own behavior would be different if it was)
The vast majority of people say incoherent deflections instead of just saying “I don’t know”
I’m getting better at ignoring or playing along
It just happens in areas I least expect it
makes me sound like a high functioning autist, but I’m not convinced
It's almost certainly a joke. Everyone knows that the demo failed.
A reference to the 2010 iPhone 4 demo perhaps: https://www.infoworld.com/article/2297843/steve-jobs-wi-fi-m...
Maybe, but it really didn't read like a joke...
Certainly didn't seem like a joke.
It's definitely a joke. They were saying it again when the WhatsApp call failed. They were being sarcastic.
Pretty sure it's a meme, like blaming the WLAN cable.
*wifi cable
Bad idea to rely on WiFi for an important demo in a crowded environment. It would have worked fine in testing but when the crowd arrives and they all start streaming etc, they bring hundreds more devices all competing for bandwidth.
Zuck should have known better and used Ethernet for this one!
Ethernet-connected AR glasses?
Everything is always so cringe with Facebook and Zuck. It was always doomed to fail.
It's because Zuck doesn't actually believe in anything. Zuck's values, politics, and business goals change with the wind so everything that stems from them feels empty, because it's missing the true drive.
In contrast, nothing Steve Jobs said felt empty, whether we agreed or disagreed with what he was saying it was clear that he was saying it because he believed it, not because it's what he thought you wanted to hear.
CEOs (and other spokespeople) are actors paid to believe things convincingly in front of other people.
CEOs are paid to promote their company, yes, but that doesn't mean they must fake it. The other alternative is to actually believe what they're saying. I don't think Zuck does.
OK, then whose job is it to make important decisions and to define (and explain) the company's strategy? Is that also the CEO?
As you describe him Zuck sounds very much like the AI he's trying to sell.
Why are you comparing 2000s Steve Jobs to Zuck?
Felt like the best example of a true believer. I'd say a similar, but less clear, version would be Dario Amodei vs Sam Altman. I don't agree with either, but Dario comes across as a true believer who would be doing AI regardless of the current trends, whereas Sam comes across as a chancer who would be doing cryptocurrencies if those were still big, or social media if that were still the next big thing, evidenced by the fact that he did both of those but they didn't stick, so he moved on.
Jobs would have been doing consumer computing hardware whatever happened. Apple in the early days wasn't the success it is now, he was fired and went and started another company in the same space (NeXT).
Surely this is the end of Zuck?
Any decade now.
Never work with children, animals, and puppets.
https://tvtropes.org/pmwiki/pmwiki.php/Main/NeverWorkWithChi...
and AI
Should've downloaded more ram for the wifi to work better
More than likely the full response was kept as context despite being interrupted.
Notably though, the AI was clearly not utilizing its visual feed to work alongside him as implied
There is a second part that is equally bad, but with Zuck:
https://old.reddit.com/r/interestingasfuck/comments/1nkbqyk/...
This is what it has come to? This is artificial intelligence? Billions and billions of dollars spent to narrate a recipe? Something that can be written down on a piece of paper?
I have a copy of the classic "Joy Of Cooking" in the kitchen. It was a lot cheaper, works perfectly every time, and doesn't get ruined if (when) I spill foodstuffs on it.
The more you spill, the more the book starts to look delicious
BRB got a book I have to lick real quick.
I always wonder why they choose the stupidest shit for these demos. Like, to whom do they think they're advertising this?
To their peers, i.e. their billionaire golf buddies from the Fortune 500. They talk with each other and, I strongly suspect, propagate a whole set of alternative-reality ideas among themselves. Like this obsession with voice-activated and voice-controlled everything. Billionaire CEOs probably find it very convenient to pretend to multitask constantly and make voice recordings and commands while doing other CEO tasks or during endless meetings. After all, their human secretary can verify the information later without taking up their time. Meanwhile almost no one from my peer group or relatives uses voice activated anything really, no voice mails, no voice controls, no voice assistants. And I never see people on the streets doing that too.
> Meanwhile almost no one from my peer group or relatives uses voice activated anything really, no voice mails, no voice controls, no voice assistants. And I never see people on the streets doing that too.
Could also be that however your peer group uses things, isn't the only way that thing gets used?
For example, voice messages seem more popular than texting around me right now, at least in Europe and Asia, where people even respond to my texts over Whatsapp and Telegram with voice messages instead. I constantly see people on the street listening to and sending voice messages too, in all age ranges.
I don't think any of those people would need an AI assistant to recite cooking recipes though, but "voice as interface" seems to be getting more popular as far as I can tell.
Why you wouldn't just transcribe your message (which most keyboards and messengers support) instead of sending minutes worth of meandering audio full of "uhm" is beyond me. I use voice all the time (assistants, LLM, etc.) but voice messages can die in a fire.
> Why you wouldn't just transcribe your message
So, the obvious answer to me is that voice communications accurately include tone and inflection. But other than that, there are "edge cases" (I mean, they're more like "people") that make it more appealing, especially after Google made their keyboard transcription worse for the people who get the most use out of it (aforementioned "edge cases").
My dyslexic friend's experience with software transcriptions has changed recently. No longer can they say, "What time do I need to pick you up, question mark, I'm just leaving now, comma, so I might be a little late, period." and have it use the punctuation as specified. Now, it's LLM-powered and converts the speech without really letting the user choose the punctuation, except manually after it's been written out, which is difficult to impossible for both dyslexics and blind people.
(As a side note, if a person is an "edge case", it's actually that person's every-time case.)
I agree with you that voice messages can die in a fire. Send a text, or call. I do not want to listen to a voice message.
They don’t want to spend 30 min explaining domain knowledge required to understand a certain super specific case.
Instead they show the tech’s quality on a basic, lowest-common-denominator use case and allow people to extrapolate to their own cases.
Similarly car ads show people going from home to a store (or to mountains). You’re not asking there “but what if I want to go to a cinema with the car”. If it can go to a store, it can go to a cinema, or any other obscure place, as long as there is a similar road getting there.
But those are things cars make sense for. When would I stand in my kitchen with a bunch of random ingredients strewn about the counter wondering what to make with them and conclude that an LLM would have a good answer? And what am I supposed to extrapolate from that example? I guess they were showing off that the system had good vision capabilities? Okay, but generative AIs are notoriously unreliable, unlike cars. Even if the demo had worked, it would tell me nothing about whether it would help me solve some random problem I could think up.
A better analogy would be the first cars being advertised as being usable as ballast for airships. Irrelevant and non-representative of a car's actual usefulness.
In their world this is what they think people do
The sociopaths pushing this kinda crap don't live the same lives you or I do. They have people they pay to make decisions for them, or they pay people to do shit like buy their weekly groceries for them or whatever other stupid crap they're trying to sell as a usecase for these useless AI tools. That's why all these demos are stupid shit like "Buy me plane tickets for my trip", despite the fact that 99.9% of people need very specific criteria out of their plane tickets and it's more easily done with currently available tools anyways.
They literally think "What does a regular Joe need in their day-to-day?" and their out of touch answer is "I have all these ingredients but don't know what to cook" or whatever. It's obvious these people haven't spoken to anyone who isn't an ass-licking yesman in a looooong time.
Hey, that recipe is worth trillions of dollars of investment, the destruction of the natural environment and the displacement of huge numbers of talented and skilled people. Show some respect for our billionaire class.
[flagged]
> A Korean tasting dressing. It's 2025, anyone living in a modern country should probably be able to make something that tastes Korean with just a small amount of effort...
Lol are you serious?
Google exists. Finding recipes that give specific flavor profiles is not hard to find.
Using a recipe = being told how to make something.
What an awful, condescending attitude. No, not "everyone living in a modern country" can make Korean food without a recipe. And tools that reduce the barrier for learning and acquiring new skills should be applauded.
Almost half of Americans cannot cook today. And the number 1 cited reason is a lack of time.
That said, I agree with the grandparent that this isn't really a "killer feature". Nor am I interested in the product. For so many reasons.
A real example that would have resonated was asking “what can I make with these ingredients?” No one is asking how to make a specific thing when they already know exactly what ingredients they need. If they knew what ingredients they needed, they probably already had the recipe. It feels out of touch at a basic level.
People used exactly the same argument to negate the need for the internet, and later for mobile phones.
https://www.newsweek.com/clifford-stoll-why-web-wont-be-nirv...
[dead]
God, that's actually painful to watch. I can't believe I lasted two minutes.
Mark's definitely mastered optimizing for peak cringe factor while at 1.95T valuation.
They just need an emotionless android without conscience, who does whatever is in the best interest of raking in money. They don't need technological excellence. Whether people at his company technologically succeed or fail, what matters is, that the company processes all the PII and feeds the algorithms. The rest is just for show.
I think such an emotionless android would have diligently prepared numerous backup scripts, sets of lenses, actors, demonstrations etc. to cover any failure contingency, since the cost of that is infinitesimal compared to even a slight change in their brand value.
they already have one in the CEO position
Not to put words in the OP's mouth, but I think that was the joke.
Obligatory https://m.youtube.com/watch?v=eBxTEoseZak
Good god...
It's funny because he spent so much money on hair and clothing stylists, jewellery, BJJ coaching, surfing lessons, really made an effort to come across as "cool", and the end result is... you cannot fake who you are, and your actions are what define you and make up your character; he is a prime example of that. He cannot escape who he is.
> He cannot escape who he is.
One of the best CEOs in the world with about 20 years of experience at age 40? And also founded the company?
He’s doing pretty good. And if you’re talking about “image” he is a millennial archetype.
I wouldn't say he's one of the best CEOs. He's been "successful" by:
* selling an unhealthy addictive product
* burying research on its mental health impact on children
* engaging in anticompetitive behavior
Oh yeah, add stealing the original idea for facebook from the Winklevoss twins. I'll take being a loser if that's what it takes.
It's not about the harm you do, it's about the billions you have. But you seem to have a moral compass, you wouldn't understand ;-)
What courage and integrity is being demonstrated here?
Yes he's rich and influential and blah blah blah blah blah and he's also AN ENORMOUS FUCKING DORK with the intellectual depth of a half-empty bottle of salad dressing. For all his money I'd rather be me than him.
I’m a Millennial and I’m no lizardman, thank you very much.
Eh, I wouldn't want to have a beer with him or leave my dog with him. He's doing well financially but that's about it.
[dead]
I was going to say that’s two minutes I won’t get back (and I won’t) but, ya know, schadenfreude.
It's kind of like Peep Show, where the writers tried to engineer the most awkward social situations, only without the jokes.
Tangent: if you like cringey social awkwardness comedy (not my usual cup of tea, but in this case it's extraordinary, and hilarious), try "I Think You Should Leave".
"Brian's Hat" is the 1st one I saw and maybe the best: https://youtu.be/LO2k-BNySLI?si=qEX7STkSOeCVZtK-
Also "Hot Dog Car" https://youtu.be/WLfAf8oHrMo?si=jz5EKwjJZm1UMZau
Nathan for You is almost physically painful to watch, the cringe is so intense I can only take a few seconds at a time.
Canadians do it best. https://youtu.be/tXgfzxXUVsc?si=vBXu20TewUldSBJB
That he certainly is good at, but I loved his interview with Fabio, also that Monica Lewinsky one.
Nathan Fielder is a genius.
The Rehearsal is less in-the-moment cringe and more soul-soaking cringe. Amazing stuff.
Also a tangent, but Microsoft was the OG for corporate cringe.
https://www.youtube.com/watch?v=1cX4t5-YpHQ
It goes back waaay further than that: https://www.youtube.com/watch?v=bLuC4yZk7us
Everyone forgets about this cringe from Microsoft, but it is oddly endearing to me too.
https://www.youtube.com/watch?v=Zww2ivWdLas
Developers developers developers developers!
One of the best mashups on youtube came from this. https://www.youtube.com/watch?v=KMU0tzLwhbE
Enjoy. :)
Oh my god.
How strong does a company's reality distortion field have to be for people to think your friends are going to want to come over to play with a new version of Windows?
I mean, why not "Let's all have wine and cheese and do root canals on each other!"?
It was a different time.
I honestly was excited about Windows 95. Win98 was underwhelming, and WinME was a joke that I never bothered to install on my own machines. Win2K brought back some of the excitement, but not much.
Then Vista came out, and it was a total flop at first. Win7 fixed most of those mistakes, but the damage was done. Vista basically killed any chance Microsoft had at building excitement for an OS.
FWIW, I think the last macOS version that I was really looking forward to was High Sierra.
Here's one of my favorites, of Lars doing the Wave dance on stage to ad-lib over connectivity hiccups. For some reason it evoked a lot more empathy from me...
https://youtu.be/v_UyVmITiYQ?t=19m35s
I was on the Wave team! Our servers didn't have enough capacity, we launched too soon. I was managing the developer-facing server for API testing, and I had to slowly let developers in to avoid overwhelming it.
<Waves to you>
Neat, thanks for sharing this tidbit of history. Hey, what did the team think of the decision to build it on GWT at the time? (From the outside, seemed like an enabling approach but a bit like building an engine and airframe all at once).
Hm, I didn't work on the frontend but I don't particularly remember griping..GWT had been around for ~5 years at that point, so it wasn't super new: https://en.wikipedia.org/wiki/Google_Web_Toolkit
I always personally found it a bit odd, as I preferred straight JS myself, but large companies have to pick some sort of framework for websites, and Google already used Java a fair bit.
Wave was extremely cool and I wish it had stuck around. Hope it was as fun for you to work on as it was for us to use.
It was fun! Now we still see Wave-iness in other products: Google Docs uses the Operational Transforms (OT) algorithm for collab editing (or at least it did, last I knew), and non-Google products like Notion, Quip, Slack, Loop from Microsoft, all have some overlap.
We struggled with having too many audiences for Wave - were we targeting consumer or enterprise? email or docs replacement? Too much at once.
The APIs were so dang fun though.
There are such long gaps after each question too. Apparently the sci-fi future will be laggier than expected.
This will also not change much, since they want you to use their centralized services for data collection, not local or onsite processing. So you will always have roundtrips and shared resources. For IO this is pretty unacceptable, people get annoyed by millisecond delays.
robozuck was also having wifi problems
Big tech has spent $155bn on AI this year. It’s about to spend hundreds of billions more https://www.theguardian.com/technology/2025/aug/02/big-tech-...
Wow. That money would repay all public debt of my country (Czechia). Or rebuild a third of Ukraine.
Should have just hired a korean cook instead of spending billions of dollars to hire some AI dude to come up with an app to narrate a korean recipe.
Maybe it's a 5D chess move to generate investor pressure on them to not spend that much.
Nah, more like a 1D chess move. Investors will pay them to invest in AI, so invest in AI, make the stock go up, sell, and leave the dumb investors holding the bag.
2D chess if they're smart: start a new company that competes with the one they just sold to dumb investors. Jack Dorsey is particularly fond of this move.
They classify a lot of it as R&D and write it off on their taxes. Taxpayers foot the bill.
> They classify a lot of it as R&D and write it off on their taxes. Taxpayers foot the bill.
Taxpayers do not "foot the bill" for corporations reducing their tax obligations via "write-offs".
See: https://accountinginsights.org/what-does-write-it-off-mean-f...
"That's not a write off!" https://www.youtube.com/watch?v=aCP27_vquxQ
Obligatory -- "You don't even know what a write-off is. Do you?"
https://youtu.be/XEL65gywwHQ
Something I like about that scene is Seinfeld (the actor) clearly struggles not to smile at Kramer's delivery of the punch line despite that he (the character) was supposed to be irate with Kramer.
Heh. Well, Kramer himself...was often less amused...
Thread/video: https://old.reddit.com/r/television/comments/7lvvg5/michael_...
Probably that's why it feels like half the actual episode takes were like that, because they couldn't keep from breaking!
If you're talking about the R&D provisions in the OBBBA, that only changes the schedule of the deduction (immediately vs. over several years). R&D, like most business expenses, was always deductible. Whether it's prudent or not isn't a factor.
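A quick back-of-the-envelope illustration of "schedule, not amount" (made-up numbers, ignoring details like the mid-year convention):

    rd_spend = 100_000_000                    # hypothetical annual R&D spend
    tax_rate = 0.21                           # current US corporate rate
    immediate = rd_spend * tax_rate           # ~$21M tax reduction, all in year 1
    amortized = (rd_spend / 5) * tax_rate     # ~$4.2M per year over 5 years
    # The total reduction is the same either way; only the timing changes,
    # which is why a "write-off" doesn't shift the cost onto taxpayers.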
This is the best thing I’ve seen ever. It makes me so happy I can’t even tell you.
As bad as I thought that was going to be, it was worse. And I set the bar very low for anything involving Zuck. #MustWatch
If not for these epic failures, I wouldn't even know they had a demo. Guess negative marketing is still marketing; it worked.
This is like "The Office" (Original UK version with Ricky Gervais as David Brent) with $2T market cap company.
There is already Hooli!
OT, but thanks for linking to old.reddit.com instead of www.reddit.com. The new interface is an abomination.
Would be good to change the OP link to this - it's the same clip but plus a bit more.
I really missed seeing Zuck sweat.
Reminds me of Jin Yang and his 8 ways to cook an octopus, on silicon valley:
https://m.youtube.com/watch?v=ltFB4WBdDg4
Funny how VR was the hyped thing when the show was filmed, and today the hype has been replaced wholesale by AI.
Also funny how Meta has been trying to capitalise on both things.
In fairness to Zuck, he spends most of his time hunting with his crossbow these days.
Every added drop of his flop sweat during this disaster just gives me that much more life. Amazing.
"A man's reach should exceed his grasp."
It's not a fail, but the correct demonstration of the quality of the service
I'm sure there were engineers going "yo guys, don't do this, it's not ready to show" and ... naw, the yes men "won".
I hope they keep doing live demos. This is much better than prerecorded videos.
This is like a Black Mirror episode. Also, is it a conscious decision to make the TTS sound so robotic?
Maybe it's modeled on Zuck's robotic voice
It's an ad network with an attached optional pair of glasses.
It's the platform Zuck always wanted to own but never had the vision beyond 'it's an ad platform with some consumer stuff in it'.
I am super impressed with the hardware (especially the neural band) but it just so happens that a very pricey car is being directly sold by an oil company as a trojan horse.
We all know what the car is for unfortunately.
I can't wait to see what Apple has in store now in terms of the hardware.
Someone would have to be dumb to give facebook access to collect data from everything they see and hear in their life combined with the ability to plaster ads over every available surface in their field of view. They'd have to be beyond stupid to pay for it.
Don’t look up how many users Meta has if you wish to maintain your sanity
[dead]
Sad thing is that even those who won’t buy it will have their privacy infringed upon. We’re all Zucked
Why the bleep do they still rely on wifi at conferences like this?? I always insist on a wired connection on its own, dedicated, presenter vlan. Is this running on wifi-only glasses or something? Is that the only medium they can present the tech on? Could they have shielded the room the guy's in?
You can say fuck on the Internet. It's fine
The main takeaway for me was: do not interrupt AI while it's giving exposition. No matter how trivial it may seem. It will throw it off-kilter.
Deja Vu on this demo -- I've seen this one before,
https://www.youtube.com/watch?v=TYsulVXpgYg
This is like something right out of the show Silicon Valley. You couldn’t have scripted a more cringe-worthy demo.
It’s like they mashed up AI and the metaverse into a dumpster fire of aimless tech-product gobbledygook. The AI bubble can’t pop soon enough so we can all just get back to normal programming.
This does not deter me from possibly buying one. The concept is pretty cool and appealing to those who want a distraction free lifestyle. Even if there's a screen in front of you at all times, at least you won't need to hold something in your hands to be able to operate it. That alone is a significant win.
Right, strapping a screen to your face is less distracting then having a device you can turn off and shove in your pocket...
My point is that the device that you keep in your pocket is one that you must take out with your hand and operate with your hand such that you cannot perform its primary functions without using hands. It's true that the Ray-Ban Meta Display can be controlled with hand gestures but doing so is apparently optional. Being able to control and consume content from a smart device while having your hands free to multitask is a big deal.
There are so many activities and professions where your hands get dirty and touching a smartphone without washing them would be a bad idea. An auto mechanic could use these glasses to look up information about things they see inside of an engine without having to clean the oil from their hands. A chef could respond to messages about their food delivery without having to drop what they're doing and go sanitize. Anytime I do dirty work outside, I can use this to access smart features without the risk of dirt filling my smartphone case, my smartwatch getting destroyed in a tight situation, or drenching either of them in sweat.
Furthermore, a phone (or a smart watch) is not meant to be used at face level, meaning folks typically look down to use them, and this can lead to extended periods of bad posture resulting in head, neck, and spine problems. My X-ray shows I have bone spurs on the vertebrae of my neck because I look down at screens too much (according to my chiropractor). A smart device that's designed to be used in a way that aligns with good posture habits is absolutely needed.
I hope smart glasses take off, and I commend Meta for taking them this far.
None of those things actually have anything to do with being less distracted. If you have a screen in your face throwing all the notifications and bullshit at you that your phone normally does, it is going to be far more distracting. And humans are hilariously bad at multitasking. Taking your hands out of the equation doesn't magically make you better at it. Jesus we think people using their phones while walking or driving is bad, these things are gonna be a disaster.
If people are so addicted to their phone or smart watch that it's giving them back/neck problems, the solution isn't glasses. The solution is to be less addicted to your god damn devices.
Outside of a few niche use case I don't think tech like this will be anything but a net negative.
What nobody seems to be chasing (I assume because screens are flashier) is smart headphones. Imagine you could navigate a webpage with hand gestures and have it read to you while you walk or do chores. In some ways, this is way easier than head-mounted graphical displays; portable audio is already a solved problem. The problem that still needs to be solved is good quality TTS on a portable device, but honestly that seems way more tractable than portable HUDs.
Give them a break, small indie company
I do wonder what it will take to replace the default voice command handler. Will it be as locked down as possible or will Meta be happy with any adoption at all?
These demos whether good or bad go in meta's favor I think
Successful demo? sweet! people will rave about it for a bit
Catastrophic failure? sweet! people will still talk about it and for even longer now!
LMAO. Billions of dollars for this, seriously? Makes the BSOD in Bill Gates' Windows 98 demo look professional.
And LMAO at all the companies out there burning money to get on the AI train just because everyone else is.
How exactly is this "AI recording plays before the actor takes the steps"?
This place really is Reddit these days, so I guess the link is apt.
This is how you know AI will not live up to the hype. You have the highest paid people, at one of the richest companies in the world, building the AI and with unlimited access to the best models and the most talent.
And they still can't pull off a keynote.
So then... what does AI have to offer me? Because I would have thought, as Sam Altman put it, having an expert PhD level researcher in all subjects in my pocket could maybe help me pull off a tech demo. But if it can't help them, the people who actually made the thing, on their very high stakes public address where everything is on the line, then what's it supposed to do for the rest of us in our daily lives?
Because it seems more and more, AI is a tool that helps you stage your own very public humiliation.
What part is supposed to be prerecorded?
a few moments earlier... https://en.wikipedia.org/wiki/The_Mother_of_All_Demos
The interpretation of this is pretty dumb since it's not a recording.
Forgot to reset the context after a test run? :p
That Wi-Fi, man, it's always trouble.
Why has Meta not been able to produce one proper keynote in their 21-year history? At this point it must be on purpose, just to be relatable or something.
Still better than the pre-taped Apple events. Happy to see these products in action.
Thing is, even if this had _worked_, it would still have been, ah, a bit shit. Absolutely farcical.
AI is hot trash. When will this river of garbage stop?
When billionaires stop fantasizing about AI allowing them to rid themselves of the filthy peasant class which keeps feeling entitled to take even the smallest fraction of their income from them just because they're also doing all the actual work that makes that income possible.
What passes for AI is just good enough to keep the dream alive and even while its usefulness isn't manifesting in reality they still have a deluge of comforting promises to soothe themselves back to sleep with. Eventually all the sweet whispers of "AGI is right around the corner!" or "Replace your pesky employees soon!" will be drowned out by the realization that no amount of money or environmental collateral damage thrown at the problem will make them gods, but until then they just need all of your data, your water, and 10-15 more years.
[flagged]
What, pray tell, is the AI argument?
That sounds a bit too much like "this is good for Bitcoin"
Your vague-posting and straw man arguments are not very good.
There's a simple explanation that isn't 'prerecorded'. I'd be very happy to accuse Meta of faking a demo, but 1) that's just a weird way to fake a demo, and 2) the effect has an easier explanation.
You ask the AI how to do something. The AI generates steps to do that thing. It has a concept of steps, so when you say 'back' it goes back to the last step. When you ask how to do something, it finishes explaining the general idea and moves on to the first step. You interrupt it. It assumes it already went through the first step and won't let you go back.
The first step here was mixing some sauces. That's it. It's a dumb way to make a tool, but if I wanted to make one that would work for a demo, I'd do exactly that. Have you ever tried any voice assistant guiding you through something? Convincing Gemini that something it described didn't actually happen takes a direct explanation of 'X didn't happen', and even then it doesn't work perfectly.
It still didn't work, and it absolutely wasn't a Wi-Fi issue, and lmao, that's the technology of the future from a $2T company. It just doesn't seem rigged.
AI: "You've already combined the base ingredients."
Except, no. He hadn't.
Step 0: You will be making Korean steak. Step 1: Mix those ingredients. Step 2: Now that you mixed those ingredients, do something else.
The system started doing Step 1, believed it was done, so it moved to Step 2, and when it was asked to go back, it kept going back to Step 2.
Step 1 being Step 0 and Step 1 combined also works.
Again, it's also a weird way to prerecord. If you're prerecording, you prerecord all the steps and rehearse with them prerecorded. I can't imagine anyone going through even a single rehearsal with prerecorded audio and not figuring out how to keep it in sync; we have the technology.
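To make the failure mode concrete, here's a toy sketch of what I'm describing (entirely made up, obviously nothing like Meta's actual code):

    // Toy version of the hypothesized bug: the assistant marks a step as
    // done the moment it starts reading it out, so an interruption leaves
    // the pointer past step 1 and "what do I do first" never rewinds.
    const steps = [
      "Combine the base sauce ingredients.",
      "Grate a pear and add it to the sauce.",
    ];

    let current = 0;

    function readNextStep(): string {
      const line = steps[current];
      if (current < steps.length - 1) current += 1;    // advances even if the user interrupts
      return line;
    }

    function whatDoIDoFirst(): string {
      // Bug: `current` is never reset, so every retry gives the same wrong answer.
      return `You've already combined the base ingredients, so now: ${steps[current]}`;
    }

    readNextStep();       // gets interrupted mid-sentence on stage
    whatDoIDoFirst();     // -> "You've already combined the base ingredients, so now: Grate a pear..."

No prerecording needed; a step pointer that never moves backwards produces exactly the repeated wrong answer we saw.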
I mean, that could be easily attributed to, whisper it, ‘AI’ being a bit shit.
I'd figured it performed image recognition on the scene visible to it, then told the language model it could see various ingredients including some combined in a bowl.
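Roughly what I mean, as a made-up sketch (the scene summary and prompt format are pure guesses, not anything Meta has described):

    // Guessed pipeline: a vision model summarizes the camera frame and that
    // summary is prepended to the language-model prompt. If the summary
    // wrongly reports a bowl of already-combined sauce, the model will
    // naturally skip straight to the next step.
    const sceneSummary =
      "Visible: soy sauce, sugar, garlic, sesame oil, and a bowl of combined sauce ingredients."; // wrong!

    const prompt = [
      "You are a cooking assistant walking the user through a Korean BBQ sauce.",
      `Camera scene: ${sceneSummary}`,
      "User: What do I do first?",
    ].join("\n");

    // Fed a prompt like this, pretty much any model would reply something like
    // "You've already combined the base ingredients, so now grate a pear...".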
Legs! Part II
So much FOMO, hopium and fraud in the space
We'll be talking about how obvious it all was 20 years from now
Maybe it’s the new Silicon Valley trend to fuck up demos so people talk about it more?
You know there's no such thing as bad publicity...
[dead]
Yet Meta stock is at an all-time high.
The mocking, gleeful negativity here concerns me. I am worried that with some of these more polarized topics that the discussions on HN are becoming closer to those on Reddit. The fact that the highest upvoted post on this thread is just a link to Reddit isn't doing much to help me feel better. And I've been here for at least a decade, so I don't think this is the noob illusion.
I have no illusions about Zuckerberg. He's done some pretty bad stuff for humanity. But I think AI is pretty cool, and I'm glad he's pushing it forward, despite mishaps. People don't have to be black or white, and just because they did something bad in one domain doesn't make everything they touch permanently awful.
This thread [1] on Zuckerberg from 7+ years ago doesn't look too different. Top comments saying it's "pretty cringe-y", another posting an image from Reddit and some "Twitter bingo cards". The nature of the situation doesn't really offer much for deep analysis, but the discussion yesterday [2] on the product itself seems fine to me. You might disagree since people are more skeptical rather than being glad Meta is pushing it.
[1] https://news.ycombinator.com/item?id=16803775
[2] https://news.ycombinator.com/item?id=45283306
I don't think that's a fair comparison at all. The Cambridge Analytica stuff was probably some of the ethically worst stuff Zuckerberg and Meta have ever done, and they absolutely deserved to get raked over the coals for it. AI glasses and VR are nowhere near the same ballpark. In fact, that the two discussions are tonally similar seems to support my argument.
The discussion yesterday was fine. If that was the only conversation we had, I wouldn't be worried.
I'm surprised there isn't more. Everything this person has touched has made life that much worse for humanity as a whole. He deserves every ounce of criticism and mockery, more so because he makes himself out to be this savior figure. We should sneer at every attempt of theirs (and others') at awful AI, because it's lighting this world on fire. The popping of the bubble cannot come soon enough.
The point of the video isn't even AI, it's VR.
You just don’t seem to understand. Mark wouldn’t hesitate to grind you down to the last atom in order to extract every last bit of value out of you. And you defend the guy because he gave you freebies, or something. I have no words.
These are the sort of people repeatedly falling for the most obvious scams
This is exactly the behavior I'm describing.
>becoming closer to those on Reddit
People are people. If you have two communities that anyone can join, eventually the only difference between them (if any) will be the rules.
HNers lamenting that HN is becoming more like Reddit is one of the longest standing HN traditions: https://news.ycombinator.com/item?id=1023252
Sure, but check my account age. It's not like I joined the site 6 months ago. In fact, I was around when that comment was made.
The failures on stage were kind of endearing, to be honest, especially the one with Zuck. Plus the products seem really cool, I hope I'll be able to try them out soon.
Zuckerberg has negative charisma, it's painful to watch...
Jobs handled this so much better; while he's clearly pissed, he doesn't leave you cringing in mutual embarrassment. Goes to show it isn't as easy as he makes it look!
See: https://www.youtube.com/watch?v=1M4t14s7nSM https://www.youtube.com/watch?v=znxQOPFg2mo
This is much better? He tossed the camera aggressively to the other person, and then made a snide comment, and that's better than blaming wifi?
Jobs was one of a kind. He had that aura that Bill G et al envied. He admitted so in a video that can be found on YT.
Yet even Bill G handled public failure better than Zuck: https://m.youtube.com/watch?v=jGwy4sb9aO8
that clip reminds me of how Sundar reacts to these things.
I mean, Bill wasn't figuring out how to get back at girls that rejected him in college, lol.
Zuck carries that energy no matter what he does nor what amount of wealth he amasses.
Jobs was a clear communicator who emphasized user-friendly products in aesthetically pleasing boxes. If Silicon Valley weren't the most obtuse place on earth, he wouldn't have stood out nearly as much.
I think you're underestimating the effort necessary to simplify.
Most humans I have encountered, particularly the book-smart ones, are absolutely horrible at this. It really takes a concerted desire to be disciplined and focused to do it well. Unsurprisingly, Jobs spoke a lot about focus and discipline as the foundation of his success.
These people expect perfect (made-up) answers in their "tell me about a time" interviews.
Endearing is great when you're trying to sell a heartfelt, homemade piece of art. It clashes when a trillion-dollar company is trying to pretend this product can replace entire sectors of human labor.
Zuck and "endearing" do not belong in the same galaxy, let alone the same sentence.
Yeah. I'm impressed we have any sort of waveguide display on sale commercially this year.
It's well known that Meta AI is shit. But I could probably build an app that can run this demo in an afternoon. The glasses part here is insane, and I don't know why everyone is fixated on the tacky AI part. It's like if I invented the car and you complained that it's really hard to crank the windows down. Be happy it's even there!
Probably because we've seen goofy-looking AI glasses before.
Google Glass was released in 2013, the Snapchat Spectacles date back to 2016. Meta's glasses might be better (at first glance, I honestly can't tell), but they aren't some kind of revolutionary product we have never seen before.
The innovative part here is supposed to be the AI demo. That clearly flopped. So what's supposed to be left?
Right. I just wonder why nobody talks about the risks of bringing camera-equipped glasses to the masses. This is mass surveillance at its finest. Without the camera, I would say it's a good phone replacement. But considering they're trying to make everyone wear a camera on their glasses, it's clear they don't care.
Yeah, for real, I don't even care about the LLM/AI part, but the hardware looks really cool.
One important thing to note: the demo didn't fail! (Or at least not in the way people usually think.)
> You've already combined the base ingredients, so now grate a pear to add to the sauce.
This is actually the correct Korean recipe for bulgogi steak sauce. The only missing piece is that the pear has to be Pyrus pyrifolia [1], not the usual pear. In fact, every single Korean watching the demo was complaining about this...
[1] https://en.wikipedia.org/wiki/Pyrus_pyrifolia
Except that he hadn't already combined the base ingredients.
Maybe I'm missing something; there was no mention of what the base ingredients were or how to combine them?!
In my understanding the demo was halted because of the apparently nonsensical recipe, but I wanted to point out that the recipe itself was indeed correct.
The demo was halted because when the AI was asked how to start, it kept assuming that some of the process was already complete and was jumping to the middle of the process (indicating that it was incorrectly analysing the scene).