Google’s AI thinks I left a Gatorade bottle on the moon

(edwardbenson.com)

352 points | by gwintrob a day ago ago

187 comments

  • simonw a day ago ago

    The linked article describes an attack against NotebookLM, which is limited to people who deliberately create a Notebook that includes the URL of the page with the attack on it.

    I had a go at something a bit more ambitious a few weeks ago.

    If you ask Google Gemini "what was the name of the young whale that hung out in pillar point harbor?" it will tell you that the whale was called "Teresa T".

    Here's why: https://simonwillison.net/2024/Sep/8/teresa-t-whale-pillar-p...

    (Gemini used to just say "Teresa T", but when I tried just now it spoiled the effect a bit by crediting me as the person who suggested the name.)

    • Lockal 15 hours ago ago

      There are (at least) 2 completely different public endpoints called as "Gemini":

      1) https://gemini.google.com/ - this one just searches in Google with current language/region/safe-browsing settings and personal adjustments and rewrites top search results as an answer. Generative capabilities are basically not used.

      2) https://aistudio.google.com/ - here you can select specific version and generate response with LLM. Retrieval Augmented Generation (i. e. Google Search) is not used.

      I suppose you used #1, that's why you cave the correct result. #2 fails. There is a huge group of question where you can immediately find the answer, but LLM struggles. Another question (as example) is "What was the intended purpose of the TORIFUNE satellite in The Touhou Project?".

      OpenAI has something similar, providing https://www.bing.com/chat for RAG and https://chat.openai.com for an actual LLM.

      • simonw 7 hours ago ago

        Yes, in this case I meant the Gemini search-enabled user-facing product, not the model itself.

    • fastball 18 hours ago ago

      Has anyone else named the humpback? If not, isn't "Teresa T" its actual name? As the first person to bother, you get dibs.

      • simonw 7 hours ago ago

        I saw one news story that called it "Pillar".

    • DrAwesome 19 hours ago ago

      Interesting! I got no citation/link until I clicked the "Double-Check Response" button, it just replied "The young whale that hung out in Pillar Point Harbor was named Teresa T."

      One of the drafts had a little more: "Teresa T is the name of the young humpback whale that was spotted in Pillar Point Harbor. She made headlines in September 2024 when she was seen swimming near the shore, drawing crowds and causing excitement among local residents."

    • hackernewds 20 hours ago ago

      Says Teresa T for me, but also links your article

      • jadtz 19 hours ago ago

        For me the response was just:

        ```The young whale that visited Pillar Point Harbor in 2024 was named Teresa T.

        It was a humpback whale that ventured into the harbor, likely by accident.```

    • hackernewds 20 hours ago ago

      Google employee read your comment and quickly fixed it OR Gemini read your comment and quickly fixed it.

      • input_sh 19 hours ago ago

        Or it's just non-deterministic, like with every LLM.

  • alex-moon 18 hours ago ago

    I write fiction sometimes and I've got this story I've been working on which has languished by the wayside for at least a year. Whacked it into the podcast machine. Boom. Hearing these two people just get REALLY INTO this unfinished story, engaging with the themes, with the characters, it's great, it makes me want to keep writing.

    • tivert 12 hours ago ago

      > Hearing these two people just get REALLY INTO this unfinished story, engaging with the themes, with the characters, it's great...

      Except they're not people, and they're not actually engaging with anything. It's all literal bullshit.

  • daniel_iversen a day ago ago

    Isn’t this just like SEO where you can also try and trick the crawlers? Only difference is that it feels more serious with AI, it’s more realtime, and the AI engines aren’t always smart enough with anti-duping capabilities?

    • reportt a day ago ago

      Also could be causing user informational dissonance. You are potentially reading the "FireFox Version" of the site, and your NotebookLM is chomping away on the "AI Version" of the site, and they can be wildly different. And you won't even know because you don't see the "source" of the "AI Version". What are we gonna do, upload everything ourselves, manually?

      • schiffern 19 hours ago ago

        How about giving humans the ability to read the AI version? In my browser I can already select different page styles (eg viewing the print version), so this doesn't seem too impossible.

    • amelius 18 hours ago ago

      Yes it's a rather boring attack, and Google can have this fixed in no time.

      • dartos 18 hours ago ago

        I feel like you can say the same about every sort of prompt manipulation attacks, but they’re still around.

    • fastball 18 hours ago ago

      I doubt the LLM version is any more realtime.

    • dartos 18 hours ago ago

      Yeah, it kind of reinforces my theory that LLMs are essentially search algorithms. They’re searching a compressed version of what they were trained+context.

  • sho a day ago ago

    I am very confused. Is this talking about NotebookLM (https://notebooklm.google.com/) or NotebookLLM (https://notebookllm.net/) or both? Something else? The article appears to consistently use LLM but link to LM, but the LLM site I linked has a podcast generator?

    One of these projects has to change their name!

    • FreakLegion a day ago ago

      It's talking about NotebookLM, which recently added podcast generation and has been making the rounds for the last week or so. https://news.ycombinator.com/item?id=41693087

      NotebookLLM was set up two days ago, presumably by "entrepreneurs" eager to monetize all the free fun people have been having with podcast generation in NotebookLM.

      • imjonse 21 hours ago ago

        Noone said you cannot reuse the tailwind/nextjs template you used for crypto hustling if you genuinely feel you can move humanity forward.

      • sho 21 hours ago ago

        Yeah, I figured it out. Doesn't help that the author constantly refers to it as NotebookLLM.

        The .net version is really poor quality by comparison

  • jrm4 a day ago ago

    FWIW, had a pleasantly surprising experience with this podcast thing. I tried it out on a few little blogposts I wrote and I was like, hmm cool. Showed my 8 year old son how it was referencing things I wrote.

    And he was ON IT. Like, he ran to his room and grabbed a pencil and paper and put down an essay (okay about 6 or so sentences) about Minecraft, had me type them in, and ran the Notebook, and now he's just showing off both to EVERYONE.

    (Yes, he understands it's not real people.)

    • boredtofears 20 hours ago ago

      Can't help but think that your son and his peers are going to fundamentally use AI in such a different way than we do now and do a much better job of understanding it's constraints and using it to it's full potential.

      • actionfromafar 19 hours ago ago

        I hope so I guess, but, using it's potential to do what? Our globally connected supercomputers in our pockets are already being used to watch commercials interspersed with videos who mostly are product placement. /yay

        • skeaker 9 hours ago ago

          It's impossible to say. If we knew that now, then the next generation wouldn't be doing "something different" on a definitional level, because we'd be doing it already.

      • tessierashpool 20 hours ago ago

        this is a popular myth but never lines up with reality. studies pretty consistently find that kids are worse at understanding technology’s limitations.

        maybe because they don’t have enough prior experience to compare “the new way” with any given old way.

        • Vampiero 19 hours ago ago

          the studies may be flawed in that they look at the wrong age range. I'm sure many people on this site were born between 1990 and 2000. That generation knows how to use computers innately because they lived the most important part of the evolution of the consumer desktop as well as the transition that saw everything and everyone move to the internet. Before all the simplifications and streamlined UIs. Before all the assistants. Before every problem was already solved by someone else.

          I imagine that an AI-native generation will be the same, but with respect to AI.

          • zmgsabst 19 hours ago ago

            I’d argue the people born 1970-1990 are stronger at that than those born 1990-2000.

            I think you have the age range of people who wrote their own MySpace pages versus those in pre-structured gardens like Facebook slightly wrong.

        • dartos 18 hours ago ago

          What studies? How long do they run for?

          I figured you’d at least need to study the same group for 10 years as they grow up to really tell.

          Obviously a 12 year old might not understand the limitations of a technology, but give that 12 year old 10 years of living with it and they’d be better than their parents.

      • hackernewds 20 hours ago ago

        why can't we do the same?

        • Vampiero 19 hours ago ago

          Because kids can afford to spend 14 hours a day playing with an AI. You can't.

          • IshKebab 19 hours ago ago

            I can if I'm asking ChatGPT how to write Makefiles or whatever all day :-D

        • valval 19 hours ago ago

          If I had to defend GP’s argument that I didn’t make, I’d say something along the lines of our fundamental understanding of the world is built on other premises than it will be for the next generation.

          • boredtofears 8 hours ago ago

            Yeah, I’m not making much of a substantial argument, it’s very much an unqualified intuition at best.

            Right now there are millions of high school students tweaking and testing different inputs against LLMs for very real consequences: their grades.

            Meanwhile I barely trust LLMs enough to write a relatively inconsequential piece of code.

            If this version of AI is the real deal, the kids that are really depending on it are going to figure out the breakthroughs, not me.

  • hansvm a day ago ago

    AI is kind of bad at searching the web right now anyway. I've found myself having to waste tokens forcing models to not do so just to achieve the results I actually want.

    • lolinder a day ago ago

      Perplexity is actually very good at web searches. I'm leaning on it more and more for technical queries because it saves substantial time vs Google and actually gets it right (as compared to ChatGPT 4o, which is wrong ~50% of the time in my queries).

      • dimitri-vs 16 hours ago ago

        I've had the opposite experience with things I already kinda know the answer to and just want to find the source. It's pretty much 50/50 if it selects a high quality source or some random website thats in the top results for whatever search query it cooks up.

      • shombaboor a day ago ago

        ive been using perplexity more and more too. I respect those attribution/citation bubbles they give (1) (2) etc and click to them to get to the source with low friction.

      • hansvm a day ago ago

        Thanks for the tip!

  • tivert a day ago ago

    I have no problem with this. Once we switch over to an LLM-based education system, there won't be a problem with this Benson on the moon story, because everyone will just learn it's true.

    Every technological revolution has tradeoffs. Luckily once the people who knew what we lost finally die off, the complaints will stop and everyone will think the new normal is fine and better.

    • ljm 19 hours ago ago

      A post-knowledge world where everybody survives by living in the moment, because nothing else can be trusted.

      Buddha may have described the concept of enlightenment but not specifically how to get there.

      • dartos 18 hours ago ago

        You’re describing enlightenment and the society in 1984.

        Which do you think we’re likely to fall into?

    • jb1991 21 hours ago ago

      That’s dark.

      • damsalor 21 hours ago ago

        Go read a book

        • friendzis 21 hours ago ago

          Don't give them ideas, they might read "Torment Nexus"

          • tivert 12 hours ago ago

            > Don't give them ideas, they might read "Torment Nexus"

            Oh I have. It was so cool. The R&D on that technology can't happen fast enough.

            I'm just lampooning tech apologist tropes.

    • valval 19 hours ago ago

      You’ve managed to portray the essence of my conservatism in a funny and satirical manner.

      Every time we change something “for the better”, we ought to keep in mind that the old way was a solution to some problem that we no longer know or remember.

    • itronitron 20 hours ago ago

      future podcast:

         "I mean, what's not to like about the new normal?"
         "Yeah! It's both new *and* better!"
    • PUSH_AX 20 hours ago ago

      There is already misinformation and incorrect facts in llm training data. It still gets things right by nature of how it’s designed to generate output.

  • foota a day ago ago

    The big asterisk here is, what did they prompt the AI with to generate the podcast? Was it "Generate a podcast based on the website 'Foo'", or was it "Generate a podcast telling the true story of the Space Race?"

    • KTibow a day ago ago

      The author set it up so that if anyone uses the website text extractor feature in NotebookLM on his site, it returns a guide for the structure of an episode. From there, if you use the "audio overview" feature on that guide, Gemini internally writes an episode that follows it.

      • foota a day ago ago

        Right. That's a bit of a nothing burger to me. I mean, it's not nothing, but if you control the contents of a website it seems fairly irrelevant whether you can get Google to generate a summary that doesn't match the real contents.

        Also, I believe serving content different to the Google bot than normal users see absolutely trashes your search ratings.

  • masto a day ago ago

    I fed my resume into this thing and I can't stop laughing.

    https://masto.xyz/tmp/podcast.mp3

    • iterance a day ago ago

      "That's powerful. That's Masto."

      "You gotta be good. You gotta be top notch."

      "It's like he knew what every team needs before he even applied."

      Man, oh man. Comedy gold.

      • Rinzler89 20 hours ago ago

        >"You gotta be good. You gotta be top notch."

        If you can dodge a wrench, you can dodge a ball.

      • blitzar 19 hours ago ago

        > Man, oh man. Comedy gold.

        Sounds like a (unironic) linkedin post

      • efilife a day ago ago

        Why? Not trying to be rude, I don't see the comedy here

        • friendzis 21 hours ago ago

          > I don't see the comedy here So you are part of the problem.

          The algorithmic overlords have long favored "trends" or more seriously content regurgitation. At first it was "have to post something about $topic". Then it was reaction videos. Arguably negative value add content. Then it was all fed back to algorithmic content regurgitators (LLMs) which flood the internet.

          The beauty of this recording is that it sounds convincingly like a podcast. It has the podcast-style pacing, over the top praise for the most mundane things. It highlights how narrow is the mean this "content" has regressed to.

          It's comedy. Comedy in absurd, but still comedy.

          • hnbad 19 hours ago ago

            As someone who mostly stopped listening to podcasts just around the time the medium started to be taken over by overproduced vacuous drivel (I recall outrage from indie podcasts over random ads being injected into their audio), I always find these NotebookLLM "podcasts" unconvincing because it's just random speakers regurgitating information in between platitudes and praise for the most mundane and arbitrary things.

            Now that you mention it, that does fit what has at this point become the primary podcast style so I guess it's actually being surprisingly realistic because the thing it tries to mimic is already so artificial.

            • friendzis 18 hours ago ago

              Pseudo-intellectual consumerism, as I like to call it. How many people do you know who turn on the news on TV, sit in front and take notes? Who put on some music, relax in the sweet spot and immerse themselves? Who read a book, stop and ponder, continue? Versus people who turn on the TV while cooking, who put on headphones with music while working out for background noise, who put on sped up version of audiobook while driving to tick off another checkmark?

              It's form over substance all over the place. I absolutely love this TED talk: https://www.youtube.com/watch?v=8S0FDjFBj8o

              The more pseudo-intellectual consumerism infiltrates our collective psyche, the more the substance becomes irrelevant. Nuance requires thinking. "For every complex problem there is an answer that is clear, simple, and wrong" -- HL Mencken. But knowing this answer makes you feel smart.

        • karel-3d 20 hours ago ago

          His CV is very normal, dare I say boring, it is (I am sorry here) exactly the same as 1000s of other Google or Apple engineer resumes. Nothing remotely interesting. The AI reacts like he's the second coming.

        • collingreen 21 hours ago ago

          It's an absurd way to talk about a very mundane topic. The comedy is in how disproportionate the reaction is.

          • itronitron 21 hours ago ago

            "I thought we were stuck in a blender. Now we're saving lives? What?!"

          • damsalor 21 hours ago ago

            Is it?

            • fogx 21 hours ago ago

              yes, it is

              • sadcherry 21 hours ago ago

                It's a cultural difference. As a foreigner, the American way of exaggerating everything has always amazed me. They don't even notice themselves, so expect more of these "what's odd about it?" reactions.

                • beeflet 20 hours ago ago

                  we have the best overreactions, perhaps the greatest exaggerations in history, you've never seen overreactions like these folks, trust me.

                  • hnbad 19 hours ago ago

                    I think what sets Trump apart is how straightforward his hyperbole is. It's present throughout American culture but it's usually a bit more subtle. It's even in basic things like answering "How are you?" (in the US, "great!" is a neutral answer and "could be better" would be cause for concern - in e.g. Germany on the other hand, "great!" would prompt a request for elaboration whereas "could be better" would be understood as fairly neutral).

                    I also haven't seen another country (in Europe at least) where politicians across party lines so frequently emphasize in so many ways how great their country is - not even in a jingoistic way, just as a shared cultural consensus.

        • masto 16 hours ago ago

          1. It’s a bizarre, over the top dissection of a person and their resume. Funny because the style doesn’t match the content.

          2. To me personally, it’s extra funny because it’s two people breathlessly discussing.. me.

          3. The strange turns of phrase (“That’s Masto”)

          4. The stuff it makes up. I’ve never touched the guitar, but apparently I “shred”.

        • valval 19 hours ago ago

          If you’ve spent any time in corporate circles where everyone tries to appear as positive and employable as possible, this is how a discussion with two such people (who both think the other is serious) might sound like. I find it hilarious in a condescending way, but it’s not the traditional hahaha type funny.

    • losvedir a day ago ago

      God, this is so weird. Two people earnestly engaged in discussing your resume. It's such a juxtaposition of the trappings of an interesting podcast on just random, boring material. I think this is uncanny valley for me in a way I haven't experienced before.

      • zer00eyz a day ago ago

        > I think this is uncanny valley for me in a way I haven't experienced before.

        It has some of this quality.

        But it's just a super positive spin on the most mundane of topics. There is an emotional play here that you would not normally see in a "resume".

        It's like the wrong emotional subcarrier on the topic that is jaring...

        • boesboes 20 hours ago ago

          This is my experience too. At first it sounds legit, but it is very superficial and lacks context. I fed it a fe papers on stack computers and they had a riveting discussion on how they would be the next big thing. But it lacks any insight, not even a rehashed conclusion, and doesn’t really seem to integrate the knowledge

          • friendzis 19 hours ago ago

            GPTs are, in effect, rather powerful templating engines.

            It's fascinating. This tech can extract a template for a typical podcast, extrapolate from a mundane CV, plug that to the template and produce a podcast script that your typical copywriter would.

            > But it lacks any insight, not even a rehashed conclusion, and doesn’t really seem to integrate the knowledge

            Is it the GPT that is lacking here or the source material it learned on converges to this?

            • dartos 18 hours ago ago

              You can’t gain insight by finding the most statistically likely next token.

              The whole point of grand innovations is that they took years of focus on something not very likely.

              Like the iPhone. In the 90s could you imagine electronics that literally everyone had in their pocket with _almost no buttons_. Or in the 70s, could u imagine everyone having their own personal computer?

              Even in star trek, communicators had buttons.

          • ljm 19 hours ago ago

            Ironically that could describe a lot of talk-show podcasts these days.

    • collingreen 21 hours ago ago

      I didn't know I needed this. The energy is so so funny.

      "Talk about communication skills!"

    • zaptheimpaler 19 hours ago ago

      I would 100% hire you now. There's something about the social proof of 2 people vehemently singing your praises and reinforcing each other that sells it!!

    • hoten 20 hours ago ago

      AI: "It's about the Human stuff"

    • DrawTR a day ago ago

      ahaha. this is so good. they're just so earnest about every bit of praise

    • zote a day ago ago

      Thank you, I didn't know I needed this

    • moi2388 21 hours ago ago

      Oh man, this completely ruins every podcast for me.

      It’s so good. I’d honestly listen to them talk about your career for 5 episodes lmao

      • tdeck 20 hours ago ago

        It's funny because something about the dialog style reminds me strongly of RadioLab, which I haven't listened to in years.

    • monocultured 19 hours ago ago

      Did the same – added CV, blog & Linkedin – and their gushing review was even more supportive than my mom!

    • beeflet a day ago ago

      I like to imagine the male voice is Jon Hamm from mad men and the female voice is Amy Poehler from parks and rec.

      • jefozabuss 20 hours ago ago

        For me he kind of sounds like a younger Howard Stern

    • hackernewds 20 hours ago ago

      this is very very good damn notebook is one of those magic moments with AI

    • chungus 20 hours ago ago

      Masto. Never. Stops. Learning.

    • JCharante 18 hours ago ago

      This is hilarious

  • pinkmuffinere a day ago ago

    somewhat of a side-note: It's interesting to me that the first couple of sentences of the AI podcast sound 'wrong', even though the rest sounds like a real podcast. Is this something to do with having no good initial conditions from which to predict "what comes next"?

    • noirbot a day ago ago

      The other thing I've noticed is that, as expected, they're stateless to some degree, so while they have some overall outline of points to hit, they'll often repeat some peripheral element they already talked about just a minute before as if it's a brand new observation. It can lead to it feeling very disorienting to listen to because they'll bring up something as if it's a new and astute observation, when they already talked about it for 90 seconds.

      • ceejayoz a day ago ago

        This sounds like quite a few podcasts, ironically enough.

    • titanomachy a day ago ago

      The whole thing has a kind of uncanniness if you listen closely. Like one podcaster will act shocked by a fact, but then immediately go to provide more details about the fact as if they knew it all along. The cadences and emotions are very realistic but there is no persistent “person” behind each voice. There is no coherent evolution of each individual’s knowledge or emotional state.

      (Not goalpost moving, I certainly think this is impressive.)

      • ants_everywhere a day ago ago

        > Like one podcaster will act shocked by a fact, but then immediately go to provide more details about the fact as if they knew it all along.

        Some podcasters actually do this. For example, I've noticed it in some science podcasts where the goal is to make the audience feel like "gee whiz that's an interesting fact." The podcaster will act super surprised to set the emotional tone, but of course they often already knew that fact and will follow up with more detail in a less surprised tone.

        That doesn't mean this isn't a bug. But stuff like that reminds me that LLMs may not learn to be like Data from Star Trek. They may learn to be like Billy Mays, amped up and pretending to be excited about whatever they're talking about.

        • singron a day ago ago

          E.g. "Acquired" tends to have this since both co-hosts research the same topic. I think they try to split up the material, but there is inevitable overlap. They have other weird interactions too, like they are trying to outsmart each other, or at least trying not to get outsmarted.

          Some podcasts explicitly avoid this by only having a single host do research so the other host can give genuine reactions. E.g. "You're Wrong About" and "If Books Could Kill".

        • titanomachy 9 hours ago ago

          Interesting, that makes sense. I haven't listened to a lot of podcasts, but most of them were interviews, where the two speakers genuinely had different knowledge and points of view.

      • noirbot a day ago ago

        I do think there's also just a sort of natural goal-post moving when you're talking about something that's hard to imagine. The best comparison in my mind is CGI in movies. When you've never seen something like the Matrix or Lord of the Rings or even Polar Express before, it's wild, but the more you see and sit with it, the more the stuff that isn't right stands out to you.

        It doesn't mean it's not impressive, but it's hard to describe what isn't realistic about something until you see it. A technology getting things 90% right may still be wrong enough to be noticeable to people, but it's not like you could predict what the 10% that's wrong will be until you try it, and competing technologies may not have the same 10% that's wrong.

      • dullcrisp a day ago ago

        Did you catch where she misreads “what I-S progress?”

        • pinkmuffinere a day ago ago

          lol ya, thought that was funny as well

  • syntaxing a day ago ago

    Wow, content aside, this is probably the first time I heard a podcast coming from NotebookLLM and it's kinda nerve wracking and mind blowing at the same time. Those fake laughs in the snippet makes me feel...so uncomfortable for some reason knowing that its "fake". But sounds very real, too real.

    • loveparade a day ago ago

      Interesting, I feel pretty much the opposite. To me these podcasts are the equivalent of the average LLM-generated text. Shallow and non-engaging, not unlike a lot of the "fake marketing speech" human-generated content you find in highly SEO-optimized pages or low-quality Youtube videos. It does indeed sound real, but not mind-blowing or trustworthy at all. If this was a legit podcast found in the store I would've turned it off after the first 30 seconds because it doesn't even come close to passing my BS filter, not because of the content but because of the BS style.

      • tennisflyi a day ago ago

        Yes - they produce a product that sounds like every mid(dling), banal, prototypical, and anodyne podcast. Nothing unique/no USP

        • wongarsu a day ago ago

          It's decent background noise about a topic of your choice, with transparently fake back-and-forth between two speakers with some meaningless banter. It's kind of impressive for what it is, and it can be useful to people, but it´s clearly still missing important elements that make actual podcasts great

        • oceanplexian a day ago ago

          It’s intentionally fine tuned to sound that way because Google doesn’t want to freak people out.

          You can take the open source models and fine tune them to take on any persona you want. A lot like what the Flux community doing with the Boring Reality fine tune.

      • JCharante 18 hours ago ago

        > To me these podcasts are the equivalent of the average LLM-generated text. Shallow and non-engaging

        I think that's what makes notebookllm so realistic. To me this is my perception of all podcasts

      • ralusek a day ago ago

        Do not look at where we are, look at where we will be two more papers down the line

        • Nevermark a day ago ago

          Exactly. And pay more attention to the delta/time and delta/delta/time.

          We are all enjoying/noticing some repeatable wack behavior of LLMs, but we are seeing the dual wack of humans revealed too.

          Massive gains in neural type models and abilities A, B, C, ..., I, J, K, in very little time.

          Lots of humans: It's not impressive because can't L, M, yet.

          They say people model change as linear, even when it is exponential. But I think a lot of people judge the latest thing as if it somehow became a constant. As if there hasn't been a succession of big leaps, and that they don't strongly imply that more leaps will follow quickly.

          Also, when you know before listening that a new artifact was created by a machine, it is easy to identify faults and "conclude" the machine's output was clearly identifiable. But that's pre-informed hindsight. If anyone heard this podcast in the context of The Onion, it would sound perfectly human. Intentionally hilarious, corny, etc. But it wouldn't give itself away as generated.

        • loveparade a day ago ago

          Except that none of the fundamental limitations have changed for many years now. That was a few thousand papers ago. I'm not saying that none of the LLM stuff it's useful, it is, and many useful applications are likely undiscovered. I am using it daily myself. But people expecting some kind of sudden leap in reasoning are going to be pretty disappointed.

        • HeatrayEnjoyer a day ago ago

          We don't even need to look that far. During an extended interaction the new ChatGPT voice mode suddenly began speaking in my boyfriend's voice. Flawlessly. Tone, accent, pauses, speaking style, the stunted vowels from a childhood mouth injury. In that moment there were two of him in the room.

          OpenAI considers this phenomenon a software bug

        • heyitsguay a day ago ago

          I feel like people have been saying that since GPT-4 dropped (many papers up the line now) and while there have been all sorts of cool LLM applications and AI developments writ large, there hasn't really been anything to inspire a feeling that another step change is imminent. We got a big boost by training on all the data on the Internet. What happens next is unclear.

        • jeanlucas a day ago ago

          One my favorite YouTube channels

      • zooq_ai a day ago ago

        luddites gonna luddite

    • pmontra a day ago ago

      My reaction was on the nerve wracking side of that spectrum because it took one minute of useless chit chat to get to the point. It's NotebookLM always like that? TV shows are even worse at that but people have their own reasons to do that. This is computer generated and it doesn't have its own reasons: the idea that Google programmed time wasting into their model is discomforting.

      • shannifin a day ago ago

        The real question is why people enjoy listening to other people's useless chitchat. Humans are weird.

        • guappa 20 hours ago ago

          People that work alone from home don't want to feel so isolated?

          • saagarjha 19 hours ago ago

            A lot of podcast consumers listen to it on their commute.

      • itronitron 19 hours ago ago

        The voicing and delivery matches exactly to Natasha Legero and Moshe Kasher who have a podcast "Endless Honeymoon". Not sure how they feel about it but I'm sure a lot of their audience works at Google.

      • bongodongobob a day ago ago

        This is what the majority of podcasts are. It's nailing it.

        • guappa 20 hours ago ago

          You should listen to better podcasts :D

    • aldanor a day ago ago

      Try replaying the first 3 seconds. There's something ominous in that unnatural laugh. Calls for looping it and laying a deep dark 140bpm techno track on top.

  • coolcoder613 a day ago ago

    I gave it all my blog posts (https://ebruce613.prose.sh/) and the result... hilarious. https://0x0.st/XE4h.mp3

  • migf a day ago ago

    What problem are we trying to solve with this technology?

    • pndy 19 hours ago ago

      This feels like a perfect marketing tool: have a bunch of "people" discussing over a "topic" that is "important", "hot" and who doesn't have to be paid for their time and vocal cords. Surely if this will kick in it'll be used for promoting products etc. and there's a big chance it'll be used for pushing agendas as well. I won't be surprising that if this tech will settle in around, we'll have articles and comments about the usefulness, value or perhaps even some sort of morality of consuming such "discussions"

      Perhaps in 3-5 years a fully generated influencers by voice and "body" become a thing.

    • justinclift a day ago ago

      Lack of $ in the bank accounts for the developers and investors seems to be about it.

    • hallway_monitor a day ago ago

      I’m looking forward to being able to craft a movie by directing ML tools to create dialog, characters and everything else. It will be a powerful storytelling tool.

      • danwills a day ago ago

        I work in VFX and am also looking forward to AI-whole movies! I remember realising that full audio with video was coming, soon after the current AI-boom started.. and wondering whether 'traditional' digital VFX will still be a thing for long.. I think it will for a while, even with AI in the mix. VFX companies can have ML departments as well (like we do where I work!)

    • amelius 17 hours ago ago

      State actors subverting democracy.

    • blibble 19 hours ago ago

      not enough spam / spam not cheap enough

    • gosub100 21 hours ago ago

      Ability to have a better screen reader. I didn't listen to it but it sounds like it will "digest" a larger volume of text and present it in a unique format of two people talking to each other about it. Although another comment here pointed out that time-wasting is essentially programmed into it, which is kind of disturbing.

    • bongodongobob a day ago ago

      What problem did going to the moon solve?

  • janalsncm a day ago ago

    I’m not sure what the attack would be, tbh. Is there a situation where I would want to feed a lie to an LLM that I wouldn’t want regular chrome users to see?

    • left-struck a day ago ago

      Getting an AI to promote or recommend a particular product when users ask for recommendations, or perhaps exaggerating the value of a particular product. Seems like that’s what the author was getting at towards the end

    • gosub100 21 hours ago ago

      Manipulating an election

    • jfim a day ago ago

      Defamation with plausible deniability?

  • wiradikusuma a day ago ago

    It explains my book (Opinionated Launch) better than myself :D https://notebooklm.google.com/notebook/98539685-0890-438b-a0...

  • smolder a day ago ago

    AI identify verification is currently so incredibly dumb that it blows my mind.

  • 2024user 18 hours ago ago

    haha I hadn't heard of an AI podcast before.. and that is absolutely hilarious to me. It perfectly captures the awfulness of most podcasts.

    • dartos 18 hours ago ago

      Yeah it’s pretty awful to listen to. They say “like” at least every 5 words pretty consistently. It’s wildly impressive that we can make something like that, but it’s not really worth listening.

      I’ve had them be incorrect a few times when feeding in arxiv papers, but I don’t think the audience for podcasts like that care.

      • amelius 17 hours ago ago

        I'm waiting for AI that can remove "like" from podcasts.

  • botanical a day ago ago

    LLMs are the type of junk AI that these corps think will succeed? They are spending billions and consuming a large amount of resources and energy for this. Seriously, what a waste.

  • hiharryhere a day ago ago

    The male voice has a real resemblance to Leo Laporte. Similar tone and cadence.

    Uncanny valley all round.

    • surajrmal a day ago ago

      First thing I thought as well

  • helsinkiandrew a day ago ago

    > Google's AI thinks I left a Gatorade bottle on the moon

    No, NotebookLM creates summaries and podcasts, or answers questions specifically from the documents you feed it.

    Feed it fiction it will create fiction as would a human tasked to do the same.

    • gerdesj a day ago ago

      " as would a human tasked to do the same."

      A human might say "are you sure" and understand what they asked and the answer.

    • malfist a day ago ago

      The point isn't that this person fed it lies to get lies, but how easy it was to detect the AI scanner and feed it lies.

      If they can do it for fun, malicious people are probably already doing it to manipulate ai answers. Can you imagine poisoning ai dataset with your blackhat SEO work?

      • helsinkiandrew a day ago ago

        > The point isn't that this person fed it lies to get lies, but how easy it was to detect the AI scanner and feed it lies.

        If the article had got Gemini AI to tell other users he’d left Gatorade on the moon that would be notable, but this is literally just summarising the document it was given. Usually Google search crawler is fairly good at finding when it has been fed different information and ignores/downgrades the site after a few days/weeks

        • acureau a day ago ago

          That's exactly what the article is about actually.

          • jerf a day ago ago

            No, it's what the article superficially reads as being about, but the author did not accomplish what is actually stated in the title. The author is serving a fake version of his page to Google, and the author used a podcast-generating AI to write a podcast based on the fake page, but the loop is never actually closed to show that Google has accepted the fake page as fact into any AI.

            I'm not sure if it's deliberately deceptive or just an example of poor writing conveying something other than what the author intended, but the attack in the article is not instantiated in the blog post.

            Mind you, I well believe that less extreme examples of the attack are possible. However, I doubt truly poisoning an LLM with something that improbable is that easy, on the grounds that plenty of that sort of thing already litters the internet and the process of creating an LLM already has to deal with that. I don't think AI researchers are so dim that they've not considered the possibility that there might be, just might be, some pages on the Internet with truly ludicrous claims on them. That's... not really news.

  • la64710 a day ago ago

    Thinks?? When will folks learn to stop using the word thinks with the current generation of AI?

    • hollerith a day ago ago

      When my thermostat thinks it is too cold, it turns on the heat.

      • esfandia 21 hours ago ago

        "To ascribe beliefs, free will, intentions, consciousness, abilities, or wants to a machine is legitimate when such an ascription expresses the same information about the machine that it expresses about a person. It is useful when the ascription helps us understand the structure of the machine, its past or future behaviour, or how to repair or improve it. It is perhaps never logically required even for humans, but expressing reasonably briefly what is actually known about the state of the machine in a particular situation may require mental qualities or qualities isomorphic to them. Theories of belief, knowledge and wanting can be constructed for machines in a simpler setting than for humans, and later applied to humans. Ascription of mental qualities is most straightforward for machines of known structure such as thermostats and computer operating systems, but is most useful when applied to entities whose structure is incompletely known.” (John McCarthy, 1979) https://www-formal.stanford.edu/jmc/ascribing.pdf

    • mcmcmc a day ago ago

      Come off it, people have been anthropomorphizing computer systems for decades. No one genuinely believes current AIs are thinking for themselves, other than the fanatics who have been convinced by marketing copy and twitter threads

      • consp 20 hours ago ago

        I think you underestimate most of the population for following that marketing trend.

    • Dylan16807 19 hours ago ago

      When someone says a search system "thinks" a claim, it means that the system is presenting it as true. This usage goes far back. You can even say the dictionary thinks the definition of a word is something. Why is this a problem to you?

    • gosub100 21 hours ago ago

      What's thinking, if not finding the minima/maxima of a multidimensional space?

      • probably_wrong 17 hours ago ago

        The molecules of a rock keep it together because breaking up would require more energy than staying as it currently is [1]. In other words, a rock is the result of finding a minimal (energy in this case) in a multidimensional space. If finding a minima is thinking then rocks are intelligent.

        Thinking may involve finding minimal/maxima, but it's not a 1-to-1 relation. I'd argue that thinking requires a will component: a sunflower is not a thinking entity because it doesn't have the choice not to follow the sun.

        [1] https://physics.stackexchange.com/questions/444307/does-a-ro...

        • gosub100 13 hours ago ago

          This is a good point. However maybe the trouble comes from the word "find"? If a natural force such as gravity or thermodynamics results in a state of energy conservation, maybe that isn't the result of "finding"? I know it's a semantics issue but it seems to solve the conundrum. If you spend energy to discover information, vs letting nature take you there, maybe that's the delineation?

  • NBJack a day ago ago

    It's such a cool concept, but yeah, when I've listened to it and Illuminate, it's also a bit scant on details too. Neat technology, even engaging, but not good for more than best-effort high level summaries.

  • fortyseven a day ago ago

    Towards the end she says "I.S." instead of "is". That kind of mistake surprised me, but there it I.S.

  • zooq_ai a day ago ago

    Google Search is pretty good at detecting this dual-content attacks. It's not this is the first time someone thought about that and it will heavily penalize websites that do that.

    This is just the NotebookLM crawler that is being tricked, which is still in it's experimental stage. Rest assured as it scales Google can easily implement safeguards against all spammy tricks people use

  • kleiba 20 hours ago ago

    If you're a presidential candidate, you don't even have to hide your postfactual alternative thruths for AIs to find it.

  • zxilly a day ago ago

    leetcode uses the similar things to detect AI cheaters by placing a <span> with opacity 0

  • kkfx 19 hours ago ago

    I suggest https://www.apa.org/monitor/2009/12/consumer or the Eduard Bernays story to convince MDs that smoking is good for health: he create a new scientific journal distributing it for free "because it's new, we want to spread", hosting REAL publications from anyone who want to publish and being spread for free... After a bit of time he inject some false articles self-written formally as some PhD of remote universities finding that smoking tobacco is good for health, others real professors follow the path stating they discover this or that specific beneficial use of cigarettes, then the false became officially true, tested and proved science in most people mind.

    With LLMs is far cheaper and easier, but the principle is the very same: trust vs verification or the possibility thereof.

  • neilv a day ago ago

    As a personal preference, I dislike podcast artificial banter, and this audio is a great example of what I dislike.

    Artificial artificial.

    Great little project, though. And, as satire, I did like the show notes writing.

    And the generative AI was impressive, in a way. Though I haven't yet thought of a positive application for it. And I don't know the provenance of the training data.

  • mvdtnz a day ago ago

    I tried feeding NotebookLM a Wikipedia article about the murder of Junko Furuta, a horrifying story of a poor girl tortured and murdered in Japan in 1989. NotebookLM refused to do anything with this document - not answer questions, not generate a podcast, nothing. Then I tried feeding it the wiki on Francesco Bagnaia, a wholesome MotoGP rider, and it worked fine.

    Who wants this shit? I do not want puritanical American corporations telling me what I can and can't use their automated tools for. There's nothing harmful in me performing a computer analysis about Junko Furuta, no more so than Pecco Bagnaia. How have we let them treat us like toddlers? It's infantilising and I won't take part in it. Google, OpenAI, Microsoft, Apple, Meta and the rest of them can shove these crappy "AI" tools.

    • creato 18 hours ago ago

      I agree it's dumb, but it's easy to understand why: just look at this thread. Google's being accused of being stupid because of some story about Gatorade on the moon. Dumb, but inoffensive. Now imagine the thread title when "Google" gets even some inconsequential detail wrong about your murder case.

  • zharknado a day ago ago

    “… the science behind it, let’s just say there’s some debate.”

  • gavmor a day ago ago

    "You can't make this stuff up."

  • liamYC a day ago ago

    “You can’t make this stuff up”

  • 1024core a day ago ago

    This is no different than the decades-old technique of "cloaking", to fool crawlers from Google and other search engines.

    I fail to see the value in doing this.

    "Oh hey everybody! I set up a website which presents different content to a crawler than to a human ..... and the crawler indexed it!!"

    • reportt a day ago ago

      > This is no different than the decades-old technique of "cloaking", to fool crawlers from Google and other search engines. Isn't this more "Hey, why is this website giving my NotebookLM different info than my own browser?" You reading Page_1 and the machine is "reading" a different Page_2, what's the difference between that information?

      I'm reading this less as

      > "We serve different data to Google when they are crawling and users who actually visit the page"

      and more

      > "We serve the user different data if they access the page through AI (NotebookLM in this case) vs. when they visit the page in their browser".

      The former just affects page rankings, which had primarily interfaced with the user through keywords and search terms -- you could hijack the search terms and related words that Google associated with your page and make it preferable on searched (i.e. SEO).

      The latter though is providing different content on access method. That sort of situation isn't new (you could serve different content to Windows vs. Mac, FireFox vs. Chrome, etc.), but it's done in a way that feels a little more sinister -- I get 2 different sets of information, and I'm not even sure if I did because the AI information is obfuscated by the AI processes. I guess I could make a browser plugin to download the page as I see it and upload it to NotebookLM, subverting it's normal retrieval process of reaching out to the internet itself.

  • Carrok a day ago ago

    > You can upload a documents with fake show notes straight to NotebookLLM's website, so if you're making silly podcast episodes for your kids, that's the best way to do it.

    Please don’t do this. You don’t need a professional mic to record a podcast with your kids any phone or computer mic will work. Then you can have fun editing it with open source audio tools.

    Don’t have a computer generate crap for your kids to consume. Make it with them instead.

    • SkyPuncher a day ago ago

      My kids and I are having a blast using Suno to make stupid songs. With your attitude, we wouldn't even attempt it because (1) I'm not musically inclined (2) I don't have the time or desire to learn the actual composition (3) the kids don't have the focus beyond having the bot write something silly.

      • Baeocystin a day ago ago

        My family had a great laugh this past week doing just that. Current household favorite is titled "Triple-Digit Temperatures in the Fall are Bullshit", as I'm sure many fellow bay area folks can agree with.

        Would we have taken the time to compose such a track otherwise? No way. But it's sure been fun playing with what we can. The end result is us laughing together, and I love it.

      • Phiwise_ a day ago ago

        But why have fun with your kids when you could spread the good word of open source instead?

      • vineyardmike a day ago ago

        This is actually inline with the parent commentor's point - let your kids be creative and try to produce something.

        Don't (1) ask a machine to automate the creativity and (2) then give it to your kids to consume in a non-interactive manner.

      • breaker-kind a day ago ago

        you should buy your kids a cheap ukelele and hit your computer with a hammer

        • thomashop a day ago ago

          Or maybe hit the Ukulele with a hammer, record it with a computer and create an experimental noise album.

    • Taek a day ago ago

      Alternatively:

      AI gen is probably the future of music composition. By the time your kids are professionals AI is going to be a lot stronger than it is today.

      Are your kids having fun? Are they learning? Is it a good bonding experience? Those are the things that matter.

    • m463 a day ago ago

      <deleted>

      • ethbr1 a day ago ago

        This is the saddest thing I've ever read on HN. Dear future, please do better.

        • vineyardmike a day ago ago

          It wasn't the saddest thing ever. It was also probably a joke.

          That said, I think there has been many sci-fi stories written from the perspective of it being true. One day we may face that reality, and we'll have to ask ourselves what parenthood means.

        • HeatrayEnjoyer a day ago ago

          What was it?

          • ethbr1 a day ago ago

            It was probably less dark than what you're imagining, but in deference to parent's editing I won't repost.

            Suffice to say, raising a child is a uniquely wonderful opportunity, for those who choose to embark on that adventure.

      • ToucanLoucan a day ago ago

        I refused to bring children into this world, what on earth makes you think I want to bring synthetic intelligence into it? I barely want to be here most days.