How many e's are in strawberry?

(smileplease.mataroa.blog)

15 points | by Imustaskforhelp 17 hours ago

7 comments

  • dekhn 15 hours ago

    What is your point, exactly? You should state it clearly at the start of the post.

    When I asked Gemini, it printed the right answer, and it also had a "Show code" button, which I clicked:

        word = "strawberry"
        count = word.lower().count('e')
        print(f"The number of 'e's in '{word}' is {count}.")

    Now, the follow-up (from a comment below), "How many straws in strawberry": there are 0 "straws" (again with the Python code). Similarly, "How many straw in strawberry": 1 "straw" (again with the Python code, showing it's just doing string matching).
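    Plain string matching reproduces all three of those answers; a minimal sketch, assuming the model's generated code reduces to something like this:

```python
word = "strawberry"

# Counting a letter vs. counting a substring: both are simple
# string matching, which is all the generated code is doing.
print(word.count("e"))       # 1 -- the letter 'e' appears once
print(word.count("straws"))  # 0 -- no match for the plural
print(word.count("straw"))   # 1 -- the substring "straw" matches once
```

    Nothing in that matching knows what a drinking straw is, which is why the disambiguated follow-up question gets a different, non-code answer.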

    Next: Q: "When I say straw, I mean the object you use to drink liquids through. How many straw in strawberry?"

    A: "While the word strawberry contains the letters to spell "straw" exactly one time, there are zero actual drinking objects inside the fruit. Trying to use a berry as a straw would mostly just result in a very messy snack!"

  • 14 hours ago
    [deleted]
  • bediger4000 16 hours ago

    This obvious flaw is immaterial. LLMs are entirely adequate for use cases involving financial gain, like spam and phishing emails, marketing and propaganda.

  • dankwizard 12 hours ago

    Dunning-Kruger vibes from this blog post.

    Look into how LLMs work and it's pretty clear why these character-counting scenarios often fail without invoking Thinking or Python scripting.

    • Imustaskforhelp 7 hours ago

      Hey, just woke up. Good morning. From my understanding, this happens because LLMs work token by token, which breaks up the word, and that's why this kind of anomaly occurs. The only thing I know is that I know nothing, to be honest; I wish to learn more, and this was just me sharing something I found interesting :-D
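      For illustration, here is a toy sketch of why subword tokens hide letter counts (the split below is hypothetical; real BPE vocabularies differ per model):

```python
# Hypothetical subword split -- real tokenizers vary by model.
tokens = ["str", "aw", "berry"]

# The model consumes integer token IDs, not characters, so a letter
# count is never directly visible in its input. Reassembled as a
# plain string, the count is trivial:
word = "".join(tokens)
print(word.count("e"))  # 1
```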

      The thing I find interesting, though, is that even if you don't invoke Thinking, the r's-in-strawberry test always succeeds:

      https://chatgpt.com/s/t_69f85fb01d8881918016f2ceb3d1f314 & https://chatgpt.com/c/69f85e1d-25e8-8320-ba10-2dbe5857fc74

      https://chatgpt.com/share/69f85fd6-4534-8322-a3f9-a06f3e26ec...

      Now one could say this patch was added because the r's-in-strawberry question made it into the training set, but I remember comments at the time saying the answers suddenly changed once everyone picked up on it and it became more embarrassing for OpenAI. That might also be why it so often says there are three e's: it always tries to give the three-r's answer. But that is just my opinion, which I don't wish to present as fact.

      I think the issue I have is that people working in AI sometimes present it as the holy grail when it has some clear flaws, and they gloss over them.

      If you found this result unsurprising, good for you! But even as someone who has spent a lot of time on the internet with LLMs, I didn't expect it, because I thought this was a completely solved problem in all LLMs, so I just shared it with everyone.

      Have a nice day :-D

  • andsoitis 17 hours ago

    Follow-up Q: How many straws?

    A: One.

  • Imustaskforhelp 17 hours ago

    5:10 AM here. The writing is a bit messy, to be honest, and not up to my standards, but I wanted to upload it anyway because I'll have some good comments to read when I wake up. So thanks for reading, if you have read it!

    And even if you haven't and have just read this comment, that's completely fine too. I don't really know why, but I just want to say that I love you all and I love this community. Yes, it has its problems, but I love you all, and I wish you all a nice day/night!

    Going to go to sleep now. Bye!