AI gets more 'meh' as you get to know it better

(theregister.com)

77 points | by rntn 2 days ago

35 comments

  • DamnInteresting 2 days ago

    It's just like spending time with a human bullshitter. At first, their energy and confidence are fun! But the spell is broken after a handful of "confidently incorrect" moments, and the realization that they will never stop doing that. It's usually more work than it's worth to extract the kernels from the crap.

    • sph a day ago

      > It's just like spending time with a human bullshitter.

      I am close to a very prolific human bullshitter. The hardest thing is that anyone unfamiliar with them will have bought hook, line and sinker into their latest story, and you have to work hard to explain how that's a complete fabrication, while getting attacked as a naysayer and a hater. It's exhausting, and often it's just easier to nod along.

      The parallels with discussing the pros and cons of LLMs in this atmosphere of hype are undeniable.

    • lxgr 2 days ago

      Knowing whether (ostensible) solutions are easy or costly to verify is key to using LLMs efficiently.

      • mgh2 2 days ago

        It seems the whole world is under this spell of lies

        • rhetocj23 2 days ago

          Isn't this what social media is?

          • mgh2 2 days ago

            Social media is just its distribution system; the BS comes from people, same with AI.

  • baobun 2 days ago

    One anecdote. I was worried about a recent friend of mine (non-technical solo traveler) becoming besties with ChatGPT and overly trusting and depending on it for basically everything.

    Last time we met they had cancelled their subscription and cut down on the daily chats because they started feeling drained by the constant calls for engagement and follow-up questions, together with "she lost EQ after an update".

    • jncfhnb 2 days ago

      > Last time we met they had cancelled their subscription and cut down on the daily chats because they started feeling drained by the constant calls for engagement and follow-up questions, together with "she lost EQ after an update".

      Can you explain what this means?

      Your friend felt drained because chat gpt was asking for her engagement?

      • neom 2 days ago

        Not OP, but:

        4o, the model most non-tech people use (that I wish they would deprecate) is very...chatty, it will actively try to engage you, and give you "useful things" you think you need, and take you down huge long rabbit holes. On the second point, it used to be very "high EQ" to people (sycophantic). Once they rolled back the sycophancy thing, even a couple of my non-technical friends msg'd me asking what happened to ChatGPT. I know one person who we've currently lost to 4o, it's got them talked into a very strange place friends can't reason them out of, and one friend who has recently "come back from it" so to speak.

        • lxgr 2 days ago

          Since when is sycophancy the same thing as “high EQ”?

          A high EQ might well be a prerequisite for successful sycophancy, but the other way definitely does not hold.

          • neom 2 days ago

            It's not. I'm simply saying that I believe the sycophantic version of 4o that they rolled back appeared "higher EQ" to its users.

      • baobun 2 days ago

        > Your friend felt drained because chat gpt was asking for her engagement?

        Basically yeah (except the "she" in my comment is referring to ChatGPT).

      • coldtea 2 days ago

        ChatGPT got on their nerves for nagging and baiting for more engagement.

  • ironSkillet 2 days ago

    I don't know about other use cases, but AI is definitively a game changer for software development. You still need to know what you're doing and test/think critically about what it's giving you, but the body of software problems that you can conceptually treat as "boilerplate" becomes massively larger with the help of a good AI coding tool.

    • happymellon a day ago

      I've just had to "fix" a bunch of shit that was thrown over the wall that "sort of did the job" that came from someone using AI.

      It's a game changer for some people who only need it to mostly get things started and pretend they did their job, and a work generator for anyone who actually needs to get things working.

      The code was shockingly bad, and had to be rewritten to be able to do step 2 of the task.

      • ironSkillet a day ago

        In my mind that is a problem with your lazy developer colleague, not AI as a whole. You can't expect it to be right on the first try (just like human code), you have to iterate with it and have the experience to know when it's off track and you have to take over.

        • btreecat a day ago

          > In my mind that is a problem with your lazy developer colleague, not AI as a whole. You can't expect it to be right on the first try (just like human code), you have to iterate with it and have the experience to know when it's off track and you have to take over.

          The problem with this IMO is when a human writes the code, they know the code they wrote, and have a sense of ownership in terms of correctness and quality.

          Current industry workflows attempt to improve quality and ownership with PR reviews.

          Most folks I see using AI coding don't know all the corner cases they might encounter, but more importantly don't know the code or feel any real ownership over it.

          The AI typed it, and the AI said it's correct. And whatever meager tests exist either passed or got a 1 line change to make them pass.

          Quality is going down among those who rely on tools to produce code they don't know. This has a cost associated with it, and that cost has been deferred.

          Sometimes this is fine, like POC where you are comfortable with tossing the code out.

          This isn't fine for businesses that need to be able to plan out future work. That requires knowing the system, more so than just reading the code base.

        • happymellon a day ago

          If only it was this once, and only this person.

    • Gigachad 2 days ago

      It’s like Stack Overflow but much faster and doesn’t insult you. Which is useful, but this is so much less than what the companies are claiming it is.

  • andrewinardeer 2 days ago

    I'm fairly bored with AI now.

    I genuinely wonder where the next innovative leap in AI will come from and what it will look like. Inference speed? Sharper reasoning?

    • mdhb 2 days ago

      I think there’s an extremely high likelihood that we just DON’T see huge advancements at least in terms of accuracy or capabilities which are probably the two major nuts to crack to bring it to a different level.

      I’m open to the possibility of faster, cheaper and smaller (we saw an instance of that with deepseek) but think there’s a real chance we hit a wall elsewhere.

      • rhetocj23 2 days ago

        I find it funny we assume (arrogantly) that progress will just keep on coming.

        Really? I'm not convinced we have the right people in this day and age to bring about those leaps.

        It might be that humanity goes another 50 years until someone comes around with a novel take.

  • m463 a day ago

    I regularly use it with searches, and it really cuts through the nonsense.

    Sort of like an information desk. The person there might not be a Nobel laureate, but I don't know anything and they usually have enough knowledge to be immediately helpful.

    Like "compare expedition max vs platinum"

    (notice I didn't know max meant extra length, while platinum is a trim level)

  • jcims 2 days ago

    Isn't this just the human condition at work?

    https://www.youtube.com/watch?v=PdFB7q89_3U

    Fast forward a hundred years when we have a holodeck and sooner or later everyone will get bored with it.

  • taylodl 2 days ago

    Welcome to the trough of disillusionment!

    • ranger207 2 days ago

      The top of the S-curve

  • dimmuborgir 2 days ago

    Nano Banana for me. After the initial wow phase it's meh now. Randomly refuses to adhere to the prompt. Randomly makes unexpected changes. Randomly triggers censorship filter. Randomly returns the image as is without making any changes.

  • geldedus a day ago

    For coding, AI works fantastically great. I'll take the "meh" AI anytime.

  • BoredPositron 2 days ago

    It's even worse for image/video generation. The models get better in fidelity (prompt adherence), but raw image quality has stagnated for close to a year and a half now.

    • lxgr 2 days ago

      It’s the exact opposite for me. Image quality has been more than fine for me for a year or two, while prompt adherence has massively improved but still leaves much to be desired.

      • BoredPositron 2 days ago

        Our applications might differ; we do 16-20k production for various automotive clients. Hitting 100% geometric detail is not possible with the newer models because of the fixed patch sizes in their RoPE implementations.
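
        A rough sketch of why a fixed patch size bites at that resolution (assuming a ViT/DiT-style patchifier; the patch size and numbers here are illustrative, not from any specific model):

        ```python
        # Illustrative only: ViT/DiT-style models split the image into
        # fixed-size patches, so the token count grows quadratically
        # with resolution.
        def num_patches(width: int, height: int, patch: int = 16) -> int:
            return (width // patch) * (height // patch)

        print(num_patches(1024, 1024))    # 1k render: 4,096 tokens
        print(num_patches(16384, 16384))  # 16k render: 1,048,576 tokens
        ```

        On top of the raw token blow-up, positional encodings trained on a fixed patch grid don't necessarily extrapolate to far larger grids, which is presumably the constraint being described.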

    • Gigachad 2 days ago

      Is this just a cost issue? Like they could turn the resolution up but they can’t afford the resources
