39 comments

  • andyfilms1 a day ago

    I work in a creative field, and we've started to get a lot of clients using AI to generate initial concepts for us to build upon. The problem is, they're not actually thinking about these concepts; they're just generating until they see something they like.

    Then, we have meetings where we will ask a basic but specific question about what they want us to make, and we're just met with blank stares. They have no answers, because they've never actually thought about it.

    And then everyone else needs to do the thinking for them.

    • neonstatic a day ago

      This reminds me of what happened back in the early days of Google Translate. Lots of folks would bring very low-quality automatic translations "for correction" only. For many it was a way to get a lower price, since in their minds it was cheaper to correct something that is "largely done" than to do the work from scratch. Oh how wrong they were, haha.

    • whattheheckheck a day ago

      They're staring at you because they're paying you to figure it out and you're asking them again.

      • yunnpp a day ago

        Precisely. I'm not an artist but have worked with some, and I do so with the basic assumption that the artist knows their shit and knows better than me. This client basically made a draft (or thinks they did) and asked you to fill in the gaps, then went blank wondering how it is you're such a noob that you can't even do your job. I'd honestly tell them to piss off and find better people to work with/for.

        • Balinares 17 hours ago

          Going ahead without asking is a sure recipe for having the client tell you "Sorry, that's not at all what I want" and then having to start over again. Your creatives ask questions for a reason. What is it that made you pick this specific draft out of the slop pile as a good match for your brand? The color scheme? The composition? The atmosphere? The line art style? If you expect your creatives to just magically guess, and then get frustrated when the output is not what you had in mind, then it's hardly your creatives' fault.

          • rcxdude 14 hours ago

            Yup, people aren't mind-readers. And it can be very hard to predict what bits the client cares about and what they don't, so it's worth biasing towards asking (though I think it's worth emphasizing that 'I don't care, you choose' is a valid response). The worst clients are the ones who can't express what they want in the first place and then reject output without explaining what it is they did or didn't like about the result.

            That said, it can be very hard to be a good client. Writing requirements (whether for art or engineering) is something that, on average, people are very bad at. And often you will only find out you cared about something after you see it (oh god I am so bad at this, especially because it's often delayed, so I will go 'looks good, no notes', then like a day later go 'oh wait, actually...'), which is why having a healthy dialogue and a rapid feedback loop is so valuable to any project.

  • ktimespi a day ago

    Yeah, I realized this the first time I used an LLM to code. I've not used them since. No matter how good it gets, it's dangerous to lose touch with my own intelligence.

    • neonstatic a day ago

      I concur. I do use it a fair bit for coding, and there is a temptation to have it do as much of it as possible, but there is a very clear line between what I wrote and what "it" wrote. The former I am happy to read, improve, and understand. The latter I only skim over, don't want to touch myself, and get very frustrated when it doesn't "just work".

  • neaHat1766 20 hours ago

    This is really dangerous. Several models, like Grok, are getting worse. Grok-4.2 spews illogical, confident-sounding propaganda. A reader who does not think might believe it.

    On soft topics like politics, models say something different depending on the prompt or the latest fine-tuning. As Microslop says in its TOS, AI is for entertainment only.

    Software is unfortunately dominated by fakers. Paul Graham said in one of his essays that the C students command the A students. Back then it meant MBA > software engineer. Now it means that the bullshitters in software command the intelligent ones.

    You have to resist daily and expose the frauds if this profession is to be saved.

  • tim333 10 hours ago

    "Cognitive surrender" seem a bit of a loaded term for trusting the AI.

    If you stop doing long division by hand and use a calculator, is that cognitive surrender or just normal life? And if the calculator gives the wrong answer and you accept it, is that so surprising?

    In terms of the danger of trusting stuff without double-checking, there seem to be more problems with Fox News etc. than with AI, which tends to be fairly neutral, if sometimes wrong.

    • ottah 8 hours ago

      "And so it is that you by reason of your tender regard for the writing that is your offspring have declared the very opposite of its true effect. If men learn this, it will implant forgetfulness in their souls. They will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks.

      What you have discovered is a recipe not for memory, but for reminder. And it is no true wisdom that you offer your disciples, but only the semblance of wisdom, for by telling them of many things without teaching them you will make them seem to know much while for the most part they know nothing. And as men filled not with wisdom but with the conceit of wisdom they will be a burden to their fellows."

      - Plato, Phaedrus

      I think no reasonable person would be against literacy in the modern world, and similarly we will continue to adapt to new technology and be the better for it.

    • bryanrasmussen 9 hours ago

      >If you stop doing long division by hand and use a calculator is that cognitive surrender

      yes, you have surrendered to the assumption that the calculator can calculate more quickly and accurately than you can, an assumption that is almost invariably correct.

      Now people are surrendering to the assumption that the AI knows more and can reason from what it knows better than they can.

  • add-sub-mul-div a day ago

    Funny, the author of this piece was one of the two on the byline of the Ars article with the AI-fabricated quotes.

    The cognitive surrender is the most predictable outcome. Many here will claim they'll rise above the path of least resistance and use AI responsibly, and even if that is true for many here, think about the most typical worker: the kind who only wants to go home at 5 after putting the least amount of effort into their job. Our society is about to be rewritten by them.

  • ricktdotorg a day ago

    this is exactly the same as people who drive their car into a river because google maps told them to.

    • heavyset_go a day ago

      If you don't listen to Google Maps and drive into a river, you're going to be left behind.

      • danielbln 16 hours ago

        That's why I drive with 10 pounds of paper maps in my car. I won't have any of this newfangled GPS tech atrophy the map-reading skills I've honed so much.

        • defrost 16 hours ago

          If you're carrying ten pounds of paper maps, you're doing no GPS / no digital maps navigation wrong.

    • trehalose a day ago

      If you were driving over an unmarked, unbarricaded bridge that Google Maps directed you onto on a dark and rainy night, are you 100% certain you'd be driving slowly, undistracted, and checking to make sure the bridge hadn't collapsed?

      • gruez a day ago

        This analogy doesn't work, because you can assume that if a bridge exists and doesn't have traffic cones/barriers, it was probably built by humans and is fit for use (i.e. isn't half-built). The same doesn't hold for LLM outputs, which are wholly generated by AI. If I were in some simulation where the environment was vibecoded by AI, I'd be very careful too.

        • trehalose a day ago

          That's kind of what I was trying to say, or at least it kind of goes along with it. This meme of "somebody drove into a river just because Google Maps told them to" is a grossly distorted retelling of a fatal accident. One could twist any tragedy into a glib soundbite about how the dead stupidly trusted other people. The street could collapse under my feet as I'm crossing it and I could drown in the sewer, and people on the internet would be laughing about how I dived into the sewer just because a traffic light told me to. There were some cracks in the asphalt, so obviously I should have known it wasn't safe to walk across, but I wasn't thinking for myself.

          I suppose part of the reason so many people are so dangerously trustful of LLMs is that they assume that if the LLM was put out there by decently responsible humans (doubtful, but understandable), then the LLM itself should be decently responsible too? The analogy does break down there.

  • itmitica 19 hours ago

    In other, older news: some years ago, cognitive surrender led Google Search users to abandon logical thinking, research found.

    In other, moderately older news: cognitive surrender led TV viewers to abandon logical thinking, research found.

    In other, even older news: cognitive surrender led newspaper readers to abandon logical thinking, research found.

    Shall I go on with how "cognitive surrender leading people to abandon logical thinking" recurs throughout history, AI being nothing special in this regard?

    • downboots 11 hours ago

      > AI being nothing special in this regard?

      Here the user is actively generating, not just consuming.

      Cite your sources?

    • timeon 16 hours ago

      False equivalence often comes up as a response in these "AI" topics. This is a good example of a lazy mind, of cognitive surrender.

      • itmitica 16 hours ago

        "False" is an allegation you failed to prove. You are the laziest one here.

  • ChrisArchitect a day ago

    [dupe] Discussion on source 2 weeks ago: https://news.ycombinator.com/item?id=47467913

  • UltraSane a day ago

    This is just being lazy. I like to use Claude and Gemini to have debates and test ideas. If you do it right, you can learn new things with every chat.

    • ezst 21 hours ago

      Or you were just reading confabulations, without a way to tell, corrupting your knowledge in the process.

      In general, I believe the problem of our time (sociopolitical divide, echo chambers, propaganda, people getting pulled to extreme viewpoints ...) isn't so much the difficulty of accessing truthful information (most people would know how to fact-check their beliefs and assumptions given enough time and motivation), but the constant information overload that makes this process impractical.

      You are essentially softening your brain into accepting large volumes of information as facts, unchecked, and pretending that it's a good thing. You essentially no longer know the extent of what you know. Worse, you no longer know how you came to know it, because the underlying principles and processes of knowledge (natural laws, models, theories, ...) were not involved in the learning. You could assert that day means light and that night brings darkness because you have seen it repeated extensively and convincingly, but you wouldn't internalise this knowledge through the model of the earth orbiting the sun, and so you wouldn't know how to generalise from it to thinking about seasons or do any abstract reasoning on your own.

      That is to say, we should be much, much more cautious about what we read and how we learn about things.

      • intended 20 hours ago

        You will probably love Network Propaganda.

        This is most definitely the issue, and I'd say you can go a step further: there are groups of people who don't know how to stay safe in the information environment, while others understand how to shape that environment.

        The latter group is able to shape the content available to the former.

        https://news.harvard.edu/gazette/story/2018/10/network-propa...

  • david_shi a day ago

    How I imagine "wololo" would practically work

  • TacticalCoder a day ago

    Don't know about that research, but I certainly have read many HN comments, made by those who drank the AI kool-aid, where any semblance of logical thinking was gone (and I write this as someone who uses Claude Code CLI daily).

  • erelong a day ago

    This sounds like FUD to get people to abandon one of our strongest cognitive-enhancing tools of all time.

    • WolfeReader a day ago

      Nope, it's a well-researched article that shows its sources and qualifies its conclusions. You may not like the conclusions, but that doesn't make it FUD.

    • georgemcbay a day ago

      > This sounds like FUD to get people to abandon one of our strongest cognitive-enhancing tools of all time.

      AI's existence is like the mental equivalent of a heavily weighted barbell that also happens to be edible and tastes delicious. You could use it in a way that gets you in great shape; you could also use it in a way that gives you type 2 diabetes.

      It is up to you and your own experiences to decide how that is likely to go for most people.

      • erelong a day ago

        Exactly... I mean, the article is "tautological nonsense". Misuse a hammer and you hit your hand; use it well and you drive nails quicker. That's why I just dismiss these posts as FUD from the rich, who want people to turn in their hammers so they can move along quicker with less competition.

        • jdlshore a day ago

          It’s a report on what looks a very well-researched study. You may not like the results, but calling it nonsense is ridiculous. Did you even read the article?

        • intended 20 hours ago

          I’ve just gone through 3 separate papers on the cognitive impact on GenAI, and the points being raised are far more nuanced than what you are assuming them to be.

          I mean, you could read the papers themselves; they aren't inimical to your position by nature.

          For example, one of the more salient results is that the more confident you are in AI, the less likely you are to check the output.

          When a new invention arrives on the scene, its properties need to be mapped.

    • nickphx a day ago

      So, dear user, how does a non-deterministic black box of bullshit enhance cognition?

  • Rygian a day ago

    The very next entry on the homepage, just below this one: "The danger of military AI isn't killer robots; it's worse human judgement"

    https://news.ycombinator.com/item?id=47632016