LLMs are everything that it wrong in computing

(crys.site)

61 points | by lr0 a day ago

56 comments

  • isaacfrond a day ago

    How quickly people turn blasé about the miracle that LLMs are. Just a few years ago, passing the Turing test seemed an unattainable goal; now it is an irrelevance.

    Yet the author dismisses it all as ‘ChatGPT, LLMs and all that crap’. Unbelievable.

    I use LLMs every day. They have not only boosted my productivity but, most of all, they have made work more fun.

    • Legend2440 a day ago

      They made a computer program that can follow instructions in plain English. That's been a goal of computer science since the 60s, and I wasn't sure I was going to see it in my lifetime. This is more incredible than people give it credit for.

      • alexchamberlain a day ago

        I think GenAI is incredible in a lot of ways. For me, though, it's frustrating that GenAI is currently being used in a way that largely regurgitates and hallucinates information, rather than helping a computer follow instructions in plain English. For example, with at least earlier versions of GenAI, when you asked a question, it would make up an answer using the knowledge in its training data. That's why we saw limitations on what events the AI knew about, for example.

        It would be great to see the AI understand the question, query a knowledge base for the answer, and then rephrase that answer in English instead. I believe, though I've mostly kept out of the state of the art here, that's the direction it's going in, and if that's the case, confidence in the AI will go up. If we can, in parallel, decrease the horrendous amounts of energy being used to run the models, then maybe some of the detractors will start to turn?
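
        As a rough illustration of that retrieval-then-rephrase direction, here is a tiny sketch. The knowledge base, searchKnowledgeBase and the prompt wording are hypothetical stand-ins for a real search index and model call:

            // Toy in-memory knowledge base; a real system would query a search index.
            type Doc = { id: string; text: string };

            const kb: Doc[] = [
              { id: "eiffel", text: "The Eiffel Tower is about 330 m tall including antennas." },
            ];

            // Naive keyword match standing in for real retrieval (e.g. vector search).
            function searchKnowledgeBase(query: string, topK: number): Doc[] {
              const words = query.toLowerCase().split(/\W+/).filter(w => w.length > 3);
              return kb
                .filter(d => words.some(w => d.text.toLowerCase().includes(w)))
                .slice(0, topK);
            }

            // Ask the model to rephrase the retrieved facts rather than invent answers.
            function buildGroundedPrompt(question: string, docs: Doc[]): string {
              const context = docs.map(d => `[${d.id}] ${d.text}`).join("\n");
              return `Answer using ONLY the sources below; reply "unknown" if they don't cover it.\n` +
                `Sources:\n${context}\n\nQuestion: ${question}`;
            }

            // Usage: feed the grounded prompt to whatever chat-completion API you use.
            const q = "How tall is the Eiffel Tower?";
            console.log(buildGroundedPrompt(q, searchKnowledgeBase(q, 3)));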

      • weMadeThat a day ago

        OPINION:

        why is it incredible? we followed a bunch of lists of laws until we got ELIZA on virtual steroids ... after we cracked some milestones in hardware development.

        people work towards that. money is put into that. we'd have cancer solved if the same number of people and the same money flowed into the entangled fields.

        next step is digital steroids (signal processing) and the physical level once again.

        software-wise, the current and next generations of coders won't do anything "incredible" anymore. too few approaches on the playing field, no indie thinkers, and the capital is literally dumb.

        (enforced) conformity has too many downsides that can only be balanced via variety in the wide wild; but that means cumulative competition, which was handled via sabotage (a good analogy is doping in sports, US influence and the actually incredible cognitive & logical weakness as well as the cowardice of almost the entire rest of the world)

        so I cannot agree, people give too much credit indeed.

        • RareBean 8 hours ago

          Responses like this tickle me because they make clear there’s a batch of human beings who have just a profoundly different experience with LLMs than I do.

          I see these things and think, this is incredible, machines seem to be approximating or emulating conscious thought. There must be so much we can learn about ourselves, and so much they can do.

          You see the same thing and say meh, useless pattern matching, what’s the point, spend the money elsewhere.

          I wonder why we have this different perspective? I’ve seen these two reactions again and again—I suspect they evince two different worldviews, but I don’t know what the correlates are. I don’t think it’s techno optimism/pessimism, because I’m profoundly worried about what happens to us. It’s not purely an age thing—I’m not young. I see it on here all the time so I don’t think it’s field of work. So what is it, I wonder?

          • juliushuijnk 8 hours ago

            My hypothesis: it's people who believe in a just world, who think value and wisdom need to come from effort and/or pain.

            Potential litmus test: they like video clips from Coldplay where every frame is drawn with real crayons.

            Then it's hard to value a machine that cannot feel pain or effort and just generates. It didn't have to suffer, which isn't fair, so they flip the arrow and say: therefore, it's not valuable.

            Happy to be corrected. I'm also very curious what mental models are behind such big differences in perspective.

            • RareBean 7 hours ago

              This is interesting! So to some, perhaps, the creator, intent and process matter as much as—if not more than—the output. The “becoming” over the “being.”

    • uludag a day ago

      But in all honesty, if LLMs were really that miraculous, we wouldn't have such detractors. Like, I don't remember any widespread anti-search-engine sentiment when Google came out. I can name many technologies that never drew the kind of visceral reaction LLMs do.

      We build (or should build) technology for the betterment of society. If at the end of the day, large swathes of the population are unhappy or upset at a given technology, that in and of itself should be a sign that something is wrong.

      Obviously LLMs aren't going away, so I think we should listen to the detractors and try to better understand how we can steer this technology for the better.

      • emptiestplace a day ago

        The LLMs themselves aren't the fundamental concern; rather, they represent an unprecedented amplification of existing socioeconomic stratification mechanisms. They will likely accelerate the already problematic concentration of capital and opportunity among those positioned to leverage them - and we will see, inevitably, a continuation and intensification of patterns we've observed since the dawn of industrialization.

        > if LLMs were really that miraculous, we wouldn't have such detractors

        I don't think you thought this through.

        • uludag a day ago

          Good point; oftentimes it is the miraculous that attracts the most detractors, while a benign or useless technology would be ignored. I guess I meant something more like: a widespread negative reaction means something's probably not right.

          I also agree completely with your point that the socioeconomic context of LLMs causes the most contention, and that a deployment giving more autonomy and control to its users would get a much more toned-down reaction. Like, I don't really notice a strong reaction against locally deployed LLMs.

      • ivewonyoung 15 hours ago

        > If at the end of the day, large swathes of the population are unhappy or upset at a given technology, that in and of itself should be a sign that something is wrong.

        Productivity and efficiency gains at the short-term expense of inefficient jobs have always made people upset, so taking only that into consideration is wrong.

        > The Swing Riots were a widespread uprising in 1830 by agricultural workers in England against the mechanization of agriculture. The riots were a protest against the introduction of threshing machines, which had been increasing since the end of the 18th century. The threshing process was labor-intensive, and before the machines were introduced, it employed about 25% of all agricultural workers. The introduction of the machines gradually unemployed a large number of agricultural workers

        > Jethro Tull invented the seed drill in 1701, but it was not immediately popular in England. Tull's ideas were controversial, and his theories fell into disrepute. Tull's servants resisted his new methods because they threatened their position as laborers and their skill with the plow

      • Ygg2 a day ago

        > don't remember any widespread anti-search-engine sentiment when Google came out.

        Before Google switched to Gemini AI, it didn't offer a mustard gas recipe when prompted for cleaning advice.

        LLMs are tools, and highly unreliable ones at that. I don't fear them. I fear the idiots who think they are a viable replacement.

    • aragilar 21 hours ago

      The Turing Test is in the same vein as the Drake Equation: a means to focus thought on how to approach finding a solution, not to find a definitive answer (and whose existence in pop culture makes it a bigger thing than it is). The original Eliza should have been enough to render the Turing Test a moot point, I feel.

      Personally (as someone who's worked in an ML-adjacent field for a number of years, and so has seen the various ML waves), I have found LLMs a resounding disappointment compared to other ML tools (the only cool thing I've seen come out of them was Google using LLMs to generate code to try to fuzz code even better). Chasing the last 1% of accuracy is interesting from both an engineering and a mathematical perspective (entirely legitimate interests, though sadly they seem to have been lost to hype), but simple-to-explain-and-understand, less accurate methods (like a decision tree) are both cheaper to run and more predictable (which is what you usually want).

    • Ygg2 a day ago

      > Just a few years ago, passing the Turing test seemed an unattainable goal

      Who genuinely thought that? Eliza could almost fool a human, and that was in the 1960s...

      LLMs are a neat thing for doing some tedious tasks, but I've seen them make plain stupid mistakes that make my skin crawl.

      • CamperBob2 16 hours ago

        Eliza did fool some humans. Which is why the Turing test was always a flawed notion, one that says more about the human than the computer.

        That aside, nothing you said about LLMs doesn't apply to humans as well.

        • Jensson 14 hours ago

          > That aside, nothing you said about LLMs doesn't apply to humans as well.

          Humans make mistakes in diverse ways, which makes groups of them much better than an LLM, which will amplify its own mistakes.

          Computer programs have to be far better than a single human, since they lack the group dynamics of humans; so as long as you compare single humans to AI, people will rightfully demand more from the AI.

          If all humans ran the same program and gave the same answers, then humans would be way less useful. That is where LLMs are today.

          • CamperBob2 14 hours ago

            (Shrug) The only important thing is the time derivative. AI models are getting better over time, thanks to things like chain-of-thought reasoning, while humans are not.

            IMHO, LLMs will grow immensely in power once someone comes up with the right way to wrap them in a negative feedback loop to keep the hallucination "gain" under control. The result may no longer be recognizable as an LLM, but I'm pretty sure that the GPT concept will be recognized historically as the foundation for what follows.
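
            One naive shape for such a loop, sketched with hypothetical generate and critic functions supplied by the caller (an assumption, not a known design):

                // Damping loop: a critic scores each draft (0 = fully grounded,
                // 1 = pure hallucination) and we resample until the "gain" is low.
                async function dampedGenerate(
                  prompt: string,
                  generate: (p: string) => Promise<string>, // any LLM call
                  critic: (draft: string) => number,        // any fact-check signal
                  maxTries = 3,
                  tolerance = 0.1,
                ): Promise<string> {
                  let best = "";
                  let bestScore = Infinity;
                  for (let i = 0; i < maxTries; i++) {
                    const draft = await generate(prompt);
                    const score = critic(draft);            // feedback signal
                    if (score < bestScore) { best = draft; bestScore = score; }
                    if (score <= tolerance) break;          // "gain" under control
                  }
                  return best;
                }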

    • weMadeThat a day ago

      > I use LLMs every day. They have not only boosted my productivity but, most of all, they have made work more fun.

      That's a matter of character. Other people team up and top your LLM-augmented productivity tenfold without LLMs.

      I'm mostly solo myself and did get a bunch of good shit out of LLMs, but that has nothing to do with them being good. Search engines got me the same results in only a little more time.

      The one "good" thing about LLMs is that you can get algorithmic, logically straight, safe & sound FACTS about topics like how the insurance mafia monetizes on & cooperates with various other entities to create crises, or how the smuggling of weapons is done and used to increase premiums on, and thus prices in, trade via tankers & shipping. Or why labile prices on gas are complete fucking bullshit.

      And the list goes on and on and on. But who in the respective, theoretically respectable positions cares?

      Hallucinations have been built into LLMs on purpose. The above is just one of them. And it's just the cooperation of magic money & the enforced conformity of engineers.

      • aoeusnth1 14 hours ago

        I missed the part where GP said they worked solo.

        Can you share some of your prompts for the topics you listed? Asking about it straightforwardly doesn't get very far.

    • alganet a day ago

      It is, like you said, good enough. A (sometimes disappointing) miracle.

      So we can stop making it bigger/smarter and instead focus on making it more accessible.

    • emptiestplace a day ago

      I keep saying this but nobody seems to really get it or care: LLMs are magic. This isn't at all like conventional software where someone plans for every possible path, or even software where paths are generated procedurally. These models show truly emergent behaviors that go far beyond their training. And yet people dismiss them simply because they're probabilistic (more likely because they've heard some version of this, actually), as if that somehow invalidates the fact that they're doing things they were never programmed to do.

      Our understanding of consciousness is about to undergo a revolutionary transformation, and a lot of people ... aren't going to like that, either.

      • krackers a day ago

        > simply because they're probabilistic

        You can make them deterministic by fixing the temperature at zero. But even then, there is some intrinsic error rate: it's like an oracle machine where some subset of queries will simply give you the wrong answer... which is just like humans (how many times have answers on StackOverflow been wrong?), but it's definitely not like "conventional" programs.
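
        To make that concrete, the usual temperature-scaled softmax fits in a few lines (a toy calculation, not any vendor's API); as T approaches 0 the distribution collapses onto the argmax token, which is what makes greedy decoding repeatable:

            // Temperature divides the logits before softmax; small T sharpens the
            // distribution until sampling degenerates into always picking the argmax.
            function softmax(logits: number[], temperature: number): number[] {
              const scaled = logits.map(l => l / temperature);
              const m = Math.max(...scaled); // subtract the max for numerical stability
              const exps = scaled.map(s => Math.exp(s - m));
              const z = exps.reduce((a, b) => a + b, 0);
              return exps.map(e => e / z);
            }

            const logits = [2.0, 1.5, 0.3];
            console.log(softmax(logits, 1.0));  // ~[0.56, 0.34, 0.10]: any token can be sampled
            console.log(softmax(logits, 0.01)); // ~[1.00, 0.00, 0.00]: always the same token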

        I guess just as people had to get used to "trusting" computers, I think there's probably going to be another adjustment period of "untrusting" them and finding the right balance of "trust but verify." I think this is also mirrored in adoption patterns: mathematicians seem to be more open to using LLMs because they're used to not blindly trusting supposed proofs. On the other hand, it's well known that writing code is easier than reading it, so developers are a bit more wary.

  • teruakohatu a day ago

    > Software also used to be:

    > much more open

    I am not sure that is the case. It is quite rare today for people to be developing with closed-source programming languages outside of banking and finance, but that wasn't the case not so long ago, before .NET (2014) and Java (2007) were open sourced.

    • postpawl a day ago

      Saying things were “much more open” before GitHub is wild.

    • optimalsolver a day ago

      Time was you used to have to pay for your compiler, and then manually crank it by hand. Kids these days have no idea.

  • wewtyflakes a day ago

    The hyperbole and absolute statements in the post are distracting and make it difficult to read through. It also seems the author tried to jam an LLM-sized peg into a classical-programming-sized hole, became angry that it didn't work cleanly, and now believes it's the LLM's problem.

  • gherkinnn a day ago

    > I will keep this site althole-free, [...] This is my life and this is where I have any control. This is the only rebellion I am capable of.

    I might agree with some of the author's sentiments, but based on this post alone, he gives the impression that computing is his whole life. And now his life is going through a shift.

    The problem here isn't LLMs.

    • ianbutler a day ago

      I’ve been thinking about writing a blog post, but the gist of it is this realization: developers whose identities are wrapped up in the act of coding as it stands hate LLMs, because LLMs represent a drastic shift in the way programs are written.

      If being the smart dev writing clever code is the core of your identity and you have nothing else anchoring you, you’re probably going to have a bad time over the next few years, because the game is changing.

    • uludag a day ago

      I think saying that the author is stuck in his ways and is just fighting change is a bit too simplistic.

      - All major tech companies becoming "AI first"
      - The websites we use are still slow and buggy
      - "AI" feature bloat everywhere
      - The end goal of these companies is not to enhance some technical professional's workflow but to become truly ubiquitous, inserted into everything (how else could such high valuations be justified)
      - Prioritization of profits above good products (enshittification)

      These are all valid points to bring up. The author is just saying that he wants his personal website to remain free from LLM usage, probably out of a desire for agency more than anything.

  • becquerel a day ago

    My natural instinct when I encounter this kind of person is to lavish scorn on them, but I have to hold myself back from that. Partly because it's the mature thing to do, and partly because the coming years will be hard enough on them anyway.

    As a sidenote, it amuses me that this author specifically calls out LLMs for being non-deterministic as a grave sin. Despite it being untrue (they've never heard of temperature, I guess), it's funny to me that it should be asserted as a prime directive for computing. Determinism is certainly a useful property where you can get it, but there's a whole lot of useful systems where you can't get it.

    • zkry a day ago

      > My natural instinct when I encounter this kind of person is to lavish scorn on them, but I have to hold myself back from that. Partly because it's the mature thing to do, and partly because the coming years will be hard enough on them anyway.

      That's an awfully patronizing way of putting things. The coming years can be hard for anyone for a number of reasons: geopolitical conflict, economic depression, natural disasters, pandemics, etc. I doubt that harboring negative views towards the current usage of LLMs will make a difference.

    • dns_snek a day ago

      > calls out LLMs for being non-deterministic as a grave sin. Despite it being untrue (they've never heard of temperature, I guess)

      With all that grandstanding, you've missed the author's point: temperature doesn't make them deterministic in any way that matters in the real world.

      It only makes their output deterministic for the exact sequence of input tokens (I've yet to see an application where this would be useful; just save the result if you want the exact same one!). It doesn't make them deterministic for a different but semantically equivalent sequence of tokens, which is the core value proposition of LLMs.

      The answer to "How tall is the Eiffel tower?" shouldn't change based on whether I also add "and where is its nearest bakery?", but with more complex questions, that's what often happens.
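
      A toy stand-in (hypothetical model, canned answers) makes the distinction concrete: the output is a pure function of the exact input bytes, which is all that temperature-0 determinism guarantees:

          // Deterministic toy "model": same bytes in, same answer out - and
          // nothing more. A rewording can land on any other answer.
          function toyModel(prompt: string): string {
            let h = 0;
            for (const ch of prompt) h = (h * 31 + ch.charCodeAt(0)) | 0; // string hash
            const answers = ["330 metres", "324 metres", "about 300 m"];
            return answers[Math.abs(h) % answers.length];
          }

          console.log(toyModel("How tall is the Eiffel tower?") ===
                      toyModel("How tall is the Eiffel tower?")); // true: identical input
          console.log(toyModel("How tall is the Eiffel tower?") ===
                      toyModel("What is the height of the Eiffel tower?")); // not guaranteed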

      • DemocracyFTW2 a day ago

        Exactly, and you can watch the effect graphically in image generators, which in my experience behave not only non-deterministically but chaotically, in the sense that any small change to the input prompt can cause arbitrary changes in the output picture.

  • preordained 18 hours ago

    As applied now, the author's take strikes a chord with me. LLMs are a technical marvel, but their results feel very cheap and tacky... it doesn't take much to catch a glimpse of the fallible man behind the curtain. They are lauded more for how impressive the illusion they render is, all things considered, than for what they are actually GOOD FOR. Weird hallucinating search, smart but untrustworthy coding intern... people hold these things up in a way that suggests we've arrived, instead of acknowledging their disappointment and saying "yes, there is a lot of power here, but we still haven't found the killer application, the thing that takes this from an impressive but flawed trick to indispensable".

  • abss a day ago

    I believe there are indications that suggest the era of big LLMs will come to an end because they will hit a price and performance wall. There is a serious possibility that they will remain an NLP tool, and real thinking will be formalized as various types of software in different niches. Multi-agent systems will resemble the early web, with thousands or millions of variations and different types of expert agents, not a single "god-like" software capable of doing everything.

    It’s natural for major players to try to create a "digital god"; this is their monopolistic path. If this becomes possible, they will need to be privatized, and LLMs turned into infrastructure services for all of humanity, or else we risk ending up in a dystopia. However, there is a serious chance they won’t achieve a "digital god" and will instead, unintentionally, create a world with decentralized intelligence.

    It’s better to be optimistic—we have nothing to lose, even if it’s not realistic.

  • mandymoorefan a day ago

    Are the grammatical and spelling errors on purpose?

    • bottled_poe a day ago

      Should have reviewed their content with ChatGPT before posting. Maybe next time.

      • fzeroracer a day ago

        I think you've managed to hit the nail on the head.

        Rather than accept the rather human nature of people writing in their own style and making mistakes, we'd prefer to filter it through a dispassionate void first.

        It's rather embarrassing how quickly we're willing to toss away the human elements of writing.

        • userbinator 7 hours ago

          Agreed. LLM writing style is disgustingly bland and "offensively inoffensive" like Corporate Memphis. Would rather have actual human style, mistakes and all.

    • yawnxyz a day ago

      I usually excuse typos, but when the post is literally about:

      > Software used to also be substantially better written

      I think he ends up tanking his own argument before he even starts. Heck, there's a typo in the headline (which I guess is good bait, since I now clicked on and responded in this discussion...)

    • userbinator a day ago

      To show that it's a real human that wrote the article? Interesting idea.

      • polotics 14 hours ago

        It's well known that spelling and grammatical errors are a good steganography channel; however, I don't think that's what's going on in The Fudged Article here.

    • uludag a day ago

      I think I've started to prefer text with spelling and grammatical errors now, to a certain degree at least.

    • zetalyrae a day ago

      The author is an ESL speaker.

  • mg a day ago

    Not sure the author's main point - that computers become slower and less reliable over time - holds.

    HN is snappy and rarely fails to function. I can't remember ever using a better group communication tool. The author's website is also snappy and worked perfectly to let them express their view and for me to read it.

    A lot of software I use today is software I wrote myself and that works in the browser. It is reliable and snappy too. And I write more and more of it with the help of LLMs. Just yesterday, I wrote an image editing tool with a very special feature set and was amazed how much faster I was able to implement features in perfectly elegant code via this approach:

        1: Give the existing code to an LLM and instruct it to add a feature.
        2: Test the feature to make sure it works.
        3: Go through the changes via git diff in one window and clean them up in another window in VIM.
        4: Test again.
        5: Commit.
        6: Goto 1
    
    This way, the development of this new tool was multiple times faster than if I had written it manually.

    To me, the main question mark about the future of computers is how browsers will develop. The one thing I am missing is that currently no mobile browser supports the File System Access API. On desktop, Chrome does. And that allows users to securely save their data on their own device. So web software does not have to mean "all your data needs to be in the cloud". Chrome is already working on bringing that to mobile. Then the current state of affairs on desktop and Android will be perfect from my perspective. And all I hope for is that Safari follows, so that iOS users can comfortably use my software too.
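
    For reference, a minimal sketch of saving to a user-chosen local file with the File System Access API in desktop Chrome (the file name and payload are placeholders):

        // Save app data straight to a local file instead of the cloud.
        // showSaveFilePicker is Chrome-only for now, hence the `as any` cast.
        async function saveLocally(data: string): Promise<void> {
          const handle = await (window as any).showSaveFilePicker({
            suggestedName: "project.json", // placeholder name
            types: [{ description: "JSON", accept: { "application/json": [".json"] } }],
          });
          const writable = await handle.createWritable();
          await writable.write(data); // write the contents
          await writable.close();     // commit; the data stays on the user's device
        }

        // Must be called from a user gesture, e.g. saveLocally(JSON.stringify(state))
        // in a click handler; `state` is whatever your app wants to persist.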

  • 1vuio0pswjnm7 12 hours ago

    There is not much the author has stated that I could disagree with.

    Not everyone is excited about LLMs.

  • DemocracyFTW2 a day ago

    > In unrelated news - SystemD now tries to convince people that having file-based logs is deprecated. And in few short years we will have Linux “administrators” who don’t know how to use grep(1).

    This is so in-character, railing about how everything is awful nowadays, then some drive-by ranting in the general direction of a popular target like SystemD. I wonder how this guy can justify the use of CSS and an image on their web page.

    • oefrha a day ago

      > I wonder how this guy can justify the use of CSS and an image on their web page.

      Probably because CSS was already a thing when they were young. Everything that came afterwards is moral decay.

  • rexpop a day ago

    > This [site] is my life and this is where I have any control. This is the only rebellion I am capable of.

    Jeeze, talk about defeatist! What a narrow view of the world. Is this cowardice, or mere naivete?

    Edit: I'll give credit where due—

    > aimed not at better software, but at better return of investment. We could have had much better working computers.

    True, yes, but we weren't—haven't been—able to pay for them.

    And, remember, software is only a means to an end! And that end is value for the humans invested in it! You know, a return value. This is a higher priority than "better software," because with ROI I can get done any number of things that are impossible for computers to accomplish. I want my return in the form of a fungible medium of exchange, not some programmer's idea of a good time!

    I say this as a programmer whose software has been REALLY GOOD and REALLY UNPOPULAR.

    If people knew what computers were capable of, maybe they'd pay us to build them. I'm hoping AI will at least rekindle that optimism, even if LLMs are largely a bill of goods.

    • manmal a day ago

      A more favorable explanation would be that it's resistance to letting go of how things have been these last decades.

  • primaprashant a day ago

    Tell me you know nothing about machine learning without telling me you know nothing about machine learning

  • wayfwdmachine a day ago

    "Hello? We have a cheese emergency."

    • tonetegeatinst a day ago

      Cheesed to meet you, may I offer you an egg in this trying time?
