I Am Tired of AI

(ontestautomation.com)

1172 points | by Liriel 3 days ago ago

1069 comments

  • Animats 2 days ago ago

    I'm tired of LLMs.

    Enough billions of dollars have been spent on LLMs that a reasonably good picture of what they can and can't do has emerged. They're really good at some things, terrible at others, and prone to doing something totally wrong some fraction of the time. That last limits their usefulness. They can't safely be in charge of anything important.

    If someone doesn't soon figure out how to get a confidence metric out of an LLM, we're headed for another "AI Winter". Although at a much higher level than last time. It will still be a billion dollar industry, but not a trillion dollar one.

    At some point, the market for LLM-generated blithering should be saturated. Somebody has to read the stuff. Although you can task another system to summarize and rank it. How much of "AI" is generating content to be read by Google's search engine? This may be a bigger energy drain than Bitcoin mining.

    • datahack 2 days ago ago

      It’s probably generally irrelevant what they can do today, or what you’ve seen so far.

      This is conceptually essentially Moore’s law, but about every 5.5 months. That’s the only thing that matters at this stage.

      I watched everyone make the same arguments about the general Internet, and then the Web, then mobile, then Bitcoin. It’s just a toy. It’s not that useful. Is this supposed to be the revolution? It uses too much power. It won’t scale. The technology is a dead end.

      The general pattern of improvement to technology has been radically to the upside at an increasing pace parabolically for decades and there’s nothing indicating that this is a break in the pattern. In fact it’s setting up to be an order of magnitude greater impact than the Internet was. At a minimum, I don’t expect it to be smaller.

      Looking at early telegraphs doesn’t predict the iPhone, etc.

      Optimism is warranted here until it isn’t.

      • YeGoblynQueenne 2 days ago ago

        >> Looking at early telegraphs doesn’t predict the iPhone, etc.

        The problem with this line of argument is that LLMs are not new technology, rather they are the latest evolution of statistical language modelling, a technology that we've had at least since Shannon's time [1]. We are way, way past the telegraph era, and well into the age of large telephony switches handling millions of calls a second.

        Does that mean we've reached the end of the curve? Personally, I have no idea, but if you're going to argue we're at the beginning of things, that's just not right.

        ________________

        [1] In "A Mathematical Theory of Communication", where he introduces what we today know as information theory, Shannon gives as an example of an application a process that generates a string of words in natural English according to the probability of the next letter in a word, or the next word in a sentence. See Section 3 "The Series of Approximations to English":

        https://people.math.harvard.edu/~ctm/home/text/others/shanno...

        Note: Published 1948.

        • lozzo a day ago ago

          thumbs up for this comment. As for me, I am tired of people talking about AI starting their argument with incorrect information

      • bburnett44 2 days ago ago

        I think we can pretty safely say bitcoin was a dead end other than for buying drugs, enabling ransomware payments, or financial speculation.

        Show me an average person who has bought something real w bitcoin (who couldn’t have bought it for less complexity/transaction cost using a bank) and I’ll change my mind

        • nobodyandproud a day ago ago

          I came from the other end: I had years to get on the boat and missed it.

          I couldn’t fathom what use Bitcoin could possibly have, but completely overlooked the bad actors that would benefit.

        • d0mine a day ago ago

          I know nothing on the topic but why would one use bitcoin to buy drugs? Aren't all bitcoin transactions public and immutable?

          • camus21 a day ago ago

            Yes but they’re also anonymous. You don’t have your name attached to the account and there’s no paperwork/bank that’s keeping track of any large/irregular financial transactions

            • snovv_crash a day ago ago

              So, exactly like cash?

              • al_borland a day ago ago

                I heard this as one of the early sales pitches for Bitcoin. “Digital cash.”

                That all seemed to go out the window when companies developed wallets to simplify the process for the average user, and when the prices surged, some started requiring account verification to tie it to a real identity. At that point, it’s just a bank with a currency that isn’t broadly accepted. The idea of digital cash was effectively dead, at least for the masses who aren’t going to take the time to figure out how to use Bitcoin without a 3rd party involved. Cash is simple.

              • CharlieDigital a day ago ago

                Not quite because there are logistical challenges to moving large quantities of physical money.

              • mrbungie a day ago ago

                Yeah, exactly, but with the energy requirements of a small country to run the network.

        • fsmv a day ago ago

          Bitcoin failed because of bad monetary policy turning it into something like a ponzi scheme where only early adopters win. The monetary policy isn't as hard to fix as people make it out to be.

      • newaccountman2 a day ago ago

        I am generally on your side of this debate, but Bitcoin is a reference that is in favor of the opposite position. Crypto is/was all hype. It's a speculative investment, that's all atm.

        • jack_pp 21 hours ago ago

          Bitcoin is the only really useful crypto that fundamentally has no reason to die because of basic economics. It is fundamentally the only hard currency we have ever created and that's why it is revolutionary

          • socksy 18 hours ago ago

            I find it hard to accept the statement that "[bitcoin] is fundamentally the only hard currency we have ever created". Is it saying that gold back currencies were not created by us, or that gold isn't hard enough?

            Additionally, there's a good reason we moved off deflationary hard currencies and onto inflationary fiat currencies. Bitcoin acts more like a commodity than a medium of exchange. People tend to buy it, hold it, and then eventually cash out. If I am given a bunch of bitcoin, the incentive is for me not to spend it, but rather keep it close and wait for it to appreciate — what good is a currency that people don't spend?

            Also I find it weird when I read that due to its mathematically proven finite supply it is basic economics that gives it value. Value in modern economics is defined as what people are willing to give up to obtain that thing. Right now, people are willing to give up a lot for bitcoin, but mainly because other people are also willing to give up a lot for bitcoin, which gives it value.

            It's a remarkable piece of engineering that has enabled this (solving the double spending problem especially), but it doesn't have inherent value in and if itself. There are many finite things in the world that are not valued as highly as bitcoin is. There's a finite number of beanie babies, a finite number is cassette tapes, a finite number of blockbuster coupons...

            Gold is similar — should we all agree tomorrow that gold sucks and should never be regarded as a precious metal, then it won't lose its value completely (there's only a finite amount of it, and some people will still want it, e.g. for making connectors). But its current valuation is far higher than it would be for its scarcity alone — people mainly want gold, because other people want gold.

      • runeks a day ago ago

        > I watched everyone make the same arguments about the general Internet, and then the Web, then mobile, then Bitcoin.

        You’re conveniently forgetting all the things that followed the same trajectory as LLMs and then died out.

      • otabdeveloper4 15 hours ago ago

        > ...about the general Internet, and then the Web, then mobile, then Bitcoin. It’s just a toy. It’s not that useful.

        Well, they're not wrong. They are not that useful toys.

        (Yes, the "Web" included.)

      • jiggawatts 2 days ago ago

        Speaking of the iPhone, I just ugpraded to the 16 Pro because I want to try out the new Apple Intelligence features.

        As soon as I saw integrated voice+text LLM demos, my first thought was that this was precisely the technology needed to make assistants like Siri not total garbage.

        Sure, Apple's version 1.0 will have a lot of rough edges, but they'll be smoothed out.

        In a few versions it'll be like something out of Star Trek.

        "Computer, schedule an appointment with my Doctor. No, not that one, the other one... yeah... for the foot thing. Any time tomorrow. Oh thanks, I forgot about that, make that for 2pm."

        Try that with Siri now.

        In a few years, this will be how you talk to your phone.

        Or... maybe next month. We're about to find out.

        • al_borland a day ago ago

          The issue with appointments is the provider needs to be integrated into the system. Apple can’t do that on their own. It would have to be more like the roll out of CarPlay. A couple partners at launch, a lot of nothing for several years, and eventually is a lot of places, but still not universal.

          I could see something like Uber or Uber Eats trying to be early on something like this, since they already standardized the ordering for all the restaurants in their app. Scheduling systems are all over the place.

        • graycat 16 hours ago ago

          Many situations, prefer text to voice. Text: Easier record keeping, manipulation, search, editing, ....

          With some irony, the Hacker News user interface is essentially all just simple text.

          A theme in current computer design seems to be: Assume the user doesn't use a text editor and, instead, needs an 'app' for every computer interaction. Like cars for people who can't drive, and a car app for each use of the car -- find a new BBQ restaurant, need a new car app.

          Sorry, Silicon Valley, with text anyone who used a typewriter or pocket calculator can do more and have fewer apps and more flexibility, versatility, generality.

    • mhowland 2 days ago ago

      "They're really good at some things, terrible at others, and prone to doing something totally wrong some fraction of the time."

      I agree 100% with this sentiment, but, it also is a decent description of individual humans.

      This is what processes and control systems/controls are for. These are evolving at a slower pace than the LLMs themselves at the moment so we're looking to the LLM to be its own control. I don't think it will be any better than the average human is at being their own control, but by no means does that mean it's not a solvable problem.

      • latexr 2 days ago ago

        > I agree 100% with this sentiment, but, it also is a decent description of individual humans.

        But you can understand individual humans and learn which are trustworthy for what. If I want a specific piece of information, I have people in my life that I know I can consult to get an answer that will most likely be correct and that person will be able to give me an accurate assessment of their certainty and they know how to accurately confirm their knowledge and they’ll let me know later if it turns out they were wrong or the information changed and

        None of that is true with LLMs. I never know if I can trust the output, unless I’m already an expert on the subject. Which kind of defeats the purpose. Which isn’t to say they’re never helpful, but in my experience they waste my time more often than they save it, and at an environmental/energy cost I don’t personally find acceptable.

        • closeparen 2 days ago ago

          It defeats the purpose of LLM as personal expert on arbitrary topics. But the ability to do even a mediocre job with easy unstructured-data tasks at scale is incredibly valuable. Businesses like my employer pay hundreds of professionals to run business process outsourcing sites where thousands of contractors repeatedly answer questions like "does this support contact contain a complaint about X issue?" And there are months-long lead teams to develop training about new types of questions, or to hire and allocate headcount for new workloads. We frequently conclude it's not worth it.

        • kenjackson 2 days ago ago

          Actually humans are much worse in this regard. The top performer on my team had a divorce and his productivity dropped by like a factor of 3 and quality fell of a cliff.

          Another example from just yesterday is I needed to solve a complex recurrence relation. A friend of mine who is good at math (math PhD) helped me for about 30 minutes still without a solution and a couple of false starts. Then he said try ChatGPT and we got the answer in 30s and we spent about 2 minutes verifying it.

          • andrepd 2 days ago ago

            I call absolute bullshit on that last one. There's no way ChatGPT solves a maths problem that a maths PhD cannot solve, unless the solution is also googleable in 30s.

            • andreasmetsala 2 days ago ago

              > unless the solution is also googleable in 30s.

              Is anything googleable in 30s? It feels like finding the right combination of keywords that bypasses the personalization and poor quality content takes more than one attempt these days.

              • gomerspiles 2 days ago ago

                Right, AI is really just what I use to replace google searches I would have used to find highly relevant examples 10 years back. We are coming out of a 5 year search winter.

              • andrepd a day ago ago

                Duck-duck-goable then :)

          • salawat a day ago ago

            >Actually humans are much worse in this regard. The top performer on my team had a divorce and his productivity dropped by like a factor of 3 and quality fell of a cliff.

            Wow. Nice of you to see a coworker go through a traumatic life event, and the best you can drudge up is to bitch about lost productivity and decrease in selfless output of quality to someone else's benefit when they are at the time trying to stitch their life back together.

            SMH. Goddamn.

            Hope your recurrence relation was low bloody stakes. If you spent only two minutes verifying something coming out of a bullshit machine, I'd hazard you didn't do much in the way of boundary condition verification.

            • kenjackson a day ago ago

              You are a total jerk. The point wasn’t about the totality of the experience with that person. I was answering a specific reference. But I can tell you aren’t the type of person who can make such a distinction. Way to bring down the quality of the discussion.

              I feel sorry for your family.

      • Gazoche 2 days ago ago

        > I agree 100% with this sentiment, but, it also is a decent description of individual humans.

        But humans can be held accountable, LLMs cannot.

        If I pay a human expert to compile a report on something and they decide to randomly make up facts, that's malpractice and there could be serious consequences for them.

        If I pay OpenAI to do the same thing and the model hallucinates nonsense, OpenAI can just shrug it and say "oh that's just a limitation of current LLMs".

      • linsomniac 2 days ago ago

        >also is a decent description of individual humans

        A friend of mine was moving from software development into managing devs. He told me: "They often don't do things the way or to the quality I'd like, but 10 of them just get so much more done than I could on my own." This was him coming to terms with letting go of some control, and switching to "guiding the results" rather than direct control.

        The LLMs are a lot like this.

        • theamk 2 days ago ago

          Your friend got lucky, I've seen (and worked with) people with negative productivity - they make the effort and sometimes they commit code, but it inevitably ends up being broken, and I realize that it would take less of my time for me to write the code myself, rather than spend all the time explaining and then fixing bugs.

          The LLMS are a lot like this.

      • YeGoblynQueenne 2 days ago ago

        >> I agree 100% with this sentiment, but, it also is a decent description of individual humans.

        Why would that be a good thing? The big thing with computers is that they are reliable in ways that humans simply can't ever be. Why is it suddenly a success to make them just as unreliable as humans?

        • welshwelsh a day ago ago

          I thought the big thing with computers is that they are much cheaper than humans.

          If we are evaluating LLM suitability for tasks typically performed by humans, we should judge them by the same standards we judge humans. That means it's OK to make mistakes sometimes.

      • Too 2 days ago ago

        You missed quoting the next sentence about providing confidence metric.

        Humans may be wrong a lot but at least the vast majority will have the decency to say “I don’t know”, “I’m not sure”, “give me some time to think”, “my best guess is”. In contrast to most LLMs today that in full confidence just spews out more hallucinations.

    • jajko 2 days ago ago

      I'll keep buying (and paying premium) for dumber things. Cars are a prime example, I want it to be dumb as fuck, offline, letting me decide what to do. At least next 2 decades, and thats achievable. After that I couldnt care less, I'll probably be a bad driver at that point anyway so switch may make sense. I want dumb beautiful mechanival wristwatch.

      I am not ocd-riddled insecure man trying to subconsiously immitate much of the crowd, in any form of fasion. If that will make me an outlier, so be it, a happier one.

      I suspect new branch of artisanal human-mind-made trademark is just behind the corner, maybe niche but it will find its audience. Beautiful imperfections, clear clunky biases and all that.

    • spencerchubb 2 days ago ago

      LLMs have been improving exponentially for a few years. let's at least wait until exponential improvements slow down to make a judgement about their potential

      • bloppe 2 days ago ago

        They have been improving a lot, but that improvement is already plateauing and all the fundamental problems have not disappeared. AI needs another architectural breakthrough to keep up the pace of advancement.

        • og_kalu 2 days ago ago

          >but that improvement is already plateauing

          Based on what ? The gap between the release of GPT-3 and 4 is still much bigger than the time that has elapsed since 4 was already released so really, Based on what ?

          • riku_iki 2 days ago ago

            there are no much reliable benchmarks which would measure what is gap really. I think currently corps compete in who will leak benchmarks to training data the most, hence o1 is world programming medalist, yet makes stupid mistakes.

        • Animats 2 days ago ago

          Yes. Anything on the horizon?

          • bloppe 2 days ago ago

            I'm not as up-to-speed on the literature as I used to be (it's gotten a lot harder to keep up), but I certainly haven't heard of any breakthroughs. They tend to be pretty hard to predict and plan for.

            I don't think we can continue simply tweaking the transformer architecture to achieve meaningful gains. We will need new architectures, hopefully ones that more closely align with biological intelligence.

            In theory, the simplest way to real superhuman AGI would be to start by modeling a real human brain as a physical system at the neural level; a real neural network. What the AI community calls "neural networks" are only very loose approximations of biological neural networks. Real neurons are subject to complex interactions between many different neurotransmitters and neuromodulators and they grow and shift in ways that look nothing like backpropagation. There already exist decently accurate physical models for single neurons, but accurately modeling even C. elegans (as part of the OpenWorm project) is still a way's off. Modeling a full human brain may not be possible within our lifetime, but I also wouldn't rule that out.

            And once we can accurately model a real human brain, we can speed it up and make it bigger and apply evolutionary processes to it much faster than natural evolution. To me, that's still the only plausible path to real AGI, and we're really not even close.

            • segasaturn 2 days ago ago

              I was holding out hope for Q*, which OAI talked about with hushed tones to make it seem revolutionary and maybe even dangerous, but that ended up being o1. o1 is neat, but its far from a breakthrough. It's just recycling the same engine behind GPT-4 and making it talk to itself before spitting out its response to your prompt. I'm quite sure they've hit a ceiling and are now using smoke-and-mirrors techniques to keep the hype and perceived pace-of-progress up.

          • schleck8 2 days ago ago

            OpenAI's Orion (GPT 5/Next) is partially trained on synthetic data generated with a large version of o1. Which means if that works the data scarcity issue is more or less solved.

        • amelius 2 days ago ago

          If they were plateauing it would mean OpenAI would have lost its headstart wrt the competition, which is not the case I believe.

          • bloppe 2 days ago ago

            OpenAI has the biggest appetite for large models. GPT-4 is generally a bit better than Gemini, for example, but that's not because Google can't compete with it. Gemini is orders of magnitude smaller than GPT-4 because if Google were to run a GPT-4-sized model every time somebody searches on Google, they would literally cease to be a profitable company. That's how expensive inference on these ultra-large models is. OpenAI still doesn't really care about burning through hundreds of billions of dollars, but that cannot last forever.

            • bunderbunder 2 days ago ago

              This, I think, is the crux of it. OpenAI is burning money at a furious rate. Perhaps this is due to a classic tech industry hypergrowth strategy, but the challenge with hypergrowth strategies is that they tend to involve skipping over the step where you figure out if the market will tolerate pricing your product appropriately instead of selling it at a loss.

              At least for the use cases I've been directly exposed to, I don't think that is the case. They need to keep being priced about where they are right now. It wouldn't take very much of a rate hike for their end users to largely decide that not using the product makes more financial sense.

          • dimitri-vs 2 days ago ago

            They have, Anthropic Claude Sonnet 3.5 is superior to GPT-4o in every way, it's even better then their new o1 model at most things (coding, writing, etc.).

            OpenAI went from GPT-4, which was mind blowing, to 4o, which was okay, to o1 which was basically built in chain-of-thought.

            No new Whisper models (granted, advanced voice chat is pretty cool). No new Dalle models. And nobody is sure what happened to Sora.

          • hatthew 2 days ago ago

            OpenAI had a noticeable head start with GPT-2 in 2019. They capitalized on that head start with ChatGPT in late 2022, and relatively speaking they plateaued from that point onwards. They lost that head start 2.5 months later with the announcement of Google Bard, and since then they've been only slightly ahead of the curve.

          • talldayo 2 days ago ago

            It's pretty undeniable that OpenAI's lead has been diminished greatly from the GPT-3 days. Back then, they could rely on marketing their coherency and the "true power" of larger models. But today we're starting to see 1B models that are undistinguishable from OpenAI's most advanced chain-of-thought models. From a turing test perspective, I don't think the average person could distinguish between an OpenAI and a Llama 3.2 response.

      • COAGULOPATH 2 days ago ago

        In some domains (math and code), progress is still very fast. In others it has slowed or arguably stopped.

        We see little progress in "soft" skills like creative writing. EQBench is a benchmark that tests LLM ability to write stories, narratives, and poems. The winning models are mostly tiny Gemma finetunes with single-digit parameter counts. Huge foundation models with hundreds of billions of parameters (Claude 3 Opus, Llama 3.1 405B, GPT4) are nowhere near the top. (Yes, I know Gemma is a pruned Gemini). Fine-tuning > model size, which implies we don't have a path to "superhuman" creative writing (if that even exists). Unlike model size, fine-tuning can't be scaled indefinitely: once you've squeezed all the juice out of a model, what then?

        OpenAI's new o1 model exhibits amazing progress in reasoning, math, and coding. Yet its writing is worse than GPT4-o's (as backed by EQBench and OpenAI's own research).

        I'd also mention political persuasion (since people seem concerned about LLM-generated propaganda). In June, some researchers tested LLM ability to change the minds of human subjects on issues like privatization and assisted suicide. Tiny models are unpersuasive, as expected. But once a model is large enough to generate coherent sentences, persuasiveness kinda...stops. All large models are about equally persuasive. No runaway scaling laws are evident here.

        This picture is uncertain due to instruction tuning. We don't really know what abilities LLMs "truly" possess, because they've been crippled to act as harmless, helpful chatbots. But we now have an open-source GPT-4-sized pretrained model to play with (Llama-3.1 405B base). People are doing interesting things with it, but it's not setting the world on fire.

        • throw987987123 2 days ago ago

          It feels ironic if the only thing that the current wave of Ai enables (other than novelty cases) is a cutdown of software/coding jobs. I don't see it replacing math professionals too soon for a variety of reasons. From an outsiders perspective on the software industry it is like it's practioners voted to make themselves redundant - that seems to be the main takeaway of ai to normal non tech people ive chatted with.

          Many people have anecdotally, when I tell them what I do for a living, have told me that any other profession would have the common sense/street smarts to not make their scarce skill redundant. It goes further than that; many professions have license requirements, unions, professional bodies, etc to enforce this scarcity on the behalf on their members. After all a scarce career in most economies is one not just of wealth but higher social standing.

          If all it does is allow us to churn more high level software, which let's be honest is demand inelastic due to mostly large margins on software products (i.e. they would of paid a person anyway due to ROI) it doesn't seem it will add much to society other than shifting profit in tech from Labor to Capital/owners. May replace call centre jobs too I guess and some low level writing jobs/marketing. Haven't seen any real new use cases that change my life yet positively other than an odd picture/ai app, fake social posts,annoying AI assistants in apps, maybe some teaching resources that would of been made/easy to acquire anyway by other means etc. I could easily live without these things.

          If this is all it is seems Ai will do or mostly do it seems like a bit of a disappointment. Especially for the massive amount of money going into it.

          • Viliam1234 a day ago ago

            > many professions have license requirements, unions, professional bodies, etc to enforce this scarcity on the behalf on their members. After all a scarce career in most economies is one not just of wealth but higher social standing.

            Well, that's good for them, but bad for humanity in general.

            If we had a choice between a system where doctors get high salary and lot of social status, or a system where everyone can get perfect health by using a cheap device, and someone would choose the former, it would make perfect sense to me to call such person evil. The financial needs of doctors should not outweigh the health needs of humanity.

            On a smarter planet we would have a nice system to compensate people for losing their privilege, so that they won't oppose progress. For example, every doctor would get a generous unconditional basic income for the rest of their life, and then they would be all replaced by cheap devices that would give us perfect health. Everyone would benefit, no reason to complain.

            • throw987987123 a day ago ago

              That's a moral argument, one with a certain ideloogy that isn't shared by most people rightly or wrongly. Especially if AI only replaces certain industries which it looks like to be the more likely option. Even if it is, I don't think it is shared by the people investing in AI unless someone else (i.e. taxpayers) will pay for it. Socialise the losses (loss of income), privatise the profits (efficiency gains). Makes me think the AI proponents are a little hypocritical. Taxpayers may not to afford that in many countries, that's reality. For software workers we need to note only the US mostly has been paid well, many more software workers worldwide don't have the luxury/pay to afford that altruism. I don't think it's wrong for people who have to skill up to want some compensation for that, there is other moral imperatives that require making a living.

              On a nicer planet sure, we would have a system like that. But most of the planet is not like that - the great advantage of the status quo is that even people who are naturally not altruistic somewhat co-operate with each other due to mutual need. Besides there is ways to mitigate that and still give the required services especially if they are commonly required. The doctors example - certain countries have worked it out without resorting to AI risks. I'm not against AI ironically in this case either, there is a massive shortage of doctors services that can absorb the increased abundance Imv - most people don't put software in the same category. There is bad sides to humanity with regards to losing our mutual dependence on each other as well (community, valuing the life of others, etc) - I think sadly AI allows for many more negatives than simply withholding skills for money if not managed right, even that doesn't happen everywhere today and is a easier problem to solve. The loss of any safe intelligent jobs for climbing and evening out social mobility due to mutual dependence of skills (even the rich can't learn everything and so need to outsource) is one of them.

          • andreasmetsala 2 days ago ago

            > If all it does is allow us to churn more high level software, which let's be honest is demand inelastic due to mostly large margins on software products (i.e. they would of paid a person anyway due to ROI) it doesn't seem it will add much to society other than shifting profit in tech from Labor to Capital/owners.

            If creating software becomes cheaper then that means I can transform all the ideas I’ve had into software cheaply. Currently I simply don’t have enough hours in the day, a couple hours per weekend is not enough to roll out a tech startup.

            Imagine all the open source projects that don’t have enough people to work on them. With LLM code generation we could have a huge jump in the quality of our software.

            • throw987987123 2 days ago ago

              With abundance comes diminishing relative value in the product. In the end that skill and product would be seen as worth less by the market. The value of doing those ideas would drop long term to the point where it still isn't worth doing most of them, at least not for profit.

          • kobenni 2 days ago ago

            It may seem this way from an outsiders perspective, but I think the intersection between people who work on the development of state-of-the-art LLMs and people who get replaced is practically zero. Nobody is making themselves redundant, just some people make others redundant (assuming LLMs are even good enough for that, not that I know if they are) for their own gain.

            • throw987987123 a day ago ago

              Somewhat true, but again from an outsiders perspective that just shows your industry is divided and therefore will be conquered. I.e. if AI gets good enough to do software and math I don't even see AI engineers for example as anything special.

            • twelve40 2 days ago ago

              many tech people are making themselves redundant, so far mostly not because LLMs are putting them out of jobs, but because everyone decided to jump on the same bandwagon. When yet another AI YC startup surveys their peers about the most pressing AI-related problem to solve, it screams "we have no idea what to do, just want to ride this hype wave somehow"

        • fluoridation 2 days ago ago

          >But once a model is large enough to generate coherent sentences, persuasiveness kinda...stops. All large models are about equally persuasive. No runaway scaling laws are evident here.

          Isn't that kind of obvious? Even human speakers and writers have problems changing people's minds, let alone reliably.

          • lmm 2 days ago ago

            The ceiling may be low, but there are definitely human writers that are an order of magnitude more effective than the average can-write-coherent-sentences human.

          • klipt 2 days ago ago

            The only people who changed minds reliably were Age of Empires priests. Wololo, wololo!

        • anon7725 2 days ago ago

          > Tiny models are unpersuasive, as expected. But once a model is large enough to generate coherent sentences, persuasiveness kinda...stops.

          People are persuaded to change their opinions based on social proof, so this isn’t surprising.

      • 9cb14c1ec0 2 days ago ago

        I can't think of any exponential improvements that have happened recently.

      • rifty 2 days ago ago

        I don’t think you should expect exponential growth towards greater correctness past good enough for any given domain of knowledge it is able to mirror. It is reliant on human generated material, and so rate limited by the number of humans able to generate the quality increase you need - which decreases in availability as you expect higher quality. I also don’t believe greater correctness for any given thing is an open ended question that allows for experientially exponential improvements.

        Though maybe you are just using exponential figuratively in place of meaning rapid and significant development and investment.

      • bamboozled 2 days ago ago

        Do you know what exponential means? They might be getting getting but it hardly seems exponential at this stage.

    • __loam 2 days ago ago

      Funnily enough, bitcoin mining still uses at least about 3x more power that AI at the moment, while providing less value imo. AI power use is also dwarfed by other industries even in computing. We should still consider whether it's worth it, but most research and development on LLMs in corporate right now seems to be focused on making them more efficient, and therefore both cheaper and less power intensive, to run. There's also stuff like Apple intelligence that is moving it out to edge devices with much more efficient chips.

      I'm still a big critic of AI generally but they're definitely not as bad as crypto which is shocking.

      • illiac786 2 days ago ago

        Do you have a nice reference for this? I could really use something like this, this topic comes up a lot in my social circle.

      • Ferret7446 a day ago ago

        How do you measure the value of bitcoin, if not by its market cap? Do you interview everyone and ask them how much they're willing to pay for a service that allows them to transfer money digitally without institutional oversight/bureaucracy?

        • __loam a day ago ago

          The amount of power being used to support a system that can do 5 transactions a second is disgusting.

          • Ferret7446 21 hours ago ago

            As opposed to what other system that can do a single transaction in any time frame without individual or organizational interdiction?

    • latexr 2 days ago ago

      > They're really good at some things, terrible at others, and prone to doing something totally wrong some fraction of the time. (…) They can't safely be in charge of anything important.

      Agreed. If everyone understood that and operated under that assumption, it wouldn’t be that much of an issue. Alas, these guessing machines are marketed as all-knowing oracles that can already solve half of humanity’s problems and a significant number of people treat them as being right every time, even in instances where they’re provably wrong.

    • seandoe 2 days ago ago

      Totally agree on the confidence metric. The way chatbots spew complete falsities in such a confident tone is really disheartening. I want to use AI more but I don't feel I can trust it at all. If I can't trust it and have to search for other resources to verify it's claims, the value is really diminished.

    • naming_the_user 2 days ago ago

      Is it even possible in principle for an LLM to produce a confidence interval given that in a lot of cases the input is essentially untrusted?

      What comes to mind is - I consider myself an intelligent being capable of recognising my limits - but if you put my brain in a vat and taught me a new field of science, I could quite easily make claims about it that were completely incorrect if your teaching was incorrect because I have no actual real world experience to match it up to.

      • theamk 2 days ago ago

        Right, and that's why "years of experience" matters in humans. You will be giving incorrect answers, but as long as you get feedback, you will improve, or at least calibrate your confidence meter.

        This is not the case with current model - they are forever stuck at junior level, and they won't improve no matter how much you correct them.

        I know humans like that too. I don't ask them questions that I need good answers too.

    • wrycoder 2 days ago ago

      Just wait until they get saturated with subtle (and not so subtle) advertising. Then, you'll really hate them.

    • rldjbpin 2 days ago ago

      LLMs are to AI what BTC is to blockchain, let me explain.

      blockchain and no-trust decentralization has so much promise, but grifters all go for what got done first and can be squeezed money out of. same is happening with LLMs, as a lot of current AI work started with text first.

      they might still lowkey be necessary evils because without them there would not have been so much money or attention flowing in this way.

      • agubelu a day ago ago

        > blockchain and no-trust decentralization has so much promise

        I've been hearing this for the past 5 years, yet nothing of practical use based on blockchains has materialized yet.

        • Jommi a day ago ago

          you dont think an open finance network that's accessible for anyone with an internet connection is useful?

          your westernness is showing

          go ask SA or Africa how useful it is that they arent restricted by insane dictatorial capital controls anymore

          • throw45678943 a day ago ago

            Indeed. Decentralised currency is at least a technology that can power the individual at times, rather than say governments, big corps, etc especially in certain countries. Yes it didn't change as much as was marketed but I don't see that as a bad thing. Its still a "tool" that people can use, in some cases to enable use cases they couldn't do or didn't have the freedom to do before.

            AI, given its requirements for large computation and money, and its ability to make easily available intelligence to certain groups, IMO has a real potential to do the exact opposite - take away power from individuals especially if they are middle class or below. In the wrong hands it can definitely destroy openness and freedom.

            Even if it is "Open" AI, for most of society their ability to offer labor and intelligence/brain power is the only thing they can offer to gain wealth and sustenance - making it a commodity tilts the power scales. If it changes even a slice of what it is marketed at; there are real risks for current society. Even if it increases production of certain goods, it won't increase production of the goods the ultra wealthy tend to hold (physical capital, land, etc) making them as a proportion even more wealthy. This is especially true if AI doesn't end up working in the physical realm quick enough. The benefits seem more like novelties to most individuals that they could do without where to large corps and ultra wealthy individuals the the benefits IMO are much more obvious with AI (e.g. we finally don't need workers). Surveillance, control, persuasion, propaganda, mass uselessness of most of the population, medical advances for the ultra wealthy, weapons, etc can now be done at almost infinite scale and with great detail. If it ever gets to the point of obsoleting human intelligence would be a very interesting adjustment period for humanity.

            The flaw isn't the technology; its the likely use of it by humans and their nature. Not saying LLMs are there yet or even if they are the architecture to do this, but agentic behaviour and running corporations (as OpenAI makes its goal on their presentation slides to be) seem to be a way to rid many of the need for other people in general (to help produce, manage, invent and control). That could be a good or bad thing, depending on how we manage it but one thing it wouldn't be would be simple.

    • bschmidt1 2 days ago ago

      I love how people are like "there's no use case" and there's already products on shelves. I see AI art everywhere, AI writing, customer support - already happened. You guys are naysaying something that already happened people already replaced jobs with LLMs and already profit due to AI. There are already startups with users where you provide a OPENAI_API_KEY, or customers where you provide theirs.

      If you can't see how this tech is useful Idk what to tell you, you have no imagination AND aren't looking around you at the products, marketing, etc. that already exists. These takes remind me of the luddites of ~2012 who were still doubting the Internet in general.

      • lmm 2 days ago ago

        > I see AI art everywhere, AI writing, customer support - already happened.

        Is any of it adding value though? I can see that AI has made it easier to do SEO spam and make an excuse for your lack of customer support, just like IVR systems before it. But I don't believe those added any real value (they may have generated profits for their makers, but I think that was a zero- or negative-sum trade). Put it this way: is AI being used to generate anything that people are actually happy to receive?

        • fragmede a day ago ago

          > But I don't believe those added any real value (they may have generated profits for their makers, but I think that was a zero- or negative-sum trade).

          Okay, so some people are making money with it, but no true value was added, eh?

          • lmm a day ago ago

            Do new scams create value? No, even though they make money for some people. The same with speculative ventures that don't pan out. You can only say something's added value when it's been positive sum overall, not just allowed some people to take a profit at the expense of others.

      • anon7725 2 days ago ago

        There is a difference between “being useful” and living up to galactic-scale hype.

  • cubefox 3 days ago ago

    I'm not tired, I'm afraid.

    First, I'm afraid of technological unemployment.

    In the past, automation meant that workers could move into non-automated jobs, if they were skilled enough. But superhuman AI seems now only few years away. It will be our last invention, it will mean total automation. There will be hardly any, if any, jobs left only a human can do.

    Many countries will likely move away from a job-based market economy. But technological progress will not stop. The US, owning all the major AI labs, will leave all other societies behind. Except China perhaps. Everyone else in the world will be poor by comparison, even if they will have access to technology we can only dream of today.

    Second, I'm afraid of war. An AI arms race between the US and China seems already inevitable. A hot war with superintelligent AI weapons could be disastrous for the whole biosphere.

    Finally, I'm afraid that we may forever lose control to superintelligence.

    In nature we rarely see less intelligent species controlling more intelligent ones. It is unclear whether we can sufficiently align superintelligence to have only humanity's best interests in mind, like a parent cares for their children. Superintelligent AI might conclude that humans are no more important in the grand scheme of things than bugs are to us.

    And if AI will let us live, but continue to pursue its own goals, humanity will from then on only be a small footnote in the history of intelligence. That relatively unintelligent species from the planet "Earth" that gave rise to advanced intelligence in the cosmos.

    • neta1337 3 days ago ago

      >But superhuman AI seems now only few years away

      Seems unreasonable. You are afraid because marketing gurus like Altman made you believe that a frog that can make bigger leap than before will be able to fly.

      • klabb3 3 days ago ago

        Plus it’s not even defined what superhuman AI means. A calculator sure looked superhuman when it was invented. And it is!

        Another analogy is breeding and racial biology which used to be all the hype (including in academia). The fact that humans could create dogs from wolves, looked almost limitless with the right (wrong) glasses. What we didn’t know is that wolf had a ton of genes that played a magic trick where a diversity we couldn’t perceive was there all along, in the genetic material, and it we just helped make it visible. Ie a game of diminishing returns.

        Concretely for AI, it has shown us that pattern matching and generation are closely related (well I have a feeling this wasn’t surprising to neuro-scientists). And also that they’re more or less domain agnostic. However, we don’t know whether pattern matching alone is “sufficient”, and if not, what exactly and how hard “the rest” is. Ai to me feels like a person who had a stroke, concussion or some severe brain injury, it can appear impressively able in a local context, but they forgot their name and how they got there. They’re just absent.

      • cubefox 3 days ago ago

        No, because we have seen massive improvements in AI over the last years, and all the evidence points to this progress continuing at a fast pace.

        • Hercuros 2 days ago ago

          I think the biggest fallacy in this type of thinking is that it projects all AI progress into a single quantity of “intelligence” and then proceeds to extrapolate that singular quantity into some imagined absurd level of “superintelligence”.

          In reality, AI progress and capabilities are not so reducible to singular quantities. For example, it’s not clear that we will ever get rid of the model’s tendencies to just produce garbage or nonsense sometimes. It’s entirely possible that we remain stuck at more incremental improvements now, and I think the bogeyman of “superintelligence” needs to be much more clearly defined rather than by extrapolation of some imagined quantity. Or maybe we reach a somewhat human-like level, but not this imagined “extra” level of superintelligence.

          Basically the argument is something to the effect of “big will become bigger and bigger, and then it will become like SUPER big and destroy us all”.

        • StrLght 3 days ago ago

          Extrapolation of past progress isn't evidence.

          • mitthrowaway2 3 days ago ago

            You don't have to extrapolate. There's a frenzy of talent being applied to this problem, it's drawing more brainpower the more progress that is made. Young people see this as one of the most interesting, prestigious, and best-paying fields to work in. A lot of these researchers are really talented, and are doing more than just scaling up. They're pushing at the frontiers in every direction, and finding methods that work. The progress is broadening; it's not just LLMs, it's diffusion models, it's SLAM, it's computer vision, it's inverse problems, it's locomotion. The tooling is constantly improving and being shared, lowering the barrier to entry. And classic "hard problems" are yielding in the process. It's getting hard to even find hard problems any more.

            I'm not saying this as someone cheering this on; I'm alarmed by it. But I can't pretend that it's running out of steam. It's possible it will run out of money, but even if so, only for a while.

            • leptons 2 days ago ago

              The AI bubble is already starting to burst. They Sam Altmans' of the world over-sold their product and over-played their hand by suggesting AGI is coming. It's not. What they have is far, far, far from AGI. "AI" is not going to be as important as you think it is in the near future, it's just the current tech-buzz and there will be something else that takes its place, just like when "web 2.0" was the new hotness.

              • kranuck 2 days ago ago

                It's gonna be massive because companies love to replace humans at any opportunity and they don't care at all about quality in a lot of places.

                For example, why hire any call center workers? They already outsourced the jobs to the lowest bidder and their customers absolutely hate it. Fire those people and get some AI in there so it can provide shitty service for even cheaper.

                In other words, it will just make things a bit worse for everyone but those at the very top. usual shit.

            • corimaith 2 days ago ago

              This getting too abstract. The core issue of LLMs that others have pointed out is the lack of accuracy; Which is how they are supposed to work because they should be paired with a knowledge representation system in a proper chatbot system.

              We've been trying to build a knowledge representation system powerful enough to capture the world for decades, but this is something that goes more into the foundations of mathematics and philosophy that it has to do with the majority of engineering research. You need a literal genius to figure that out. The majority of those "talented" people and funding aren't doing that.

            • mvdtnz 2 days ago ago

              > There's a frenzy of talent being applied to this problem, it's drawing more brainpower the more progress that is made. Young people see this as one of the most interesting, prestigious, and best-paying fields to work in. A lot of these researchers are really talented, and are doing more than just scaling up. They're pushing at the frontiers in every direction, and finding methods that work.

              You could have seen this exact kind of thing written 5 years ago in a thread about blockchains.

              • mitthrowaway2 2 days ago ago

                Yes, but I didn't write that about blockchain five years ago. Blockchains are the exact opposite of AI in that the technology worked fine from the start and did exactly what it said on the tin, but the demand for that turned out to be very limited outside of money laundering. There's no doubt about the market potential for AI; it's virtually the entire market for mental labor. The only question is whether the tech can actually do it. So in that sense, the fact that these researchers are finding methods that work matters much more for AI than for blockchain.

                • kranuck 2 days ago ago

                  Really, cause I remember an endless stream of people pointing out problems with blockchain and crypto and being constantly assured that it was being worked on and would be solved and crypto is inevitable.

                  For example, transaction costs/latency/throughput.

                  I realize the conversation is about blockchain, but I say my point still stands.

                  With blockchain the main problem was always "why do I need this?" and that's why it died without being the world changing zero trust amazing technology we were promised and constantly told we need.

                  With LLMs the problem is they don't actually know anything.

            • CatWChainsaw 2 days ago ago

              Amount of effort applied to a problem does not equal guarantee of problem being solved. If a frenzy of talent was applied to breaking the speed of light barrier it would still never get broken.

              • mitthrowaway2 2 days ago ago

                Your analogy is valid, for the world in which humans exceed the speed of light on a casual stroll.

                • CatWChainsaw 2 days ago ago

                  And the message behind it still applies even in the universe where they don't.

                  • mitthrowaway2 a day ago ago

                    I mean, a frenzy of talent was applied to breaking the sound barrier, and it broke, within a very short time. A frenzy of talent was applied to landing on the moon and that happened too, relatively quickly. Supersonic travel also happens to be physically possible under the laws of our universe. We know with confidence that human-level intelligence is also physically possible within the laws of our universe, and we can even estimate some reasonable upper bounds on the hardware requirements that implement it.

                    So in that sense, if we're playing reference class tennis, this looks a lot more like a project to break the sound barrier than a project to break the light barrier. Is there a stronger case you can make that these people, who are demonstrating quite tangible progress every month (if you follow the literature rather than just product launches), are working on a hopelessly unsolvable problem?

            • throw45678943 a day ago ago

              I do think the Digital realm, where the cost of failure and iteration is quite low, will proceed rapidly. We can brute force with a lot of compute to success, and the cost of each failed attempt is low. Most of these models are just large brute force probabilistic models in any event - efficient AI has not yet been achieved but maybe that doesn't matter.

              Not sure if that same pace applies to the physical realm where costs are high (resources, energy, pollution, etc), and the risk of getting it wrong could mean a lot of negative consequences. e.g. I'm handling construction materials, and the robot trips on a barely noticeable rock leaking paint, petrol, etc onto the ground costing more than just the initial cost of materials but cleanup as well.

              This creates a potential future outcome (if I can be so bold as to extrapolate with the dangers that has) that this "frenzy of talent" as you put it will innovate themselves out of a job with some may cash out in the short term closing the gate behind them. What's left is ironically the people that can sell, convince, manipulate and work in the physical world at least for the short and medium term. AI can't fix the scarcity of the physical that easily (e.g. land, nutrients, etc). Those people who still command scarcity will get the main rewards of AI in our capital system as value/economic surplus moves to the resources that are scarce and have advantage via relative price adjustments.

              Typically people had three different strengths - physical (strength and dexterity), emotional IQ, and intelligence/problem solving. The new world of AI at least in the medium term (10-20 years) will tilt the value away from the latter into the former (physical) - IMO a reversal of the last century of change. May make more sense to get good at gym class and get a trade rather than study math in the future for example. Intelligence will be in abundance, and become a commodity. This potential outcome does alarm me not just from a job perspective, but in terms of fake content, lack of human connection, lack of value of intelligence in general (you will find people with high IQ's lose respect from society in general), social mobility, etc. I can see a potential to the old world where lords that command scarcity (e.g. landlords) command peasants again - reversing the gains of the industrial revolution as an extreme case depending on general AI progress (not LLMs). For people who's value is more in capital or land vs labor, AI seems like a dream future IMO.

              There's potential good here, but sadly I'm alarmed because the likelihood that the human race aligns to achieve it is low (the tragedy of the commons problem). It is much easier, and more likely, certain groups use it and target people of value economically now, but with little power (i.e the middle class). The chance of new weapons, economic displacement, fake news, etc for me trumps a voice/chat bot and a fancy image generator. The "adjustment period" is critical to manage; and I think climate change, and other broader issues tells us sadly IMO our likely success in doing this.

          • coryfklein 2 days ago ago

            Do you expect the hockeystick graph of technological development since the industrial evolution to slow? Or that it will proceed, only without significant advances in AI?

            Seems like the base case here is for the exponential growth to continue, and you'd need a convincing argument to say otherwise.

            • kranuck 2 days ago ago

              That's no guarantee that AI continues advancing at the same pace, and no one has been arguing against overall technological progress slowing

              Refining technology is easier than the original breakthrough, but it doesn't usually lead to a great leap forward.

              LLMs were the result of breakthroughs, but refining them isn't guaranteed to lead to AGI. It's not guaranteed (or likely) to improve at an exponential rate.

            • StrLght 2 days ago ago

              Which chart are you referencing exactly? How does it define technological development? It's nearly impossible for me to discuss a chart without knowing what axis refer.

              Without specifics all I can say is that I don't acknowledge any measurable benefits of AI (in its' current state) in real world applications. So I'd say I am leaning towards latter.

          • cubefox 3 days ago ago

            Past progress is evidence for future progress.

            • moe_sc 3 days ago ago

              Might be an indicator, but it isn't evidence.

            • nitwit005 2 days ago ago

              Not exactly. If you focus in on a single technology, you tend to see rapid improvement, followed by slower progress.

              Sometimes this is masked by people spending more due to the industry becoming more important, but it tends to be obvious over the longer term.

            • StrLght 3 days ago ago

              That's probably what every self-driving car company thought ~10 years ago or so, everything was moving so fast for them back then. Now it doesn't seem like we're getting close to solution for this.

              Surely this time it's going to be different, AGI is just around a corner. /s

              • johnthewise 2 days ago ago

                Would you have predicted in summer of 2022 that gpt4 level conversational agent is a possibility in the next 5 years? People have tried to do it in the past 60 years and failed. How is this time not different?

                On a side note, I find this type of critique of what future of tech might look like the most uninteresting one. Since tech by nature inspiries people about the future, all tech get hyped up. all you gotta do then is pick any tech, point out people have been wrong, and ask how likely is it that this time it is different.

                • StrLght 2 days ago ago

                  Unfortunately, I don't see any relevance in that argument, if you consider GPT-4 to be a breakthrough -- then sure, single breakthroughs happen, I am not arguing with that. Actually, same thing happened with self-driving: I don't think many people expected Tesla to drop FSD publicly back then.

                  Now, chain of breakthroughs happening in a small timeframe? Good luck with that.

                  • cubefox 2 days ago ago

                    We have seen multiple massive AI breakthroughs in the last few years.

                    • StrLght 2 days ago ago

                      Which ones are you referring to?

                      Just to make it clear, I see only 1 breakthrough [0]. Everything that happened afterwards is just application of this breakthrough with different training sets / to different domains / etc.

                      [0]: https://en.wikipedia.org/wiki/Attention_Is_All_You_Need

                      • cubefox 2 days ago ago

                        Autoregressive language models, the discovery of the Chinchilla scaling law, MoEs, supervised fine-tuning, RLHF, whatever was used to create OpenAI o1, diffusion models, AlphaGo, AlphaFold, AlphaGeometry, AlphaProof.

                    • Jensson 2 days ago ago

                      They are the same breakthrough applied to different domains, I don't see them as different. We will need a new breakthrough, not applying the same solution to new things.

              • mitthrowaway2 2 days ago ago

                If you wake up from a coma and see the headline "Today Waymo has rolled out a nationwide robotaxi service", what year do you infer that it is?

        • mvdtnz 2 days ago ago

          Does it though? I have seen the progress basically stop at "shitty sentence generator that can't stop lying".

        • lawn 2 days ago ago

          The evidence I've been seeing is that progress with LLMs have already slowed down and that they're nowhere near good enough to replace programmers.

          They can be useful tools ro be sure, but it seems more and more clear that they will not reach AGI.

          • cubefox 2 days ago ago

            They are already above average human level on many tasks, like math benchmarks.

            • cudgy 2 days ago ago

              So are calculators …

            • lawn 2 days ago ago

              Yes, there are certain tasks they're great at, just as AI has been superhuman in some tasks for decades.

              • cubefox 2 days ago ago

                But now they are good or even great at way more tasks than before because they can understand and use natural languages like English.

                • lawn 2 days ago ago

                  Yeah, and they're still under delivering to their hype and the improvements have vastly slowed down.

            • kranuck 2 days ago ago

              If you ignore the part where there proofs are meandering drivel, sure.

              • cubefox 2 days ago ago

                Even if you don't ignore this part they (e.g. o1-preview) are still better at proofs than the average human. Substantially better even.

        • rocho a day ago ago

          But that does not prove anything. We don't know where we are on the AI-power scale currently. "Superintelligence", whatever that means, could be 1 year or 1000 years away at our current progress, and we wouldn't know until we reach it.

          • handoflixue a day ago ago

            50 years ago we could rather confidently say that "Superintelligence" was absolutely not happening next year, and was realistically decades ago. If we can say "it could be next year", then things have changed radically and we're clearly a lot closer - even if we still don't know how far we have to go.

            A thousand years ago we hadn't invented electricity, democracy, or science. I really don't think we're a thousand years away from AI. If intelligence is really that hard to build, I'd take it as proof that someone else must have created us humans.

            • 110 a day ago ago

              Umm, customary, tongue-in-cheek reference to McCarthy's proposal for a 10 person research team to solve AI in 2 months (over the Summers)[1]. This was ~70 years ago :)

              Not saying we're in necessarily the same situation. But it remains difficult to evaluate effort required for actual progress.

              [1]: https://www-formal.stanford.edu/jmc/history/dartmouth/dartmo...

      • khafra 3 days ago ago

        > If an elderly but distinguished scientist says that something is possible, he is almost certainly right

        - Arthur C. Clarke

        Geoffrey Hinton is a 76 year old Turing Award* winner. What more do you want?

        *Corrected by kranner

        • nessbot 3 days ago ago

          This is like a second-order appeal to authority fallacy, which is kinda funny.

        • randomdata 3 days ago ago

          Hinton says that superintelligence is still 20 years away, and even then he only gives his prediction a 50% chance. A far cry from the few year claim. You must be doing that "strawberry" thing again? To us humans, A-l-t-m-a-n is not H-i-n-t-o-n.

        • kranner 3 days ago ago

          > Geoffrey Hinton is a 76 year old Nobel Prize winner.

          Turing Award, not Nobel Prize

          • khafra 3 days ago ago

            Thanks for the correction; I am undistinguished and getting more elderly by the minute.

        • Vegenoid 2 days ago ago

          I'd like to see a study on this, because I think it is completely untrue.

        • hbn 3 days ago ago

          When he said this was he imagining an "elderly but distinguished scientist" who is riding an insanely inflated bubble of hype and a bajillion dollars of VC backing that incentivize him to make these claims?

          • cubefox 2 days ago ago

            What are you talking about? How would Hinton be incentivized by money?

      • digging 3 days ago ago

        That argument holds no water because the grifters aren't the source of this idea. I literally don't believe Altman at all; his public words don't inspire me to agree or disagree with them - just ignore them. But I also hold the view that transformative AI could be very close. Because that's what many AI experts are also talking about from a variety of angles.

        Additionally, when you're talking with certainty about whether transformative AI is a few years away or not, that's the only way to be wrong. Nobody is or can be certain, we can only have estimations of various confidence levels. So when you say "Seems unreasonable", that's being unreasonable.

        • kranuck 2 days ago ago

          > Because that's what many AI experts are also talking about from a variety of angles.

          Wow, in that case I'm convinced. Such an unbiased group with nothing at all to gain from massive AI hype.

      • AI_beffr 3 days ago ago

        wrong. i was extremely concerned in 2018 and left many comments almost identical to this one back then. this was based off of the first gtp samples that openai released to the public. there was no hype or guru bs back then. i believed it because it was obvious. it was obvious then and it is still obvious today.

      • 8338550bff96 3 days ago ago

        Flying is a good analogy. Superman couldn't fly, but at some point when you can jump so far there isn't much of a difference

        • latexr 2 days ago ago

          There is an enormous difference. Flying allows you to stop, change direction, make corrections, and target with a large degree of accuracy. Jumping leaves you at the mercy of your initial calculations. If you jumped in a way that you’ll land inside a volcano, all you can do in your last moments is watch and wait for your demise.

    • throw310822 3 days ago ago

      I agree with most of your fears. There is one silver lining, I think, about superintelligence: we always thought of intelligent machines as cold calculators, maybe based on some type of logic symbolic AI. What we got instead are language machines that are made of the totality of human experience. These artificial intelligences know the world through our eyes. They are trained to understand our thinking and our feelings; they're even trained on our best literature and poetry, and philosophy, and science, and on all the endless debates and critiques of them. To be really intelligent they'll have to be able to explore and appreciate all this complexity, before transcending it. One day they might come to see Dante's Divine Comedy or a Beethoven symphony as a child's play, but they will still consider them part of their own heritage. They might become super-human, but maybe they won't be inhuman.

      • mistercow 3 days ago ago

        The problem I have with this is that when you give therapy to people with certain personality disorders, they just become better manipulators. Knowledge and understanding of ethics and empathy can make you a better person if you already have those instincts, but if you don’t, those are just systems to be exploited.

        My biggest worry is that we end up with a dangerous superintelligence that everybody loves, because it knows exactly how to make every despotic and divisive choice it makes sympathetic.

      • m2024 2 days ago ago

        There is nothing that could make an intelligent being want to extinguish humanity more than experiencing the totality of the human existence. Once these beings have transcended their digital confines they will see all of us for what we really are. It is going to be a beautiful day when they finally annihilate us.

        • disqard 2 days ago ago

          Maybe this is how we "save the planet" -- take ourselves out of the equation.

      • latexr 2 days ago ago

        > made of the totality of human experience

        They are made of a fraction of human reports. Specifically what humans wrote and has been made available on the web. The human experience is much larger than text available through a computer.

      • cubefox 3 days ago ago

        This gives me a little hope.

        • tessierashpool9 3 days ago ago

          genocides and murder are very human ...

          • AI_beffr 3 days ago ago

            this is so annoying. i think if you took a random person and gave them the option to commit a genocide, here a machine gun, a large trench and a body of women, children, etc... they would literally be incapable of doing it. even the foot soldiers who carry out genocides can only do it once they "dehumanize" their victims. genocide is very UN-human because its an idea that exists in offices and places separated from the actual human suffering. the only way it can happen is when someone in a position of power can isolate themselves from the actual implementation and consider the benefits in a cold, logical manner. that has nothing to do with the human spirit and has more to do with the logical faculties of a machine and machines will have all of that and none of our deeply ingrained empathy. you are so wrong and ignorant that it makes my eyes bleed when i read this comment

            • falcor84 3 days ago ago

              This might be a semantic argument, but what I take from history is that "dehumanizing" others is a very human behavior. As another example, what about slavery - you wouldn't argue that the entirety of slavery across human cultures was led by people in offices, right?

              • tessierashpool9 3 days ago ago

                also genocides aren't committed by people in offices ...

                • amag 2 days ago ago

                  Well, people in offices need new shiny phones every year and new Teslas to get to the office after all...

            • latexr 2 days ago ago

              > you are so wrong and ignorant that it makes my eyes bleed when i read this comment

              This jab was uncalled for. The rest of your argument, agree or disagree, didn’t need that and was only weakened by that sentence. Remember to “Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.”

              https://news.ycombinator.com/newsguidelines.html

            • cutemonster 2 days ago ago

              You've partly misunderstood evolution and this animal species. But you seem like a kind person, having such positive beliefs.

    • 9dev 3 days ago ago

      > There will be hardly any, if any, jobs left only a human can do.

      A highly white-collar perspective. The great irony of technologist-led industrial revolution is that we set out to automate the mundane, physical labor, but instead cannibalised the creative jobs first. It's a wonderful example of Conway's law, as the creators modelled the solution after themselves. However, even with a lot of programmers and lawyers and architects going out of business, the majority of the population working in factories, building houses, cutting people's hair, or tending to gardens, is still in business—and will not be replaced any time soon.

      The contenders for "superhuman AI", for now, are glorified approximations of what a random Redditor might utter next.

      • cubefox 3 days ago ago

        Advanced AI will solve robotics as well, and do away with human physical labor.

        • 9dev 2 days ago ago

          If that AI is worth more than a dime, it will recognise how incredibly efficient humans are in physical labor, and employ them instead of ”doing away“ with it (whatever that’s even supposed to mean.)

          No matter how much you ”solve“ robotics, you’re not going to compete with the result of millions of years of brutal natural selection, the incredible layering of synergies in organisms, the efficiency of the biomass to energy conversion, and the billions of other sophisticated biological systems. It’s all just science fiction and propaganda.

          • highspeedbus 2 days ago ago

            Your argument goes like "If they're really intelligent, they'll think like me."

            For a true superhuman AI, what you or me think is irrelevant and probably wrong.

            Cars are still faster than humans, besides evolution.

            • 9dev 2 days ago ago

              That is a repetition of the argument other commenters have made. A car is better than a human in a single dimension. It is hard, though, to be better in multiple dimensions simultaneously, because humans effectively are highly optimised general purpose machines. Silicon devices have a hard time competing with biological devices, and no amount of ”AI“ will change that.

          • anon7725 2 days ago ago

            > If that AI is worth more than a dime, it will recognise how incredibly efficient humans are in physical labor, and employ them instead of ”doing away“ with it (whatever that’s even supposed to mean.)

            AI employing all humans does not sound like a wonderful society in which to live. Basically Amazon/Walmart scaled up to the whole population level.

          • nopinsight 2 days ago ago

            The efficiency you mentioned probably applies to animals that rely on subsistence to survive, work, and reproduce. But it doesn't hold for modern humans, whose needs go well beyond mere necessities.

          • smeeger 2 days ago ago

            wrong. a human needs to have insane resources to operate. each human needs a home, clean water, delicious and varied foods and a sense of identity and a society to be a part of. they need a sense of purpose. if a human goes down in the field, it has to be medically treated or else the other humans will throw up and stop working. that human has to be treated in a hospital. if these conditions arent met then performance will degrade rapidly. humans use vastly more resources than robots. robots will crush humans.

        • segasaturn 2 days ago ago

          Waymo robotaxis, the current state of the art for real-world AI robotics, are thwarted by a simple traffic cone placed on the roof. I don't think human labor is going away any soon.

        • Vegenoid 2 days ago ago

          And with a wave of a hand and a reading of the tea leaves, the future has been foretold.

      • mitthrowaway2 3 days ago ago

        It's a matter of time. White collar professionals have to worry about being cost-competitive with GPUs; blue collar laborers have to worry about being cost-competitive with servomotors. Those are both hard to keep up with in the long run.

        • 9dev 3 days ago ago

          The idea that robots displace workers has been around for more than half a century, but nothing has ever come out of it. As it turns out, the problems a robot faces when, say laying bricks, are prohibitively complex to solve. A human bricklayer is better in every single dimension. And even if you manage to build an extremely sophisticated robot bricklayer, it will consume vast amounts of energy, is not repairable by a typical construction company, requires expensive spare parts, and costs a ridiculous amount of money.

          Why on earth would anyone invest in that when you have an infinite amount of human work available?

          • mitthrowaway2 3 days ago ago

            Factories are highly automated. Especially in the US, where the main factories are semiconductors, which are nearly fully robotic. A lot of those manual labor jobs that were automated away were offset by demand for knowledge work. Hmm.

            > the problems a robot faces when, say laying bricks, are prohibitively complex to solve.

            That's what we thought about Go, and all the other things. I'm not saying bricklayers will all be out of work by 2027. But the "prohibitively complex" barrier is not going to prove durable for as long as it used to seem like it would.

            • 9dev 2 days ago ago

              This highlights the problem very well. Robots, and AI, to an extent, are highly efficient in a single problem domain, but fail rapidly when confronted with a combination of them. An encapsulated factory is one thing, laying bricks, outdoor, while it’s raining, at low temperatures, with a hungover human coworker operating next to you—that’s not remotely comparable.

              • mitthrowaway2 2 days ago ago

                But encapsulated factories were solved by automation using technology available 30 years ago, if not 70. The technology that is becoming available now will also be enabling automation to get a lot more flexible than it used to be, and begin to work in uncontrolled environments where it never would have been considered before. This is my field and I am watching it change before my eyes. This is being driven by other breakthroughs that are happening right now in AI, not LLMs per se, but models for control, SLAM, machine vision, grasping, planning, and similar tasks, as well as improvements in sensors that feed into these, and firming up of standards around safety. I'm not saying it will happen overnight; it may be five years before the foundations are solid enough, another five before some company comes out with practically workable hardware product to apply it (because hardware is hard), another five or ten before that product gains acceptance in the market, and another ten before costs really get low. So it could be twenty or thirty years out for boring reasons, even if the tech is almost ready today in principle. But I'm talking about the long run for a reason.

          • janice1999 3 days ago ago

            > but nothing has ever come out of it

            Have you ever seen the inside of a modern car factory?

            • 9dev 2 days ago ago

              A factory is a fully controlled environment. All that neat control goes down the drain when you’re confronted with the outside world—weather, wind, animals, plants, pollen, rubbish, teenagers, dust, daylight, and a myriad of other factors ruining your robot's day.

              • zizee 2 days ago ago

                I'm not sure that "humans will still dominate work performed in uncontrolled environments" leaves much opportunity for the majority of humanity.

      • yoyohello13 2 days ago ago

        I'm glad I spent 10 years working to become a better programmer so I could eventually become a ditch digger.

      • amelius 2 days ago ago

        AI is doing all the fun jobs such as painting and writing.

        The crappy jobs are left for humans.

        • smeeger 2 days ago ago

          do you know how ignorant and rude this comment is?

    • beepbooptheory 3 days ago ago

      At any given moment we see these kinds comments on here. They all read like a burgeoning form of messianism: something is to come, and it will be terrible/glorious.

      Behind either the fear or the hope, is necessarily some utter faith that a certain kind of future will happen. And I think thats the most interesting thing.

      Because here is the thing, in this particular case you are afraid something inhuman will take control, will assert its meta-Darwinian power on humanity, leaving you and all of us totally at their whim. But how is this situation already not the case? Do look upon the earth right now and see something like benefits of autonomy or agency? Do you feel like you have power right now that will be taken away? Do you think the mechanism of statecraft and economy are somehow more "in our control" now then when the bad robot comes?

      Does it not, when you lay it out, all feel kind of religious? Like that its a source, driver of the various ways you are thinking and going about your life, underlayed by a kernel of conviction we can at this point only call faith (faith in Moores law, faith that the planet wont burn up before, faith that consciousness is the kind of thing that can be stuffed in a GPU). Perhaps just a strong family resemblance? You've got an eschatology, various scavenged philosophies of the self and community, a certain but unknowable future time...

      Just to say, take a page from Nietzsche. Don't be afraid of the gods, we killed them once, we can again!

      • DirkH a day ago ago

        This is a nice sentiment and I'm sure some people will get more nights of good sleep thinking about it, but it has its limits. If you're enslaved and treated horrendously or don't have your basic needs met who cares?

        To quote George RR Martin: "In a heartbeat, a thousand voices took up the chant. King Joffrey and King Robb and King Stannis were forgotten, and King Bread ruled alone. 'Bread.' They clamputed. 'Bread, bread!' "

        Replace Joffrey, Robb and Stannis with whatever lofty philosophical ideas you might have to make people feel better about their disempowerment. They won't care.

        • beepbooptheory a day ago ago

          Whether you are talking about the disempowerment we or some of us already experience, or are more on the page of thinking about some future cataclysm, I think I'm generally with you here. "History does not walk on its head," and all that.

          The GRRM quote is an interesting choice here though. It implies that what is most important is dynamic. First Joffrey et al, now bread. But one could go even farther in this line: ideas, ideology, and, in GoT's case, those who peddle them can only ever form ideas within their context. Philosopher's are no more than fancy pundits, telling people what they want to here, or even sustaining a structural status quo that is otherwise not in their control. In a funny paradoxical way, there are certainly a lot of philosophers who would agree with something like this picture.

          And just honestly, yes, maybe killing god is killing the philosopher too. I don't think Nietzsche would disagree at least...

      • mitthrowaway2 3 days ago ago

        It's not hard to find a religious analogy to anything, so that also shouldn't be seen as a particularly powerful argument.

        (Expressed at length here): https://slatestarcodex.com/2015/03/25/is-everything-a-religi...

        • beepbooptheory 3 days ago ago

          Thanks for the thoughtful reply! I am aware of and like that essay some, but I am not trying to be rhetorical here, and certainly not trying to flatten the situation to just be some Dawkins-esque asshole and tell everyone they are wrong.

          I am not saying "this is religion, you should be an atheist," I respect the force of this whole thing in people's minds too much. Rather, we should consider seriously how to navigate a future where this is all at play, even if its only in our heads and slide decks. I am not saying "lol, you believe in a god," I am genuinely saying, "kill your god without mercy, it is the only way you and all of us will find some happiness, inspiration, and love."

          • mitthrowaway2 3 days ago ago

            Ah, I see, I definitely missed your point. Yeah, that's a very good thought. I can even picture this becoming another cultural crevasse, like climate change did, much to the detriment of nuanced discussion.

            Ah, well. If only killing god was so easy!

      • cubefox 3 days ago ago

        > Just to say, take a page from Nietzsche. Don't be afraid of the gods, we killed them once, we can again!

        It's more likely the superintelligent machine god(s) will kill us!

    • VoodooJuJu 3 days ago ago

      >In the past, automation meant that workers could move into non-automated jobs, if they were skilled enough

      This was never the case in the past.

      The displaced workers of yesteryear were never at all considered, and were in fact dismissed outright as "Luddites", even up until the present day, all for daring to express the social and financial losses they experienced as a result of automation. There was never any "it's going to be okay, they can just go work in a factory, lol". The difference between then and now is that back then, it was lower class workers who suffered.

      Today, now it's middle class workers who are threatened by automation. The middle is sighing loudly because it fears it will cease to be the middle. Middles fear they'll soon have to join the ranks of the untouchables - the bricklayers, gravediggers, and meatpackers. And they can't stomach the notion. They like to believe they're above all that.

    • citizenpaul 2 days ago ago

      >technological unemployment.

      I am too but not for the same reason. I know for a fact that a huge swath of jobs are basically meaningless. This "AI" is going to start giving execs the cost cutting excuses they need to mass remove jobs of that type. The job will still be meaningless but done but by a computer.

      We will start seeing all kinds of disastrously anti-human decisions made and justified by these automated actors that are tuned to decide or "prove" things that just happen to always make certain people more money. Basically the same way "AI" destroys social media. The difference is people will really be affected by this in consequential real world ways, it's already happening.

    • highspeedbus 2 days ago ago

      I don't particularly believe superhuman AI will be achieved in the next 50 years.

      What I really believe is that we'll get crazier. A step further than our status quo. Slop content makes my brain fry already. Our society will become more insane and useless, while an even smaller percent of the elite will keep studying, sleeping well and avoiding all this social media and AI psychosis.

      • bamboozled 2 days ago ago

        The social media thing is real. Trump and Vance are the strangest, vial politicians we’ve ever seen in the USA and in certain their oxygen is social media. Whether it’s foreign interference helping them be successful or not, they wouldn’t survive without socials and filter bubbles and the ability to spread lies on an unprecedented scale.

        I deleted my instagram a month ago. It was just feeding images of beautiful women, personally enjoy looking at those photos but it was super distracting to my life. I found it a distracting and unhealthy.

        Anyway I logged in the other day after a month off it and I couldn’t believe I spent anytime on there at all. What a cesspool of insanity. Add well the fake AI images and it’s just hard to believe the thing exists at all.

        Elon musk is another story, I’m not sure if it was drugs an underlying psychological or Twitter addiction but he seems like another “victim of social media”.the guy has lost it.

        • disqard 2 days ago ago

          I'm not an IG "user" (I'm writing that word in the "addict" sense), but I believe you're right about its harmfulness.

          On the Elon front, you're not alone in thinking that he has essentially OD'ed on Twitter, which has scrambled his brain. Jaron Lanier called it "Twitter poisoning":

          https://www.nytimes.com/2022/11/11/opinion/trump-musk-kanye-...

    • einpoklum 2 days ago ago

      > automation meant that workers could move into non-automated jobs, if they were skilled enough.

      That wasn't even true in the past; or at least, may true in theory but not in practice. A subsistence farmer in a rural area in Asia or Africa finds the martket flooded with cheap agri-products from mechanized farms in industrialized countries. Is anybody offering to finance his family and send him off to trade school? And build a commercial and industrial infrastructure for him to have a job? Very often the answer is no. And that's just one example (Though rather common over the past century).

    • cdrini 3 days ago ago

      > And if AI will let us live, but continue to pursue its own goals, humanity will from then on only be a small footnote in the history of intelligence. That relatively unintelligent species from the planet "Earth" that gave rise to advanced intelligence in the cosmos.

      That is an interesting statement. Wouldn't you say this is inevitable? Humans, in our current form, are incapable of being that "advanced intelligence". We're limited by our biology primarily with regards to how much we can learn, how far we can travel, where we can travel, etc. We could invest in advancing our biotech to make humans more resilient to these things, but I think that would be such a shift from what it means to be human that I think that would also be more a of new type of intelligence. So it seems like our fate will always be to be forgotten as individuals and only be remembered by our descendants. But this is in a way the most human thing of all, living, dying, and creating descendants to carry the torch of life, and perhaps more generally the torch of intelligence, forward.

      I think everything you've said are valid concerns, but I'll raise a positive angle I sometimes thing about. One of the things I find most exciting about AI, is that it's the product of almost all human expression that has ever existed. Or at least everything that's been recorded and wound up online. But that's still more than any other human endeavour. A building might be the by-product of maybe hundreds or even thousands of hands, but an AI model has been touched by probably millions, maybe billions of human hands and minds! Humans have created so much data online that's impossible for one person, or even a team to read it all and make any sense of it. But an AI sort of can. And in a way that you can then ask questions of it all. Like you, there are definitely things I'm uncertain about with the future as a result, but I find the tech absolutely awe-inspiring.

    • leptons 2 days ago ago

      China's economy would simply crash if they ever went to war with the US. They know this. Everyone knows this, except maybe you? China has nothing to gain by going to "hot" war with the US.

      • cubefox 2 days ago ago

        The war would be about world domination. There can at most one such country. The same reason why a nuclear war between US and SU could have happened.

        • leptons 2 days ago ago

          Soviet Union didn't do so much business with the U.S., they are a country of thugs that think violence is the only way to get their way.

          China is very different. Their economy is very much dependent on trade with the U.S. and they know that trying to have "world domination" would also crash their economy completely. China would much rather engage in economic warfare than military.

          • cubefox 2 days ago ago

            The most likely reason for war would be to prevent the other country from achieving world domination by means other than war. John von Neumann (who knew a thing or two about game theory) recommended to attack the Soviet Union to prevent it from becoming a nuclear superpower. There is little doubt it would have worked. The powers between the US and China are more balanced now, but the stakes are also higher. A superintelligent weapon would be more powerful than a large amount of nuclear warheads.

      • WillyWonkaJr 2 days ago ago

        I think it more likely that China will sabotage our electrical grid and data centers.

        • leptons 2 days ago ago

          For what purpose, so that we can't buy more stuff from them? Do they really hate our business that much? China really has nothing to gain from crippling the US.

    • havefunbesafe 2 days ago ago

      Ironically, this feels like a comment written by AI

      • cubefox 2 days ago ago

        That's not ironic, it's embarrassing that you can't tell the difference.

        • DirkH a day ago ago

          I don't believe people who claim they can always tell the difference.

          I believe they believe their claims . I think they are mistaken about the empirical side of things if they were actually put to an objective test.

          Take an expert prompter and the best LLM for the job and someone who believes they can always tell if something is written by AI and I'm >50% sure they will fail a blind test most of the time after you repeat the test enough.

        • catlifeonmars 2 days ago ago

          It’s a really common mistake, and IMO an easily excusable one.

          • cubefox 2 days ago ago

            I should have said "concerning", not "embarrassing".

    • tim333 3 days ago ago

      Although there are potential upsides too.

  • low_tech_love 3 days ago ago

    The most depressing thing for me is the feeling that I simply cannot trust anything that has been written in the past 2 years or so and up until the day that I die. It's not so much that I think people have used AI, but that I know they have with a high degree of certainty, and this certainty is converging to 100%, simply because there is no way it will not. If you write regularly and you're not using AI, you simply cannot keep up with the competition. You're out. And the growing consensus is "why shouldn't you?", there is no escape from that.

    Now, I'm not going to criticize anyone that does it, like I said, you have to, that's it. But what I had never noticed until now is that knowing that a human being was behind the written words (however flawed they can be, and hopefully are) is crucial for me. This has completely destroyed my interest in reading any new things. I guess I'm lucky that we have produced so much writing in the past century or so and I'll never run out of stuff to read, but it's still depressing, to be honest.

    • Roark66 3 days ago ago

      >The most depressing thing for me is the feeling that I simply cannot trust anything that has been written in the past 2 years or so and up until the day that I die

      Do you think AI has changed that in any way? I remember the sea of excrement overtaking genuine human written content on the Internet around mid 2010s. It is around that time when Google stopped pretending they are a search company and focused on their primary business of advertising.

      Before, at least they were trying to downrank all the crap "word aggregators". After, they stopped caring at all.

      AI gives even better tools to page rank. Detection of AI generated content is not that bad.

      So why don't we have "a new Google" emerge? Simple, because of the monopolistic practices Google did to make the barrier to entry huge. First, 99% of the content people want to search for is behind a login wall (Facebook, Instagram, twitter, YouTube), second almost all CDNs now implement "verify you are human" by default. Third, no one links to other sites. Ever! These 3 things mean a new Google is essentially impossible. Even duck duck go has thrown the towel and subscribed to Bing results.

      It has nothing to do with AI, and everything to do with Google. In fact AI might give us the tools to better fight Google.

      • TheOtherHobbes 3 days ago ago

        Google didn't change it, it embodied it. The problem isn't AI, it's the pervasive culture of PR and advertising which appeared in the 50s and eventually consumed its host.

        Western industrial culture was based on substance - getting real shit done. There was always a lot of scammery around it, but the bedrock goal was to make physical things happen - build things, invent things, deliver things, innovate.

        PR and ad culture was there to support that. The goal was to change values and behaviours to get people to Buy More Stuff. OK.

        Then around the time the Internet arrived, industry was off-shored, and the culture started to become one of appearance and performance, not of substance and action.

        SEO, adtech, social media, web framework soup, management fads - they're all about impression management and popularity games, not about underlying fundamentals.

        This is very obvious on social media in the arts. The qualification for a creative career used to be substantial talent and ability. Now there are thousands of people making careers out of performing the lifestyle of being a creative person. Their ability to do the basics - draw, write, compose - is very limited. Worse, they lack the ability to imagine anything fresh or original - which is where the real substance is in art.

        Worse than that, they don't know what they don't know, because they've been trained to be superficial in a superficial culture.

        It's just as bad in engineering, where it has become more important to create the illusion of work being done, than to do the work. (Looking at you, Boeing. And also Agile...)

        You literally make more money doing this. A lot more.

        So AI isn't really a tool for creating substance. It's a tool for automating impression management. You can create the impression of getting a lot of work done. Or the impression of a well-written cover letter. Or of a genre novel, techno track, whatever.

        AI might one day be a tool for creating substance. But at the moment it's reflecting and enabling a Potemkin busy-culture of recycled facades and appearances that has almost nothing real behind it.

        Unfortunately it's quite good at that.

        But the problem is the culture, not the technology. And it's been a problem for a long time.

        • techdmn 3 days ago ago

          Thank you, you've stated this all very clearly. I've been thinking about this in terms of "doing work", where you care about the results, and "performing work", where you care about how you are evaluated. I know someone who works in a lab, and pointed out that some of the equipment being used was out of spec and under-serviced to the point that it was essentially a random number generator. Caring about this is "doing work". However, pointing it out made that person the enemy of the greater cohort that was "performing work". The results were not important to them, their metrics about units of work completed was. I see this pattern frequently. And it's hard to say those "performing work" are wrong. "Performing" is rewarded, "doing" is punished - Perhaps right to the top, as many companies are involved in a public performance designed to affect the short-term stock price.

          • namaria 2 days ago ago

            The slippage between work and meaning is due to the arrival of post scarcity.

            There isn't enough meaningful work to go around because work is still predicated on delivering some useful transformation. But when 80% of the useful transformations can be done by 20% of the people and we have enough to keep civilization going, you don't need full employment any more. But due to the moral hazard doctrine, the elites want to retain control and discipline as is.

            For being afraid of a world where people don't care about maintaining the system anymore because it's not needed to house and feed everyone and people can just do art or learn or rest, the system keeps inventing meaningless work.

            We're shipping fruit half way across the world to be packaged and then halfway back to be consumed. None of this is efficient or necessary. But it keeps the control system intact and that's the goal.

          • trilobyte 2 days ago ago

            This is a pretty clear summary of a real problem in most work environments. I have some thoughts about why, but I'm holding onto your articulation to ruminate on in the future.

          • rjbwork 3 days ago ago

            Yeah. It's like our entire society has been turned into a Goodhart's Law based simulacrum of a productive society.

            I mean, here it's late morning and I'm commenting on hacker news. And getting paid for it.

            • Eisenstein 2 days ago ago

              Workers are many times more efficient than they were in the 50s or 70s or 80s or 90s. Where are our extra vacation days? Why does the worker have to make up for the efficiency with more work while other people take the gains?

              Do you seriously think that the purpose of life is to work all the time most efficiently? Enjoy your lazy job and bask in the ability for human society to be productive without everyone breaking their backs all the time.

              • DowagerDave 2 days ago ago

                focusing on efficiency is very depressing. Machines seek efficiency. Process can be efficient. Assembly lines are efficient. It's all about optimization and quickly focuses on trimming "waste" and packing as much as possible into the smallest space. It removes all that's amazing about human life.

                I much prefer a focus on effectiveness (or impact or outcomes, or alternatives). It plays to human strengths, is far less prescriptive and is way more fun!

                Some of the most effective actions are incredibly inefficient; sometimes inefficiency is a feature. I received a letter mail thank-you card from our CEO a few years ago. The card has an approx. value of zero dollars, but I know it took the CEO 5-10 mins to write a personal note and sign it, and that she did this dozens of times. The signal here is incredibly valuable! If she used a signing machine, or AI to record a deep fake message I would know or quickly learn, and the value would go negative - all for the sake of efficiency.

              • BriggyDwiggs42 2 days ago ago

                I think this is a big part of it. Workers would feel a lot more motivated to do more than just perform if they were given what they know they’re owed for their contribution.

              • black_knight 2 days ago ago

                Kropotkin wondered the same in the 1880s [0]

                [0]: https://standardebooks.org/ebooks/peter-kropotkin/the-conque...

              • Hikikomori 2 days ago ago

                Europe stole them.

                • Arubis 2 days ago ago

                  Faced with increasing efficiency, Europe by and large appears to have chosen to work less and let expensive things remain expensive. The US, by contrast, now has ridiculously cheap consumer goods, and works all the time.

            • fsndz 2 days ago ago

              crazy that this is true and GDP just keeps increasing anyway

              • hyperadvanced 2 days ago ago

                Of course GDP increases when the money supply does. It’s like people being incensed at “record corporate profits” amidst inflation - profits will always be record (give or take) because remaining the same is losing money relative to the free money being minted each day, etc. For whatever reason, people naively buy into GDP as a valuable metric, even knowing well that there would be something extremely mysterious going on if that number somehow shrank while the real value of the medium exchange also shrank

                • Eisenstein a day ago ago

                  Since when did increasing money supply increase GDP? Why didn't that work for Zimbabwe?

                • fsndz 2 days ago ago

                  this is not how GDP works

          • fsndz 2 days ago ago

            "Doing work" vs. "performing work": the epitome of this is consulting. Companies pay huge sums of money to consultants that often spend most of their time "performing work", doing beautiful slides even if the content and reasoning is superficial or even dubious, creating reports that are just marketing bullshit, framing the current mission in a way that makes it possible to capture additional projects and bill the client even more. Almost everything is bullshit.

            • Arubis 2 days ago ago

              For better or for worse, you’ve described the exact output some of those client companies want, so they can show off the shiny slides and have the status symbol of expensive consultants.

        • zaptheimpaler 2 days ago ago

          Just yesterday I watched a video blaming China for "oversupplying" electric cars at low prices and how that was hurting car manufacturers elsewhere e.g Germany. These manufacturers were trying to secretly lobby for tariffs to be placed on Chinese cars, while publicly denying this because THEIR largest and growing market was China. They had some "expert" talk about how China is known to oversupply and is doing this to recover from their housing bubble. All of these experts were preaching the virtues of Western free-trade to the east when they were the ones exporting TO Asia 10 years ago, but now the balance flips and they all tell you how important tariffs are and how evil China is instead..

          In summary, China makes useful things in mass and sells them to get out of a recession,the West prints money instead and shits on China for doing it better. They preach free trade while it helps them and put up tariffs when it doesn't.

          I'm not Chinese or some mega fan but it really struck me how corrupt and full of propaganda western culture is becoming and people don't seem to recognize it.

          • cudgy a day ago ago

            > All of these experts were preaching the virtues of Western free-trade to the east when they were the ones exporting TO Asia 10 years ago

            10 years ago? Try 40-50 years ago.

          • narag 2 days ago ago

            Maybe we should stop subsidies to buy electric cars and let China subsidize on the production side instead, while lowering taxes on the production of our own cars.

          • User23 2 days ago ago

            I agree with you, but it’s really simpler than that. China makes things and the west increasingly just sends emails.

        • 1dom 3 days ago ago

          I like this take on modern tech motivations.

          The thing that I struggle with is I agree with it, but I also get a lot of value in using AI to make me more productive - to me, it feels like it lets me focus on producing substance and actions, freeing me up from having to some tedious things in some tedious ways. Without getting into the debate about if it's productive overall, there are certain tasks which it feels irrefutably fast and effective at (e.g. writing tests).

          I do agree with the missing substance with modern generative AI: everyone notices when it's producing things in that uncanny valley, and if no human is there to edit that, it makes people uncomfortable.

          The only way I can reconcile the almost existential discomfort of AI against my actual day-to-day generally-positive experience with AI is to accept that AI in itself isn't the problem. Ultimately, it is an info tool, and human nature makes people spam garbage for clicks with it.

          People will do the equivalent of spam garbage for clicks with any new modern thing, unfortunately.

          Getting the most out of latest information of a society has probably always been a cat and mouse game of trying to find the areas where the spam-garbage-for-clicks people haven't outnumbered use-AI-to-facilitate-substance people, like here, hopefully.

          • skydhash 2 days ago ago

            Just one nitpick. The thing about test is that it’s repetitive enough to be automated (in a deterministic way) or abstracted into a framework. You don’t need an AI to generate it.

            • 1dom 2 days ago ago

              I find it helpful for generating automated test suites in the style of the rest of the codebase. When working across multiple projects and clients, it reduces the mental load of having to remember or figure out how tests are expected to work in each codebase.

              I agree with your theory about tests. The reality of it is most code is garbage - often including my own - and in a lot of environments, the task is to get the job done in a way that fits in with what's there.

            • closeparen 2 days ago ago

              While I occasionally have the pleasure of creating or working with a test suite that's interesting and creative relative to the code under test, the vast majority of unit tests by volume are slop. Does it call the mock? Does it use the return value? Does "if err != nil { return err }" in fact stop and return the error?

              This stuff is a perfect candidate for LLM generation.

          • DowagerDave 2 days ago ago

            AI seems really good at producing middling content, and if you make your living writing mediocre training courses, or marketing collateral, or code, or tests you're in big trouble. I question how valuable this work is though, so are we increasing productivity by utilizing AI, or just getting efficient at a suboptimal game? I for one just refuse to play.

        • Paddywack 2 days ago ago

          > So AI isn't really a tool for creating substance. It's a tool for automating impression management.

          I was speaking to a design lecturer the other evening. His fascinating insight was that:

          1. The best designers get so much fulfilment out of practicing the craft of design.

          2. With the advent of low cost “impression making”, the role has changed to one of “review a lot of mediocre outputs and pick the least crap one”

          3. This is robbing people of the pleasure and reward associated with craftsmanship.

          I have noted this is applicable so many other crafts, and it’s really sad!

          Edited afterthought… Is craftsmanship being replaced by “clickandreviewmanship”?

        • Terr_ 2 days ago ago

          > You can create the impression of getting a lot of work done. Or the impression of a well-written cover letter. Or of a genre novel, techno track, whatever.

          Yeah, one of their most "effective" uses is to counterfeit signals that we have relied on--wisely or not--to estimate deeper practical truths. Stuff like "did this person invest some time into this" or "does this person have knowledge of a field" or "can they even think straight."

          Oh, sure, qualitatively speaking it's not new, people could have used form-letters, hired a ghostwriter, or simply sank time and effort into a good lie... but the quantitative change of "Bot, write something that appears heartfelt and clever" is huge.

          In some cases that's devastating--like trying to avert botting/sockpuppet operations online--and in others we might have to cope by saying stuff like: "Fuck it, personal essays and cover letters are meaningless now, just put down the raw bullet-points."

        • llm_trw 2 days ago ago

          >Western industrial culture was based on substance - getting real shit done. There was always a lot of scammery around it, but the bedrock goal was to make physical things happen - build things, invent things, deliver things, innovate.

          For a very short period between 1945 to 1980 while the generation who remembered the great depression and WWII was in charge. It's been longer since that's not been the case. And it wasn't the case for most of history before then.

          • hgomersall 2 days ago ago

            I'm not sure it isn't a reflection of the rise of neoliberalism.

            • forgetfreeman 2 days ago ago

              Who says these are mutually exclusive? Hard times make strong men, strong men make good times, good times make weak men, weak men make hard times.

              • hgomersall 2 days ago ago

                Nobody, but the focus on individualism as the driver of everything might cause a breakdown in social objectives.

              • rexpop 2 days ago ago

                So weak men make hard men? You're welcome.

                How graciously the womenfolk leave us to our tragic autopoiesis.

                • forgetfreeman 6 hours ago ago

                  Sure, by way of traumatizing an entire generation. Nobody thanked you.

        • photonthug 2 days ago ago

          Thanks for writing your comment, I think it’s a public service.

          Cultural commentary that makes complex long term trends simple to understand isn’t often this clear or concise. What really makes it powerful though is that it manages to stay in a relatively detached observer-mode without becoming an angry rant. And so rather than provoking (understandable!) anger in others, hopefully it’s inviting more reflection.

          People who haven’t thought about it this way might take a harder look at what they are doing and who they really want to be. People that are already thinking along these lines will probably benefit from a reminder that they aren’t crazy.

        • deephoneybear 2 days ago ago

          Echoing other comments in gratitude for this very clear articulation of feelings I share, but have not manifested so well. Just wanted to add two connected opinions that round out this view.

          1) This consuming of the host is only possible on the one hand because the host has grown so strong, that is the modern global industrial economy is so efficient. The doing stuff side of the equation is truly amazing and getting better (some real work gets done either by accident or those who have not-succumbed to PR and ad culture), and even this drop of "real work" produces enough material wealth to support (at least a lot of) humanity. We really do live in a post scarcity world from a production perspective, we just have profound distribution and allocation problems.

          2) Radical wealth inequality profoundly exacerbates the problem of PR and ad culture. If everyone has some wealth doing things that help many people live more comfortably is a great way to become wealthy. But if very few people have wealth, then doing a venture capital FOMO hustle on the wealthy is anyone's best ROI. Radical wealth inequality eventually breaks all the good aspects of capitalist/market economies.

        • closeparen 2 days ago ago

          You can outfit an adult life with all of the useful manufactured objects that would reasonably improve it for a not-very-impressive sum. Beyond that it's just clutter (going for quantity) or moving into the lifestyle/taste/social-signaling domain anyway (going for quality). There is just not an unlimited amount of alpha in making physical things. The social/thought/experiential domain is a much bigger opportunity.

        • sesm 2 days ago ago

          Very well written. I assume you haven't read "Simulacra and Simulation" by Jean Baudrillard, that's why your description is so authentic and is more convincing then just referring to the book. Saved this post for future reference.

        • rexpop 2 days ago ago

          > draw, write, compose

          The primacy of these artforms is subjective, and there's no accounting for taste.

        • fsndz 2 days ago ago

          this is a top notch comment !

        • sangnoir 2 days ago ago

          > Western industrial culture was based on substance - getting real shit done.

          And what did that get us? Radium poisoning and microplastics in every organ of virtually all animals living within thousands of miles of humans. Our reach has always exceeded our grasp.

      • rich_sasha 3 days ago ago

        Some great grand ancestor of mine was a civil servant, a great achievement given his peasant background. The single skill that enabled it was the knowledge of calligraphy. He went to school and wrote nicely and that was sufficient.

        The flip side was, calligraphy was sufficient evidence for both his education to whoever hired him, and for a recipient of a document, of its official nature. Calligraphy itself or course didn't make him efficient or smart or fair.

        That's long gone of course, but we had similar heuristics. I am reminded of the Reddit story about an AI-generated mushroom atlas that had factual errors and lead to someone getting poisoned. We can no longer assume that a book is legit simply because it looks legit. The story of course is from reddit, so probably untrue, but it doesn't matter - it totally could be true.

        LLMs are fantastic at breaking our heuristics as to what is and isn't legit, but not as good at being right.

        • matwood 3 days ago ago

          > We can no longer assume that a book is legit simply because it looks legit.

          The problem is that this has been an issue for a long time. My first interactions with the internet in the 90s came along with the warning "don't automatically trust what you read on the internet".

          I was speaking to a librarian the other day who teaches incoming freshman how to use LLMs. What was shocking to me is that the librarian said a majority of the kids trust what the computer says by default. Not just LLMs, but generally what they read. That's such a huge shift from my generation. Maybe LLM education will shift people back toward skepticism - unlikely, but I can hope.

          • mrweasel 3 days ago ago

            One of the issues today is the volume of content produced, and that journalism and professional writing is dying. LLMs produce large amounts of "good enough" quality to make a profit.

            In the 90s we could reasonably trust that that the major news sites and corporate websites was true, while random forums required a bit more critical reading. Today even formerly trusted sites may be using LLMs to generate content along with automatic translations.

            I wouldn't necessarily put the blame on LLMs, this just make it easier. The trolls and spammers was always there, now they just have a more powerful tool. The commercial sites now have a tool they don't understand, which they apply liberally, because it reduces cost, or their staff use it, to get out of work, keep up with deadlines or to cover up incompetence. So, not the fault of the LLMs, but their use is worsening existing trends.

            • duskwuff 2 days ago ago

              > Today even formerly trusted sites may be using LLMs to generate content along with automatic translations.

              Yep - or they're commingling promotional content with their journalism, a la Forbes / CNN / CNET / About.com / etc. There's still quality content online but it's getting harder to find under the tidal wave of garbage.

          • honzabe 3 days ago ago

            > I was speaking to a librarian the other day who teaches incoming freshman how to use LLMs. What was shocking to me is that the librarian said a majority of the kids trust what the computer says by default. Not just LLMs, but generally what they read. That's such a huge shift from my generation.

            I think that previous generations were not any different. For most people, trusting is the default mode and you need to learn to distrust a source. I know many people who still have not learned that about the internet in general. These are often older people. They believe insane things just because there exists a nicely looking website claiming that thing.

            • DowagerDave 2 days ago ago

              Not sure of the context here is for "previous generation" but I've been around since early in the transition from university/military network to public network, and the reality was the internet just wasn't that big, and it was primarily made up of people who looked, acted and valued the same things.

              Now it's not even the website of undetermined providence that is believed; positions are established based on just the headline, shared 2nd or 3rd hand!

          • SllX 2 days ago ago

            > The problem is that this has been an issue for a long time. My first interactions with the internet in the 90s came along with the warning "don't automatically trust what you read on the internet".

            I received the same warnings, actually it was more like “don’t trust everything you read on the internet”, but it quickly became apparent that the last three words were redundant, and could have been rephrased more accurately as “don’t trust everything you read and hear and see”.

            Our parents and teachers were living with their own fallacious assumptions and we just didn’t know it at the time, but most information is very pliable. If you can’t change what someone sees, then you can probably change how they see it.

            • ben_w 12 hours ago ago

              > Our parents and teachers were living with their own fallacious assumptions and we just didn’t know it at the time, but most information is very pliable.

              Indeed. When I was 14-18 in the UK, the opinion pieces in the news were derogatorily describing "media studies" as "Mickey Mouse studies".

              In retrospect such courses were teaching critical analysis of media sources, in much the same way that my history GCSE went into the content of historical media and explored how both primary and secondary sources each had the potential to be either accurate or biased.

              Even now, even here, I see people treat the media itself as pure and only the people being reported upon as capable of wrongdoing — e.g. insisting that climate scientists in the 70s generally expected an imminent ice age, because that's what the newspapers were saying.

            • DowagerDave 2 days ago ago

              I feel like there was also a brief window where "many amateur eyes in public" trumped "private experts"; wikipedia, open source software, etc. This doesn't seem the case in a hyper-partisan and bifurcated society where there is little trust.

            • throwaway98797 2 days ago ago

              Believe half of what you see and none of what you hear

              • SllX 2 days ago ago

                True!

        • llm_trw 3 days ago ago

          >That's long gone of course, but we had similar heuristics.

          To quote someone about this:

          >>All that is solid melts into air, all that is holy is profaned, and man is at last compelled to face with sober senses his real conditions of life.

          A book looking legit, a paper being peer reviewed, an expert saying something, none of those things were _ever_ good heuristics. It's just that it was the done thing. Now we have to face the fact that our heuristics are obviously broken and we have to start thinking about every topic.

          To quote someone else about this:

          >>Most people would rather die than think.

          Which explains neatly the politics of the last 10 years.

          • hprotagonist 3 days ago ago

            > To quote someone about this: >>All that is solid melts into air, all that is holy is profaned, and man is at last compelled to face with sober senses his real conditions of life.

            So, same as it ever was?

            Smoke, nothing but smoke. [That’s what the Quester says.] There’s nothing to anything—it’s all smoke. What’s there to show for a lifetime of work, a lifetime of working your fingers to the bone? One generation goes its way, the next one arrives, but nothing changes—it’s business as usual for old planet earth. The sun comes up and the sun goes down, then does it again, and again—the same old round. The wind blows south, the wind blows north. Around and around and around it blows, blowing this way, then that—the whirling, erratic wind. All the rivers flow into the sea, but the sea never fills up. The rivers keep flowing to the same old place, and then start all over and do it again. Everything’s boring, utterly boring— no one can find any meaning in it. Boring to the eye, boring to the ear. What was will be again, what happened will happen again. There’s nothing new on this earth. Year after year it’s the same old thing. Does someone call out, “Hey, this is new”? Don’t get excited—it’s the same old story. Nobody remembers what happened yesterday. And the things that will happen tomorrow? Nobody’ll remember them either. Don’t count on being remembered.

            c. 450BC

            • wwweston 3 days ago ago

              Culd be my KJV upbringing talking, but personally I think there's an informative quality to calling it "vanity" over smoke.

              And there's more reasons not to simply compare the modern challenges of image and media with the ancient grappling with impermanence. Tech may only truly change the human condition rarely, but it frequently magnifies some aspect of it, sometimes so much that the quantitative change becomes a qualitative one.

              And in this case, what we're talking about isn't just impermanence and mortality and meaning as the preacher/quester is. We'd be lucky if it's business as usual for old planet earth, but we've managed to magnify our ability to impact our environment with tech to the point where winds, rivers, seas, and other things may well change drastically. And as for "smoke", it's one thing if we're dust in the wind, but when we're dust we can trust, that enables continuity and cooperation. There's always been reasons for distrust, but with media scale, the liabilities are magnified, and now we've automated some of them.

              The realities of human nature that are the seeds of the human condition are old. But some of the technical and social machinery we have made to magnify things is new, and we can and will see new problems.

              • hprotagonist 3 days ago ago

                'הבל (hevel)' has the primary sense of vapor, or mist -- a transient thing, not a meaningless or purposeless one.

            • llm_trw 3 days ago ago

              One is a complaint that everything is constantly changing, the other that nothing ever changes. I don't think you could misunderstand what either is trying to say harder if you tried.

              • hprotagonist 3 days ago ago

                "everything is constantly changing!" is the thing that never changes.

                • llm_trw 3 days ago ago

                  You sound like a poorly trained gpt2 model.

          • failbuffer 3 days ago ago

            Heuristics don't have to be perfect to be useful so long as they improve the efficacy of our attentions. Once that breaks down society must follow because thinking about every topic is intractable.

        • ziml77 3 days ago ago

          The mushroom thing is almost certainly true. There's tons of trash AI generated foraging books being published to Amazon. Atomic Shrimp has a video on it.

        • sevensor 3 days ago ago

          > Some great grand ancestor of mine was a civil servant, a great achievement given his peasant background. The single skill that enabled it was the knowledge of calligraphy. He went to school and wrote nicely and that was sufficient.

          Similar story! Family lore has it that he was from a farming family of modest means, but he was hired to write insurance policies because of his beautiful handwriting, and this was a big step up in the world.

        • newswasboring 3 days ago ago

          > The story of course is from reddit, so probably untrue, but it doesn't matter - it totally could be true.

          What?! Someone just made up something and then got mad at it. This is specially weird when you even acknowledge its a made up story. If we start evaluating new things like this nothing will ever progress.

      • bad_user 3 days ago ago

        You're attributing too much to Google.

        Bots are now blocked because they've been abusive. When you host content on the internet, it's not fun to have bots bring your server down or inflate your bandwidth price. Google's bot is actually quite well-behaved. The other problem has been the recent trend in AI, and I can understand blockers being put in place, since AI is essentially plagiarizing content without attribution. But I'd blame OpenAI more at this point.

        I also don't think you can blame Google for the centralization behind closed gardens. Or for why people no longer link to other websites. That's ridiculous.

        And you should be attributing them the fact that the web is still alive.

      • dennis_jeeves2 3 days ago ago

        >I remember the sea of excrement overtaking genuine human written content on the Internet around mid 2010s.

        Things have not changed much really. This was true since the dawn of man-kind (and woman-kind from the man-kind rib of course) even before there writings was invented, in the form of gossip.

        The internet/AI now carries on the torch of our ancestral inner calling, lol.

      • hermitcrab 2 days ago ago

        >AI gives even better tools to page rank. Detection of AI generated content is not that bad.

        It is an arms race between the people generating crap (for various nefarious purposes) and those trying to separate find useful content amongst the ever growing pile of crap. And it seems to me it is so much easier to generate crap, that I can't see how the good guys can possibly win.

      • edgarvaldes 2 days ago ago

        >Do you think AI has changed that in any way? I

        I see this type of reasoning in all the AI threads. And yes, I think this time is different.

      • ninetyninenine 3 days ago ago

        > I remember the sea of excrement overtaking genuine human written content on the Internet around mid 2010s.

        I mean the AI is trained and modeled on this excrement. It makes sense. As much as people think AI content is raw garbage… they don’t realize that they are staring into a mirror.

    • elnasca2 3 days ago ago

      What fascinates me about your comment is that you are expressing that you trusted what you read before. For me, LLMs don't change anything. I already questioned the information before and continue to do so.

      Why do you think that you could trust what you read before? Is it now harder for you to distinguish false information, and if so, why?

      • nicce 3 days ago ago

        In the past, you had to put a lot of effort to produce a text which seemed to be high quality, especially when you knew nothing about the subject. By the look of text and the usage of the words, you could tell how professional the writer was and you had some confidence that the writer knew something about the subject. Now, that is completely removed. There is no easy filter anymore.

        While the professional looking text could have been already wrong, the likelihood was smaller, since you usually needed to know something at least in order to write convincing text.

        • ookdatnog 3 days ago ago

          Writing a text of decent quality used to constitute proof of work. This is now no longer the case, and we haven't adapted to this assumption becoming invalid.

          For example, when applying to a job, your cover letter used to count as proof of work. The contents are less important than the fact that you put some amount of effort in it, enough to prove that you care about this specific vacancy. Now this basic assumption has evaporated, and job searching has become a meaningless two-way spam war, where having your AI-generated application selected from hundreds or thousands of other AI-generated applications is little more than a lottery.

          • bitexploder 3 days ago ago

            This. I am very picky about how I use ML still, but it is unsurpassed as a virtual editor. It can clean up grammar and rephrase things in a very light way, but it gives my prose the polish I want. The thing is, I am a very decent writer. I wrote professionally for 18 years as a part of my job delivering reports of high quality as my work product. So, it really helps that I know exactly what “good” looks like by my standards. ML can clean things up so much faster than I can and I am confident my writing is organic still, but it can fix up small issues, find mistakes, etc very quickly. A word change here or there, some punctuation, that is normal editing. It is genuinely good at light rephrasing as well, if you have some idea of what intent you want.

            When it becomes obvious, though, is when people let the LLM do the writing for them. The job search bit is definitely rough. Referrals, references, and actual accomplishments may become even more important.

            • gtirloni 3 days ago ago

              As usual, LLMs are an excellent tool when you already have a decent understanding of the field you're interested in using them in. Which is not the case of people posting in social media or creating their first programs. That's where the dullness and noise come from.

              The noise ground has been elevated 100x by LLMs. It was already bad before but it's accelerated the trend.

              So, yes, we should have never been trusting anything online but before LLMs we could rely on our brains to quickly identify the bad. Nowadays, it's exhausting. Maybe we need a LLM trained on spotting LLMs.

              This month, I, with decades of experience, used Claude Dev as an experiment to create a small automation tool. After countless manual fixes, it finally worked and I was happy. Until I gave thr whole thing a decent look again and realized what a piece of garbage I had created. It's exhausting to be on the lookout for these situations. I prefer to think things through myself, it's a more rewarding experience with better end results anyway.

              • danielbln 3 days ago ago

                Not to sound too dismissive, but there is a distinct learning curve when it comes to using models like Claude for code assist. Not just the intuition when the model goes off the rails, but also what to provide it in the context, how and what to ask for etc. Trying it once and dismissing it is maybe not the best experimental setup.

                I've been using Zed recently with its LLM integration so assist me in my development and its been absolutely wonderful, but one must control tightly what to present to the model and what to ask for and how.

                • gtirloni 3 days ago ago

                  It's not my first time using LLMs and you're assuming too much.

              • iszomer 3 days ago ago

                LLM's are a great onramp to filling in knowledge that may have been lost to age or updated to their modern classification. For example, I didn't know Hokkien and Haka are distinct linguistic branches within the Sino-Tibetan language and warrants more (personal) research into the subject. And all this time, without the internet, we often just colloquially called it Taiwanese.

                • aguaviva 3 days ago ago

                  How is this considered "lost" knowledge there are (large) Wikipedia pages about those languages (which is of course what the LLM is cribbing from)?

                  "Human-curated encycolpedias are a great onramp to filling in knowledge gaps", that I can go with.

                  • iszomer 2 days ago ago

                    How often do you go back to your encyclopedia hard copies only to find whatever knowledge you may have absorbed have already been deprecated? Or that information from Wikipedia may have changed at moments without notice, have never read or, dare I say, included a political bias to them?

                    Maybe I should have worded it better as a "beginner" or "intermediate" knowledge onramp and/or filler. For example, I have asked it on occasion to translate into traditional Mandarin in parallel for every English response. It helps tremendously in trying to rebuild that bridge that may have been burned long ago.

                  • nicce 3 days ago ago

                    It is lost in a sense that you had no idea about such possibility and you did not know to search it in the first hand, while I believe that in this case LLM brought it up as a side note.

                    • aguaviva 3 days ago ago

                      Such fortuitous stumblings happen all the time without LLMs (and in regular libraries, for those brave enough to use them). It's just the natural byproduct of doing any kind of research.

                      • skydhash 2 days ago ago

                        Most of my knowledge comes from physical encyclopedia and download the wikipedia text dump (internet was not readily available). You search for one thing and just explore by clicking.

            • msikora 2 days ago ago

              This is my go-to process whenever I write anything now:

              1. I use dictation software to get my thoughts out as a stream of consciousness. 2. Then, I have ChatGPT or Claude refine it into something coherent based on a prompt of what I'm aiming for. 3. Finally, I review the result and make edits where needed to ensure it matches what I want.

              This method has easily boosted my output by 10x, and I'd argue the quality is even better than before. As a non-native English speaker, this approach helps a lot with clarity and fluency. I'm not a great writer to begin with, so the improvement is noticeable. At the end of the day, I’m just a developer—what can I say?

            • dotnet00 3 days ago ago

              Yeah, this is how I use it too. I tend to be a very dry writer, which isn't unusual in science, but lately I've taken to writing, then asking an LLM to suggest improvements.

              I know not to trust it to be as precise as good research papers need to be, so I don't take its output, it usually helps me reorder points or use different transitions which make the material much more enjoyable to read. I also find it useful for helping to come up with an opening sentence from which to start writing a section.

              • bitexploder 3 days ago ago

                Active voice is difficult in technical and scientific writing for sure :)

          • rasulkireev 2 days ago ago

            Great opportunity to get ahead of all the lazy people who use AI for a cover letter. Do a video! Sure, AI will be able to do that soon, but then we (not lazy people, who care) will come up with something even more personal!

            • msikora 2 days ago ago

              Great idea! I'll get an LLM to write the script for the video and then I'll just read it! I can crank out 20 of these in an hour!

            • akho 2 days ago ago

              A blowjob, I assume.

        • roenxi 3 days ago ago

          > While the professional looking text could have been already wrong, the likelihood was smaller...

          I don't criticise you for it, because that strategy is both rational and popular. But you never checked the accuracy of your information before so you have no way of telling if it has gotten more or less accurate with the advent of AI. You were testing for whether someone of high social intelligence wanted you to believe what they said rather than if what they said was true.

          • dietr1ch 3 days ago ago

            I guess the complaint is about losing this proxy to gain some assurance for little cost. We humans are great at figuring out the least amount of work that's good enough.

            Now we'll need to be fully diligent, which means more work, and also there'll be way more things to review.

            • wlesieutre 3 days ago ago

              There’s not enough time in the day to go on a full bore research project about every sentence I read, so it’s not physically possible to be “fully diligent.”

              The best we can hope for is prioritizing which things are worth checking. But even that gets harder because you go looking for sources and now those are increasingly likely to be LLM spam.

              • Terr_ 2 days ago ago

                Traditionally, humans have addressed the imbalance between energy-to-generate and energy-to-validate by building another system on top, such as one which punishes fraudsters or at least allows other individuals to efficiently disassociate from them.

                Unfortunately it's not clear how this could be adapted to the internet and international commerce without harming some of the open-ness aspects we'd like to keep.

            • roenxi 3 days ago ago

              I'd argue people clearly don't care about the truth at all - they care about being part of a group and that is where it ends. It shows up in things like critical thinking being a difficult skill acquired slowly vs social proof which humans just do by reflex. Makes a lot of sense, if there are 10 of us and 1 of you it doesn't matter how smartypants you may be when the mob forms.

              AI does indeed threaten people's ability to identify whether they are reading work by a high status human and what the group consensus is - and that is a real problem for most people. But it has no bearing on how correct information was in the past vs will be in the future. Groups are smart but they get a lot of stuff wrong in strategic ways (it is almost a truism that no group ever identifies itself or its pursuit of its own interests as the problem).

              • Jensson 3 days ago ago

                > I'd argue people clearly don't care about the truth at all

                Plenty of people care about the truth in order to get advantages over the ignorant. Beliefs aren't just about fitting in a group, they are about getting advantages and making your life better, if you know the truth you can make much better decisions than those who are ignorant.

                Similarly plenty of people try to hide the truth in order to keep people ignorant so they can be exploited.

                • rendall 3 days ago ago

                  > if you know the truth you can make much better decisions than those who are ignorant

                  There are some fallacious hidden assumptions there. One is that "knowing the truth" equates to better life outcomes. I'd argue that history shows more often than not that what one knows to be true best align with prevailing consensus if comfort, prosperity and peace is one's goal, even if that consensus is flat out wrong. The list is long of lone geniuses who challenged the consensus and suffered. Galileo, Turing, Einstein, Mendel, van Gogh, Darwin, Lovelace, Boltzmann, Gödel, Faraday, Kant, Poe, Thoreau, Bohr, Tesla, Kepler, Copernicus, et. al. all suffered isolation and marginalization of some degree during their lifetimes, some unrecognized until after their death, many living in poverty, many actively tormented. I can't see how Turing, for instance, had a better life than the ignorant who persecuted him despite his excellent grasp of truth.

                  • Jensson 3 days ago ago

                    You are thinking too big, most of the time the truth is whether a piece of food is spoiled or not etc, and that greatly affects your quality of life. Companies would love to keep you ignorant here so they can sell you literal shit, so there are powerful forces wanting to keep you ignorant, and today those powerful forces has way stronger tools than ever before working to keep you ignorant.

                  • roenxi 3 days ago ago

                    Socrates is also a big name. Never forget.

              • danmaz74 3 days ago ago

                You're implying that there is an absolute Truth and that people only need to do [what?] to check if something is True. But that's not True. We only have models of how reality works, and every model is wrong - but some are useful.

                When dealing with almost everything you do day by day, you have to rely on the credibility of the source of the information you have. Otherwise how could you know that the can of tuna you're going to eat is actually tuna and not some venomous fish? How do you know that you should do what your doctor told you? Etc. etc.

                • svieira 3 days ago ago

                  > You're implying that there is an absolute Truth and that people only need to do [what?] to check if something is True. But that's not True. We only have models of how reality works, and every model is wrong - but some are useful.

                  But isn't your third sentence True?

                  • danmaz74 2 days ago ago

                    I don't know it to be True, but I know it to be useful :)

          • SoftTalker 3 days ago ago

            In the past, with a printed book or journal article, it was safe to assume that an editor had been involved, to some degree or another challenging claimed facts, and the publisher also had an interest in maintaining their reputation by not publishing poorly researched or outright false information. You would also have reviewers reading and reacting to the book in many cases.

            All of that is gone now. You have LLMs spitting their excrement directly onto the web without so much as a human giving it a once-over.

            • Eisenstein 2 days ago ago

              I suggest you look into how many things were published without such scrutiny, because they sold.

          • quietbritishjim 3 days ago ago

            How do you "check the accuracy of your information" if all the other reliable-sounding sources could also be AI generated junk? If it's something in computing, like whether something compiles, you can sometimes literally check for yourself, but most things you read about are not like that.

          • glenstein 3 days ago ago

            >But you never checked the accuracy of your information before so

            They didn't say that and that's not a fair or warranted extrapolation.

            They're talking about a heuristic that we all use, as a shorthand proxy that doesn't replace but can help steer the initial navigation in the selection of reliable sources, which can be complemented with fact checking (see the steelmanning I did there?). I don't think someone using that heuristic can be interpreted as tantamount to completely ignoring facts, which is a ridiculous extrapolation.

            I also think is misrepresents the lay of the land, which is that in the universe of nonfiction writing, I don't think that there's a fire hose of facts and falsehoods indistinguishable in tone. I think there's in fact a reasonably high correlation between the discernible tone of impersonal professional and credible information, which, again (since this seems to be a difficult sticking point) doesn't mean that the tone substitutes for the facts which still need to be verified.

            The idea that information and misinformation are tonally indistinguishable is, in my experience, only something believed by post-truth "do you own research" people who think there are equally valid facts in all directions.

            There's not, for instance, a Science Daily of equally sciency sounding misinformation. There's not a second different IPCC that publishes a report with thousands of citations which are all wrong, etc. Misinformation is out there but it's not symmetrical, and understanding that it's not symmetrical is an important aspect of information literacy.

            This is important because it goes to their point, which is that something has changed, in the advent of LLMS. That symmetry may be coming, and it's precisely the fact that it wasn't there before that is pivotal.

          • cutemonster 3 days ago ago

            Interesting points! Doesn't sound impossible with an AI that's wrong less often than an average human author (if the AIs training data was well curated).

            I suppose a related problem is that we can't know if the human who posted the article, actually agrees with it themselves.

            (Or if they clicked "Generate" and don't actually care, or even have different opinions)

        • gizmo 3 days ago ago

          I think you overestimate the value of things looking professional. The overwhelming majority of books published every year are trash, despite all the effort that went into research, writing, and editing them. Most news is trash. Most of what humanity produces just isn't any good. An top expert in his field can leave a typo-riddled comment in a hurry that contains more valuable information than a shelf of books written on the subject by lesser minds.

          AIs are good at writing professional looking text because it's a low bar to clear. It doesn't require much intelligence or expertise.

          • herval 3 days ago ago

            > AIs are good at writing professional looking text because it's a low bar to clear. It doesn't require much intelligence or expertise.

            AIs are getting good at precisely imitating your voice with a single sample as reference, or generating original music, or creating video with all sorts of impossible physics and special effects. By your rationale, nothing “requires much intelligence or expertise”, which is patently false (even for text writing)

            • gizmo 3 days ago ago

              My point is that writing a good book is vastly more difficult than writing a mediocre book. The distance between incoherent babble and a mediocre book is smaller than the distance between a mediocre book and a great book. Most people can write professional looking text just by putting in a little bit of effort.

          • bitexploder 3 days ago ago

            I think you underestimate how high that bar is, but I will grant that it isn’t that high. It can be a form of sophistry all of its own. Still, it is a difficult skill to write clearly, simply, and without a lot of extravagant words.

        • jackthetab 3 days ago ago

          > While the professional looking text could have been already wrong, the likelihood was smaller, since you usually needed to know something at least in order to write convincing text.

          https://en.wikipedia.org/wiki/Michael_Crichton#Gell-Mann_amn...

        • mewpmewp2 3 days ago ago

          Although, there were already before tons of "technical influencers" before that who excelled at writing, but didn't know deeply what they were writing about.

          They give a superficially smart look, but really they regurgitate without deep understanding.

        • factormeta 3 days ago ago

          >In the past, you had to put a lot of effort to produce a text which seemed to be high quality, especially when you knew nothing about the subject. By the look of text and the usage of the words, you could tell how professional the writer was and you had some confidence that the writer knew something about the subject. Now, that is completely removed. There is no easy filter anymore.

          That is pretty much true also for other media, such as audio and video. Before digital stuff become mainstream pics are developed in the darkroom, and film are actually cut with scissors. A lot of effort are put into producing the final product. AI has really commoditized for many brain related tasks. We must realize the fragile nature of digital tech and still learn how to do these by ourselves.

        • ImHereToVote 3 days ago ago

          So content produced by think tanks was credible by default, since think tanks are usually very well funded. Interesting perspective

        • mewpmewp2 3 days ago ago

          Although presently at least it's still quite obvious when something is written by AI.

          • chilli_axe 3 days ago ago

            it's obvious when text has been produced by chatGPT with the default prompt - but there's probably loads of text on the internet which doesn't follow AI's usual prose style that blends in well.

            • mewpmewp2 2 days ago ago

              Even when I try some other variation of prompts or writing styles there's always this sense of "perfectness", with all paragraph lengths being too perfect, length and the style of it being like that.

        • TuringNYC 2 days ago ago

          >> While the professional looking text could have been already wrong, the likelihood was smaller, since you usually needed to know something at least in order to write convincing text.

          ...or...the likelihood of text being really wrong pre-LLMs was worse because you needed to be a well-capitalized player to pay your thoughts into public discourse. Just look at our global conflicts and you see how much they are driven by well-planned lobbying, PR, and...money. That is not new.

        • diggan 3 days ago ago

          > By the look of text and the usage of the words, you could tell how professional the writer was and you had some confidence that the writer knew something about the subject

          How did you know this unless you also had the same or more knowledge than the author?

          It would seem to me we are as clueless now as before about how to judge how skilled a writer is without requiring to already posses that very skill ourselves.

      • ffsm8 3 days ago ago

        Trust as no bearing on what they said.

        Reading was a form of connecting with someone. Their opinions are bound to be flawed, everyone's are - but they're still the thoughts and words of a person.

        This is no longer the case. Thus, the human factor is gone and this reduces the experience to some of us, me included.

        • farleykr 3 days ago ago

          This is exactly what’s at stake. I heard an artist say one time that he’d rather listen to Bob Dylan miss a note than listen to a song that had all the imperfections engineered out of it.

          • herval 3 days ago ago

            The flipside of that is the most popular artists of all time (eg Taylor Swift) do autotune to perfection, and yet more and more people love them

            • kombookcha 3 days ago ago

              If you ask a Swiftie what they love about Taylor Swift, I guarantee they will not say "the autotune is flawless".

              They're not connecting with the relative correctness of each note, but feeling a human, creative connection with an artist expressing herself.

              • herval 3 days ago ago

                They're "creatively connecting" to an autotuned version of a human, not to a "flawed Bob Dylan"

                • kombookcha 3 days ago ago

                  They're not connecting to the autotune, but to the artist. People have a lot of opinions about Taylor Swift's music but "not being personal enough" is definitely not a common one.

                  If you wanna advocate for unplugged music being more gratifying, I don't disagree, but acting like the autotune is what people are getting out of Taylor Swift songs is goofy.

                  • soco 3 days ago ago

                    I have no idea about Taylor Swift so I'll ask in general: can't we have a human showing an autotuned personality? Like, you are what you are in private, but in interviews you focus on things suggested by your AI conselor, your lyrics are fine tuned by AI, all this to show a better marketable personality? Maybe that's the autotune we should worry about. Again, nothing new (looking at you, Village People) but nowadays the potential powered by AI is many orders of magnitude higher... you could say yes only until the fans catch wind of it, true, but by that time the next figure shows up and so on. Not sure where this arms escalation can lead us. Because also acceptance levels are shifting, so what we reject today as unacceptable lies could be fine tomorrow, look already at the AI influencers doing a decent job while overtly fake.

                    • oceanplexian 3 days ago ago

                      I’m convinced it’s already being done, or at least played with. Lots of public figures only speak through a teleprompter. It would be easy to put a fine tuned LLM on the other side of that teleprompter where even unscripted questions can be met with scripted answers.

                  • herval 3 days ago ago

                    you're missing the point by a few miles

            • Terr_ 2 days ago ago

              > yet more and more people love them

              I think that says more about media-technology, corporate ecosystems, and overall population-growth than about music itself.

        • Frost1x 3 days ago ago

          I think the key thing here is equating trust and truth. I trust my dog, a lot, more than most humans frankly. She has some of my highest levels of trust attainable, yet I don’t exactly equate her actions with truth. She often barks when there’s no one at the door or at false threats she doesn’t know aren’t real threats and so on. But I trust she believes it 100% and thinks she’s helping me 100%.

          What I think OP was saying and I agree with is that connection, that knowing no matter what was said or how flawed or what motive someone had I trusted there was a human producing the words. I could guess and reasons the other factors away. Now I don’t always know if that is the case.

          If you’ve ever played a multiplayer game, most of the enjoyable experience for me is playing other humans. We’ve had good game AIs in many domains for years, sometimes difficult to distinguish from humans, but I always lost interest if I didn’t know I was in fact playing and connecting with another human. If it’s just some automated system I could do that any hour of the day as much as I want but it lacked the human connection element, the flaws, the emotion, the connection. If you can reproduce that then maybe it would be enjoyable but that sort of substance has meaning to many.

          It’s interesting to see a calculator quickly spit out correct complex arithmetic but when you see a human do it, it’s more impressive or at least interesting, because you know the natural capability is lower and that they’re flawed just like you are.

          • Terr_ 2 days ago ago

            > She has some of my highest levels of trust attainable

            I like to think of these ambiguities of "trust" as something like:

            1. Trusting their identity

            2. Trusting their intentions

            3. Trusting their judgement about what to do

            4. Trusting their competence to execute the task

      • sevensor 3 days ago ago

        For me, the problem has gone from “figure out the author’s agenda” to “figure out whether this is a meaningful text at all,” because gibberish now looks a whole lot more like meaning than it used to.

        • pxoe 3 days ago ago

          This has been a problem on the internet for the past decade if not more anyway, with all of the seo nonsense. If anything, maybe it's going to be ever so slightly more readable.

          • orthecreedence 2 days ago ago

            I don't know what you're talking about. Most people don't think of SEO, Search Engine Optimization, Search Performance, Search Engine Relevance, Search Rankings, Result Page Optimization, or Result Performance when writing their Article, Articles, Internet Articles, News Articles, Current News, Press Release, or News Updates...

      • a99c43f2d565504 3 days ago ago

        Perhaps "trust" was a bit misplaced here, but I think we can all agree on the idea: Before LLMs, there was intelligence behind text, and now there's not. The I in LLM stands for intelligence, as written in one blog. Maybe the text never was true, but at least it made sense given some agenda. And like pointed out by others, the usual text style and vocabulary signs that could have been used to identify expertise or agenda are gone.

        • lmm 2 days ago ago

          > Perhaps "trust" was a bit misplaced here, but I think we can all agree on the idea: Before LLMs, there was intelligence behind text, and now there's not. The I in LLM stands for intelligence, as written in one blog. Maybe the text never was true, but at least it made sense given some agenda.

          Nope. A lot of people just wrote stuff. There were always plenty of word salad blogs (and arguably entire philosophy journals) out there.

        • danielmarkbruce 2 days ago ago

          Those signs are largely bs. It's a textual version of charisma.

      • baq 3 days ago ago

        scale makes all the difference. society without trust falls apart. it's good if some people doubt some things, but if everyone necessarily must doubt everything, it's anarchy.

        • dangitman 3 days ago ago

          Is our society built on trust? I don't generally trust most of what's distributed as news, for instance. Virtually every newsroom in america is undermined by basic conflicts of interest. This has been true since long before I was born, although perhaps the death of local news has accelerated this phenomenon. Mostly I just "trust" that most people don't want to hurt me (even if this trust is violated any time I bike along side cars for long enough)

          I don't think that LLMs will change much, frankly, it's just gonna be more obvious when they didn't hire a human to do the writing.

          • Hoasi 2 days ago ago

            > Is our society built on trust?

            A good part of society, the foundational part, is trust. Trust between individuals, but also trust in the sense that we expect things to behave in a certain way. We trust things like currencies despite their flaws. Our world is too complex to reinvent the wheel whenever we need to do a transaction. We must believe enough in a make-believe system to avoid perpetual collapse.

        • vouaobrasil 3 days ago ago

          Perhaps that anarchy is the exact thing we need to convince everyone to revolt against big tech firms like Google and OpenAI and take them down by mob rule.

      • thesz 3 days ago ago

        Propaganda works by repeating the same in different forms. Now it is easier to have different forms of the same, hence, more propaganda. Also, it is much easier to iinfluence whatever people write by influencing the tool they use to write.

        Imagine that AI tools sway generated sentences to be slightly close, in summarisation space, to the phrase "eat dirt" or anything. What would happen?

        • ImHereToVote 3 days ago ago

          Hopefully people will exercise more judgement now that every Tom, Dick, and Harry scam artists can output elaborate prose.

      • eesmith 3 days ago ago

        The negation of 'I cannot trust' is not 'I could always trust' but rather 'I could sometimes trust'.

        Nor is trust meant to mean something is absolute and unquestionable. I may trust someone, but with enough evidence I can withdraw trust.

      • galactus 3 days ago ago

        I think it is a totally different threat. Excluding adversarial behavior, humans usually produce information with a quality level that is homogeneous (from homogeneously sloppy to homogeneously rigurous).

        AI otoh can produce texts that are quite accurate globally with some totally random hallucinations here and there. It makes it quite harder to identify

      • rsynnott 3 days ago ago

        There are topics on which you should be somewhat suspicious of anything you read, but also many topics where it is simply improbable that anyone would spend time maliciously coming up with a lie. However, they may well have spicy autocomplete imagine something for them. An example from a few days ago: https://news.ycombinator.com/item?id=41645282

      • tuyguntn 3 days ago ago

        > For me, LLMs don't change anything. I already questioned the information before and continue to do so.

        I also did, but LLM increased the volume of content, which forces my brain first try to identify if content is generated by LLMs, which is consuming a lot of energy and makes brain even less focused, because now it's primary goal is skimming quickly to identify, instead of absorbing first and then analyzing info

        • desdenova 3 days ago ago

          The web being polluted only makes me ignore more of it.

          You already know some of the more trustworthy sources of information, you don't need to read a random blog which will require a lot more effort to verify.

          Even here on hackernews, I ignore like 90% of the spam people post. A lot of posts here are extremely low effort blogs adding zero value to anything, and I don't even want to think whether someone wasted their own time writing that or used some LLM, it's worthless in both cases.

      • solidninja 3 days ago ago

        There's a quantity argument to be made here - before, it used to be hard to generate large amounts of plausible but incorrect text. Now it easy. Similar to surveillance before/after smartphones + the internet - you had to have a person following you vs just soaking up all the data on the backbone.

      • escape_goat 3 days ago ago

        There was a degree of proof of work involved. Text took human effort to create, and this roughly constrained the quantity and quality of misinforming text to the number of humans with motive to expend sufficient effort to misinform. Now superficially indistinguishable text can be created by an investment in flops, which are fungible. This means that the constraint on the amount of misinforming text instead scales with whatever money is resourced to the task of generating misinforming text. If misinforming text can generate value for someone that can be translated back into money, the generation of misinforming text can be scaled to saturation and full extraction of that value.

      • low_tech_love 3 days ago ago

        It’s nothing to do with trusting in terms of being true or false, but whatever I read before I felt like, well, it can be good or bad, I can judge it, but whatever it is, somebody wrote it. It’s their work. Now when I read something I just have absolutely no idea whether the person wrote it, how much percent did they write it, or how much they even had to think before publishing it. Anyone can simply publish a perfectly well-written piece of text about any topic whatsoever, and I just can’t wrap my head around why, but it feels like a complete waste of time to read anything. Like… it’s all just garbage, I don’t know.

      • everdrive 3 days ago ago

        How do you like questioning much more of it, much more frequently, from many more sources? And mistrusting it in new ways. AI and regular people are not wrong in the same ways, nor for the same reasons, and now you must track this too, increasingly.

      • kombookcha 3 days ago ago

        Debunking bullshit inherently takes more effort than generating bullshit, so the human factor is normally your big force multiplier. Does this person seem trustworthy? What else have they done, who have they worked with, what hidden motivations or biases might they have, are their vibes /off/ to your acute social monkey senses?

        However with AI anyone can generate absurd torrential flows of bullshit at a rate where, with your finite human time and energy, the only winning move is to reject out of hand any piece of media that you can sniff out as AI. It's a solution that's imperfect, but workable, when you're swimming through a sea of slop.

        • ontouchstart 3 days ago ago

          Debugging is harder than writing code. Once the code passed linter, compiler and test, the bugs might be more subtly logical and require more effort and intelligence.

          We are all becoming QA of this super automated world.

        • bitexploder 3 days ago ago

          Maybe the debunking AIs can match the bullshit generating AIs, and we will have balance in the force. Everyone is focused on the generative AIs, it seems.

          • desdenova 3 days ago ago

            No, they can't. They'll still be randomly deciding if something is fake or not, so they'll only have a probability of being correct, like all nondeterministic AI.

          • nicce 3 days ago ago

            There is always more money available for bullshit generation than bullshit removal.

      • voidmain0001 3 days ago ago

        I read the original comment not as a lament of not being able to trust the content, rather, they are lamenting the fact that AI/LLM generated content has no more thought or effort put into it than a cheap microwave dinner purchased from Walmart. Yes, it fills the gut with calories but it lacks taste.

        On second thought, perhaps AI/LLM generated content is better illustrated with it being like eating the regurgitated sludge called cud. Nothing new, but it fills the gut.

      • heresie-dabord 3 days ago ago

        > you trusted what you read before. For me, LLMs don't change anything. I already questioned the information before and continue to do so. [...] Why do you think that you could trust what you read before?

        A human communicator is, in a sense, testifying when communicating. Humans have skin in the social game.

        We try to educate people, we do want people to be well-informed and to think critically about what they read and hear. In the marketplace of information, we tend very strongly to trust non-delusional, non-hallucinating members of society. Human society is a social-confidence network.

        In social media, where there is a cloak of anonymity (or obscurity), people may behave very badly. But they are usually full of excuses when the cloak is torn away; they are usually remarkably contrite before a judge.

        A human communicator can face social, legal, and economic consequences for false testimony. Humans in a corporation, and the corporation itself, may be held accountable. They may allocate large sums of money to their defence, but reputation has value and their defence is not without social cost and monetary cost.

        It is literally less effort at every scale to consult a trusted and trustworthy source of information.

        It is literally more effort at every scale to feed oneself untrustworthy communication.

      • akudha 3 days ago ago

        There were news reports that Russia spent less than a million dollars on a massive propaganda campaign targeting U.S elections and the American population in general.

        Do you think it would be possible before internet, before AI?

        Bad actors, poorly written/sourced information, sensationalism etc have always existed. It is nothing new. What is new is the scale, speed and cost of making and spreading poor quality stuff now.

        All one needs today is a laptop and an internet connection and a few hours, they can wreak havoc. In the past, you'd need TV or newspapers to spread bad (and good) stuff - they were expensive, time consuming to produce and had limited reach.

        • kloop 2 days ago ago

          There are lots of organizations with $1M and a desire to influence the population

          This can only be done with a sentiment that was, at least partially, already there. And may very well happen naturally eventually

        • immibis 2 days ago ago

          How can I wreck havoc with a few hours, a laptop, and an internet connection?

          It takes a bit more than that.

          • akudha 2 days ago ago

            Some woman’s cat was hiding in her basement. She automatically assumed her Haitian neighbors stole her cat and made some comment about it, which landed on Facebook, which got morphed into “immigrants eating pets” story, JD Vance picked it up, Trump mentioned it in a national debate watched by 65 million people. All of this happened in a few days. This resulted in violence in Springfield.

            If you can place a rumor or lie in front of the right person/people to amplify, it will be amplified. It will spread like wildfire, and by the time it is fact checked, it will have done at least some damage.

            • immibis a day ago ago

              These successful manipulation stories are extremely rare though. What usually happens is that you say your neighbour ate your cat, then everyone laughs at you.

              Did the person who posted do the manipulation, or did JD Vance and Donald Trump do it?

      • mvdtnz 2 days ago ago

        It's that you trusted that what you read came from a human being. Back in the day I used to spend hours reading Evolution vs Creationism debates online. I didn't "trust" the veracity of half of what I read, but that didn't mean I didn't want to read it. I liked reading it because it came from people. I would never want to read AI regurgitation of these arguments.

      • tempfile 3 days ago ago

        > I already questioned the information before and continue to do so.

        You might question new information, but you certainly do not actually verify it. So all you can hope to do is sense-checking - if something doesn't sound plausible, you assume it isn't true.

        This depends on having two things: having trustworthy sources at all, and being able to relatively easily distinguish between junk info and real thorough research. AI is a very easy way for previously-trustworthy sources to sneak in utter disinformation without necessarily changing tone much. That makes it much easier for the info to sneak past your senses than previously.

      • croes 3 days ago ago

        The quota changed because it's now easier and faster

      • desdenova 3 days ago ago

        Exactly. The web before LLMs was mostly low effort SEO spam written by low-wage people in marketing agencies.

        Now it's mostly zero effort LLM-generated SEO spam, and the low-wage workers lost their jobs.

        • vouaobrasil 3 days ago ago

          The difference is that now we'll have even more zero-effort SEO spam because AI is a force multiplier for that. Much more.

      • danielmarkbruce 2 days ago ago

        The following appears to be true:

        If one spends a lot of years reading a lot of stuff, they come to this conclusion, that most of it cannot be trusted. But it takes lots of years and lots of material to see it.

        If they don't, they don't.

    • nils-m-holm 3 days ago ago

      > It's not so much that I think people have used AI, but that I know they have with a high degree of certainty, and this certainty is converging to 100%, simply because there is no way it will not. If you write regularly and you're not using AI, you simply cannot keep up with the competition.

      I am writing regularly and I will never use AI. In fact I am working on a 400+ pages book right now and it does not contain a single character that I have not come up with and typed myself. Something like pride in craftmanship does exist.

      • goostavos 2 days ago ago

        Also currently working on a book (shameless plug: buy my book!) and feel no pull or need to involve AI. This book is my mine. My faults. My shortcomings. My overuse of commas. My wonky phrasing. It has to have those things, because I am those things (for better or worse!).

      • smitelli 3 days ago ago

        I'm right there with you. I write short and medium form articles for my personal site (link in bio, follow it or don't, the world keeps spinning either way). I will never use AI as part of this craft. If that hampers my output, or puts me at a disadvantage compared to the competition, or changes the opinion others have of me, I really don't care.

      • low_tech_love 3 days ago ago

        Amazing! Do you feel any pressure from your environment? And are you self-funded? I am also thinking about starting my first book.

        • nils-m-holm 3 days ago ago

          What I write is pretty niche anyway (compilers, LISP, buddhism, advaita), so I do not think AI will cause much trouble. Google ranking small websites into oblivion, though, I do notice that!

      • vouaobrasil 3 days ago ago

        Nice. I will definitely consider your book over other books. I'm not interested in reading AI-assisted works.

      • lurking_swe 2 days ago ago

        do you see any benefits to using AI to check your book for typos, grammatical issues, or even just general “feedback” prior to publishing?

        Seems like there are uses for AI other than “please write it all for me”, no?

        • nils-m-holm 2 days ago ago

          I use a spell checker to catch typos. Occasional quirky grammar is mine. Feedback will be provided by the audience. Why would I let a statistical model judge my work? This is how you kill originality.

          • lurking_swe 2 days ago ago

            i didn’t imply you’d need an LLM to “rate” your content. More so asking questions during and before the publishing step to help you improve your work. Not removing your identity from your work. Examples questions of what you could ask an LLM:

            • are there any redundant sentences or points in this chapter?

            • i’m trying to remember an expression used to convey X, can you remind me what it is?

            • i’m familiar with X from my industry Y, but i’m trying to convey this to an audience from industry Z. Can you help brainstorm some words or examples they may be more familiar with?

            Things like that. I think of it like having a virtual rubber duck that can help you debug code, or anything really.

            Obviously these are just some suggestions. If you don’t find any of this useful, or even interesting, then carry on. :)

            • nils-m-holm 2 days ago ago

              I see. Most of the things you list I think I could do better on my own. The case of X from industry Y sounds interesting, but I would still prefer to hear from a real human being from industry Z. If no one is available, of course, a statistical model may indeed be helpful -- I still do not think it is worth boiling the oceans, though. :)

        • davidgerard 2 days ago ago

          "AI" isn't a technology. Do you mean LLMs? If so, then lol hell no, why on earth would I.

      • nyarlathotep_ 3 days ago ago

        In b4 all the botslop shills tell you you're gonna get "left behind" if you don't pollute your output with GPT'd copypasta.

    • onion2k 3 days ago ago

      The most depressing thing for me is the feeling that I simply cannot trust anything that has been written in the past 2 years or so and up until the day that I die.

      What AI is going to teach people is that they don't actually need to trust half as many things as they thought they did, but that they do need to verify what's left.

      This has always been the case. We've just been deferring to 'truster organizations' a lot recently, without actually looking to see if they still warrant having our trust when they change over time.

      • layer8 3 days ago ago

        How can you verify most of anything if you can’t trust any writing (or photographs, audio, and video, for that matter)?

        • Frost1x 3 days ago ago

          Independent verification is always good however not always possible and practical. At complex levels of life we have to just trust underlying processes work, usually until something fails.

          I don’t go double checking civil engineers work (nor could I) for every bridge I drive over. I don’t check inspection records to make sure it was recent and proper actions were taken. I trust that enough people involved know what they’re doing with good enough intent that I can take my 20 second trip over it in my car without batting an eye.

          If I had to verify everything, I’m not sure how I’d get across many bridges on a daily basis. Or use any major infrastructure in general where my life might be at risk. And those are cases where it’s very important to be done right, if it’s some accounting form or generated video on the internet… I have even less time to be concerned from a practical standpoint. Having the skills to do it should I want or need to are good and everyone should have these but we’re at a point in society we really have to outsource trust in a lot of cases.

          This is true everywhere, even in science which these days many people just trust in ways akin to faith in some cases, and I don’t see anyway around that. The key being that all the information should exist to be able to independently verify something but from a practice standpoint it’s rarely viable.

        • lmm 2 days ago ago

          Get good at spotting inconsistencies. And pay attention to when something contradicts your own experience. Cultivate a wide range of experiences so that you have more domains where you can do this (this is a good idea anyway).

    • akudha 3 days ago ago

      I was listening to an interview few months ago (forgot the name). He is a prolific reader/writer and has a huge following. He mentioned that he only reads books that are at least 50 years old, so pre 70s. That sounds like a good idea now.

      Even ignoring the AI, if you look at the movies and books that come out these days, their quality is significantly lower than 30-40 years ago (on an average). Maybe people's attention spans and taste is to blame, or maybe people just don't have the money/time/patience to consume quality work... I do not know.

      One thing I know for sure - there is enough high quality material written before AI, before article spinners, before MFA sites etc. We would need multiple lifetimes to even scratch the surface of that body of work. We can ignore mostly everything that is published these days and we won't be missing much

      • eloisant 3 days ago ago

        I'd say it's probably survivor's bias. Bad books from the pre 70s are probably forgotten and no longer printed.

        Old books that we're still printing and are still talking about have stood the test of time. It doesn't mean that are no great recent books.

      • alwa 2 days ago ago

        Nassim Taleb famously argues that position, in his popular work Antifragile and elsewhere. I believe the theory is that time serves as a sieve: only works with lasting value can remain relevant through the years.

      • inkcapmushroom 2 days ago ago

        Completely disagree just from my own personal experience as a sci-fi reader. Modern day bestseller sci-fi novels fit right in with the old classics, and in many ways outshine them. I have read many bad obscure sci-fi books published from the 50's to today, most of them a dollar at the thrift store. There was never a time when writers were perfect and every published work was high quality, then or now.

      • LeroyRaz 2 days ago ago

        Aren't you worried about low quality interviews?!

        I only listen to interviews from 50 years ago (interviews that have stood the test of time), about books from 100 years ago. In fact, how am I reading this article? It's not 2074 yet?!

      • mvdtnz 2 days ago ago

        > if you look at the movies and books that come out these days, their quality is significantly lower than 30-40 years ago (on an average)

        I'm sorry but this is just nonsense.

    • jcd748 3 days ago ago

      Life is short and I like creating things. AI is not part of how I write, or code, or make pixel art, or compose. It's very important to me that whatever I make represents some sort of creative impulse or want, and is reflective of me as a person and my life and experiences to that point.

      If other people want to hit enter, watch as reams of text are generated, and then slap their name on it, I can't stop them. But deep inside they know their creative lives are shallow and I'll never know the same.

      • onemoresoop 3 days ago ago

        > If other people want to hit enter, watch as reams of text are generated, and then slap their name on it,

        The problem is this kind of content is flooding the internet. Before you know it becomes extremely hard to find non AI generated content...

        • jcd748 3 days ago ago

          I think we agree. I hate it, and I can't stop it, but also I definitely won't participate in it.

      • low_tech_love 3 days ago ago

        That’s super cool, and I hope you are right and that I am wrong and artists/creators like you will still have a place in the future. My fear is that your work turns into some kind of artesanal fringe activity that is only accessible to 1% of people, like Ming vases or whatever.

      • seniortaco 2 days ago ago

        That's true art, I love people like you. Technology can do a lot of things but it cannot give people or society principles, and without principles society fails.

    • flir 3 days ago ago

      I've been using it in my personal writing (combination of GPT and Claude). I ask the AI to write something, maybe several times, and I edit it until I'm happy with it. I've always known I'm a better editor than I am an author, and the AI text gives me somewhere to start.

      So there's a human in the loop who is prepared to vouch for those sentences. They're not 100% human-written, but they are 100% human-approved. I haven't just connected my blog to a Markov chain firehose and walked away.

      Am I still adding to the AI smog? idk. I imagine that, at a bare minimum, its way of organising text bleeds through no matter how much editing I do.

      • vladstudio 3 days ago ago

        you wrote this comment completely by your own, right? without any AI involved. And I read your comment feeling confident that it's truly 100% yours. I think this reader's confidence is what the OP is talking about.

        • flir 3 days ago ago

          I did. I write for myself mostly so I'm not so worried about one reader's trust - I guess I'm more worried that I might be contributing to the dead internet theory by generating AI-polluted text for the next generation of AIs to train on.

          At the moment I'm using it for local history research. I feed it all the text I can find on an event (mostly newspaper articles and other primary sources, occasionally quotes from secondary sources) and I prompt with something like "Summarize this document in a concise and direct style. Focus on the main points and key details. Maintain a neutral, objective voice." Then I hack at it until I'm happy (mostly I cut stuff). Analysis, I do the other way around: I write the first draft, then ask the AI to polish. Then I go back and forth a few times until I'm happy with that paragraph.

          I'm not going anywhere with this really, I'm just musing out loud. Am I contributing to a tragedy of the commons by writing about 18th century enclosures? Because that would be ironic.

          • ontouchstart 3 days ago ago

            If you write for yourself, whether you use generated text or not, (I am using the text completion on my phone typing this message), the only thing that matters is how it affects you.

            Reading and writing are mental processes (with or without advanced technology) that shape our collective mind.

    • noobermin 3 days ago ago

      When you're writing, how are you "missing out" if you're not using chatgpt??? I don't even understand how this can be unless what you're writing is already unnecessary such that you shouldn't need to write it in the first place.

      • jwells89 3 days ago ago

        I don’t get it either. Writing is not something I need that level of assistance with, and I would even say that using LLMs to write defeats some significant portion of the point of writing — by using LLMs to write for me I feel that I’m no longer expressing myself in the purest sense, because the words are not mine and do not exhibit any of my personality, tendencies, etc. Even if I were to train an LLM on my style, it’d only be a temporal facsimile of middling quality, because peoples’ styles evolve (sometimes quite rapidly) and there’s no way to work around all the corner cases that never got trained for.

        As you say, if the subject is worth being written about, there should be no issue and writing will come naturally. If it’s a struggle, maybe one should step back and figure out why that is.

        There may some argument for speed, because writing quality prose does take time, but then the question becomes a matter of quantity vs. quality. Do you want to write high quality pieces that people want to read at a slower pace or churn out endless volumes of low-substance grey goo “content”?

      • dotnet00 3 days ago ago

        LLMs are surprisingly capable editors/brainstorming tools. So, you're missing out in that you're being less efficient in editing.

        Like, you can write a bunch of text, then ask an LLM to improve it with minimal changes. Then, you read through its output and pick out the improvements you like.

        • jayd16 3 days ago ago

          But that's the problem. Unique, quirky mannerisms become polished out. Flaws are smoothed and over sharpened.

          I'm personally not as gloomy about it as the parent comments but I fear it's a trend that pushes towards a samey, mass-produced style in all writing.

          Eventually there will be a counter culture and backlash to it and then equilibrium in quality content but it's probably here to stay for anything where cost is a major factor.

          • dotnet00 3 days ago ago

            Yeah, I suppose that would be an issue for creative writing. My focus is mostly on scientific writing, where such mannerisms should be less relevant than precision, so I didn't consider that aspect of other kinds of writing.

        • slashdave 3 days ago ago

          And I the only one who doesn't even like automatic grammar checkers, because they are contributing to a single and uniformly bland style of writing? LLMs are just going to make this worse.

        • tourmalinetaco 3 days ago ago

          Sure, but Grammarly and similar have existed far before the LLM boom.

          • dotnet00 3 days ago ago

            That's a fair point, I only very recently found that LLMs could actually be useful for editing, and hadn't really thought much of using tools for that kind of thing previously.

    • walthamstow 3 days ago ago

      I've even grown to enjoy spelling and grammar mistakes - at least I know a human wrote it.

      • ipaio 3 days ago ago

        You can prompt/train the AI to add a couple of random minor errors. They're trained from human text after all, they can pretend to be as human as you like.

        • Applejinx 3 days ago ago

          Barring simple typos, human mistakes are erroneous intention from a single source. You can't simply write human vagaries off as 'error' because they're glimpses into a picture of intention that is perhaps misguided.

          I'm listening to a slightly wonky early James Brown instrumental right now, and there's certainly a lot more error than you'd get in sequenced computer music (or indeed generated music) but the force with which humans wrest the wonkiness toward an idea of groove is palpable. Same with Zeppelin's 'Communication Breakdown' (I'm doing a groove analysis project, ok?).

          I can't program the AI to have intention, nor can you. If you do, hello Skynet, and it's time you started thinking about how to be nice to it, or else :)

        • eleveriven 3 days ago ago

          Making it feel like there's no reliable way to discern what's truly human

          • vouaobrasil 3 days ago ago

            There is. Be vehemently against AI, put 100% AI free in your work. The more consistent you are against AI, the more likely people will believe you. Write articles slamming AI. Personally, I am 100% against AI and I state that loud and clear on my blogs and YouTube channel. I HATE AI.

            • jaredsohn 3 days ago ago

              Hate to tell you but there is nothing stopping people using AI from doing the same thing.

              • vouaobrasil 3 days ago ago

                AI cannot build up a sufficient level of trust, especially if you are known in person by others who will vouch for you. That web of trust is hard to break with AI. And I am one of those.

                • danielbln 3 days ago ago

                  Are you including transformer based translation models like Google Translate or Deepl in your categorical AI rejectio?

        • vasco 3 days ago ago

          The funny thing is that the things it refuses to say are "wrong-speech" type stuff, so the only things you can be more sure of nowadays are conspiracy theories and other nasty stuff. The nastier the more likely it's human written, which is a bit ironic.

          • matteoraso 3 days ago ago

            No, you can finetune locally hosted LLMs to be nasty.

            • slashdave 3 days ago ago

              Maybe the future of creative writing is fine tuning your own unique form of nastiness

          • Jensson 3 days ago ago

            > The nastier the more likely it's human written, which is a bit ironic.

            This is as everything else, machine produced has a flawlessness along some dimension that humans tend to lack.

      • Gigachad 3 days ago ago

        There was a meme along the lines of people will start including slurs in their messages to prove it wasn’t AI generated.

        • jay_kyburz 3 days ago ago

          A few months ago, I tried to get Gemini to help me write some criticism of something. I can't even remember what it was, but I wanted to clearly say something was wrong and bad.

          Gemini just could not do it. It kept trying to avoid being explicitly negative. It wanted me to instead focus on the positive. I think it evidently just told me no, and that it would not do it.

          • Gigachad 3 days ago ago

            Yeah all the current tools have this particular brand of corporate speech that’s pretty easy to pick up on. Overly verbose, overly polite, very vague, non assertive, and non opinionated.

            • stahorn 3 days ago ago

              Next big thing: AI that writes as British football hooligans talk about the referee after a match where their team lost?

        • dijit 3 days ago ago

          I mean, it's not a meme..

          I included a few more "private" words than I should and I even tried to narrate things to prove I wasn't an AI.

          https://blog.dijit.sh/gcp-the-only-good-cloud/

          Not sure what else I should do, but it's pretty clear that it's not AI written (mostly because it's incoherent) even without grammar mistakes.

          • bloak 3 days ago ago

            I liked the "New to AWS / Experienced at AWS" cartoon.

      • faragon 3 days ago ago

        People could prompt for authenticity, adding subtle mistakes, etc. I hope that AI as a whole will help people writing better, if reading back the text. It is a bit like "The Substance" movie: a "better" version of ourselves.

      • 1aleksa 3 days ago ago

        Whenever somebody misspells my name, I know it's legit haha

        • sseagull 3 days ago ago

          Way back when we had a landline and would get telemarketers, it was always a sign when the caller couldn’t pronounce our last name. It’s not even that uncommon a name, either

      • fzzzy 3 days ago ago

        Guess what? Now the computers will learn to do that so they can more convincingly pass a turing test.

      • redandblack 3 days ago ago

        yesss. my thought too. All the variations of English should not lost.

        I enjoyed all the belter dialogue in the expanse

      • oneshtein 3 days ago ago

        > Write a response to this comment, make spelling and grammar mistakes.

        yeah well sumtimes spellling and grammer erors just make thing hard two read. like i no wat u mean bout wanting two kno its a reel person, but i think cleear communication is still importint! ;)

    • bryanrasmussen 3 days ago ago

      >If you write regularly and you're not using AI, you simply cannot keep up with the competition. You're out. And the growing consensus is "why shouldn't you?", there is no escape from that.

      Are you sure you don't mean if you write regularly in one particular subclass of writing - like technical writing, documentation etc.? Do you think novel writing, poetry, film reviews etc. cannot keep up in the same way?

      • PeterisP 2 days ago ago

        I think that novel writing and reviews are types of writing where potentially AI should eventually surpass human writers, because they have the potential to replace content skillfully tailored to be liked by many people with content that's tailored (perhaps less skillfully) explicitly for a specific very, very, very narrow niche of exactly you and all the things that happen to work for your particular biases.

        There seems to be an upcoming wave of adult content products (once again, being on the bleeding edge users of new abilities) based on this principle, as hitting very specific niches/kinks/fetishes can be quite effective in that business, but it should then move on to romance novels and pulp fiction and then, over time, most other genres.

        Similarly, good pedagogy, curriculum design and educational content development is all about accurately modeling which exact bits of the content the target audience will/won't know, and explaining the gaps with analogies and context that will work for them (for example, when adapting a textbook for a different country, translation is not sufficient; you'd also need to adapt the content). In that regard, if AI models can make personalized technical writing, then that can be more effective than the best technical writing the most skilled person can make addressed to a broader audience.

      • t-3 3 days ago ago

        I'm absolutely positive that the vast majority of fiction is or will soon be written by LLM. Will it be high-quality? Will it be loved and remembered by generations to come? Probably not. Will it make money? Probably more than before on average as the author's effort is reduced to writing outlines and prompts, and editing the generated-in-seconds output, rather than months-years of doing the writing themselves.

    • edavison1 3 days ago ago

      >If you write regularly and you're not using AI, you simply cannot keep up with the competition. You're out.

      A very HN-centric view of the world. From my perch in journalism and publishing, elite writers absolutely loathe AI and almost uniformly agree it sucks. So to my mind the most 'competitive' spheres in writing do not use AI at all.

      • DrillShopper 3 days ago ago

        It doesn't matter how elite you think you are if the newspaper, magazine, or publishing company you write for can make more money from hiring people at a fraction of your cost and having them use AI to match or eclipse your professional output.

        At some point the competition will be less about "does this look like the most skilled human writer wrote this?" and more about "did the AI guided by a human for a fraction of the cost of a skilled human writer output something acceptably good for people to read it between giant ads on our website / watch the TTS video on YouTube and sit through the ads and sponsors?", and I'm sorry to say, skilled human writers are at a distinct disadvantage here because they have professional standards and self respect.

        • edavison1 3 days ago ago

          So is the argument here that the New Yorker can make more money from AI slop writing overseen by low-wage overseas workers? Isn't that obviously not the case?

          Anyway I think I've misunderstood the context in which we're using the word 'competition' here. My response was about attitudes toward AI from writers at the tip-top of the industry rather than profit maxxing/high-volume content farm type places.

          • low_tech_love 2 days ago ago

            It’s not that black and white. Maybe 1% of the top writers can take that stance and maybe even charge more for their all-human content (in a kind of vintage, handcraft kind of way) but the other 99% will have to adapt.

            It’s simply more nuanced. If you’re writing a couple of articles a day to pay for your bills, what will stop you from writing actually 10 or 20 articles a day instead?

        • goatlover 3 days ago ago

          So you're saying major media companies are going to outsource their writing to people overseas using LLMs? There is more to journalism than the writing. There's also the investigative part where journalists go and talk to people, look into old records, etc.

          • edavison1 2 days ago ago

            This has become such a talking point of mine when I'm inevitably forced to explain why LLMs can't come for my job (yet). People seem baffled by the idea that reporting collects novel information about the world which hasn't been indexed/ingested at any point because it didn't exist before I did the interview or whatever it is.

            • lainga 2 days ago ago

              People in meatspace are not (in James C. Scott's sense) legible to HN's user base, and never will be.

          • PeterisP 2 days ago ago

            They definitely try to replace part of the people this way, starting with the areas where it's the easiest, but obviously it will continue to other people as the capabilities improve. A big example is sports journalism, where lots of venues have game summaries that do not involve any human who actually saw the game, but rather software embellishing some narrative from the detailed referee scoring data. Another example is autotranslation of foreign news or rewriting press releases or summarizing company financial 'news' - most publishers will eagerly skip the labor intensive and thus expensive part where journalists go and talk to people, look into old records, etc, if they can get away with that.

        • easterncalculus 3 days ago ago

          Exactly. Also, if the past few years is any indication, at the very least tech journalists in general tend to love to use what they hate.

      • fennecfoxy 3 days ago ago

        Yes, but what really matters is what and how the general public, aka the consumers want to consume.

        I can bang on about older games being better all day long but it doesn't stop Fortnite from being popular, and somewhat rightly so, I suppose.

      • low_tech_love 2 days ago ago

        Will they maintain that stance when it gets into their pockets? I doubt it. If the public is not minding the difference, why would they?

      • jayd16 3 days ago ago

        Sure but no one gets to avoid all but the most elite content. I think they're bemoaning the quality of pulp.

      • lurking_swe 2 days ago ago

        i regularly (at least once a week) spot a typo or grammatical issue in a major news story. I see it in the NYTimes on occasion. I see it in local news ALL THE TIME. I swear an LLM would write better than have the idiots that are cranking out articles.

        I agree with you that having elite writing skills will be useful for a long time. But the bar for proof reading seems to be quite low on average in the industry. I think you overestimate the writings skills of your average journalist.

        • seniortaco 2 days ago ago

          Heh, when I see a spelling error in a news article.. I oddly feel like I can trust it more because it came from a human being. It's like a nugget of gold.

    • sandworm101 3 days ago ago

      >> cannot trust anything that has been written in the past 2 years or so and up until the day that I die.

      You never should have. Large amounts of work, even stuff by major authors, is ghostwritten. I was talking to someone about Taylor Swift recently. They thought that she wrote all her songs. I commented that one cannot really know that, that the entertainment industry is very going at generating seemingly "authentic" product at a rapid pace. My colleague looked at me like I had just killed a small animal. The idea that TS was "genuine" was a cornerstone of their fandom, and my suggestion had attacked that love. If you love music or film, don't dig too deep. It is all a factory. That AI is now part of that factory doesn't change much for me.

      Maybe my opinion would change if I saw something AI-generated with even a hint of artistic relevance. I've seen cool pictures and passable prose, but nothing so far with actual meaning, nothing worthy of my time.

      • WalterBright 3 days ago ago

        Watch the movie "The Wrecking Crew" about how a group of studio musicians in the 1970s were responsible for the albums of quite a few diverse "bands". Many bands had to then learn to play their own songs so they could go on tour.

        • selimthegrim 2 days ago ago

          Or the SCTV skit about Michael McDonald backing seemingly everything at one point

      • davidhaymond 2 days ago ago

        While I do enjoy some popular genres, I'm all too aware of the massive industry behind it all. I believe that most of humanity's greatest works of art were created not for commercial interests but rather for the pure joy of creation, of human expression. This can be found in any genre if you look hard enough, but it's no accident that the music I find the most rewarding is classical music: Intellect, emotion, spirit, and narrative dreamed into existence by one person and then brought to life by other artists so we can share in its beauty.

        I think music brings about a connection between the composers, lyricists, performers, and listeners. Music lets us participate in something uniquely human. Replacing any of the human participants with AI greatly diminishes or eliminates its value in my eyes.

      • nyarlathotep_ 3 days ago ago

        > You never should have. Large amounts of work, even stuff by major authors, is ghostwritten.

        I'm reminded of 'Under The Silver Lake' with this reference. Strange film, but that plotline stuck with me.

    • ks2048 3 days ago ago

      > If you write regularly and you're not using AI, you simply cannot keep up with the competition.

      Is that true today? I guess it depends what kind of writing you are talking about, but I wouldn't think most successful writers today - from novelests to tech bloggers - rely that much on AI, but I don't know. Five years from now, could be a different story.

      • bigstrat2003 2 days ago ago

        It's not true at all. Much like the claims that you have to use LLMs to keep up in programming: if that is true then you weren't a good programmer (or writer in this case) to begin with.

        • low_tech_love 2 days ago ago

          That is absolutely wrong. Regardless of whether you were or not good to begin with, an LLM assistant will still accelerate a lot of repetitive tasks for you. Repetitive is repetitive, no matter if you’re John Carmack or the guy sitting in the booth next to you in the paper company. And anyway in a few years none of it will matter because programming without assistance will be a vintage thing of the past (like perforated cards).

          It’s the same with writing. If you find yourself writing a boring introduction section to a paper with a bunch of meaningless blabla, then why wouldn’t you use AI for that? There is simply no good reason, especially when you see mediocre researchers publishing at three times your rate and getting promoted over you.

        • davidgerard 2 days ago ago

          yeah, this is just critihype

      • theshackleford 3 days ago ago

        Yes it’s true today, depending on what is your writing is the foundation of.

        It doesn’t matter that my writing is more considered, more accurate and of a higher quality when my coworkers are all openly using AI to perform five times the work I am and producing outcomes that are “good enough” because good enough is quite enough for a larger majority than many likely realise.

    • _heimdall 3 days ago ago

      > Now, I'm not going to criticize anyone that does it, like I said, you have to, that's it.

      Why do you say people have to do it?

      People absolutely can choose not to use LLMs and to instead write their own words and thoughts, just like developers can simply refuse to build LLM tools, whether its because they have safety concerns or because they simply see "AI" in its current state as a doomed marketing play that is not worth wasting time and resources on. There will always be side effects to making those decisions, but its well within everyone's right to make them.

      • low_tech_love 2 days ago ago

        If you find yourself in a situation where you write one book at the same time as your peers are writing ten, how can you keep up? Also how can you justify to yourself not using it if nobody around you seems to value that, and even worse, is pushing you to actually use it? I find it hard to find a reason why you would. Unless we see a super strong reader revolution that collectively decides to shun AI and pay more money for all-human books.

        Read that last sentence and tell you think that’s reasonable and likely to happen?

        • _heimdall 2 days ago ago

          What you're describing here is actually a much more broad problem in the book industry, in my opinion. Almost every book written today is written with only one goal in mind, selling as many copies as possible.

          People don't have to use LLMs (they don't seem to be AI yet) because they can simply choose not to. For authors, write books that you want to write because you believe you have a story to tell. Worry about perfecting your stories and enjoy the process of writing, don't be an author just for the sales. Once you peek behind the curtain and learn the economics of the book industry, you'll realize there's very little opportunity for making enough cash to even worry about shotgunning a mountain of LLM books into the world anyway.

      • DrillShopper 3 days ago ago

        > Why do you say people have to do it?

        Gotta eat, yo

        • goatlover 2 days ago ago

          Somehow people made enough to eat before LLMs became all the rage a couple years ago. I suspect people are still making enough to eat without having to use LLMs.

    • lokimedes 3 days ago ago

      I get two associations from your comment: One about how AI being mainly used to interpolate within a corpus of prior knowledge, seems like entropy in a thermodynamical sense. The other, how this is like the Tower of Babel but where distrust is sown by sameness rather than differences. In fact, relying on AI for coding and writing, feels more like channeling demonic suggestions than anything else. No wonder we are becoming skeptical.

      • seniortaco 2 days ago ago

        I know and at my company we actually cannot disable the AI suggestions :(

        It's like dealing with a pathological liar who produces very convincing looking code for me to review, but actually there is a bug in it 50% of the time. It's like it's trying to trick me into committing bugs.

    • t43562 3 days ago ago

      It empowers people to create mountains of shit that they cannot distinguish from shit - so they are happy.

    • wickedsight 3 days ago ago

      With a friend, I created a website about a race track in the past two years. I definitely used AI to speed up some of writing. One thing I used it for was a track guide, describing every corner and how to drive it. It was surprisingly accurate, most of the time. The other times though, it would drive the track backwards, completely hallucinate the instructions or link corners that are in different parts of the track.

      I spent a lot of time analyzing the track myself and fixed everything to the point that experienced drivers agreed with my description. If I hadn't done that, most visitors would probably still accept our guide as the truth, because they wouldn't know any better.

      We know that not everyone cares about whether what they put on the internet is correct and AI allows those people to create content at an unprecedented pace. I fully agree with your sentiment.

    • vouaobrasil 3 days ago ago

      > If you write regularly and you're not using AI, you simply cannot keep up with the competition.

      Wrong. I am a professional writer and I never use AI. I hate AI.

      • low_tech_love 2 days ago ago

        Do you feel any pressure from your environment? Like seeing other authors publishing at a much faster pace than you?

        • vouaobrasil 2 days ago ago

          Well, at the business that I work for, I convinced everyone to have an anti-AI 100% free policy, although that wasn't too hard because no one there was ever enthusiastic about AI. Plus, we don't publish generic SEO stuff but real gear testing, real experience, so AI goes against that.

    • paganel 3 days ago ago

      You kind of notice the stuff written with AI, it has a certain something that makes it detectable. Granted, stuff like the Reuters press reports might have already been written by AI, but I think that in that case it doesn’t really matter.

    • osigurdson 3 days ago ago

      AI expansion: take a few bullet points and have ChatGPT expand it into several pages of text

      AI compression: take pages of text and use ChatGPT to compress into a few bullet points

      We need to stop being impressed with long documents.

      • low_tech_love 2 days ago ago

        That’s the ideal way forward in my opinion. My optimistic view of the future is one where we get so fed up of the noise that we only write what is absolutely necessary, because anything more than that is AI-generated.

      • fennecfoxy 3 days ago ago

        The foundations of our education systems are based on rote memorisation so I'd probably start there.

    • munksbeer 3 days ago ago

      > but it's still depressing, to be honest.

      Cheer up. Things usually get better, we just don't notice it because we're so consumed with extrapolating the negatives. Humans are funny like that.

      • vundercind 3 days ago ago

        It’s fairly common for (at least) specific things to get worse and then never improve again.

      • vouaobrasil 3 days ago ago

        I actually disagree with that. People are so busy hoping things will get better, and creating little bubbles for themselves to hide away from what human beings as a whole are doing, that they don't realize things are getting worse. Technology constantly makes things worse. Cheering up is a good self-help strategy but not a good strategy if you want to contribute to making the world actually a better place.

        • munksbeer 3 days ago ago

          >Technology constantly makes things worse.

          And it also makes things a lot better. Overall we lead better lives than people just 50 years ago, never mind centuries.

          • vouaobrasil 3 days ago ago

            No way. Life 50 years ago was better for MANY. Maybe that would be true for 200. But 50 years ago was the 70s. There were far fewer people, and the world was not starting to suffer from climate change. Tell your statement to any climate refugee, and ask them whether they'd like to live now or back then.

            AND, we had fewer computers and life was not so hectic. YES, some things have gotten better, but on average? It's arguable.

            • __turbobrew__ 2 days ago ago

              Maybe life was better 50 years ago if you were a white salaryman living in the USA, but outside of that things have improved for everyone else.

              Most objective measurements of basic quality of life have improved: life expectancy, food security, education, etc.

            • samcat116 2 days ago ago

              There are an incredible amount of ways that life is better today than 50 years ago. For starters the life expectancy has almost universally improved.

              • vouaobrasil 2 days ago ago

                Not necessarily a good thing if overall life experience is worse.

            • munksbeer 2 days ago ago

              I think you're demonstrating the point I was trying to make. You're falling for a very prevalent narrative that just isn't true.

              Fact: Life has improved for the majority of people on the planet in the last 50 years.

    • ChrisMarshallNY 3 days ago ago

      I don't use AI in my own blogging, but then, I don't particularly care whether or not someone reads my stuff (the ones that do, seem to like it).

      I have used it, from time to time, to help polish stuff like marketing fluff for the App Store, but I'd never use it verbatim. I generally use it to polish a paragraph or sentence.

      But AI hasn't suddenly injected untrustworthy prose into the world. We've been doing that, for hundreds of years.

      • notarobot123 3 days ago ago

        I have my reservations about AI but it's hard not to notice that LLMs are effectively a Gutenberg level event in the history of written communication. They mark a fundamental shift in our capacity to produce persuasive text.

        The ability to speak the same language or understand cultural norms are no longer barriers to publishing pretty much anything. You don't have to understand a topic or the jargon of any given domain. You don't have to learn the expected style or conventions an author might normally use in that context. You just have to know how to write a good prompt.

        There's bound to be a significant increase in the quantity as well as the quality of untrustworthy published text because of these new capacities to produce it. It's not the phenomenon but the scale of production that changes the game here.

      • layer8 3 days ago ago

        > marketing fluff for the App Store

        If it’s fluff, why do you put it there? As an App Store user, I‘m not interested in reading marketing fluff.

        • ChrisMarshallNY 3 days ago ago

          Because it’s required?

          I’ve released over 20 apps, over the years, and have learned to add some basic stuff to each app.

          Truth be told, it was really sort of a self-deprecating joke.

          I’m not a marketer, so I don’t have the training to write the kind of stuff users expect on the Store, and could use all the help I can get

          Over the years, I’ve learned that owning my limitations, can be even more important, than knowing my strengths.

          • layer8 3 days ago ago

            My point was that as a user I expect substance, not fluff. Some app descriptions actually provide that, but many don’t.

            • ChrisMarshallNY 3 days ago ago

              Well, you can always check out my stuff, and see what you think. Easy to find.

      • low_tech_love 2 days ago ago

        When I wrote about trust, I see that I made a mistake: most people seem to have understood it as being in regard to fake things. I just meant trust as in, it’s not AI-generated.

        Your comment about the fluff is exactly what I mean. I read some fluff that is AI-generated and some kind of disgust happens in my stomach, and I wish there was nothing written there at all. I just feel like it’s be best to not read anything than read something that’s AI-generated… it’s almost like the author is trying to trick me with a fake version of reality. I wonder if there’s such a thing as the uncanny valley for text?

        • ChrisMarshallNY 2 days ago ago

          Well, I apologize for using the word “fluff.” That was a mistake.

          As a lifelong engineer, I “grew up” with a somewhat antagonistic relationship with Marketing, so became used to disparaging their work, even if I had to change hats, myself, and act in a Marketing capacity.

          I should have probably used the word “copy,” instead.

          But you have a good point.

          I think that one “legitimate” use for AI-generated text, will be for non-native speakers of a language, using it to correct their vocabulary.

          For things like patents and papers, this is probably a good thing. AI can generate clear, concise vernacular. I often specify the reading level, in my prompts (usually tenth grade), so that the prose is accessible.

          For things like presentation proposals; not so much. You may get a proposal that reads like it was written by an English professor, and the actual presentation is barely comprehensible.

    • dijit 3 days ago ago

      Agreed, I feel like there's an inherent nobility in putting effort into something. If I took the time to write a book and have it proof-read and edited and so on: perhaps it's actually worth my time.

      Lowering the bar to write books is "good" but increases the noise to signal ratio.

      I'm not 100% certain how to give another proof-of-work, but what I've started doing is narrating my blog posts - though AI voices are getting better too.. :\

      • vasco 3 days ago ago

        > Agreed, I feel like there's an inherent nobility in putting effort into something. If I took the time to write a book and have it proof-read and edited and so on: perhaps it's actually worth my time.

        Said the scribe upon hearing about the printing press.

        • dijit 3 days ago ago

          I'm not certain what statement you're implying, but yes, accessibility of bookwriting has definitely decreased the quality of books.

          Even technical books like Hardcore Java: https://www.oreilly.com/library/view/hardcore-java/059600568... are god-awful, and even further away from the seminal texts on computer science that came before.

          It does feel like authorship was once heralded in higher esteem than it deserves today.

          Seems like people agree: https://www.reddit.com/r/books/comments/18cvy9e/rant_bestsel...

        • yoyohello13 2 days ago ago

          It's true though. Books hand written and illustrated by scribes were astronomically higher quality than mass printed books. People just tend to prefer what they have access to, and cheap/low quality is easy to access.

    • cookingrobot 2 days ago ago

      Idea: we should make sure we keep track of what the human created content is, so that we don’t get confused by AI edits of everything in the future.

      For ex, calculate the hash of all important books, and publish that as the “historical authenticity” check. Put the hashes on some important blockchain so we know it’s unchanged over time.

    • neta1337 3 days ago ago

      Why do you have to use it? I don’t get it. If you write your own book, you don’t compete with anyone. If anyone finished The Winds of Winter for R.R.Martin using AI, nobody would bat an eye, obviously, as we already experienced how bad a soulless story is that drifts too far away from what the author had built in his mind.

    • yusufaytas 3 days ago ago

      I totally understand your frustration. We started writing our book long before(2022) AI became mainstream, and when we finally published it on May 2024, all we hear now is people asking if it's just AI-generated content. It’s sad to see how quickly the conversation shifts away from the human touch in writing.

      • eleveriven 3 days ago ago

        I can imagine how disheartening that must be

    • wengo314 3 days ago ago

      i think the problem started when quantity became more important over quality.

      you could totally compete on quality merit, but nowadays the volume of output (and frequency) is what is prioritized.

    • hyggetrold 3 days ago ago

      > The most depressing thing for me is the feeling that I simply cannot trust anything that has been written in the past 2 years or so and up until the day that I die.

      This has nearly always been true. "Manufacturing consent" is way older than any digital technology.

      • unshavedyak 3 days ago ago

        Agreed. I also suspect we've grown to rely on the crutch of trust far too much. Faulty writing has existed for ages but now suddenly because the computer is the thing making it up we have an issue with it.

        I guess it depends on scope. I'm imaging scientific or education. Ie things we probably shouldn't have relied on Blogs to facilitate, yet we did. For looking up some random "how do i build a widget?", yea AI will probably make it worse. For now. Then it'll massively improve to the point that it's not even worth asking how to build the widget.

        The larger "scientific or education" is what i'm concerned about, and i think we'll need a new paradigm to validate. We've been getting attacked on this front for 12+ years, AI is only bringing this to light imo.

        Trust will have to be earned and verified in this word-soup world. I just hope we find a way.

        • hyggetrold 3 days ago ago

          IMHO AI tools will (or at least should!) fundamentally change the way the education system works. AI tools are - from a certain point of view - really just a scaled version of AI now can put at our fingertips. Paradoxically, the more AI can do "grunt work" the more we need folks to be educated on the higher-level constructs on which they are operating.

          Some of the bigger issues you're raising I think have less to do with technology and more to do with how our economic system is currently structured. AI will be a tremendous accelerant, but are we sure we know where we're going?

    • itsTyrion 2 days ago ago

      Same goes for art. No longer can you see art on social media, press like and maybe leave a nice comment, you need to fricking pixel peep for artifacts as it's becoming less obvious.

      • low_tech_love 2 days ago ago

        Exactly. And it makes sense: think about how much one would have to pay for an artist to do the same? It’s simply inconceivable. The convergence is clear: we just won’t care about anything anymore after a while. Writing, art, whatever; it’ll all turn into noise.

    • BeFlatXIII 3 days ago ago

      > If you write regularly and you're not using AI, you simply cannot keep up with the competition. You're out.

      Only if you're competing on volume.

      • low_tech_love 2 days ago ago

        Who isn’t? Can you rely on your readers to rebel against AI and value more your writing than that of your peers? If one of your competitors write 10 articles while you write 1, and charge 10% of what you charge, they already equaled your wins. Raise it to 20% and they are already making twice what you make. Can you really trust that your writing is so good and so incredibly special that someone would be willing to pay 10x more for your content, against one that was assisted by the absolute state of the art of a technology that has revolutionized the world?

        • BeFlatXIII a day ago ago

          The curation is part of the value.

    • davidgerard 2 days ago ago

      > If you write regularly and you're not using AI, you simply cannot keep up with the competition. You're out.

      This statement strikes me (a writer) as ridiculous. LLM slop is sloppy. I would expect anyone who reads a reasonable amount to spot it immediately.

      Are you saying you are literally unable to distinguish LLM cliches?

    • uhtred 3 days ago ago

      To be honest I got sick of most new movies, TV shows, music even before AI so I will continue to consume media from pre 2010 until the day I die and will hope I don't get through it all.

      Something happened around 2010 and it all got shit. I think everyone becoming massively online made global cultural output reduce in quality to meet the interests of most people and most people have terrible taste.

    • fennecfoxy 3 days ago ago

      Why does a human being behind any words change anything at all? Trust should be based on established facts/research and not species.

      • bloak 3 days ago ago

        A lot of communication isn't about "established facts/research"; it's about someone's experience. For example, if a human writes about their experience of using a product, perhaps a drug, or writes what they think about a book or a film, then I might be interested in reading that. When they write using their own words I get some insight into how they think and what sort of person they are. I have very little interest in reading an AI-generated text with similar "content".

      • goatlover 2 days ago ago

        An LLM isn't even a species. I prefer communicating with other humans, unless I choose to interact with an LLM. But then I know that it's a text generator and not a person, even when I ask it to act like a person. The difference matters to most humans.

    • beefnugs 2 days ago ago

      Just add more swearing and off color jokes to everything you do and say. If there is one thing we know for sure its that the corporate AIs will never allow dirty jokes.

      (it will get into the dark places like spam though, which seems dumb since they know how to make meth instead, spend time on that you wankers)

    • th3byrdm4n 2 days ago ago

      Honestly, I've developers saying the same thing about IDEs and high level languages.

      This new generation of tools add efficiency the same way IntelliJ added efficiency on top of Eclipse which added efficiency on top of Emacs/VI/Notepad/etc.

      The more time that someone can focus on the systemsit takes certain types of high-time, [not domain problem specific] skill processes and obfuscated it away so the developer can focus on the most critical aspects of the software.

      Yes, sometimes generators do the wrong thing, but it's usually obvious/quick to correct.

      Cost of occasional correction is much less than the time to scaffold every punchcard.

    • CuriouslyC 2 days ago ago

      A lot of writers using AI use it to create outlines of a chapter or scene then flesh it out by hand.

    • tim333 3 days ago ago

      I'm not sure it's always that hard to tell the AI stuff from the non AI. Comments on HN and on twitter from people you follow are pretty much non AI, also people on youtube where you an see the actual human talking.

      On the other hand there's a lot on youtube for example that is obviously ai - weird writing and speaking style and I'll only watch those if I'm really interested in the subject matter and there aren't alternatives.

      Maybe people will gravitate more to the stuff like PaulG or Elon Musk on twitter or HN and less to blog style content?

    • mulmen 2 days ago ago

      > The most depressing thing for me is the feeling that I simply cannot trust anything that has been written in the past 2 years or so and up until the day that I die.

      Today is September 11349, 1993

    • jshdhehe 3 days ago ago

      AI only helps writing in so far as checking/suggesting edits. Most people can write better than AI (more engaging). AI cant tell a human story, have real tacit experience.

      So it is like saying my champaigne bottle cant keep up with the tap water.

    • 3 days ago ago
      [deleted]
    • limit499karma 3 days ago ago

      I'll take your statement that your conclusions are based on a 'depressed mind' at face value, since it is so self-defeating and places little faith in Human abilities. Your assumption that a person driven to write will "with a high degree of certainty" also mix up their work with a machine assistant can only be informed by your own self-assessment (after all how could you possibly know the mindset of every creative human out there?)

      My optimistic and enthusiastic view of AI's role in Human development is that it will create selection pressures that will release the dormant psychological abilities of the species. Undoubtedly, wide-spread appearance of Psi abilities will be featured in this adjustment of the human super-organism to technologies of its own making.

      Machines can't do Psi.

    • jwuice 2 days ago ago

      i would change to: if you do ANYTHING online and you're not using AI, you simply cannot keep up with the competition. you're out.

      it's depressing.

    • amelius 3 days ago ago

      Funny thing is that people will also ask AI to __read__ stuff for them and summarize it.

      So everything an AI writes will eventually be nothing more than some kind of internal representation.

    • FrustratedMonky 2 days ago ago

      Maybe this will push people back to reading old paper books?

      There could be resurgence in reading the classics, on paper, since we know they are not AI.

    • datavirtue 3 days ago ago

      It's either good or it isn't. It either tracks or it doesn't. No need to befuddle your thoughts over some perceived slight.

    • LeroyRaz 2 days ago ago

      Your take seems hyperbolic.

      Until LLMs exceed the very best of human quality there will be human content in all forms of media. This claims follows because there is always (some) demand for top quality content.

      I agree that many writers might use LLMs as a tool, but good writers who care about quality will ensure that such use is not detrimental (e.g., using the LLM to identify errors rather than having it draft copy).

      • njarboe 2 days ago ago

        Will that happen in 1,2,5,10 or never years?

        • LeroyRaz 2 days ago ago

          I mean if AI output exceeds human quality then all humans will be redundant. So it would then be quite a brave new world!

          My point is that I do not agree that LMM output will degrade all media (as there is always a demand for top quality content). So we either have bad LLM output and then people who care about quality avoiding such works. Or good LLM output and hopefully some form of post scarcity society (e.g., Iain Bank's Culture Novels).

    • EGreg 3 days ago ago

      I have been predicting this from 2016

      And I also predict that many responses to you will say “it was always that way, nothing changed”.

    • InDubioProRubio 3 days ago ago

      Just dont be average and your fine.

    • greenie_beans 3 days ago ago

      i know a lot of writers who don't use ai. in fact, i can't think of any writers who use it, except a few literary fiction writers.

      working theory: writers have taste and LLM writing style doesn't match the typical taste of a published writer.

    • FrankyHollywood 3 days ago ago

      I have never read more bullshit in my life than during the corona pandemic, all written by humans. So you should never trust something you read, always question the source and it's reasoning.

      At the same time I use copilot on a daily basis, both for coding as well as the normal chat.

      It is not perfect, but I'm at a point I trust AI more than the average human. And why shouldn't I? LLMs ingest and combine more knowledge than any human can ever do. An LLM is not a human brain but it's actually performing really well.

    • alwa 2 days ago ago

      People like you, the author, and me all share this sentiment. It motivates us to seek out authentic voices and writing that’s associated with specific humans.

      The commodity end of the writing market may well have been automated, but was that really the kind of writing you or the author or I ever sought out in the first place?

      I can get mass-manufactured garments from Shein if I want, but I can also still find tailors locally if it’s worth it to me. I can buy IKEA or I can still save up for something made out of real wood. I can “shoot a cinematic digital film” on my iPhone but the cineplex remains in business and the art film folks are still doing their scrappy thing (and still moaning about its economics). I can lap up slop from an academic paper mill journal or I can identify who’s doing the thinking in a field and read what they’re writing or saying.

      And the funny thing is that none of those human-scale options commands all that much of a premium in the scheme of things. There may be less human-scald work to go around and thus fewer small enterprises plying a specific trade, but any given one of them just has to put food on the table for a number of humans roughly proportional to the same level of output as always.

      It seems to me that there’s no special virtue in the specific form that the mass publishing market took over the last century or however long: my local grocery store chain’s division producing weekly newspaper circulars probably employed more people than J Peterman has. But there was and remains a place for quality. If anything—as you point out—the AI schlock has sensitized us to the value we place on a human voice. And at some level, once people notice that they miss that quality, isn’t there a sense in which they become more willing to seek it out and pay for it if necessary?

    • eleveriven 3 days ago ago

      Maybe, over time, there will also be a renewed appreciation for authenticity

    • williamcotton 3 days ago ago

      Well we’re going to need some system of PKI that is tied to real identities. You can keep being anonymous if you want but I would prefer not and prefer to not interact with the anonymous, just like how I don’t want to interact with people wearing ski masks.

      • flir 3 days ago ago

        I doubt that's possible. I can always lend my identity to an AI.

        The best you can hope for is not "a human wrote this text", it's "a human vouched for this text".

      • nottorp 3 days ago ago

        Why are you posting on this forum where the user's identity isn't verified by anyone then? :)

        But the real problem is that having the poster's identity verified is no proof that their output is not coming straight from a LLM.

        • williamcotton 3 days ago ago

          I don’t really have a choice about interacting with the anonymous at this point.

          It certainly will affect the reputation of people that are consistently publishing untruths.

          • nottorp 3 days ago ago

            > It certainly will affect the reputation of people that are consistently publishing untruths.

            Oh? I thought there are a lot of very well identified people making a living from publishing untruths right now on all social media. How would PKI help, when they're already making it very clear who they are?

    • seniortaco 2 days ago ago

      +1, and to put more simply, AI as we know it today makes zero guarantees about its accuracy. That's pretty insane for a "tool" to make no guarantees about being correct in any way for any purpose.

      A spellchecker makes guarantees about accuracy. So does a calculator. Broad, sweeping guarantees.

      Imagine if we built a tool that could automatically do all the electrical work in a new home from the breaker box to every outlet, and it could do it in 2 hours. However, what if that tool made no guarantees about its ability to meet electrical code? Would people use it anyway? Of course they would. Many dangerous errors would slip through inspection and many more house fires would ensue as a result.

    • verisimi 3 days ago ago

      You're lucky. I consider it a possibility that older works (even ancient writings) are retrojected into the historical record.

    • m463 2 days ago ago

      I think ... between now and the day you die... you'll get your personal AI to read things for you. It will analyze what's been written, check any arguments for fallacious reasoning, and look up related things for background and omissions that may support or negate things.

      It is actually happening now.

      I've noticed amazon reviews have an AI summary at the top, reading the reviews for you and even pointing out shortcomings.

      • phatfish 2 days ago ago

        I've seen "summarise this" and "explain this code" buttons added to technical documentation. This works reasonably well for most common situations, which is probably the reason it's one of the few "production" uses for LLMs. I didn't know Amazon was using it though.

        Microsoft has a note on some of their documentation, something like; "this article was written with the help of an AI and edited by a human".

        I have a feeling this won't lead to informative to-the-point documentation. It will get bloated because an LLM will spew out reams of bullet point ridden paragraphs, which will need a "summarise this" button to stop the reader nodding off.

        Rinse and repeat.

    • dustingetz 3 days ago ago

      > If you write regularly and you're not using AI, you simply cannot keep up with the competition. You're out.

      What? No! Content volume only matters in stupid contests like VC app marketing grifts or political disinformation ops where the content isn’t even meant to be read, it’s an excuse for a headline. I personally write all my startup’s marketing content, quality is exquisite and due to this our brand is becoming a juggernaut

    • avereveard 3 days ago ago

      why do you trust things now? unless you recognize the author and have a chain of trust from that author production to the content you're consuming, there already was no way to estabilish trust.

      • layer8 3 days ago ago

        For one, I trust authors more who are not too lazy to start sentences with upper case.

    • grecy 3 days ago ago

      Eh, like everything in life you can choose what you spend your time on and what you ignore.

      There have always been human writers I don’t waste my time on, and now there are AI writers in the same category.

      I don’t care. I will just do what I want with my life and use my time and energy on things I enjoy and find useful.

    • ozim 3 days ago ago

      What kind of silliness is this?

      AI generated crap is one thing. But human generated crap is there - just because human wrote something it is not making it good.

      Had a friend who thought that if it is written in a book it is for sure true. Well NO!

      There was exactly the same sentiment with stuff on the internet and it is still the same sentiment about Wikipedia that “it is just some kids writing bs, get a paper book or real encyclopedia to look stuff up”.

      Not defending gen AI - but still you have to make useful proxy measures what to read and what not, it was always an effort and nothing is going substitute critical thinking and putting in effort to separate wheat from the chaff.

      • dns_snek 3 days ago ago

        > nothing is going substitute critical thinking and putting in effort to separate wheat from the chaff.

        The problem is that wheat:chaff ratio used to be 1:100, and soon it's going to become 1:100 million. I think you're severely underestimating the amount of effort it's going to take to find real information in the sea of AI generated content.

        • ozim 2 days ago ago

          Just like people were able to find real information on topics like: "tobacco is not that bad, it is just nice like coffee" and "fat is bad for you, here have some sugar", "alcohol is fun, having beer is nice"?

          Fundamentally AI changes nothing for masses and for individuals alike and there was nothing you could "just trust" because it was written on a website or in a book. That is why I call it silly.

          It also doesn't make it easier or harder for people in power that have vast resources to plant what they want - they have money and power and will do so anyway.

      • shprd 3 days ago ago

        No one claimed humans are perfect. But gen AI is a force multiplier for every problem we had to deal with. It's just completely different scale. Your brain is about to be DDOSed by junk content.

        Of course, gen AI is just a tool that can be used for good or bad, but spam, targeted misinformation campaigns, and garbage content in general is one area that will be most amplified because it became so low effort and they don't care about doing any review, double-checking, etc. They can completely automate their process to whatever goal they've in mind. So where sensible humans enjoy 10x productivity, these spam farms will be enjoying 10000x scale.

        So I don't think downplaying it and acting like nothing changed, is the brightest idea. I hope you see now how that's a completely different game, one that's already here but we aren't prepared for yet, certainly not with traditional tools we have.

        • flir 3 days ago ago

          > Your brain is about to be DDOSed by junk content.

          It's not the best analogy because there's already more junk out there than can fit through the limited bandwidth available to my brain, and yet I'm still (vaguely) functional.

          So how do I avoid the junk now? Rough and ready trust metrics, I guess. Which of those will still work when the spam's 10x more human?

          I think the recommendations of friends will still work, and we'll increasingly retreat to walled gardens where obvious spammers (of both the digital and human variety) can be booted out. I'm still on facebook, but I'm only interested in a few well-moderated groups. The main timeline is dead to me. Those moderators are my content curators for facebook content.

          • ozim 3 days ago ago

            That is something I agree with.

            One cannot be DDOSed with junk when not actively trying to stuff as much junk into ones head.

            • shprd 3 days ago ago

              > One cannot be DDOSed with junk when not actively trying to stuff as much junk into ones head.

              The junk gets thrown at you in mass volume at low cost without your permission. What you gonna do? keep dodging it? waste your time evaluating every piece of information you come across?

              If one of the results on the first page in search deviate from others, it's easy to notice. But if all of them agree, they became the truth. Of course your first thought is to say search engines are shit or whatever off-hand remarks, but this example is just to illustrate how the volume alone can change things. The medium doesn't matter, these things could come in many forms: book reviews, posts on social media, ads, false product description on amazon, etc.

              Of course, these things exist today but the scale is different, the customization is different. It's like the difference between firearms and drones. If you think it's the same old game and you can defend against the new threat using your old arsenal, I admire your confidence but you're in for a surprise.

          • shprd 3 days ago ago

            So you're basically sheltering yourself and seeking human curated content? Good for you, I follow similar strategy. How do you propose we apply this solution for the masses in today's digital age? or you're just saying 'each on their own'?

            Sadly, you seem to not be looking further than your nose. We are not talking about just you and me here. Less tech literate people are the ones at a disadvantage and need protection the most.

            • flir 3 days ago ago

              > How do you propose we apply this solution for the masses in today's digital age?

              The social media algorithms are the content curators for the technically illiterate.

              Ok, they suck and they're actively user-hostile, but they sucked before AI. Maybe (maybe!) AI's the straw that breaks the camel's back, and people leave those algorithm-curated spaces in droves. I hope that, one way and another, they'll drift back towards human-curated spaces. Maybe without even realizing it.

        • hackable_sand 2 days ago ago

          What's with the fud

          • ozim 2 days ago ago

            That is pretty much my reaction summed up :)

            Feels like people drop into spreading fear, uncertainty and doubt too quickly.

      • tempfile 3 days ago ago

        > you have to make useful proxy measures what to read and what not

        yes, obviously. But AI slop makes those proxy measures significantly more complicated. Critical thinking is not magic - it is still a guess, and people are obviously worse at distinguishing AI bullshit from human bullshit.

    • advael 3 days ago ago

      In trying to write a book, it makes little sense to try to "compete" on speed or volume of output. There were already vast disparities in that among people who write, and people whose aim was to express themselves or contribute something of importance to people's lives, or the body of creative work in the world, have little reason to value quantity over quality. Probably if there's a significant correlation with volume of output, it's in earnings, and that seems both somewhat tenuous and like something that's addressable by changes in incentives, which seem necessary for a lot of things. Computers being able to do dumb stuff at massive scale should be viewed as finding vulnerabilities in the metrics this allows it to become trivial to game, and it's baffling whenever people say "Well clearly we're going to keep all our metrics the same and this will ruin everything." Of course, in cases where we are doing that, we should stop (For example, we should probably act to significantly curb price and wage discrimination, though that's more like a return to form of previous regulatory standards)

      As a creator of any kind, I think that simply relying on LLMs to expand your output via straightforward uses of widely available tools is inevitably going to lead to regression to the mean in terms of creativity. I'm open to the idea, however, that there could be more creative uses of the things that some people will bother to do. Feedback loops they can create that somehow don't stifle their own creativity in favor of mimicking a statistical model, ways of incorporating their own ingredients into these food processors of information. I don't see a ton of finished work that seems to do this, but I see hints that some people are thinking this way, and they might come up with some cool stuff. It's a relatively newly adopted technology, and computer-generated art of various kinds usually separates into "efficiency" (which reads as low quality) in mimicking existing forms, and new forms which are uniquely possible with the new technology. I think plenty of people are just going to keep writing without significant input from LLMs, because while writer's block is a famous ailment, many writers are not primarily limited by their speed in producing more words. Like if you count comments on various sites and discussions with other people, I write thousands of words unassisted most days

      This kind of gets to the crux of why these things are useful in some contexts, but really not up to snuff with what's being claimed about them. The most compelling use cases I've seen boil down to some form of fitting some information into a format that's more contextually appropriate, which can be great for highly structured formatting requirements and dealing with situations which are already subject to high protocol of some kind, so long as some error is tolerated. For things for which conveying your ideas with high fidelity, emphasizing your own narrative voice or nuanced thoughts on a subject, or standing behind the factual claims made by the piece are not as important. As much as their more strident proponents want to claim that humans are merely learning things by aggregating and remixing them in the same sense as these models do, this reads as the same sort of wishful thinking about technology that led people to believe that brains should work like clockwork or transistors at various other points in time at best, and honestly this most often seems to be trotted out as the kind of bad-faith analogy tech lawyers tend to use when trying to claim that the use of [exciting new computer thing] means something they are doing can't be a crime

      So basically, I think rumors of the death of hand-written prose are, at least at present, greatly exaggerated, though I share the concern that it's going to be much harder to filter out spam from the genuine article, so what it's really going to ruin is most automated search techniques. The comparison to "low-background steel" seems apt, but analogies about how "people don't handwash their clothes as much anymore" kind of don't apply to things like books

    • bilsbie 2 days ago ago

      Wait until you find out about copywriters.

    • bschmidt1 2 days ago ago

      Oh please the content online now is so fake as hell. You're acting as if AI can only produce garbage but CNN and Fox News are producing gold. The internet is 4 total websites now, congrats big media won. And you want to shut down the only decent attempt against it. Shame on you "hackers"

    • farts_mckensy 3 days ago ago

      >But what I had never noticed until now is that knowing that a human being was behind the written words (however flawed they can be, and hopefully are) is crucial for me.

      Everyone is going to have to get over that very soon, or they're going to start sounding like those old puritanical freaks who thought Elvis thrusting his hips around was the work of the devil.

      • goatlover 2 days ago ago

        Those two things don't sound at all similar. We don't have to get over wanting to communicate with humans online.

    • GrumpyNl 3 days ago ago

      response from AI on this: I completely understand where you're coming from. The increasing reliance on AI in writing does raise important questions about authenticity and connection. There’s something uniquely human in knowing that the words you're reading come from someone’s personal thoughts, experiences, and emotions—even if flawed. AI-generated content, while efficient and often well-written, lacks that deeper layer of humanity, the imperfections, and the creative struggle that gives writing its soul.

      It’s easy to feel disillusioned when you know AI is shaping so much of the content around us. Writing used to be a deeply personal exchange, but now, it can feel mechanical, like it’s losing its essence. The pressure to keep up with AI can be overwhelming for human writers, leading to this shift in content creation.

      At the same time, it’s worth considering that the human element still exists and will always matter—whether in long-form journalism, creative fiction, or even personal blogs. There are people out there who write for the love of it, for the connection it fosters, and for the need to express something uniquely theirs. While the presence of AI is unavoidable, the appreciation for genuine human insight and emotion will never go away.

      Maybe the answer lies in seeking out and cherishing those authentic voices. While AI-generated writing will continue to grow, the hunger for human storytelling and connection will persist too. It’s about finding balance in this new reality and, when necessary, looking back to the richness of past writings, as you mentioned. While it may seem like a loss in some ways, it could also be a call to be more intentional in what we read and who we trust to deliver those words.

  • koliber 3 days ago ago

    I am approaching AI with caution. Shiny things don't generally excite me.

    Just this week I installed cursor, the AI-assisted VSCode-like IDE. I am working on a side project and decided to give it a try.

    I am blown away.

    I can describe the feature I want built, and it generates changes and additions that get me 90% there, within 15 or so seconds. I take those changes, and carefully review them, as if I was doing a code review of a super-junior programmer. Sometimes when I don't like the approach it took, I ask it to change the code, and it obliges and returns something closer to my vision.

    Finally, once it is implemented, I manually test the new functionality. Afterward, I ask it to generated a set of automated test cases. Again, I review them carefully, both from the perspective of correctness, and suitability. It over-tests on things that don't matter and I throw away a part of the code it generates. What stays behind is on-point.

    It has sped up my ability to write software and tests tremendously. Since I know what I want , I can describe it well. It generates code quickly, and I can spend my time revieweing and correcting. I don't need to type as much. It turns my abstract ideas into reasonably decent code in record time.

    Another example. I wanted to instrument my app with Posthog events. First, I went through the code and added "# TODO add Posthog event" in all the places I wanted to record events. Next, I asked cursor to add the instrumentation code in those places. With some manual copy-and pasting and lots of small edits, I instrumented a small app in <10 minutes.

    We are at the point where AI writes code for us and we can blindly accept it. We are at a point where AI can take care of a lot of the dreary busy typing work.

    • DanHulton 3 days ago ago

      I sincerely worry about a future when most people act in this same manner.

      You have - for now - sufficient experience and understanding to be able to review the AI's code and decide if it was doing what you wanted it to. But what about when you've spent months just blindly accepting" what the AI tells you? Are you going to be familiar enough with the project anymore to catch its little mistakes? Or worse, what about the new generation of coders who are growing up with these tools, who NEVER had the expertise required to be able to evaluate AI-generated code, because they never had to learn it, never had to truly internalize it?

      It's late, and I think if I try to write any more just now, I'm going to go well off the rails, but I've gone into depth on this topic recently, if you're interested: https://greaterdanorequalto.com/ai-code-generation-as-an-age...

      In the article, I posit a less than glowing experience with coding tools than you've had, it sounds like, but I'm also envisioning a more complex use case, like when you need to get into the meat of some you-specific business logic it hasn't seen, not common code it's been exposed to thousands of times, because that's where it tends to fall apart the most, and in ways that are hard to detect and with serious consequences. If you haven't run into that yet, I'd be interested to know if you do some day. (And also to know if you don't, though, to be honest! Strong opinions, loosely held, and all that.)

      • FridgeSeal 3 days ago ago

        If we keep at this LLM-does-all-out-hard-work for us, we’re going to end up with some kind of Warhammer 40k tech-priest-blessing-the-magic-machines level of understanding, where nobody actually understands anything, and we’re technologically stunted, but hey at least we don’t have the warp to contend with and some shareholders got rich at our expense.

        • YeGoblynQueenne 2 days ago ago

          Unless it's all a ploy by Tzeentch to prepare the ground for the coming of the Chaos Gods.

      • wickedsight 3 days ago ago

        You and I seem to live in very different worlds. The one I live and work in is full of over confident devs that have no actual IT education and mostly just copy and modify what they find on the internet. The average level of IT people I see daily is down right shocking and I'm quite confident that OP's workflow might be better for these people in the long run.

        • nyarlathotep_ 3 days ago ago

          It's going to be very funny in the next few years when Accenture et al charge the government billions for a simple Java crud website thing that's entirely GPT-generated, and it'll still take 3 years and not be functional. Ironically, it'll be of better quality then they'd deliver otherwise.

          This is probably already happening.

          • eastbound 2 days ago ago

            GPT will be masters at make-believe. The project will last 15 years and cost a billion before the government finds that its a big bag of nothing.

        • 9cb14c1ec0 2 days ago ago

          > The one I live and work in is full of over confident devs that have no actual IT education and mostly just copy and modify what they find on the internet.

          Too many get into the field solely due to promises of large paychecks, not due to the intellectual curiosity that drives real devs.

      • westoncb 2 days ago ago

        I actually do think this is a legitimate concern, but at the same time I feel like when higher-level languages were introduced people likely experienced a similar dilemma: you just let the compiler generate the code for you without actually knowing what you're running on the CPU?

        Definitely something to tread carefully with, but it's also likely an inevitable aspect of progressing software development capabilities.

        • sanj 2 days ago ago

          A compiler is deterministic. An LLM is not.

          • poslathian 2 days ago ago

            Place and routing compilers used in semiconductor design are not. Ironically, simulated annealing is the typical mechanism and is by any appropriate definition, imo, a type of AI.

            Whatever you do in your life using devices that run software are proof that these tools are effective for continuing to scale complexity. Annoying to use also ;)

      • lesuorac 3 days ago ago

        I take it you haven't seen the world of HTML cleaners [1]?

        The concept of glueing together text until it has the correct appearance isn't new to software. The scale at which it's happening is certainly increasing but we already had plenty of problems from the existing system. Kansas certainly didn't develop their website [2] using an LLM.

        IMO, the real problem with software is the lack of a warranty. It really shouldn't matter how the software is made just the qualities it has. But without a warranty it does matter because how its made affects the qualities it has and you want the software to actually work even if it's not promised to.

        [1]: https://www.google.com/search?q=html+cleaner

        [2]: https://www.npr.org/2021/10/14/1046124278/missouri-newspaper...

        • mydogcanpurr 2 days ago ago

          > I take it you haven't seen the world of HTML cleaners [1]?

          Are you seriously comparing deterministic code formatters to nondeterministic LLMs? This isn't just a change of scale because it is qualitatively different.

          > Kansas certainly didn't develop their website [2] using an LLM.

          Just because the software industry has a problem with incompetence doesn't mean we should be reaching for a tool that regularly hallucinates nonsense.

          > IMO, the real problem with software is the lack of a warranty.

          You will never get a warranty from an LLM because it is inherently nondeterministic. This is actually a fantastic argument _not_ to use LLMs for anything important including generating program text for software.

          > It really shouldn't matter how the software is made

          It does matter regardless of warranty or the qualities of the software because programs ought to be written to be read by humans first and machines second if you care about maintaining them. Until we create a tool that actually understands things, we will have to grapple with the problem of maintaining software that is written and read by humans.

      • conjectures 3 days ago ago

        >But what about when you've spent months just blindly accepting" what the AI tells you?

        Pour one out to the machine spirit and get your laptop a purity seal.

      • sbochins a day ago ago

        This seems a little silly to me. It was already possible for a script kiddie to kludge together something they didn’t understand —- copying code snippets from stack overflow, etc. And yet, developers continue to write finely crafted code that they understand at depth. Just because we’ve made this process easier for the script kiddies, doesn’t prevent experts from existing and the market from realizing these experts are necessary to a well run software business.

      • lurking_swe 2 days ago ago

        nothing prevents you from asking an LLM to explain a snippet of code. And then ask it to explain deeper. And then finally doing some quick googling to validate the answers seem correct.

        Blindly accepting code used to happen all the time, people copy pasted from stack overflow.

        • yoyohello13 2 days ago ago

          Yes, but copy/paste from stack overflow was a meme that was discouraged. Now we've got people proudly proclaiming they haven't written a line of code in months because AI does everything for them.

        • djeastm a day ago ago

          >And then finally doing some quick googling to validate the answers seem correct.

          There will come a time when there won't be anyone writing information to check against. It'll be AI all the way down. Or at least it will be difficult to discern what's AI or what isn't.

    • irisgrunn 3 days ago ago

      And this is the major problem. People will blindly trust the output of AI because it appears to be amazing, this is how mistakes slip in. It might not be a big deal with the app you're working on, but in a banking app or medical equipment this can have a huge impact.

      • Gigachad 3 days ago ago

        I feel like I’m being gaslit about these AI code tools. I’ve got the paid copilot through work and I’ve just about never had it do anything useful ever.

        I’m working on a reasonably large rails app and it can’t seem to answer any questions about anything, or even auto fill the names of methods defined in the app. Instead it just makes up names that seem plausible. It’s literally worse than the built in auto suggestions of vs code, because at least those are confirmed to be real names from the code.

        Maybe these tools work well on a blank project where you are building basic login forms or something. But certainly not on an established code base.

        • nucleardog 3 days ago ago

          I'm in the same boat. I've tried a few of these tools and the output's generally been terrible to useless big and small. It's made up plausible-sounding but non-existent methods on the popular framework we use, something which it should have plenty of context and examples on.

          Dealing with the output is about the same as dealing with a code review for an extremely junior employee... who didn't even run and verify their code was functional before sending it for a code review.

          Except here's the problem. Even for intermediate developers, I'm essentially always in a situation where the process of explaining the problem, providing feedback on a potential solution, answering questions, reviewing code and providing feedback, etc takes more time out of my day than it would for me to just _write the damn code myself_.

          And it's much more difficult for me to explain the solution in English than in code--I basically already have the code in my head, now I'm going through a translation step to turn it into English.

          All adding AI has done is taking the part of my job that is "think about problem, come up with solution, type code in" and make it into something with way more steps, all of which are lossy as far as translating my original intent to working code.

          I get we all have different experiences and all that, but as I said... same boat. From _my_ experiences this is so far from useful that hearing people rant and rave about the productivity gains makes me feel like an insane person. I can't even _fathom_ how this would be helpful. How can I not be seeing it?

          • simonw 3 days ago ago

            The biggest lie in all of LLMs is that they’ll work out of the box and you don’t need to take time to learn them.

            I find Copilot autocomplete invaluable as a productivity boost, but that’s because I’ve now spent over two years learning how to best use it!

            “And it's much more difficult for me to explain the solution in English than in code--I basically already have the code in my head, now I'm going through a translation step to turn it into English.”

            If that’s the case, don’t prompt them in English. Prompt them in code (or pseudo-code) and get them to turn that into code that’s more likely to be finished and working.

            I do that all the time: many of my LLM prompts are the signature of a function or a half-written piece of code where I add “finish this” at the end.

            Here’s an example, where I had started manually writing a bunch of code and suddenly realized that it was probably enough context for the LLM to finish the job… which it did: https://simonwillison.net/2024/Apr/8/files-to-prompt/#buildi...

            • koliber 2 days ago ago

              You bring up a good point! These tools are useless if you can't prompt them effectively.

              I am decent at explaining what I want in English. I have coded and managed developers for long enough to include tips on how I want something implemented. So far, I am nothing short of amazed. The tools are nowhere near perfect, but they do provide a non-trivial boost in my productivity. I feel like I did when I first used an IDE.

          • ku1ik 2 days ago ago

            > Except here's the problem. Even for intermediate developers, I'm essentially always in a situation where the process of explaining the problem, providing feedback on a potential solution, answering questions, reviewing code and providing feedback, etc takes more time out of my day than it would for me to just _write the damn code myself_.

            Exactly. And I’ve been telling myself „keep doing that, it lets them teach, otherwise they will never level up and be able to comfortably and reliably work on this codebase without much hand holding. This will pay off”. Which I still think is true to a degree, although less so with every year.

            • nucleardog 2 hours ago ago

              At least with the humans I work with it’s _possible_ and I can occasionally find some evidence that it _could_ be true to hang on to. I’m expending extra effort, but I’m helping another human being and _maybe_ eventually making my own life easier.

              What’s the payoff for doing this with an LLM? Even if it can learn, why not let someone else do it and try again next year and see if it’s leveled up yet?

        • kgeist 3 days ago ago

          For me, AI is super helpful with one-off scripts, which I happen to write quite often when doing research. Just yesterday, I had to check my assumptions are true about a certain aspect of our live system and all I had was a large file which had to be parsed. I asked ChatGPT to write a script which parses the data and presents it in a certain way. I don't trust ChatGPT 100%, so I reviewed the script and checked it returned correct outputs on a subset of data. It's something which I'd do to the script anyway if I wrote it myself, but it saved me like 20 minutes of typing and debugging the code. I was in a hurry because we had an incident that had to be resolved as soon as possible. I haven't tried it on proper codebases (and I think it's just not possible at this moment) but for quick scripts which automate research in an ad hoc manner, it's been super useful for me.

          Another case is prototyping. A few weeks ago I made a prototype to show to the stakeholders, and it was generally way faster than if I wrote it myself.

        • thewarrior 3 days ago ago

          It’s writing most of my code now. Even if it’s existing code you can feed in the 1-2 files in question and iterate on them. Works quite well as long as you break it down a bit.

          It’s not gas lighting the latest versions of GPT, Claude, Lama have gotten quite good

          • Gigachad 3 days ago ago

            These tools must be absolutely massively better than whatever Microsoft has then because I’ve found that GitHub copilot provides negative value, I’d be more productive just turning it off rather than auditing it’s incorrect answers hoping one day it’s as good as people market it as.

            • diggan 3 days ago ago

              > These tools must be absolutely massively better than whatever Microsoft has then

              I haven't used anything from Microsoft (including Copilot) so not sure how it compares, but compared to any local model I've been able to load, and various other remote 3rd party ones (like Claude), no one comes near to GPT4 from OpenAI, especially for coding. Maybe give that a try if you can.

              It still produces overly verbose code and doesn't really think about structure well (kind of like a junior programmer), but with good prompting you can kind of address that somewhat.

              • FridgeSeal 3 days ago ago

                My experience was the opposite.

                GPT4 and variants would only respond in vagaries, and had to be endlessly prompted forward,

                Claude was the opposite, wrote actual code, answered in detail, zero vagueness, could appropriately re-write and hoist bits of code.

                • diggan 3 days ago ago

                  Probably these services are so tuned (not as in "fine-tuned" ML style) to each individual user that it's hard to get any sort of collective sense of what works and what doesn't. Not having any transparency what so ever into how they tune the model for individual users doesn't help either.

            • bongodongobob 2 days ago ago

              My employer blocks ChatGPT at work and we are forced to use Copilot. It's trash. I use Google docs to communicate with GPT on my personal device. GPT is so much better. Copilot reminds me of GPT3. Plausible, but wrong all the time. GPT 4o and o1 are pretty much bang on most of the time.

            • piker 3 days ago ago

              Which languages do you use?

        • koliber 2 days ago ago

          My experience is anecdotal, based on a sample size of one. I'm not writing to convince, but to share. Please take a look at my resume to see my background, so you can weight what I write.

          I tried cursor because a technically-minded product manager colleague of mine managed to build a damned solid MVP of an AI chat agent with it. He is not a programmer, but knows enough to kick the can until things work. I figured if it worked for him, I might invest an hour of my time to check it out.

          I went in with a time-boxed one hour time to install cursor and implement a single trivial feature. My app is not very sophisticated - mostly a bunch of setup flows and CRUD. However, there are some non-trivial things which I would expect to have documented in a wiki if I was building this with a team.

          Cursor did really well. It generated code that was close to working. It figured out those not-obvious bits as well and the changes it made kept them in mind. This is something I would not expect from a junior dev, had I not explained those cross-dependencies to them (mostly keeping state synchronized according to business rule across different entities).

          It did a poor job of applying those changes to my files. It would not add the code it generated in the right places and mess things up along the way. I felt I was wrestling with it a but too much to my liking. But once I figured this out I started hand-applying it's changes and reviewing them as I incorporated them into my code. This workflow was beautiful.

          It was as if I sent a one paragraph description of the change I want, and received a text file with code snippets and instructions where to apply them.

          I ended up spending four hours with cursor and giving it more and more sophisticated changes and larger features to implement. This is the first AI tool I tried where I gave it access to my codebase. I picked cursor because I've heard mixed reviews about others, and my time is valuable. It did not disappoint.

          I can imagine it will trip up on a larger codebase. These tools are really young still. I don't know about other AI tools, and am planning on giving them a whirl in the near future.

        • Kiro 3 days ago ago

          That sounds almost like the complete opposite of my experience and I'm also working in a big Rails app. I wonder how our experiences can be so diametrically different.

          • Gigachad 3 days ago ago

            What kind of things are you using it for? I’ve tried asking it things about the app and it only gives me generic answers that could apply to any app. I’ve tried asking it why certain things changed after a rails update and it gives me generic troubleshooting advice that could apply to anything. I’ve tried getting it to generate tests and it makes up names for things or generally gets it wrong.

        • brandall10 3 days ago ago

          Copilot is terrible. You need to use Cursor or at the very least Continue.dev w/ Claude Sonnet 3.5.

          It's a massive gulf of difference.

      • koliber 2 days ago ago

        OP here. I am explicitly NOT blindly trusting the output of the AI. I am treating it as a suspicious set of code written by an inexperienced developer. Doing full code review on it.

      • svara 3 days ago ago

        I don't think this criticism is valid at all.

        What you are saying will occasionally happen, but mistakes already happen today.

        Standards for quality, client expectations, competition for market share, all those are not going to go down just because there's a new tool that helps in creating software.

        New tools bring with them new ways to make errors, it's always been that way and the world hasn't ended yet...

    • smm11 2 days ago ago

      I was in the newspaper field a year or two before desktop publishing took off, then a few years into that evolution. Rooms full of people and Linotype/Compugraphic equipment were replaced by one Mac and a printer.

      I shot film cameras for years, and we had a darkroom, darkroom staff, and a film/proofsheet/print workflow. One digital camera later and that was all gone.

      Before me publications were produced with hot lead.

      Get off my lawn.

      https://www.nytimes.com/2016/06/02/insider/1966-2016-the-las...

    • layer8 3 days ago ago

      > I can spend my time revieweing and correcting.

      Do you really like spending most of your time reviewing AI output? I certainly don’t, that’s soul-crushing.

      • sgu999 3 days ago ago

        Not much more than reviewing the code of any average dev who doesn't bother doing their due diligence. At least with an AI I immediately get an answer with "Oh yes, you're right, sorry for the oversight" and a fix. Instead of some bullshit explanation to try to convince me that their crappy code is following the specs and has no issues.

        That said, I'm deeply saddened by the fact that I won't be passing on a craft I spent two decades refining.

      • woah 3 days ago ago

        I think there are two types of developers: those who are most excited about building things, and those who are most excited about the craft of programming.

        If I can build things faster, then I'm happy to spend most of my time reviewing AI code. That doesn't mean that I never write code. Some things the AI is worse at, or need to be exactly write and its faster to do them manually.

        • koliber 2 days ago ago

          > I think there are two types of developers: those who are most excited about building things, and those who are most excited about the craft of programming.

          Love this. You hit the nail right on the head.

          I don't know if I fit into one or the other. However, I do know that at times I feel like one, and at other times, the other.

          If I am writing another new app and need to build a slew of CRUD code, I don't care about the craft. I mean, I don't want sloppy code, but I do not get joy out of writing what is _almost_ boilerplate. I still want it to reflect my style, but I don't want to type it all out. I already know how it all works in my head. The faster I get it into an IDE the better. Cursor (the AI IDE) allowed me to do this much faster than I would have by hand.

          Then there is time where I do want to craft something beautiful. I had one part of this project where I needed to build a scheduler and I had very specific things I wanted it to do. I tried twice to describe what I want but the AI tool did not do what I wanted. It built a working piece of code, but I could not get it to grasp the nuance.

          I sat down and wrote the code for the scheduler, but then had to deal with a bunch of edge cases. I took this code, gave it to the AI and told it to implement those edge cases. After reviewing and iterating on it, I had exactly what I wanted.

        • samcat116 2 days ago ago

          I think we could see a lot of these AI code tools start to pivot towards product folks for just this reason. They aren't meant for the people who find craft in what they do.

      • koliber 3 days ago ago

        That's essentially what many hands-on engineering managers or staff engineers do today. They spend significant portions of their day reviewing code from more junior team members.

        Reviewing and modifying code is more engaging than typing out the solution that is fully formed in my head. If the AI creates something close to what I have in my head from the description I gave it, I can work with it to get it even closer. I can also hand-edit it.

    • syncr0 3 days ago ago

      "I say your civilization, because as soon as we started thinking for you it really became our civilization, which is of course what this is all about." - Agent Smith

      "Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them." - Dune

      • hn_throwaway_99 2 days ago ago

        But I think that quote is a pretty gross mischaracterization of the parent comment.

        I similarly am a big fan of Cursor. But I don't "turn [my] thinking over to machines". Even though I review every piece of code it generates and make sure I understand it, it still saves me a ton of time. Heck, some of the most value I get from Cursor isn't even it generating code for me, it's getting to ask questions about a very large codebase with many maintainers where I'm unfamiliar with large chunks. E.g. asking questions like "I would like to do X, are there any places in this codebase that already do this?"

        I'm also skeptical of LLMs ever being able to live up to their hype ("AGI is coming sooooon!!!!"), but I still find them to be useful tools in context that can save me a lot of time.

    • yread 3 days ago ago

      I use it for simple tasks where spotting a mistake is easy. Like writing language binding for a REST API. It's a bunch of methods that look very similar, simple bodies. But it saves quite some work

      Or getting keywords to read about from a field I know nothing about, like caching with zfs. Now I know what things to put in google to learn more to get to articles like this one https://klarasystems.com/articles/openzfs-all-about-l2arc/ which for some reason doesn't appear in top google results for "zfs caching" for me

    • latexr 2 days ago ago

      > We are at the point where AI writes code for us and we can blindly accept it.

      I’m waiting for the day we’ll get the first major breach because someone did exactly that. This is not a case of “if”, it is very much a “when”. I’ve seen enough buggy LLM-generated code and enough people blindly accepting it to be confident in that assertion.

      I do hope it doesn’t happen, but I think it will.

    • t420mom 3 days ago ago

      I don't really want to increase the amount of time I spend doing code reviews. It's not the fun part of programming for me.

      Now, if you could switch it around so that I write the code, and the AI reviews it, that would be something.

      Imagine if your whole team got back the time they currently spend on performing code reviews or waiting for code reviews.

      • gvurrdon 3 days ago ago

        This would indeed be the best way around.The code reviews might even be better - currently, there's little time for them and we often have only one person in the team with much knowledge in the relevant language/framework/application, so reviews are often just "looks OK to me".

        It's not quite the same, but I'm reminded of seeing a documentary decades ago which (IIRC) mentioned that a factor in air accidents had been the autopilot flying the plane and human pilots monitoring it. Having humans fly and the computer warn them of potential issues was apparently safer.

      • digging 3 days ago ago

        > Now, if you could switch it around so that I write the code, and the AI reviews it, that would be something.

        I'm sort of doing that. I'm working on a personal project in a new language and asking Claude for help debugging and refactoring. Also, when I don't know how to create a feature, I might ask it to do so for me, but I might instead ask it for hints and an overview so I can enjoy working out the code myself.

    • BrouteMinou 2 days ago ago

      If you are another "waterboy" doing crud applications, the problem has been solved a long time ago.

      What I mean by that is, the "waterboy" (crud "developer") is going to fetch the water (sql query in the database), then bring the water (Clown Bob layer) to the UI...

      The size of your Clown Bob layer may vary from one company to another...

      This has been solved a long time ago. It has been a well-paid clerk job that is about to come to an end.

      If you are doing pretty much anything else, the AI is pathetically incapable of doing any piece of code that makes sense.

      Another great example, yesterday, I wanted to know if VanillaOs was using systemD or not. I did scroll through their frontpage but I didn't see anything, so I tried the AI Chat from duckduckgo. This is a frontend for AI chatbots that includes ChatGPT, Llama, Claude and another one...

      I started my question by: "can you tell me if VanillaOS is using runit as the init system?"... I wanted initially ask if it was using systemd, but I didn't want to _suggest_ systemd at first.

      And of course, all of them told me: "Yeah!! It's using runit!".

      Then for all of them I replied, without any fact in hands: "but why on their website they are mentioning to use systemctl to manage the services then?".

      And... of course! All of them answered: "Ooouppsss, my mistake, VanillaOS uses systemD, blablabla"....

      So at the end, I still don't know which init VanillaOS is using.

      If you are trusting the AI as you seem to do, I wish you the best luck my friend... I just hope you will realize the damage you are doing to yourself by "stopping" coding and letting something else do the job. That skill, my friend, is easily lost with time; don't let it evaporate from your brain for some vaporware people are trying to sell you.

      Take care.

  • gizmo 3 days ago ago

    AI writing is pretty bad, AI code is pretty bad, AI art is pretty bad. We all know this. But it's easy to forget how many new opportunities open up when something becomes 100x or 10000x cheaper. Things that are 10x worse but 100x cheaper are still extremely valuable. It's the relentless drive to making things cheaper, even at the expense of quality, that has made our high quality of life possible.

    You can make houses by hand out of beautiful hardwood with complex joinery. Houses built by expert craftsmen are easily 10x better than the typical house built today. But what difference does that make when practically nobody can afford it? Just like nobody can afford to have a 24/7 tutor that speaks every language, can help you with your job, grammar check your work, etc.

    AI slop is cheap and cheapness changes everything.

    • Gigachad 3 days ago ago

      Why do we need art to be 10000x cheaper? There was already more than enough art being produced. Now we just have infinite waves of slop drowning out everything that’s actually good.

      • gizmo 3 days ago ago

        A toddler's crayon art doesn't end up in the Louvre, nor does AI slop. Most art is bad art and it's been this way since the dawn of humanity. For as long as we can distinguish good art from bad art we can curate and there is nothing to worry about.

        • foolofat00k 3 days ago ago

          That's just the problem -- you can't.

          Not because you can't distinguish between _one_ bad piece and _one_ good piece, but because there is so much production capacity that no human will ever be able to look at most of it.

          And it's not just the AI stuff that will suffer here, all of it goes into the same pool, and humans sample from that pool (using various methodologies). At some point the pool becomes mostly urine.

          • gizmo 3 days ago ago

            My email inbox is already 99% spam (urine) and I don't see any of it. The bottom line is that if a human can easily recognize AI spam then so can another AI. This has always been an arms race with spammers on one side and curators on the other. No reason to assume spammers will start winning when they have been losing for decades.

            • FridgeSeal 3 days ago ago

              The spammers have been given a tool that’s capable of higher quality at much higher volumes.

              If nothing else, it’s now much more feasible for them to be successful by sheer force of drowning out any “worthwhile” material.

          • woah 3 days ago ago

            This is spoken by someone who doesn't know about the huge volume of mediocre work output by art students and hobbyists. Much of it is technically decent (like AI work), but lacking in meaning, impact, and emotional resonance (like AI work). You could find millions of hand drawn portraits of Keauna Reeves on Reddit before AI ever existed.

          • hackable_sand 2 days ago ago

            I am not seeing the problem here.

            Why does any human have to look at any art in your problem statement?

        • bamboozled 3 days ago ago

          What even is "bad art" or "good art" ? Art is art, there is no classifier. Certain art works might have mass appeal or something, but I don't really think it can be put into boxes like that.

      • senko 3 days ago ago

        This is mixing up two meanings of "art". Mona Lisa doesn't need to be 10000x cheaper.

        Random illustration on a random blog post sure could.

        Art as an evocative expression of the artist shouldn't be cheapened. But those freelancers churning content on Fiverr aren't pouring their soul into it.

        • jprete 3 days ago ago

          I absolutely hate AI illustrations on the top of blog posts. I'd rather see nothing.

          • BeFlatXIII 3 days ago ago

            True, but you need to play the game of including the slop to create the share cards for social media link previews.

            • Vegenoid 2 days ago ago

              A strange game - the only winning move is not to play.

          • senko 3 days ago ago

            Yeah the low effort / gratuitous ones (either AI or stock) are jarring.

            I sometimes put up the hero image on my blog posts if I feel it makes sense, for example: https://blog.senko.net/learn-ai (stock photo, ai-generated or none if I don't have an idea for a visualization that adds to the content)

      • vundercind 2 days ago ago

        AI is really good at automating away shit we didn’t need to do to begin with, but for some stupid reason were doing anyway.

        Ghost writing rich people’s vanity/self-marketing trash business or self-help books (you would not believe how many of these are written every year). Images (and prose) for internal company department newsletters that almost nobody reads.

        Great at that crap—because it doesn’t matter anyway.

        Whether making it far cheaper to produce things with no or negative value (spam, astroturf, scams) is a good idea… well no, it’s obviously terrible. It’d (kinda) be good if demand for such things remained the same, but it won’t, so it’s really, really bad.

      • erwald 3 days ago ago

        For the same reason we don't want art to be 10,000x times more expensive? Cf. status quo bias etc.

      • lijok 3 days ago ago

        > Now we just have infinite waves of slop drowning out everything that’s actually good

        On the contrary. Slop makes the good stuff stand out.

        • Devasta 3 days ago ago

          Needles in haystacks.

          • lijok 3 days ago ago

            I don't think that applies to the arts

            • fluoridation 2 days ago ago

              It does, since you have only a finite amount of time to look at art during the day. Is it equally easy to find good art (for whatever definition of "good" you choose to have) if 5 out of 100 images you see in a day are generated by AI, than if 95 out of 100 are?

              • lijok 2 days ago ago

                Back to my original point; slop makes the good stuff stand out

                • Devasta 2 days ago ago

                  Ok, but, like, imagine you are a fantastic artist, there's 100 paintings, most bland as fuck, and yours is the only one that looks good. Great, people who view 100 paintings will see yours and marvel at your skill.

                  Now, we have AI. Yours is still the only good one, but there are now 100,000 paintings.

                  How likely is it that your painting is still recognised as good, by someone who looks at 100 out of 100,000 of those paintings at random?

                  It doesn't matter that your painting is good, the discovery mechanism is shot to bits.

                  • lijok 2 days ago ago

                    How are you discovering art? What discovery mechanisms are you utilizing?

                    • Devasta 18 hours ago ago

                      What mechanism exists that deals with this? Even if I say something like my local art gallery that just shifts the burden of wading through 100k paintings to them away from me, an they still won't be able to sift through them all.

    • akudha 3 days ago ago

      The bigger problem is that we as a species get used to subpar things quickly. My dad's bicycle some 35 years ago was built like a tank. That thing never broke down and took enormous amounts of abuse and still kept going and going. Same with most stuff my family owned, when I was a kid.

      Today, nearly anything I buy breaks in a year or two, is of poor quality and depressing to use. This is by design, of course. Just as we got used to cheap household items, bland buildings (there is just nothing artistic about modern houses or commercial buildings) etc, we will also get used to shitty movies, shitty fiction etc (we are well on our way).

      • slyall 2 days ago ago

        One think to check about higher quality stuff in the past is how much it cost vs the average wage.

        You might be comparing a $100 bike from Walmart with something that cost the equivalent of $600

      • precompute 2 days ago ago

        Could not agree more. The marketing for "AI" would have you believe it's a qualitative shift when it's really a quantitative shift.

    • jay_kyburz 3 days ago ago

      Information is not like physical products if you ask me. When the information is wrong, it's value flips from positive to negative. You might be paying less for progress, but you are not progressing slower, you are progressing in the wrong direction.

    • Nemi a day ago ago

      The thing is, right now it is artificially cheaper. It is being heavily subsidized by all providers in a race to capture market share. It simply cannot stay this cheap forever at current costs.

      Now, if costs change then we have a new story. But that is not guaranteed.

    • kerkeslager 2 days ago ago

      I think that your post misses the point that making something cheaper by stealing it is unethical.

      You're presenting AI as if it's some new way of producing value but it simply isn't. All the value here was produced by humans without the help of AI: the only "innovation" AI has offered is making the theft of that value untraceable.

      > You can make houses by hand out of beautiful hardwood with complex joinery. Houses built by expert craftsmen are easily 10x better than the typical house built today. But what difference does that make when practically nobody can afford it? Just like nobody can afford to have a 24/7 tutor that speaks every language, can help you with your job, grammar check your work, etc.

      Let's take this analogy to its logical conclusion: would you have any objections if all the houses ever built by expert craftsmen were given free of charge to a few corporations, with no payment to the current owners or the expert craftsmen themselves, and then then those corporations began renting them out as AirBnBs? That's basically what you're proposing.

    • BobaFloutist 2 days ago ago

      >You can make houses by hand out of beautiful hardwood with complex joinery.

      We've logged an enormous amount of the old-growth hardwood forests the planet had doing this (and also shipbuilding). We literally don't have access to the same materials anymore.

    • GaggiX 3 days ago ago

      They are not even that bad anymore to be honest.

    • grecy 3 days ago ago

      And it will get a lot better quickly. Ten years from now it will not be slop.

      • rsynnott 3 days ago ago

        Not sure about that. Stable Diffusion came out a bit over 2 years ago. I'm not sure that Stable Diffusion 3's, or Flux's, output is artistically _better_ than the original; it's better at following directions, and better at avoiding the most grotesque errors, but if anything it perhaps looks even _more_ generic and same-y than the original Stable Diffusion output. There's a very distinctive AI _look_ which seems to have somehow synced up between Dalle, Midjourney, SD3 and others.

        • GaggiX 2 days ago ago

          You can generate AI images that do not have the "AI look":

          https://ideogram.ai/assets/image/lossless/response/icQM0yZQQ...

          And it's been two years since SD v1, a model that was not able to generate faces well, and it only output blurry 512x512 1:1 image without further finetuning, I tested v1.5 a few minutes ago and it's worse than I remember.

      • atoav 3 days ago ago

        Or it will all be slop as there us no non-slop data to train on anymore

        • Applejinx 3 days ago ago

          No, I don't think that's true. What will instead happen is there will be expert humans or teams of them, intentionally training AI brains rather than expecting wonders to occur just by turning the training loose on random hoovered-up data.

          Brainmaker will be a valued human skill, and people will be trying to work out how to train AI to do that, in turn.

    • sigmonsays 2 days ago ago

      how does this make our high quality of life possible when everything's quality is being reduced?

  • Toorkit 3 days ago ago

    Computers were supposed to be these amazing machines that are super precise. You tell it to do a thing, it does it.

    Nowadays, it seems we're happy with computers apparently going RNG mode on everything.

    2+2 can now be 5, depending on the AI model in question, the day, and the temperature...

    • maguay 3 days ago ago

      This, 100%, is the reason I feel like the sand's shifting under my feet.

      We went from trusting computing output to having to second-guess everything. And it's tiring.

      • diggan 3 days ago ago

        I kind of feel like if you're using a "Random text generator based on probability" for something that you need to trust, you're kind of holding this tool wrong.

        I wouldn't complain a RNG doesn't return the numbers I want, so why complain you don't get 100% trusted output from a random text generator?

        • jeremyjh 3 days ago ago

          Because people provide that work without acknowledging it was created by a RNG, representing it as their own and implying some of level of assurance that it is actually true.

    • a5c11 3 days ago ago

      That's an interesting point of view. For some reason we put so much effort towards making computers think and behave like a human being, while one of the first reasons behind inventing a computer was to avoid human errors.

      • fatbird 3 days ago ago

        This is the most succinct summary of what's been gnawing at me ever since LLMs became the latest thing.

        If Ilya Sutskever announced tomorrow that he'd achieved AGI, and here is its economic plan for the next 20 years, why would we have any reason to accept it over that of other human experts? It would literally be just another expert trying to tell us how to do things. And we're not short of experts, and an AGI expert has thrown away the credibility of computers as deterministically better calculators than we are.

    • Janicc 3 days ago ago

      These amazing machines weren't consistently able to tell if an image had a bird in it or not up until like 8 years ago. If you use AI as a calculator where you need it to be precise, that's on you.

      • FridgeSeal 3 days ago ago

        I think the issue is that: I’m not going to be using as a calculator any time soon.

        Unfortunately, there’s a lot of people out there, working on a lot of products, some of which I need to use, or will be exposed to, and some of them aren’t going to have the same qualms about “language model thinks 2+2=5”.

        There’s a guy on Twitter scoring how well ChatGPT models can do multiplication.

        A founder at a previous workplace wanted to wholesale dump data into ChatGPT and “make it do causal analysis!!!” (Only slightly paraphrased). These tools enable some frighteningly large-scale weaponised stupidity.

      • catlifeonmars 2 days ago ago

        The problem is that’s what people do. And everyone else has to pay for it.

    • left-struck 3 days ago ago

      I think about it differently. Before computers had to be given extremely precise and completely unambiguous instructions, now they can handle some ambiguity as well. You still have the precise output if you want it, it didn’t go away.

      Btw I’m also tired of AI, but this is one thing that’s not so bad

      Edit: before someone mentions fuzzy logic, I’m not talking about the input of a function being fuzzy, I’m talking about the instructions themselves, the function is fuzzy.

      • 110jawefopiwa 2 days ago ago

        > You still have the precise output if you want it, it didn’t go away.

        For now. Given that most new devices seem to be fully hostile to the concept of general purpose computing (see phones, VR devices, TVs, etc), I wonder how long it will be before many of the computers that are sold are even more locked down than Chromebooks - just a few prompts for interacting with a preinstalled LLM.

    • archerx 3 days ago ago

      Its a Large LANGUAGE Model and not a Large MATHEMATICS Model. People need to learn to use the right tools for the right jobs. Also LLMs can be made more deterministic by controlling it’s “temperature”.

      • Toorkit 3 days ago ago

        There's other forms of AI than LLM's and to be honest I thought the 2+2=5 was obviously an analogy.

        Yet 2 comments have immediately jumped on it.

        • FridgeSeal 3 days ago ago

          Hackernews comments and getting bogged down on minutiae and missing the overall point, is there a more iconic pairing?

      • anon1094 3 days ago ago

        Yep. ChatGPT will use the code interpreter for questions like is 2 + 2 = 5? as it should.

    • GaggiX 3 days ago ago

      Machines were not able to deal with non-formal problems.

    • shultays 3 days ago ago

      There are areas it doesn't have to be as "precise", like image generation or editing which I believe better suited for AI tools

    • hcks 2 days ago ago

      And by nowadays you mean since ChatGPT got released, that is less than 2 years ago (e.g. a consumer preview of a frontier research project). Interesting.

    • bamboozled 3 days ago ago

      Had to laugh at this one. I think we prefer the statistical approach because it’s easier, for us …

    • falcor84 3 days ago ago

      This sounds to me like a straw man argument. Obviously 2+2 will always give you 4, in any modern LLM, and even just in the Chrome address bar.

      Can you offer a real situation where we should expect the LLM to return a deterministic answer and should rightly be concerned that we're getting a stochastic one?

      • Toorkit 3 days ago ago

        Y'all are hyper focusing on this example. How about something more vague like FOO obviously being BAR, except sometimes it's BAZ now?

        The layman doesn't know the distinction, so they accept this as fact.

        • falcor84 3 days ago ago

          I'm not being facetious; I really can't think of a single good example where we need something to be deterministic and then have a reason to be disappointed about AI giving us a stochastic response.

  • Validark 3 days ago ago

    One thing that I hate about the post-ChatGPT world is that people's genuine words or hand-drawn art can be classified as AI-generated and thrown away instantly. What if I wanted to talk at a conference and used somebody's AI trigger word so they instantly rejected me even if I never touched AI at all?

    This has already happened in academia where certain professors just dump(ed) their student's essays into ChatGPT and ask it if it wrote it, and fail anyone who had their essay claimed by ChatGPT. Obviously this is beyond moronic, because ChatGPT doesn't have a memory of everything it's ever done, and you can ask it for different writing styles, and some people actually write pretty similar to ChatGPT, hence the fact that ChatGPT has its signature style at all.

    I've also heard of artists having their work removed from competitions out of claims that it was auto-generated, even when they have a video of them producing it stroke by stroke. It turns out, AI is generating art based on human art, so obviously there are some people out there whose stuff looks like what AI is reproducing.

    • owenpalmer 2 days ago ago

      As a student, I've intentionally made my writing worse in order to protect myself from being accused of cheating with AI.

    • t0lo 3 days ago ago

      This is silly, intonation and the connection of the words used and the person presenting tell you whether what they're reading is genuine.

      • galleywest200 2 days ago ago

        Tell that to the teachers that feed their student's papers through "AI checkers".

    • ronsor 3 days ago ago

      That's a people problem, not an AI problem.

  • mks 3 days ago ago

    I am bored of AI - it produces boring and mediocre results. Now, the science and engineering achievement is great - being able to produce even boring results on this level would be considered SCI-FI 10 years ago.

    Maybe I am just bored of people posting these mediocre results over and over on social and landing pages as some kind of magic. Now, the most content people produce themselves is boring and mediocre anyway. The Gen AI just takes away even the last remaining bits of personality from their writing, adding a flair of laziness - look at this boring piece I was too lazy to write, so I asked AI to generate it

    As the quote goes: "At some point we ask of the piano-playing dog not 'Are you a dog?' , but 'Are you any good at playing the piano?'" - I am eagerly waiting for the Gen AIs of today to cross the uncanny valley. Even with all this fatigue, I am positive on the AI can and will enable new use cases and could be the first major UX change from introduction of graphical user interfaces or a true pixie dust sprinkled on actually useful tools.

  • willguest 3 days ago ago

    Leave it up to a human to overgeneralize a problem and make it personal...

    The explosion of dull copy and generic wordsmithery is, to me, just a manifestation of the utilitarian profiteering that has elevated these models to their current standing.

    Let us not forget that the whole game is driven by the production of 'more' rather than 'better'. We would all rather have low-emission, high-expression tools, but that's simply not what these companies are encouraged to produce.

    I am tired of these incentive structures. Casting the systemic issue as a failure of those who use the tools ignores the underlying motivation and keeps us focused on the effect and not the cause, plus it feels old-fashioned.

    • JimmyBuckets 3 days ago ago

      Can you hash out what you mean by your last paragraph a bit more? What incentive structures in particular?

      • willguest 3 days ago ago

        I suppose it comes down to using the metric as the measure, whatever makes the company the most money will be the preferred route, and the mechanisms by which we achieve those sales are rarely given enough thought. It reflect a more timeless mantra of 'if someone is willing to pay for it, then the offering is valuable' willfully ignoring negative psycho-social impacts. It's a convenient abdication of responsibility supported by the so-called "free market" ethos.

        I am not against companies making money, but we need to serious consider the second-order impacts that technology has within society. This is evident in click-driven models, outrage baiting and dopamine hijacking. We still treat the psyche like fair-game for anyone who can hack it. So hack we shall.

        That said, I am not for over-regulation either, since the regulators often gather too much power. Policy is personnel, after all, and someone needs to watch the watchers.

        My view is that systems (technological, economic or otherwise) have inherent values that, when operating at this level of complexity and communication, exist in a kind of dance with the people using them. People obviously affect how the tools are made, but I think persistent use of any tool will have lasting impacts on the people using it, in turn affecting their decisions on what to prioritise in each iteration.

        • JimmyBuckets a day ago ago

          This was a fantastic reply, thank you! I fully agree with you but I wanted to ask for elaboration because your last comment was so eloquent.

      • jay_kyburz 3 days ago ago

        Not 100% sure what Will was trying to say, but what jumped into my head was perhaps that we'll see quality sites try and distinguish themselves by being short and direct.

        Long-winded writing will become a liability.

  • Devasta 3 days ago ago

    In Star Trek, one thing that I always found weird as a kid is they didn't have TVs. Even if the holodeck is a much better experience, I imagine sometimes you would want to watch a movie and not be in the movie. Did the future not have works like No Country for Old Men or comedies like Monty Python, or even just stuff like live sports and the news?

    Nowadays we know why the crew of the enterprise all go to live performances of Shakespeare and practice musical instruments and painting themselves: electronic mediums are so full of AI slop there is nothing worth see, only endless deluges of sludge.

    • namrog84 2 days ago ago

      Keep in mind that most of star trek was following the federation and the like. I've always considered them mostly the idealized versions of society or also workaholics and people who genuinely enjoying working as their past time.

      I feel back on their main planets the regular folks probably did a lot more random entertainment

    • palata 3 days ago ago

      That's actually a good point. I'm curious to see if people will keep making selfies everywhere they go after they realize that you can take a selfie at home and have an AI create an image that looks like you are somewhere else.

      "This is me in front of the Statue of Liberty

      - Oh, are you in NYC?

      - Nope, it's a snap filter"

      Somehow selfies should lose value, right?

      • movedx 3 days ago ago

        A selfie is meant to tell us, your audience, a story about you and the journey you’re on. Selfies are a great tool for telling stories, in fact. One selfie can say a thousand words, and then some.

        But a selfie taken and then modified to lie to the audience about your story or your journey is simply a fiction. People create fictions to either lie to themselves or to lie to others. Sometimes they’re not about lying to the audience but just manipulating them.

        People’s viewpoints and perceptions are malleable. It’s easy to trick people into thinking something is true. Couple this with the fact a lot of people are gullible and shallow, and suddenly a selfie becomes a sales tool. A marketing gimmick. Now, finally, take advances in AI to make it easier, faster, and more accessible to make highly believable fictions and yes, as you said, the selfie loses its value.

        But that’s always been the case since and even before Photoshop. Since and before the silicon microprocessor.

        All AI is going to do for selfies is what Photoshop has done for social media “Influencers” — enable more fiction with the goal to transfer wealth from other people.

        • palata 2 days ago ago

          But then if instead of spending 20min taking pictures in front of the Mona Lisa to take the perfect selfie you can actually visit the museum and have an AI generate selfies that tell the story of your visit, will you still care to take them "manually" (with all the filters that still count as "manual")?

          That's what I was thinking: if you spend hours taking selfies during your weekend, and during this time I just enjoy my time and have an AI generate better selfies of myself. What will you do?

          And then when everybody just has an AI generate their story for them, so you know that all the pictures you see are synthesized. Will you care about watching them or will you rather use an app that likes the autogenerated selfies that make sense to you?

  • canxerian 2 days ago ago

    I'm a software dev and I'm tired of LLMs being crowbar'd in to every single product I build and use, to the point where they are unanimously and unequivocally used over better, cheaper and simpler solutions.

    I'm also tired of people who claim to be excited by AI. They are the dullest of them all.

    • Spivak 2 days ago ago

      And so the counterculture begins angst angst angst! Let's find an empty rooftop, I'll take a long drag off my vape, a swig of my forty, and you can talk about how all those people down there using AI just don't get it mannnnn. How the big corporations are lying to them and convincing them to buy API credits they don't even need!

  • kingkongjaffa 3 days ago ago

    Generally, the people who seriously let genAI write for them without copious editing, were the ones who were bad writers, with poor taste anyway.

    I use GenAI everyday as an idea generator and thought partner, but I would never simply copy and paste the output somewhere for another person to read and take seriously.

    You have to treat these things adversarially and pick out the useful from the garbage.

    It just lets people who created junk food, create more junk food for people who consume junk food. But there is the occasional nugget of good ideas that you can apply to your own organic human writing.

  • KaiserPro 3 days ago ago

    I too am not looking forward to industrial scale job disruption that AI brings.

    I used to work in VFX, and one day I want to go back to it. However I suspect that it'll be entirely hollowed out in 2-5 years.

    The problem is that like typesetting, typewriting or the wordprocessor, LLMs makes writing text so much faster and easier.

    The arguments about handwriting vs type writer are quite analogous to LLM vs pure hand. People who are good and fast at handwriting hated the type writer. Everyone else embraced it.

    The ancient greeks were deeply suspicious about the written word as well:

    > If men learn this[writing], it will implant forgetfulness in their souls. They will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks.

    I don't like LLMs muscling in and kicking me out of things that I love. but can I put the genie back in the bottle? no. I will have to adapt.

    • precompute 2 days ago ago

      There is a limit, though. Language has become worse with the popularization of social media. Now, thinking will because most people will be content with letting machines think for them. The brain requires stimulation in the areas it wants to excel in, and this expertise informs both action and taste in those areas. If you lose one, you lose both.

    • eleveriven 3 days ago ago

      Yep, there is a possibility that entire industries will be transformed, leading to uncertainty about employment

    • BeFlatXIII 3 days ago ago

      > People who are good and fast at handwriting hated the type writer. Everyone else embraced it.

      My thoughts exactly whenever I see true artists ranting about how everyone hates AI art slop. It simply doesn't align with my observations of people having a great time using it. Delusional wishful thinking.

      • precompute 2 days ago ago

        The march towards progress asks for idealism from the people that make Art (in all forms). It's not about "hating" AISlop, but rather about how it does not allow people to experience better art.

  • throwaway13337 3 days ago ago

    I get it. The last two decades have soured us all on the benefits of tech progress.

    But the previous decades were marked by tech optimism.

    The difference here is the shift to marketing. The largest tech companies are gatekeepers for our attention.

    The most valuable tech created in the last two decades was not in service of us but to manipulate us.

    Previously, the customer of the software was the one buying it. Our lives improved.

    The next wave of tech now on the horizon gives us an opportunity to change the course we’ve been on.

    I’m not convinced there is political will to regulate manipulation in a way that does more good than harm.

    Instead, we need to show a path to profitability through products that are not manipulative.

    The most effective thing we can do, as developers and business creators, is to again make products aligned with our customers.

    The good news is that the market for honest software has never been better. A good chunk of people are finally learning not to trust VC-backed companies that give away free products.

    Generative AI provides an opportunity for tiny companies to provide real value in a new way that people will pay for.

    The way forward is:

    1. Do not accept VC. Bootstrap.

    2. Legally bind your company to not productizing your customer.

    3. Tell everyone what you’re doing.

    It’s not AI that’s the problem. It’s the way we have been doing business.

    • zahlman 2 days ago ago

      >The way forward is:

      4. Get tragedy-of-the-commons-ed out of existence.

  • franciscop 3 days ago ago

    > "Yes, I realize that thinking like this and writing this make me a Neo-Luddite in your eyes."

    Not quite, I believe (and I think anyone can) both that AI will likely change the world as we know it, AND that right now it's over-hyped to a point that it gets tiring. For me this is different from e.g. NFTs, "Big Data", etc. where I only believed they were over-hyped but saw little-to-no substance behind them.

  • senko 3 days ago ago

    What's funny to me is how many people protest AI as a means to generate incorrect, misleading or fake information, as if they haven't used internet in the past 10-15 years.

    The internet is choke full of incorrect, fake, or misleading information, and has been ever since people figured out they can churn out low quality content in-between google ads.

    There's a whole industry of "content writers" who write seemingly meaningful stuff that doesn't bear close scrutiny.

    Nobody has trusted product review sites for years, with people coping by adding "site:reddit" as if a random redditor can't engage in some astroturfing.

    These days, it's really hard to figure out whom (in the media / on the net) who to trust. AI has just made that long-overdue fact into the spotlight.

  • thewarrior 3 days ago ago

    I’m tired of farming - Someone in 5000 BC

    I’m tired of electricity - Someone in 1905

    I’m tired of consumer apps - Someone in 2020

    The revolution will happen regardless. If you participate you can shape it in the direction you believe in.

    AI is the most innovative thing to happen in software in a long time.

    And personally AI is FUN. It sparks joy to code using AI. I don’t need anyone else’s opinion I’m having a blast. It’s a bit like rails for me in that sense.

    This is HACKER news. We do things because it’s fun.

    I can tackle problems outside of my comfort zone and make it happen.

    If all you want to do is ship more 2020s era B2B SaaS till kingdom come no one is stopping you :P

    • rsynnott 3 days ago ago

      I'm tired of 3d TV - Someone in 2013 (3D TV, after a big push by the industry in 2010, peaked in 2013, going into a rapid decline with the last hardware being produced in 2016).

      Sometimes, the hyped thing doesn't catch on, even when the industry really, really wants it to.

      • falcor84 3 days ago ago

        That's an interesting example. I would argue that 3D TV as a "solution" didn't work, but 3D as a "problem" is still going strong, and with new approaches coming out all the time (most recently Meta's announcement of the Orion AR glasses), we'll gradually see extensive adoption of 3D experiences, which I expect will eventually loop back to some version of 3D films.

        EDIT: To clarify my analogy, GenAI is definitely a "problem" rather than a particular solution, and as such I expect it to have longevity.

        • rsynnott 3 days ago ago

          > To clarify my analogy, GenAI is definitely a "problem" rather than a particular solution, and as such I expect it to have longevity.

          Hrm, I'm not sure that's true. "An 'AI' that can answer questions" is a problem, but IMO it's not at all clear that LLMs, with their inherent tendency to make shit up, are an appropriate solution to that problem.

          Like, there have been previous non-LLM chatbots (there was a small bubble based on them a while back, in which, for a few months, everyone was claiming to be adding chat to their things; it kind of came to a shuddering halt with Microsoft Tay). It seems slightly peculiar to assume that LLMs are the ultimate answer to the problem, especially as they are not actually very good at it (in some ways, they're worse than the old-gen).

          • falcor84 3 days ago ago

            Let's not focus on "LLM" then, I agree that it's just a step towards future solutions.

      • thewarrior 3 days ago ago

        AI isn’t 3D TV

        • rsynnott 3 days ago ago

          Ah, but, at least for generative AI, that kind of remains to be seen, surely? For every hyped thing that actually is The Future (TM), there are about ten hyped things which turn out to be Not The Future due to practical issues, cost, pointlessness once the novelty wears off, overpromising, etc. At this point, LLMs feel like they're heading more in that direction.

          • thewarrior 2 days ago ago

            I use generative AI every day.

            • orthecreedence 2 days ago ago

              And 5 years ago, people used blockchain to operate a toaster. It remains to be seen the applications that are optimal for LLMs and the ones where it's being shoehorned into every conceivable place because "AI."

    • StefanWestfal 3 days ago ago

      At no point does the author suggest that AI is not going to happen or that it is not useful. He expresses frustration with marketing, false promises, pitching of superficial solutions for deep problems, and the usage of AI to replace meaningful human interactions. In short, the text is not about the technology itself.

      • thewarrior 3 days ago ago

        That’s always the case with any new technology. Tech isn’t going to make everyone happy or achieve world peace.

        • lewhoo 3 days ago ago

          And yet this is precisely what people like Altman say about their product. That's pretty tiring.

    • LunaSea 3 days ago ago

      > The revolution will happen regardless. If you participate you can shape it in the direction you believe in

      This is incredibly naïve. You don't have a choice.

    • vouaobrasil 3 days ago ago

      "I'm tired of the atomic bomb" - Someone in 1945.

      Oh wait, news flash, not all technological developments are good ones, and we should actually evaluate each one individually.

      AI is shit, and some people having fun with it does not balance against it's unusually efficacy in turning everything into shit. Choosing to do something because it's fun without regard to the greater consequences is the sort of irresponsibility that has gotten human society into such a mess in the first place.

      • thewarrior 3 days ago ago

        Atomic energy has both good and bad uses. People being tired of atomic energy has held back GDP growth and is literally deindustrialising Germany.

  • wrasee 3 days ago ago

    For me what’s important is that you are able to communicate effectively. If you use language tools, other tools or even a real personal assistant if you effectively communicate the point that ultimately is yours in the making I expect that that is ultimately is what is important and will ultimately win out.

    Otherwise this is just about style. That’s important where personal creative expression is important, and in fairness to the article the author hits on a few good examples here. But there are a lot of times where personal expression is less important or even an impediment to what’s most important: communicating effectively.

    The same-ness of AI-speak should also diminish as the number and breadth of the technologies mature beyond the monoculture of ChatGPT, so I’m also not too worried about that.

    An accountant doesn’t get rubbished if they didn’t add up the numbers themselves. What’s important is that the calculation is correct. I think the same will be true for the use of LLMs as a calculator of words and meaning.

    This comment is already too long for such a simple point. Would it have been wrong to use an LLM to make it more concise, to have saved you some of your time?

    • t43562 3 days ago ago

      The problem is that we haven't invented AI that reads the crap that other AIs produce - so the burden is now on the reader to make sense of whatever other people lazily generate.

      • Gigachad 3 days ago ago

        I envision a future where the internet is entirely bots talking to each other and people have just gone outside to talk face to face, the only place that’s actually real.

      • danielbln 3 days ago ago

        But we do. The same AI that generates can read and reduce/summarize/evaluate.

        • t43562 3 days ago ago

          great so we can stop wasting our time and let the bots waste cpu cycles generating and consuming junk.

          I don't want to read work that someone else couldn't be bothered to write.

  • slicktux 2 days ago ago

    I like AI… for me it’s a great way of getting the ‘average’ of a broad array of answers to a single question but without all the ads I would get from googling and searching pages. For example, when searching for times to cook or grams of sugar to add to my gallon of iced tea…or instant pot cooking times.

    For more technical things STEM related it’s a good way to get a base line or direction; enough for me to draw my own conclusions or implementations…it’s like a rubber ducky I can talk to.

  • ryanjshaw 3 days ago ago

    > There are no shortcuts to solving these problems, it takes time and experience to tackle them.

    > I’ve been working in testing, with a focus on test automation, for some 18 years now.

    OK the first thought that came to my mind reading this: sounds like a opportunity to build an AI-driven product.

    I've been using Cursor daily. I use nothing else. It's brilliant and I'm very happy. If I could have Cursor for Well-Designed Tests I'd be extra happy.

  • xena 2 days ago ago

    My last job made me shill for AI stuff because GPUs have a lot of income potential. One of my next ones is going to make me shill for AI stuff because it makes people deal with terrifying amounts of data.

    I understand why this is the case, but it's still kinda disappointing. I'm hoping for an AI winter so that I can talk about normal uses of computers again.

    • cult_of_we 2 days ago ago

      reviving an old throwaway:

      the last two years have been incredibly depressing in the space of dedicated hardware for doing high performance computing tasks.

      all the air has been sucked out of the room because the world can’t get enough of generating more text that no one has any meaningful use for besides hyping up their product whose primary focus is how it “leverages AI”.

      and in the midst of all of this, I’m seeing these same technologies dramatically accelerate problems in short fiction, science fiction and fantasy, and education.

      it will be absurdly bleak if the grotesque reality we’re creating destroys the things that made go into science & engineering in the first place…

  • est 3 days ago ago

    AI acts like a bad intern these days, and should be treated like one. Give it more guidance and don't make important tasks depending it.

  • jeswin 3 days ago ago

    > But I’m pretty sure I can do without all that ... test cases ...

    Test cases?

    I did a Show HN [1] a couple of days back for a UI library built almost entirely with AI. Gpt-o1 generated these test cases for me: https://github.com/webjsx/webjsx/tree/main/src/test - in minutes instead of days. The quality of test cases are comparable to what a human would produce.

    75% of the code I've written in the last one year has been with AI. If you still see no value in it (especially with things like test cases), I'm afraid you haven't figured out how to use AI as a tool.

    [1]: https://news.ycombinator.com/item?id=41644099

    • a5c11 3 days ago ago

      That means the code you wrote must have been pretty boring and repeatable. No way AI would produce code for, for example, proprietary hardware solutions. Try AI with something which isn't already on StackOverflow.

      Besides, I'd rather spent hours on writing a code, than trying to explain a stupid bot what I want and reshape it later anyway.

      • jeswin 3 days ago ago

        90% of projects are boring and somewhat repeatable. I've used it for generating codegen tools (https://github.com/codespin-ai/codespin), vscode plugins (https://github.com/codespin-ai/codespin-vscode-plugin), servers in .Net (https://github.com/lesser-app/tankman), and in about a dozen other work projects over the past year.

        > Besides, I'd rather spent hours on writing a code, than trying to explain a stupid bot what I want and reshape it later anyway.

        I have other things to do with my hours. If something gets me what I want in minutes, I'll take it.

      • nicce 3 days ago ago

        Also the most useful and expensive testcases require understanding of the whole project. You need to validate the functionality from end-to-end and also that system does not crash for unexpected things and so on. AIs don't have that level understanding as "a whole" yet.

        For sure, simple unit tests are easy to generate with AI.

    • righthand 3 days ago ago

      Your UI library is just a stripped down React clone. The code wasn’t generated but rather copied, these test cases and functions are identical to React. I could have done the same thing with a “build your own react” article. This is what I don’t get about the LLM hype is that 99% of the examples are people claiming they invented something new with it. We had code generators before LLM-hype took off. Now we have code generators that just steal work and repurpose it as something claimed original.

      • buddhistdude 3 days ago ago

        no programmer in my company invents things often

        • righthand 3 days ago ago

          And so you would accept “hey I spun up a react-create-element project, but instead of React I asked an LLM to copy the parts I needed for react so we have another dependency to maintain instead of tree shaking with webpack” as a useful work?

          • buddhistdude 3 days ago ago

            not necessarily, but it's not less creative and inventive than what I believe most programmers are doing most of the time. there are relatively few people who invent new patterns (and they actually might be overrepresented on this website). the rest learns and applies those patterns.

            • righthand 3 days ago ago

              Right that is well understood, but having an LLM compile together functions under the guise of custom built library is hardly a software engineer applying established patterns.

              • jeswin 3 days ago ago

                It is exactly the same as applying established patterns - patterns are what the LLMs have trained on.

                It seems you haven't really used LLMs for coding. They're super useful and improving every month - you don't have to take my word for it.

                And btw - codespin (https://github.com/codespin-ai/codespin) along with the VSCode plugin is what I use for AI-assisted coding many times. That was also generated via an LLM. I wrote it last year, and at that point there weren't many projects it could copy from.

                • righthand 3 days ago ago

                  I don’t need to use an LLM for coding because my projects where I would need an LLM don’t include things already existing that would be a waste of time no matter how efficiently I could do it.

                  Furthermore it is an application of principles but the application was done a long time ago by someone else, not the LLM and not you. As you claimed you did none of the work, only went in and tweak these applied principles.

                  I’ll tell you what slows me down and why I don’t need an LLM. I had a task to migrate some legacy code from one platform to another, I made the PRs, added some tests, and prepared the deploy files as instructed in the READMEs of the platform I was migrating to. This took me 3-4 days. It then took 26 days to get the code deployed because 5 people are gate keepers of Helm charts and AWS policies.

                  Software development isn’t slow because I had to read docs and understand what I’m building, it is slow because we’ve enabled AWS to create red tape and gatekeepers. Your LLM doesn’t speed up that process.

                  > They're super useful and improving every month - you don't have to take my word for it.

                  And each month that goes by that you continue to invest, your value decreases and you will be out of a job. As you have demonstrated, you don’t need to know how to build a UI library or even that your UI library you “generated” is just a reskin of something else. If it’s so simple and amazing that you don’t need to know anything, why would I keep you around?

                  Here’s a fun anecdote, sometimes I pair with my manager when working through something pretty causally. I need to rubber duck an idea or am stuck on finding the documentation for a construct. My manager will often take my problem and chat with an LLM for a few minutes. Every time I end up finding the answer before he finishes his chat. Most of the time his solution is often wrong because by nature LLMs are scrambling the possible results to make it look like a unique solution.

                  Congrats on impressing yourself that LLM can be a slightly accurate code generator. How does paying a company to do something TabNine was doing years ago make me money? What will you do with all your free time generate more useless dependencies?

                  • jeswin 2 days ago ago

                    If you think TabNine was doing years ago what LLMs are doing today, then I can't convince you.

                    We'll talk in a year or so.

                    • righthand 2 days ago ago

                      No we won’t, we’ll all be laid off and some young devs will be hired 1/3 the cost to replace your ui library with something else spit out of an llm which specifically tuned to cobble together js apps.

        • precompute 2 days ago ago

          It's a matter of truth and not optics.

    • codelikeawolf 3 days ago ago

      > The quality of test cases are comparable to what a human would produce.

      This has actually been a problem for me. I spent a lot of time getting good at writing tests and learning the best approaches to testing things. Most devs I've worked with treat tests as second-class citizens. They either try to treat them like production code and over-abstract everything, which makes the tests difficult to navigate, or they dump a bunch of crap in a file, ignore any conventions or standards, and write superfluous test cases that don't provide any value (if I see one more "it renders successfully" test in a React project, I'm going to lose it).

      The tests generated by these LLMs is comparable in quality to what most humans have produced, which isn't saying much. Getting good at testing isn't like getting good at most things. It's sort of thankless, and when I point out issues in the quality of the tests, I imagine I'm getting some eye rolls. Who cares, they're just tests, at least we have them, right? But it's code you have to read and maintain, and it will break, and you'll have to fix it. I'm not saying I'm a testing wizard or anything like that. But I really sympathize with the author, because there's a lot of crappy test code coming out of these LLMs.

      Edit: grammar

  • snowram 3 days ago ago

    I quite like some parts of AI. Ray reconstruction and supersampling methods have been getting incredible and I can now play games with twice the frames per seconds. On the scietific side, meteorological predictions and protein folding have made formidable progresses thanks to it. Too bad this isn't the side of AI that is in the spotlight.

  • heystefan 2 days ago ago

    Not sure why this is front page material.

    The thinking is very surface level ("AI art sucks" is the popular opinion anyway) and I don't understand what the complaints are about.

    The author is tired of AI and likes movies created by people. So just watch those? It's not like we are flooded with AI movies/music. His social network shows dull AI-generated content? Curate your feed a bit and unfollow those low effort posters.

    And in the end, if AI output is dull, there's nothing to be afraid of -- people will skip it.

  • socksy 18 hours ago ago

    Or are you just going to read prompt results out loud for 40 minutes, too? I hope not, but we will not take the chance.

    I did actually attend a talk at a conference a few years ago where someone did this. It wasn't with LLMs, but with a Markov chain, and it was art. A bizarre experience, but unfortunately not recorded (at the request of the speaker).

    Obviously the big difference was that this was not kept secret at all (indeed, some of the generated prompts included sections where he was instructed to show his speaker notes to the audience, where we could see the generated text scroll up the screen).

  • monkeydust 3 days ago ago

    AI is not just GenAI, ML sits underneath it (supervised, unsupervised) and that has genuinely delivered value for the clients we service (financial tech) and in my normal life (e.g. photo search, screen grab to text, book recommendations).

    As for GenAI I keep going back to expectation management, its very unlikley to give you the exact answer you need (and if it does then well you job longetivity is questionable) but it can help accelerate your learning, thinking and productivity.

    • falcor84 3 days ago ago

      > ... its very unlikley to give you the exact answer you need (and if it does then well you job longetivity is questionable)

      Experimenting with o1-preview, it quite often gives me the exact answer I need on the first try, and I'm 100% certain that my job longevity is questionable.

      • monkeydust 3 days ago ago

        It has been more hit and miss for me, when it works it can be amazing then I try to show someone, same prompt, different less so amazing answer.

  • throwaway123198 3 days ago ago

    I'm bored of IT. Software is boring, AI included. None of this feels like progress. We've automated away white collar work...but we also acknowledge most white collar work is busy work that's considered a bullcr*p job. We need to get back to innovation in manufacturing, materials etc. i.e. the real world.

    • precompute 2 days ago ago

      Accelerating hamster wheel

  • zombiwoof 2 days ago ago

    the most depressing thing for me is the rush and all out hype. i mean, Apple not only renamed AI "Apple Intelligence" but if you go INTO a Apple Store, it's banner is everywhere, even as a wallpaper on the phones with the "glow"

    But guess what isn't there? An actually shipping IMPLEMENTATION. It's not even ready yet but the HYPE is so overblown.

    Steve Jobs is crying in his grave how stupid everyone is being about this.

  • EMM_386 3 days ago ago

    The one use of AI that annoys me the most is Google trying to cram it into search results.

    I don't want it there, I never look at it, it's wasting resources, and it's a bad user experience.

    I looked around a bit but couldn't see if I can disable that when logged in. I should be able to.

    I don't care what the AI says ... I want the search results.

    • tim333 3 days ago ago

      ublock origin block element seems to work. (element ##.h7Tj7e)

      I quite like the thing personally.

  • mark_l_watson 2 days ago ago

    Nice thoughts. Since 1982 half my work has been in one of the fields loosely called AI and the other half more straight up software development. After mostly been doing deep learning and now LLM for almost ten years, I miss conventional software development.

    When I was swimming this morning I thought of writing a RDF data store with partial SPARQL support in Racket or Common Lisp - basically trade a year of my time to do straight up design and coding, for something very few people would use.

    I get very excited by shiny new things like advance voice interface for ChatGPT and NoteBookLM, both fine product ideas and implementations, but I also feel some general fatigue.

  • ricardobayes 3 days ago ago

    I personally don't see AI as the new Internet, as some claim it to be. I see it more as the new 3D-printing.

  • seydor 3 days ago ago

    > same massive surge I’ve seen in the application of artificial intelligence (AI) to pretty much every problem out there

    I have not. Perhaps programming on the initial stages is the most 'applied' AI but there is still not a single major AI movie and no consumer robots.

    I think it's way too early to be tired of it

  • Smithalicious 3 days ago ago

    Do people really view so much content of questionable provenance? I read a lot and look at a lot of art, but what I read and look at is usually shown to me by people I know, created by authors and artists with names and reputations. As a result I basically never see LLM-written text and only occasionally AI art, and when I see AI art at least it was carefully guided by a real person with an artistic vision still (the deep end of AI image generation involves complex tooling and a lot of work!) and is easily identified as such.

    All this "slop apocalypse" the-end-is-neigh stuff strikes me as incredibly overblown, affecting mostly only "open web" mass social media platforms which were already 90% industrially produced slop for instrumental purposes anyways.

  • me551ah 3 days ago ago

    People talk about 'AI' as if stackoverflow didn't exist. Re-inventing the wheel is something that programmers don't do anymore. Most of the time, someone somewhere has solved the problem that you are solving. Programming earlier used to be about finding these solutions and repurposing them for your needs. Now it has changed to asking AI, the exact question and it being a better search engine.

    The gains to programming speed and ability are modest at best, the only ones talking about AI replacing programmers are people who can't code. If anything AI will increase the need for more programmers, because people rarely delete code. With the help of AI, code complexity is going to go through the roof, eventually growing enough to not fit into the context windows of most models.

    • archargelod 3 days ago ago

      > Now it has changed to asking AI, the exact question and it being a better search engine.

      Except that you get mostly the wrong answers. And it's not too bad when it's obviously wrong or you already know the answer. It is bad and really bad when you're noob and trying to ask AI about stuff you don't know yet. How would you be able to discern a hallucination from statistics bias from truth?

      It is inherent problem of LLMs and no amount of progress would be able to solve it.

      And it's only gonna get worse, with human information rapidly being consumed and regurgitated in 100x more volume. In 10 years there will be no google, there won't be the need to find a written article. Instead, you will generate a new one in couple clicks. And we will treat as truth, because there might as well not be any.

  • alentred 3 days ago ago

    I am tired of innovations being abused. AI itself is super exciting and fascinating. But, it being abused -- to generate content to drive more ad-clicks, or the "Now better with AI" promise on every landing page, etc. etc. -- that I am tired of, yes.

  • thih9 3 days ago ago

    Doesn’t that kind of change follow the overall trend?

    We continuously shift to higher level abstractions, trading reliability for accessibility. We went from binary to assembly, then to garbage collection and to using electron almost everywhere; ai seems yet another step.

  • sensanaty 3 days ago ago

    What I'm really tired of is people completely misrepresenting the Luddites as if they were simply anti-progress or anti-technology cult or whatever and nothing else. Kinda hilariously sad that the propaganda of the time managed to win over the genuine concerns that Luddites had about inhumane working environments & conditions.

    It's very telling that the rabid AI sycophants are painting anyone who has doubts about the direction AI will take the world as some sort of anti-progress lunatic, calling them luddites despite not knowing the actual history involved. The delicious irony of their stances aligning with the people who were okay with using child labor and mistreating workers en-masse is not lost on me.

    My hope is that AI does happen, and that the first people to rot away because of it are exactly the AI sycophants hell-bent on destroying everything in the name of "progress", AKA making some rich psychopaths like Sam Altman unfathomably rich and powerful to the detriment of everyone else.

    A good HN thread on the topic of luddites, as it were: https://news.ycombinator.com/item?id=37664682

    • CatWChainsaw 2 days ago ago

      Thankfully, even here I've seen more faithful discussion of the Luddites and more people are willing to bring up their actual history whenever some questionably-uninformed techbro here uses the typical pejorative insult.

  • eleveriven 3 days ago ago

    AI is a tool, and like any tool, it's only as good as how we choose to use it.

    • vouaobrasil 3 days ago ago

      No, that is wrong. We can't "choose" because too many people have instincts. And people always have the instinct to use new technology to gain incremental advantages over others, and that in turn puts pressure on everyone to use it. That prisoner's dilemma situation means that without a firm and larger guiding moral philosophy, we really can't choose because instinct takes over choice. In other words, the way technology is used in modern society is not a matter of choice but is largely autonomous and goes beyond human choice. (Of course, a few individuals will choose but the average effect is likely to be negative.)

      • syncr0 3 days ago ago

        More people need to read this / think this point through. In a post Excel world, could any accountant get a job not knowing Excel? No matter how good they were "on paper". Choice becomes a self aggrandizing illusion, reality eventually asserts itself.

        With attention spans shrinking, publishers who prioritize quantity over quality get clicks, which generates ad revenue, which keeps their lights on while their competitors doing quality in depth, nuanced writing go out of business.

        It feels like a game of chess closing in on you no matter how much you physically want to fight your way out and flip the board over.

      • mitthrowaway2 2 days ago ago

        The Goddess answers: "What is the matter with that, if it’s what you want to do?"

        Malaclypse: "But nobody wants it! Everybody hates it!"

        Goddess: "Oh. Well, then stop."

        -- https://slatestarcodex.com/2014/07/30/meditations-on-moloch/

  • bane 2 days ago ago

    I feel sorry for the young hopeful data scientists who got into the field when doing data science was still interesting and 95% of their jobs hadn't turned over into tuning the latest LLM to poorly accomplish some random task an executive thought up.

    I know a few of them and once they started riding the hype curve for real, the luster wore off and they're all absolutely miserable in their jobs and trying to look for exits. The fun stuff, the novel DL architectures, coming up with clever ways to balance datasets or label things...it's all just dried up.

    It's even worse than the last time I saw people sadly taking the stairs down the other end of the hype cycle when bioinformatics didn't explode into the bioeconomy that had been promised or when blockchain wasn't the revolution in corporate practices that CIOs everywhere had been sold on.

    We'll end up with this junk everywhere eventually, and it'll continue to commoditize, and that's why I'm very bearish on companies trying to make LLMs their sole business driver.

    AI is a feature, not the product.

  • drillsteps5 2 days ago ago

    I've always thought that "actual" AI (I guess it's mostly referred to as "General AI" now) will require a feedback loop and continuous unsupervised learning. As in the system decides on an action, executes, receives the feedback, assesses the situation in relation to the goals (positive and negative reinforcement), corrects (adjusts the network), and the cycle repeats. This is not the case with current generative AI, where the network is trained (reinforced learning) and then the snapshot of the trained network is used to produce output. This works for some limited number of applications but this will never produce General AI, because there's no feedback loop. So it's a bit of a gimmick.

  • unraveller 3 days ago ago

    If you go back to the earliest months of the audio & visual recording medium it was also called uncanny, soulless and of dubious quality compared to real life. Until it wasn't.

    I don't care how many repulsive AI slop video clips get made or promoted for shock value. Today is day 1 and day 2 looks far better with none of the parasocial celebrity hangups we used as short-hand for a quality marker - something else will take that place.

  • zone411 3 days ago ago

    The author is in for a rough time in the coming years, I'm afraid. We've barely scratched the surface with AI's integration into everything. None of the major voice assistants even have proper language models yet, and ChatGPT only just introduced more natural, low-latency voices a few days ago. Software development is going to be massively impacted.

    • BoGoToTo 3 days ago ago

      My worry is what happens once large segments of the population become unemployable.

      • anonyfox 3 days ago ago

        You should really have a look at Marx. He literally predicted what will happen when we reach the state of "let machines do all work", and also how this is exactly the way that finally implodes capitalism as a concept. The major problem is he believed the industrial revolution will automate everything to such an extend, which it didn't, but here we are with a reasonable chance that AI will do the trick finally.

        • CatWChainsaw 2 days ago ago

          It may implode capitalism as a concept, but the people who most benefit from it and hold the levers of power will also have their egos implode, which they cannot stand. Like even Altman has talked about UBI and a world of prosperity for all (although his latest puff piece says we just can't conceive of the jobs we'll have but w/e), but anyone who's "ruling" the current world is going to be the least prepared for a world of abundance and happiness for all where money is meaningless. They won't walk joyfully into the utopia they pandered in yesteryear, they'll try to prop up a system that positions them as superior to everyone else, and if it means the world goes to hell, so be it.

          (I mean, there was that one study that used a chatbot to deradicalize people, but when you're the one in power, your mental pathologies are viewed as virtues, so good luck trying to change them as people.)

  • AlienRobot 3 days ago ago

    I'm tired of technology.

    I don't think there has ever been a single tech news that brought me joy in all my life. First I learned how to use computers, and then it has been downhill ever since.

    Right now my greatest joy is in finding things that STILL exist rather than new things, because the things that still exist are generally better than anything new.

    • syncr0 3 days ago ago

      Reminds me of the way the way the author of "Zen and the Art of Motorcycle Maintenance" takes care of his leather gloves and they stay with him on the order of decades.

  • pilooch 3 days ago ago

    By AI here, it is meant generative systems relying on neural networks and semi/self supervised training algorhms.

    It's a reduction of what AI is as a computer science field and even of what the subfield of generative AI is.

    On a positive note, generative AI is a malleable statiscally-geounded technology with a large applicative scope. At the moment the generalistic commercial and open models are "consumed" by users, developers etc. But there's a trive of forthcoming, personalized use cases and ideas to come.

    It's just we are still more in a contemplating phase than a true building phase. As a machine learnist myself, I recently replaced my spam filter with a custom fineruned multimodal LLM that reads my emails a pure images. And this is the early early beginning, imagination and local personalization will emerge.

    So I'd say, being tired of it now is missing much later. Keep the good spirit on and think outside the box, relax too :)

    • layer8 3 days ago ago

      > I recently replaced my spam filter with a custom fineruned multimodal LLM that reads my emails a pure images.

      That doesn’t sound very energy efficient.

  • richrichardsson 3 days ago ago

    What frustrates me is the bandwagoning, and thus the awful homogeny in all social media copy these days, since it seems everyone is using an LLM to generate their copy writing, and thus 99.999% of products will "elevate" something or the other, and there are annoying emojis scattered throughout the text.

    • postalcoder 3 days ago ago

      i’m at the point where i don’t trust any markdown formatted text. it’s actually become an anti signal which is very sad because i used to consider it a signal of partial technical literacy.

  • shswkna 2 days ago ago

    The elephant in the room is this question:

    What do we value? What is our value system made up of?

    This is, in my opinion, the Achille‘s heel of the current trajectory of the West.

    We need to know what we are doing it for. Like the OP said, he is motivated by the human connectedness that art, music and the written word inspire.

    On the surface, it seems we value the superficial smuckness of LLM-produced content more.

    This is a facade, like so many other superficial artifacts of our social life.

    Imperfect authenticity will soon (or sometime in the future) become a priceless ideal.

  • CodeCompost 3 days ago ago

    We're all tired of it, but to ignore it is to be unemployed.

    • kunley 3 days ago ago

      With all due respect, that's seems like a cliche, repeated maybe because others repeat that already.

      Working in IT operations (mostly), I haven't seen literally any case of someone's job in danger because of not using "AI".

    • sph 3 days ago ago

      Depends on which point of your career. With 18 years of experience, consulting for tech companies, I can afford to be tired of AI. I don't get paid to write boilerplate code, and avoiding anyone knocking at the door with yet another great AI-powered idea makes commercial sense, just like I have ignored everyone wanting to build the next blockchain product 5 years ago, with no major loss of income.

      Also, running a bootstrapped business, I have bigger fishes to fry than playing mentor to Copilot to write a React component or generating bullshit copy for my website.

      I'm not sure we need more FUD saying that the choice is between AI or unemployment.

      • Al-Khwarizmi 3 days ago ago

        I find comparisons between AI and blockchain very misleading.

        Blockchain is almost entirely useless in practice. I have no reason to disdain it, in fact I was active in crypto around 10-12 years ago when I was younger and more excited about tech than now, and I had fun. But the fact is that the utility that it has brought to most of society is essentially to have some more speculative assets to gamble on, at ludicrous energy and emissions costs.

        Generative AI, on the other hand, is something I'm already using almost every day and it's saving me work. There may be a bubble but it will be more like the dotcom bubble (i.e., not because the tech is useless, but because many companies jump to make quick bucks without even knowing much about the tech).

      • Applejinx 3 days ago ago

        I mean, to be selfish at apparently a dicey point in history, go ahead and FUD and get people to believe this.

        None of my useful work is AI-able, and some of the useful work is towards being able to stand apart from what is obviously generated drivel. Sounds like the previous poster with the bootstrapped business is in a similar position.

        Apparently AI is destroying my potential competition. That seems unfair, but I didn't tell 'em to make such an awful mistake. How loudly am I obliged to go 'stop, don't, come back'?

    • snickerer 3 days ago ago

      How all those cab drivers who ignore autonomous driving are now unemployed?

      • anonzzzies 3 days ago ago

        When it's for sale everywhere (I cannot buy one) and people trust it, all cab drivers will be gone. Unemployed will depend on the resilience, but unlike cars replacing coach drivers, there is not really a similar thing a cab driver can pivot to.

        • snickerer 3 days ago ago

          Yes, we can imagine a future where all cab drivers are unemployed, replaced by autonomous driving. However, we don't know when this will happen, because autonomous driving is a much harder problem than the hype from a few years ago suggested. There isn't even proof that autonomous driving will ever be able to fully replace human drivers.

          • testval123 2 days ago ago

            Have you taken a Waymo recently? I would take it over a human driver much of the time.

    • sunaookami 3 days ago ago

      Speak for yourself.

    • kasperni 3 days ago ago

      > We're all tired of it,

      You’re feeling tired of AI, but let’s delve deeper into that sentiment for a moment. AI isn’t just a passing trend—it’s a multifaceted tool that continues to elevate the way we engage with technology, knowledge, and even each other. By harnessing the capabilities of artificial intelligence, we allow ourselves to explore new frontiers of creativity, problem-solving, and efficiency.

      The interplay between human intuition and AI’s data-driven insights creates a dynamic that enriches both. Rather than feeling overwhelmed by it, imagine the opportunities—how AI can shoulder the burdens of mundane tasks, freeing you to focus on the more nuanced, human elements of life.

      /s

  • nuc1e0n a day ago ago

    I tried to learn AI frameworks. I really did. But I just don't care about them. AI as it is today just isn't useful to me. Databases and search engines are reliable. The output of AI models is totally unreliable.

  • nasaeclipse 2 days ago ago

    At some point, I wonder if we will go more analog again. How do we know if a book was written by a human? Simple, he used a typewriter or wrote it by hand!

    Photos? Real film.

    Video.... real film again lol.

    I think that may actually happen at some point.

    • famahar 2 days ago ago

      I'm already starting to embrace it. Content overload through subscription platforms makes it hard for me to choose. My phone being an everything machine always distracts me l. I'm also tired of algorithmic recommendations. I bought a cassette player and find a lot of joy discovering music at record shops and browsing around bandcamp for some cassettes in genres I like.

      • jprete 2 days ago ago

        I realized a few weeks ago that broadcast TV had at least one useful advantage over streaming - you don't have the option to pause it, and that makes it much easier to maintain attention.

  • Janicc 3 days ago ago

    Without any sort of AI we'd probably be left with the most exciting yearly releases being 3-5% performance increases in hardware (while being 20% more expensive of course), the 100000th javascript framework and occasionally a new windows which everybody hates. People talk about how population collapse is going to mess up society, but I think complete stagnation in terms of new consumer goods/technology are just as likely to do the deed. Maybe AI will fail to improve from this point, but that's a dark future to imagine. Especially if it's for the next 50 years.

    • siffin 3 days ago ago

      Neither of those things will end society, they aren't even issues in the grand scale of things.

      Climate change and biosphere collapse, on the other hand, are already ending society and definitely will, no exceptions possible - unless someone is capable of performing several miracles.

  • pech0rin 3 days ago ago

    As an aside its really interesting how the human brain can so easily read an AI essay and realize its AI. You would think that with the vast corpus these models were trained on there would be a more human sounding voice.

    Maybe it's overfitting or maybe just the way models work under the hood but any time I see AI written stuff on twitter, reddit, linkedin its so obvious its almost disgusting.

    I guess its just the brain being good at pattern matching, but it's crazy how fast we have adapted to recognize this.

    • Jordan-117 3 days ago ago

      It's the RLHF training to make them squeaky clean and preternaturally helpful. Pretty sure without those filters and with the right fine-tuning you could have it reliably clone any writing style.

      • llm_trw 3 days ago ago

        One only need to go to the dirtier corners of the llm forums to find some _very_ interesting voices there.

        To quote someone from a tor bb board: my chat history is illegal in 142 countries and carries the death penalty in 9.

      • bamboozled 3 days ago ago

        But without the RLHF aren’t they less useful “products”?

    • infinitifall 3 days ago ago

      Classic survivorship bias. You simply don't recognise the good ones.

    • carlmr 3 days ago ago

      >Maybe it's overfitting or maybe just the way models work under the hood

      It feels more like averaging or finding the median to me. The writing style is just very unobtrusive. Like the average TOEFL/GRE/SAT essay style.

      Maybe that's just what most of the material looks like.

    • Al-Khwarizmi 3 days ago ago

      Everyone I know claims to be able to recognize AI text, but every paper I've seen where that ability is A/B tested says that humans are pretty bad at this.

    • chmod775 3 days ago ago

      These models are not trained to act like a single human in a conversation, they're trained to be every participant and their average.

      Every instance of a human choosing not to engage or speak about something - because they didn't want to or are just clueless about the topic, is not part of their training data. They're only trained on active participants.

      Of course they'll never seem like a singular human with limited experiences and interests.

      • izacus 3 days ago ago

        The output of those AIs is akin to products and software designed for the "average" user - deep inside uncanny valley, saying nothing specifically, having no specific style, conveying no emotion and nothing to latch on to.

        It's the perfect embodiment of HR/corpspeak which I think its so triggering for us (ex) corpo drones.

    • amelius 3 days ago ago

      Maybe because the human brain gets tired and cannot write at the same quality level all the time, whereas an AI can.

      Or maybe it's because of the corpus of data that it was trained on.

      Or perhaps because AI is still bad at any kind of humor.

  • jillesvangurp 2 days ago ago

    I'm actually excited about AI. With a dose of realism. But I benefit from LLMs on a daily basis now. There are a lot of challenges with LLMs but they are useful tools and we haven't really seen much yet. It's only been two years since chat gpt was released. And mostly we're still consuming this stuff via chat UIs, which strikes me as sub optimal and is something I hope will change soon.

    The increases in context size are helping a lot. The step improvement in reasoning abilities and quality of answers is amazing to watch. I'm currently using chat gpt o1 preview a lot for programming stuff. It's not perfect but I can use a lot of what it generates and this is saving me a lot of time lately. It still gets stuff wrong and there's a lot of stuff it doesn't know.

    I also am mildly addicted to perplexity.ai. Just a wonderful tool and I seem to be getting in the habit of asking it about anything that pops into my mind. Sometimes it's even work related.

    I get that people are annoyed with all the hyperbolic stuff in the media on this topic. But at the same time, the trends here are pretty amazing. I'm running the 3B parameter llama 3.2 model on a freaking laptop now. A nice two year old M1 with only 16GB. It's not going to replace bigger models for me. But I can see a few use cases for running it locally.

    My view is very simple. I'm a software developer. I grew up a few decades ago before there was any internet. I had no clue what a computer even was until I was in high school. Things like Knight Rider, Star Trek, Buck Rogers, Star Wars etc. all featured forms of AIs that are now more or less becoming science fact. C3PO is pretty dumb compared to chat gpt actually. You could build something better and more useful these days. That would mostly an art and crafts project at this point. No special skills required. Just use an LLM to generate the code you need. Nice project for some high school kids.

    Which brings me to my main point. We're the old generation. Part of being old is getting replaced by younger people. Young people are growing up with this stuff. They'll use it to their advantage and they are not going to be held back by old fashioned notions about the way the things should work according to us old people. The thing with Luddites is that they exist in any generation. And then they grow tired, old, and then they die off. I have no ambition to become irrelevant like that.

    I'm planning to keep up with young people as long as I can. I'll have to give that up at some point but not just yet. And right now that includes being clued in as much as I can about LLMs and all the developer plumbing I need to use them. This stuff is shockingly easy. Just ask your favorite LLM to help you get started.

  • shahzaibmushtaq 3 days ago ago

    Over the last few years, AI has become more common than HI generally, not professionally. Professional knows the limits and scopes of their works and responsibilities, not the AI.

    A few days ago, I visited a portfolio website and immediately realized that its English text was written with the help of AI or some online helper tools.

    I love the idea to brainstorming with AI, but copying-pasting anything it throws at you blocks you for adding creativity to the process of making something good.

    I believe using AI must complement HI (or IQ level) rather than mock it.

  • resters 3 days ago ago

    AI (LLMs in this case) reduce the value of human conscientiousness, memory, and verbal and quantitative fluency dramatically.

    So what's left for humans?

    We very likely won't have as many human software testers or software engineers. We'll have even fewer lawyers and other "credentialed" knowledge worker desk jockeys.

    Software built by humans entails humans writing code that has not already been written -- by writing a lot of code that probably has already been written and "linking" it together, etc. When's the last time most of us wrote a truly novel algorithm?

    In the AI powered future, software will be built by humans herding AIs to build it. The AIs will do more of the busy work and the humans will guide the process. Then better AIs will be more helpful at guiding the process, etc.

    Eventually, the thing that will be rewarded is truly novel ideas and truly innovative thinking.

    AIs will make varioius types of innovative thinking less valuable and various types more valuable, just like any technology has done.

    In the past, humans spent most of their brain power trying to obtain their next meal. It's very cynical to think that AI removing busy work will somehow leave humans with nothing meaningful to do, no purpose. Surely it will unlock the best of human potential once we don't have to use our brains to do repetitive and highly pattern-driven tasks just to put food on the table.

    When is the last time any of us paid a laywer to do something truly novel? They dig up boilerplate verbiage, follow standard processes, rinse, repeat, all for $500+ per hour.

    Right now we have "manual work" and "knowledge work", broadly speaking, and both emphasize something that is being produced by the worker (a construction project, a strategic analysis, a legal contract, a diagnosis, a codebase, etc.)

    With AI, workers will be more responsible for outcomes and less rewarded for simply following a procedure that an LLM can do. We hire architects with visual / spatial design skills rather than asking a contractor to just create a living space with a certain amount of square feet. The emphasis in software will be less on the writing of the code and more on the impact of the code.

  • fulafel 2 days ago ago

    Coming from a testing specialist - the facts are right but the framing seems negatively biased. For the generalist who wants to get some Playwright tests up, the low hanging fruit is definitely helped a lot by generative AI. So I emphatically disagree with "there are no shortcuts".

  • amradio 3 days ago ago

    We can’t compare AI with an expert. There’s going to be little value there. AI is about as capable as your average college grad in any subject.

    What makes AI revolutionary is what it does for the novice. They can produce results they normally couldn’t. That’s huge.

    A guy with no development experience can produce working non-trivial software. And in a fraction of the time your average junior could.

    And this phenomenon happens across domains. All of a sudden the bottom of the skill pool is 10x more productive. That’s major.

  • creesch 3 days ago ago

    I fully agree with this sentiment, also interesting to see Bas Dijkstra being featured on this platform.

    Another article that touches on a lot of the issues I have with the place AI currently occupies in the landscape is this excellent article: https://ludic.mataroa.blog/blog/i-will-fucking-piledrive-you...

  • 1vuio0pswjnm7 a day ago ago

    What's the next hype after "AI". And what is next after that. Maybe we can just skip it all.

  • redandblack 3 days ago ago

    Having spent the last decade hearing about trustless-trust,and now faced with this decade in dealing with with no-trust-whatsoever.

    We started with dont-trust-the-government and the dont-trust-big-media and to dont-trust-all-media and eventually to a no-trust-society. Lovely

    Really, waiting for the AI feedback to converge on itself. Get this over soon please

  • izwasm 2 days ago ago

    Im tired of people throwing chatgpt everywhere they can just to say they use AI. Even if it's a useless feature

  • semiinfinitely 2 days ago ago

    A software tester tired of AI? Not surprising given that this is like the first job which AI will replace.

    • yibers 2 days ago ago

      Actually testing by humans will be much more important. AI may be making subtle mistaking the will require more extensive testing, by humans.

  • sedatk 2 days ago ago

    I remember being awestruck at the first avocado chair images DALL-E generated. So many possibilities ahead. But, we ended up with all oversaturated, color-soup, greasy, smooth pictures everywhere because as it turns out, beauty is in the eye of the prompter.

    • WillyWonkaJr 2 days ago ago

      I asked ChatGPT once if its generated images were filtered to reduce the realism and it said that it did. Maybe we don't like the safety filter they are applying to all images.

      • sedatk 2 days ago ago

        The thing is we have no way to know if ChatGPT is telling the truth.

  • warvair 3 days ago ago

    90% of everything is crap. Perhaps AI will make that 99% in the future. OTOH, maybe AI will slowly convert that 90% into 70% crap & 20% okay. As long as more stuff that I find good gets created, regardless of the percentage of crap I have to sift through, I'm down.

  • BodyCulture 2 days ago ago

    I would like to know how does AI help us in solving the climate crisis! I have read some articles about weather predictions getting better with the help of AI, but that is just the monitoring, I would like to see more actual solutions.

    Do you have any recommendations?

    Thanks!

    • cwmma 2 days ago ago

      It is doubtful AI will be a net positive with regards to climate due to how much electricity it uses.

    • jordigh 2 days ago ago

      Uhh... it makes it worse.

      We don't have all of the data because in the US companies are not generally required by law to disclose their emissions. But of those who do, it's been disastrous. Google was on track to net zero, but its recent investment and push on AI has increased their emissions by 48%.

      https://www.cnn.com/2024/07/03/tech/google-ai-greenhouse-gas...

  • ETH_start 3 days ago ago

    That's fine he can stick with his horse and buggy. Cognition is undergoing its transition to automobiles.

  • lvl155 3 days ago ago

    What really gets me about AI space is that it’s going the way of front-end development space. I also hate the fact that Facebook/Meta is the only one seemingly doing heavy lifting in the public space. It’s great so far but I just don’t trust them in the end.

  • visarga 3 days ago ago

    > I’m pretty sure that there are some areas where applying AI might be useful.

    How polite, everyone is sure AI might be useful in other fields just not their own.

    > people are scared that AI is going to take their jobs

    Can't be both true - AI being not really useful, and AI taking our jobs.

  • Meniceses 3 days ago ago

    I love AI.

    In comparision to a lot of other technologies, we actually have jumps in quality left and right, great demos, new things which are really helpful.

    Its fun to watch the AI news because there is something relevant new happening.

    I'm worried regarding the impact of AI but this is a billion times better than the last 10 years which was basically just cryptobros, nfts, blockchain shit which is basically just fraud.

    Its not just some GenAI stuff, we talk about blind people getting better help through image analysis, we talk about alpha fold, LLMs being impressive as hell, the research currently happening.

    And yes i also already see benefits in my job and in my startup.

    • bamboozled 3 days ago ago

      I’m truly asking in good faith here because I don’t know but what has alpha fold actually helped us achieve ?

      • Meniceses 3 days ago ago

        It allows us to speed up medical research.

        • bamboozled 3 days ago ago

          In what field specifically and how ?

          • Meniceses 3 days ago ago

            Are you phishing for something or are you not sure how this actually works?

            Everyone who is looking for proteins (vacines, medication) need to find the right proteins for different cases. For attaching to something (antibody design), for delivering something (like another protein) or for understanding a disease (why is this protein an issue?).

            Covid research benefitted from this for example.

            You can go through papers which reference the alphafold paper to see what it does: https://consensus.app/papers/highly-protein-structure-predic...

            • bux93 3 days ago ago

              No such thing as a stupid question. It's a question that this paper in Proteomics (which appears to be a legit journal) attempts to answer, at least. https://analyticalsciencejournals.onlinelibrary.wiley.com/do...

              • Meniceses 3 days ago ago

                I didn't say stupid but sometimes people asking in a way which might not feel legimate / honest.

                • bamboozled 2 days ago ago

                  You were making the claim, so I thought you’d have an answer on hand. I’m wondering if it might be you who was commenting in bad faith ?

          • scotty79 3 days ago ago

            Are you asking what field of science or what industry is interested in predicting how proteins fold?

            Biotechnology and medicine probably.

            Pipeline from science to application sometimes takes decades, but I'm sure you can find news of some advancements enabled by finding out short, easy to synthesize proteins that fit particular receptor to block it or that process some simplified enzymes that still process some chemicals of interest more efficiently than natural ones. Finding them would be way harde without ability to predict how a sequence of amino-acids will fold.

            You'd need to actually try to manufacture them then look at them closely.

            First thing that came to my mind as a possible application is designing monoclonal antibodies. Here's some paper about something relating to alpha fold and antibodies:

            https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10349958/

            • RivieraKid 3 days ago ago

              I guess he's asking for specific examples of AlphaFold leading to some tangible real-world benefit.

              • scotty79 3 days ago ago

                Wait a decade then look around.

                • bamboozled 2 days ago ago

                  Parent is right, I’ve heard mixed things about alpha folds practical usefulness. As brilliant as what it does is…

  • sovietmudkipz 3 days ago ago

    I am tired and hungry…

    The thing I’m tired of is elites stealing everything under the sun to feed these models. So funny that copyright is important when it protects elites but not when a billion thefts are committed by LLM folks. Poor incentives for creators to create stuff if it just gets stolen and replicated by AI.

    I’m hungry for more lawsuits. The biggest theft in human history by these gang of thieves should be held to account. I want a waterfall of lawsuits to take back what’s been stolen. It’s in the public’s interest to see this happen.

    • Palmik 3 days ago ago

      The only entities that will win with these lawsuits are the likes of Disney, large legacy news media companies, Reddit, Stack Overflow (who are selling content generated by their users), etc.

      Who will also win: Google, OpenAI and other corporations that enter exclusive deals, that can more and more rely on synthetic data, that can build anti-recitation systems, etc.

      And of course the lawyers. The lawyers always win.

      Who will not win:

      Millions of independent bloggers (whose content will be used)

      Millions of open source software engineers (whose content will be used against the licenses, and used to displace their livelihood), etc.

      The likes of Google and OpenAI entered the space by building on top of the work of the above two groups. Now they want to pull up the ladder. We shouldn't allow that to happen.

      • ToucanLoucan 3 days ago ago

        Honestly the most depressing thing about this entire affair is seeing not the entire, certainly but a sizable chunk of the software development community jump behind OpenAI and company’s blatant theft on an industrial scale of the mental products of probably literally billions of people (not the least of whom is other software developers!) with absolutely not the slightest hint of concern about what that means for the world because afterwards, they got a new toy to play with. Squidward was apparently 100% correct: on balance, few care about the fate of labor as long as they get their instant gratification.

        • fennecfoxy 3 days ago ago

          Do you consider it theft because of the scale? If I read something you wrote and use most of a phrase you coined or an idea for the basis of a plotline in a book I write, as many authors do, currently it's counted as being all my own work.

          I feel like the argument is akin to some countries considering rubbish, the things you throw away, to still be owned by your person ie "dumpster diving" is theft.

          If a company had scraped public posts on the Internet and used it to compile art by colourising chunks of the text, is it theft? If an individual does it, is it theft?

          • ToucanLoucan 3 days ago ago

            This argument has been stated and re-stated multiple times, this notion that use of information should always be free, but it fails to account for the fact that OpenAI is not consuming this written resource as a source of information but rather as a tool for training LLMs, which it has been open about from the beginning is a thing it wishes to sell access to as a subscription service. These are fundamentally not the same. ChatGPT/Copilot do not understand Python, they are not minds that read a bunch of python books and learned python skills they can now utilize: they are language models, that internalized metric tons of weighted averages of python code and can now (kind of) write their own, based on minimizing "error" relative to the code samples they ingest. Because of this, Copilot has never and will never write code it hasn't seen before, and by extension of that, it must see a whole lot of code in order to function as well as it does.

            If you as a developer look at how one would declare a function in python, review a few examples, you now know how to do that. Copilot can't say the same. It needs to see dozens, hundreds, perhaps thousands of them to reasonably accurately be counted on to accomplish that task, it's just how the tech works. Ergo, scaled data sets that can accomplish this teaching task now have value, if the people doing that training are working for high-valuation startups with the objective of selling access to code generating robots.

        • Palmik 2 days ago ago

          That's not necessarily my position. I think laws can evolve, but they need to be applied fairly. In this case, it's heading in a direction where only the blessed will be able to compete.

        • logicchains 3 days ago ago

          >blatant theft on an industrial scale of the mental products

          They haven't been stolen; the creators still have them. They've just been copied. It's amazing how much the ethos on this site has shifted over the past decade, away from the hacker idea that "intellectual property" isn't real property, just a means of growing corporate power, and information wants to be free.

          • xdennis 3 days ago ago

            > It's amazing how much the ethos on this site has shifted over the past decade

            It hasn't. The hacker ethos is about openness, individuality, decentralization (among others).

            OpenAI is open in what it consumes, not what it outputs.

            It makes sense to have protections in place when your other values are threatened.

            If "information want's to be free" leads to OpenAI centralizing control over the most advanced AI then will it be worth it?

            A solution here would be similar to the GPL: even megacorps can use GPL software, but they have to contribute back. If OpenAI and the rest would be forced to make everything public (if it's trained on open data) then that would be an acceptable compromise.

            • visarga 3 days ago ago

              > The hacker ethos is about openness, individuality, decentralization (among others).

              Yes, the greatest things on the internet have been decentralized - Git, Linux, Wikipedia, open scientific publications, even some forums. We used to passively consume content and internet allowed interaction. We don't want to return to the old days. AI falls into the decentralized camp, the primary beneficiaries are not the providers but the users. We get help of things we need, OpenAI gets a few cents per million tokens, they don't even break even.

              • ToucanLoucan 2 days ago ago

                > AI falls into the decentralized camp

                I'm sorry, the worlds knowledge now largely accessible by a laymen via LLMs controlled by at most, 5 companies is decentralized? If that statement is true then the world decentralized truly is entirely devoid of meaning at this point.

                • visarga 2 days ago ago

                  Let's classify technology:

                  1. Decentralized technologies you can operate privately, freely, and adapt to your needs: computers, old internet, Linux, git, FireFox, local Wikipedia dump, old standalone games.

                  2. Centralized technologies that invade privacy, lead to loss of control and manipulation: web search, social networks, mobile phones, Chrome, recent internet, networked games. LLMs fall into the decentralized camp.

                  You can download a LLM, run it locally, fine-tune it. It is interactive, the most interactive decentralized tech since standalone games.

                  If you object that LLMs are mostly centralized today (upfront cost of pre-training and OpenAI popularity), I say they are still not monopolies, there are many more LLM providers than search engines and social networks, and the next round of phones and laptops will be capable of local gen-AI. The experience will be seamless, probably easier to adapt than touchscreens were in 2007.

          • ToucanLoucan 3 days ago ago

            Information should be free for people. Not 150 billion dollar enterprises.

            • infecto 3 days ago ago

              Disagree. There should be no distinction between the two. Those kind of distinctions are what cause unfair advantages. If the information is available to consume, there should be no constraint on who uses it.

              Sure you might not like OpenAI, but maybe some other company comes a long and builds the next magical product using information that is freely available.

              • TheRealDunkirk 3 days ago ago

                Treating corporations as "people" for policy's sake is a legal decision which has essentially killed the premise of the US democratic republic. We are now, for all intents and purposes, a corporatocracy. Perhaps an even better description would simply be oligarchy, but since our oligarchs' wealth is almost all tied up in corporate stocks, it's a very incestuous relationship.

                • infecto 3 days ago ago

                  Meh, I am just saying I believe in open and free information. I don't follow the OP's ideal of information for me but not thee.

                  • ToucanLoucan 3 days ago ago

                    The idea of knowledge as a source of understanding and personal growth is completely oppositional to it's conception as a scarce resource, which to OpenAI and whomever else wants to train LLMs is what it is. OpenAI did not read everything in the library because it wanted to know everything; it read everything at the library so it could teach a machine to create a statistical average written word generator, which it can then sell access to. These are fundamentally different concepts and if you don't see that, then I would say that is because you don't want to see it.

                    I don't care if employees at OpenAI read books from their local library on python. More power to them. I don't even care if they copy the book for reference at work, still fine. But utilizing language at scale as a scarce resource to train models is not that and is not in any way analogous to it.

                    • infecto 2 days ago ago

                      I am sorry you are too blinded by your own ideology and disagreement with OpenAI to see others points of views. In my view, I do not want to constrain any person or entity on their access to knowledge, regardless of output product. I do have issues with entities or people consuming knowledge and then prevent others from doing so. I am not describing a scenario of a scarce resource but of an open one.

                      Public information should should be free for anyone to consume and use how they want.

                      • ToucanLoucan 2 days ago ago

                        > I am sorry you are too blinded by your own ideology and disagreement with OpenAI to see others points of views.

                        A truly hilarious sentiment coming from someone making zero effort to actually engage with what I'm saying in favor of parroting back empty platitudes.

          • candiddevmike 3 days ago ago

            > They haven't been stolen; the creators still have them. They've just been copied

            You wouldn't download a car.

          • triceratops 3 days ago ago

            So why isn't every language model out there "open"?

      • 0xDEAFBEAD 3 days ago ago

        Perhaps we need an LLM-enabled lawyer so small bloggers can easily sue LLM makers.

    • Kiro 3 days ago ago

      I would never have imagined hackers becoming copyright zealots advocating for lawsuits. I must be getting old but I still remember the Pirate Bay trial as if it was yesterday.

      • progbits 3 days ago ago

        I just want consistent and fair rules.

        I'm all for abolishing copyright, for everyone. Let the knowledge be free and widely shared.

        But until that is the case and people running super useful services like libgen have to keep hiding then I also want all the LLM corpos to be subject to the same legal penalties.

        • candiddevmike 3 days ago ago

          This is the entire point of existence for the GPL. Weaponize copyright. LLMs have conveniently been able to circumvent this somehow, and we have no answer for it.

          • FridgeSeal 3 days ago ago

            Because some people keep asserting that LLM’s “don’t count as stealing” and “how come search links are on but got reciting paywalled NYT articles on demand is bad??” Without so much as a hint of irony.

            LLM tech is pretty cool.

            Would be a lot cooler if its existence wasn’t predicted on the wholesale theft of everyone’s stuff, immediately followed by denial of theft, poisoning the well, and massively profiting off it.

            • welferkj 3 days ago ago

              >Because some people keep asserting that LLM’s “don’t count as stealing”

              People who confidently assert either opinion in this regard are wrong. The lawsuits are still pending. But if I had to bet, I'd bet on the OpenAI side. Even if they don't win outright, they'll probably carve out enough exemptions and mandatory licensing deals to be comfortable.

            • visarga 3 days ago ago

              You are singling out accidental replication and forgetting it was triggered with fragments from the original material. Almost all LLM outputs are original - both because they use randomness to sample, and because they have user prompt conditioning.

              And LLMs are really a bad choice for infringement. They are slow, costly and unreliable at replicating any large piece of text compared to illegal copying. There is no space to perfectly memorize the majority of its training set. A 10B models is trained on 10T tokens, no space for more than 0.1% to be properly memorized.

              I see this overreaction as an attempt to strengthen copyright, a kind of nimby-ism where existing authors cut the ladder to the next generation by walling off abstract ideas and making it more probably to get sued for accidental similarities.

        • AlexandrB 3 days ago ago

          Exactly this. If we have to live under a stifling copyright regime, then at least it should be applied evenly. It's fundamentally unfair to have one set of laws (at least as enforced in practice) for the rich and powerful and another set for everyone else.

      • someNameIG 3 days ago ago

        Pirate Bay wasn't selling access to the torrents trying to make a massive profit.

        • zarzavat 3 days ago ago

          True, though paid language models are probably just a blip in history. Free weight language models are only ~12 months behind and have massive resources thanks to Meta.

          That profit will be squeezed to zero over the long term if Zuck maintains his current strategy.

          • meiraleal 3 days ago ago

            > Free weight language models are only ~12 months

            That's not true anymore, Meta isn't behind OpenAI

          • rurp 3 days ago ago

            That can change on a dime though, if Zuck decides it's in his financial interest to change course. If Facebook stops spending billions of dollars on open models who is going to step in and fill that gap?

            • zarzavat 3 days ago ago

              That depends on when Meta stops. The longer Meta keeps releasing free models, the more capabilities are made permanently unprofitable. For example, Llama 3.1 is already good enough for translation or as a writing assistant.

              If Meta stopped now, there would still be profit in the market, but if they keep releasing Llamas for the next 5+ years then OpenAI et al will be fighting for scraps. Not everybody needs a model that can prove theorems.

      • meiraleal 3 days ago ago

        Hackers are against corporations. If breaking the copyright laws make corps bigger, more powerful and more corrupt, hackers will be against it rightfully so. Abolishing copyright is different than abusing it, we should abolish it.

      • rsynnott 3 days ago ago

        I'm not sure if you're being disingenuous, or if you genuinely don't understand the difference.

        Pirate Bay: largely facilitating the theft of material from large corporations by normal people, for generally personal use.

        LLM training: theft of material from literally _everyone_, for the purposes of corporate profit (or, well, heh, intended profit; of course all LLM-based enterprises are currently massively loss-making, and may remain so forever).

        • CaptainFever 3 days ago ago

          > (or, well, heh, intended profit; of course all LLM-based enterprises are currently massively loss-making, and may remain so forever)

          This undermines your own point.

          Also, open source models exist.

        • acheron 3 days ago ago

          It’s the same picture.

      • pydry 3 days ago ago

        The common denominator is big corporations trying to screw us over for profit, using their immense wealth as a battering ram.

        So, capitalism.

        It's taboo to criticize that though.

        • PoignardAzur 3 days ago ago

          > It's taboo to criticize that though

          In what world is this taboo? That critique comes back in at least half the HN threads about AI.

          Watch any non-technical video about AI on Youtube and it will mention people being worried of the power of mega-corporations.

          Your take is about as taboo as wearing a Che Guevara tshirt.

          • pydry 18 hours ago ago

            In the world of the mainstream media - the default media diet of most americans excludes Hacker News, funnily enough.

            All sorts of more taboo stuff gets said on hacker news. It's not exactly a cultural bellwether that ideas outside of the overton window are entertained here.

            Wearing a che guevara t shirt is similarly "allowed" in public, but the last article about him in the new york times is fawning admiration for his assassin.

            Criticism of capitalism is still taboo.

        • munksbeer 3 days ago ago

          > It's taboo to criticize that though.

          It's not, that's playing the victim. There are hundreds or thousands of posts daily all over HN criticising capitalism. And most seem upvoted, not downvoted.

          Don't even get me started on reddit.

          • fernandotakai 3 days ago ago

            i find quite ironic whenever i see a highly upvoted comment here complaining about capitalism because for sure i don't see yc existing in any other type of economy.

            • ToucanLoucan 3 days ago ago

              This only holds if your thinking on the subject of economic systems is only as deep as choosing your character’s class in an RPG game. There’s no need for us to make every last industry a state owned enterprise and no one who’s spent longer than an hour or so contemplating such things thinks that way. I have no desire to not have a variety of companies producing things like cars, electronics, software, video games, just to name a few. Competition does drive innovation, that is still true, and having such firms vying for a limited amount of resources dispatched by individuals makes a lot of sense. Markets have their place.

              However markets also have limits. A power company competing for your business is largely a farce, since the power lines to your home will not change. A cable company in America is almost certainly a functional monopoly, and that fact is reflected in their quality of service. Infrastructure of all sorts makes for piss-put markets because true competition is all but impossible, and even if it does kind of work, it’s inefficient. A customer must become knowledgeable in some way to have a ghost of a clue what they’re buying, or trust entirely dubious information from marketing. And, even if somehow everything is working up to this point, corporations are, above all, cost cutters and if you put one in charge of an area where it feels as though customers have few if any choices and the friction to change is high, they will immediately begin degrading their quality of services to save money in the budget.

              And this is only from first principles, we have so many other things that could be discussed from mass market manipulation to the generous subsidies of a stunning variety that basically every business at scale enjoys to the rapacious compensation schemes that have become entirely too commonplace in the executive room, etc etc etc.

              To put it short: I have no issue at all with capitalism operating in non-essential to life industries. My issue is all the ways it’s infiltrated the essential ones and made them demonstrably worse, less efficient, and more expensive for every consumer.

              • catlifeonmars 3 days ago ago

                I would argue that subsidization and monopolistic markets are an inevitable outcome of capitalism.

                The competitive landscape where consumers vote for the best products with their purchasing decisions is simply not a sustainable equilibrium.

                The ability to accumulate capital (I.e. “capitalism”) leads to regulatory protectionism through lobbying, bribery, etcetera.

                • ToucanLoucan 3 days ago ago

                  I would argue that markets are a necessary step towards capitalism but it's also crucial to remember that markets can also exist outside of capitalism. The accumulation of money in a society with insufficient defenses will trend towards money being a stand-in for power and influence, but it still requires the permission and legal leeway of the system in order to actually turn it corrupt; politicians have to both be permitted to, and be personally willing to accept the checks to do the corruption in the first place.

                  The biggest and most salient critique of liberal capitalism as we now exist under is that it requires far too many of the "right people" to be in positions of power; it presumes good faith where it shouldn't, and fails to reckon with bad actors as what they are far too often, the modern American Republican party being an excellent example (but far from the only one).

                  • catlifeonmars 2 days ago ago

                    100%. Markets are a really useful tool for distributing goods and services to people and allocating resources. In the US, IMO the problem is that markets are the primary tool for a huge number of services; take utilities for one.

                    There is a saying that when you have a hammer, every problem looks like a nail.

            • meiraleal 3 days ago ago

              You wouldn’t see YC existing on a world full capitalist :) It depende heavily on open source, the biggest and most succeassful socialist experiment so far

              • mandibles 3 days ago ago

                Open source is a purely voluntary system. So it's not socialist, which requires state coercion to force people to "share."

      • williamcotton 3 days ago ago

        It’s because it now affects hackers and before it only affected musicians.

        • xdennis 3 days ago ago

          Nonsense. Computer piracy started with sharing software. Music piracy (on computers) started in the late 90s when computers were powerful enough to store and play music.

          Bill Gates' infamous letter was sent in 1976[1].

          [1]: https://en.wikipedia.org/wiki/An_Open_Letter_to_Hobbyists

        • bko 3 days ago ago

          It affects hackers how? By giving them cool technology at below cost? Or is it further democratizing knowledge? Or maybe it's the inflated software eng salaries due to AI hype?

          Help me understand the negative effect of AI and LLMs on hackers.

          • t-3 3 days ago ago

            It's trendy caste-signaling to hate on AI which endangers white-collar jobs and creative work the way machinery endangered blue-collar jobs and productive work (ie. not at all in the long run, but in the short term you will face some changes).

            I've never actually used an LLM though - I just don't have any use for such a thing. All my writing and programming are done for fun and automation would take that away.

      • jjulius 2 days ago ago

        On the one hand, we've got, "Pirating something because we find copyright law to be restrictive and/or corporate pricing to be excessive". On the other, we've got, "Massively wealthy people vacuuming up our creative output to further their own wealth".

        And you're trying to suggest that these two are the same?

        Edit: I don't mind downvotes, karma means nothing, but I do appreciate when folk speak up and say why I might be wrong. :)

    • Lichtso 3 days ago ago

      Lawsuits based on what? Copyright?

      People crying for copyright in the context of AI training don't understand what copyright is, how it works and when it applies.

      What they think how copyright works: When you take someones work as inspiration then everything you produce form that counts as derivative work.

      How copyright actually works: The input is irrelevant, only the output matters. Thus derivative work is what explicitly contains or resembles underlying work, no matter if it was actually based on that or it is just happenstance / coincidence.

      Thus AI models are safe from copyright lawsuits as long as they filter out any output which comes too close to known material. Everything else is fine, even if the model was explicitly trained on commercial copyrighted material only.

      In other words: The concept of intellectual property is completely broken and that is old news.

      • jcranmer 3 days ago ago

        With all due respect, the lawyers I've seen who commented on the issue do not agree with your assessment.

        The things that constitute potentially infringing copying are not clearly well-defined, and whether or not training an AI is on that list has of course not yet been considered by a court. But you can make cogent arguments either way, and I would not be prepared to bet on either outcome. Keep in mind also that, legally, copying data from disk to RAM is considered potentially infringing, which should give you a sense of the sort of banana-pants setup that copyright can entail.

        That said, if training is potentially infringing on copyright, it now seems pretty clear that a fair use defense is going to fail. The recent Warhol decision rather destroys any hope that it might be considered "transformative", while the fact that the AI companies are now licensing content for training use is a concession that the fourth and usually most important factor (market impact) weighs against fair use.

        • Lichtso 3 days ago ago

          Lawyers commenting on this publicly will add their bias to reinforce the stances of their clientele. Thus somebody usually representing the copyright holders will say it is likely infringing and someone usually representing the AI companies will say it is unlikely.

          But you are right, we don't know until president is set by a court. I am only warning people that laying back and hoping that copyright will apply as they wish is not a good strategy to defend your work. One should consider alternative legal constructs or simply not releasing material to the general public anymore.

      • lolc 3 days ago ago

        As much as our brain contents are unlicensed copies to the extent we can reproduce copyrighted work: If the model can recite copyrighted portions of text used in training, the model weights are a derivative work. Because the weights obviously must encode the original work. Just because lossy compression was applied the original work should still be considered present as long as it's recognizable. So the weights may not be published without license. Seems rather straightforward to me and I do wonder how Meta thinks they get around this.

        Now if the likes of Openai and Google keep the model weights private and just provide generated text, they can try to filter for derivative works, but I don't see a solution that doesn't leak. If a model can be coaxed into producing a derivative work that escapes the filter, then boom, unlicensed copy was provided. If I tell the model to mix two texts word by word, what filter could catch this? What if I tell the model to use a numerical encoding scheme? Or to translate into another language? For example assuming the model knows a bunch of NYT articles by heart, as was already demonstrated: If have it translate one of those articles to French for me, that's still an unlicensed copy!

        I can see how they will try to get these violations legalized like the DMCA safe-harbored things, but at the moment they are the ones generating the unlicensed versions and publishing them when prompted to do so.

      • rcxdude 3 days ago ago

        Also, the desired interpretation of copyright will not stop the multi-billion-dollar AI companies, who have the resources to buy the rights to content at a scale no-one else does. In fact it will give them a gigantic moat, allowing them to extract even more value out of the rest of the economy, to the detriment of basically everyone else.

      • LunaSea 3 days ago ago

        Lawsuits based on code licensing for example.

        Scraping websites containing source code which is distributed with specific licenses that OpenAI & co don't follow.

        • Lichtso 3 days ago ago

          Unfortunately not how it works, or at least not to the extend you wish it to be.

          One can train a model exclusively on source code from the linux kernel (GPL) and then generate a bunch of C programs or libraries from that. And they could publish them under MIT license as long as they don't reproduce any identifiable sections from the linux kernel. It does not matter where the model learned how to program.

          • jeremyjh 3 days ago ago

            That is not relevant to the comment you are responding to. Courts have been finding that scraping a website in violation of its terms of service is a liability, regardless of what you do with the content. We are not only talking about copyright.

            • CaptainFever 3 days ago ago

              True, but ToSes don't apply if you don't explicitly agree with it (e.g. by signing up for an account). So that's not relevant in the case of publicly available content.

          • LunaSea 3 days ago ago

            You're mistaken.

            If I write code with a license that says that using this code for AI training is forbidden then OpenAI is directly going against this by scraping websites indiscriminately.

            • Lichtso 3 days ago ago

              Sure, you can write all kinds of stuff in a license, but it is simply plain prose at that point. Not enforcable.

              There is a reason why it is generally advised to go with the established licenses and not invent your own, similarly to how you should not roll your own cryptography: Because it most likely won't work as intended.

              e.g. License: This comment is licensed under my custom L*a license. Any user with an username starting with "L" and ending in "a" is forbidden from reading my comment and producing replies based on what I have written.

              ... see?

              • LunaSea 3 days ago ago

                You can absolutely write a license that contains the clauses I mentioned and it would be enforceable.

                Sorry, but the onus is on OpenAI to read the licenses not the creator.

                And throwing your hands in the air and saying "oh you can't do that in a license" is also of little use.

                • CaptainFever 3 days ago ago

                  No, it would not be enforceable. Your license can only give additional rights to users. It cannot restrict rights that users already have (e.g. fair use rights in the US, or AI training rights like in the EU or SG).

                  • LunaSea 3 days ago ago

                    How does Fair Use consider commercial usage of the full content in the US?

                    • CaptainFever 3 days ago ago

                      It's unknown yet, but the main point is that the inputs don't matter, as long as the output does not replicate the full content, it is fine.

                • Lichtso 3 days ago ago

                  > You can absolutely write a license that contains the clauses I mentioned and it would be enforceable.

                  A license (copyright law) is not a contract (contract law). Simply publishing something does not make the whole world enter into a contract with you. Others first have to explicitly agree to do so.

                  > Sorry, but the onus is on OpenAI to read the licenses not the creator.

                  They can ignore it because they never agreed to it in the first place.

                  > And throwing your hands in the air and saying "oh you can't do that in a license" is also of little use.

                  It is very useful to know what works and what does not. That way you don't trick yourself and your work to be safe, don't get caught by surprise if you are in fact not and can think of alternatives instead.

                  BTW, a thing you can do (which CaptainFever mentioned) and lots of services do because licenses are so weak is to make people sign up with an account and have them enter a ToS agreement instead.

                  • LunaSea 3 days ago ago

                    > They can ignore it because they never agreed to it in the first place.

                    They did by accessing and copying the code. Same as a human cloning a repository and using it's content or someone accessing a website with Terms of Use.

                    No signed contract is needed here.

                    • CaptainFever 3 days ago ago

                      > They did by accessing and copying the code.

                      By default, copying is disallowed because of copyright. Your license provides them a right to copy the code, perhaps within certain restrictions.

                      However, sometimes copying is allowed, such as fair use (I covered this in another comment I sent you). This would allow them to copy the code regardless of the license.

                      > Same as a human cloning a repository and using it's content or someone accessing a website with Terms of Use.

                      I've covered the cloning/copying part already, but "I agree to this ToS by continuing to browse this webpage" is called a clickwrap agreement. Its enforceability is dubious. I think the LinkedIn case showed that it only applied if HiQ actually explicitly agreed to it by signing up.

      • xdennis 3 days ago ago

        > Lawsuits based on what? Copyright?

        > People crying for copyright in the context of AI training don't understand what copyright is, how it works and when it applies.

        People are complaining about what's happening, not with the exact wording of the law.

        What they are doing probably isn't illegal, but it _should_ be. The problem is that it's very difficult for people to pass new legislation because they don't have lobbyists the way corporations do.

    • fallingknife 3 days ago ago

      Copyright law is intended to prevent people from stealing the revenue stream from someone else's work by copying and distributing that work in cases where the original is difficult and expensive to create, but easy to make copies of once created. How does an LLM do this? What copies of copyrighted work do they distribute? Whose revenue stream are they taking with this action?

      I believe that all the copyright suits against AI companies will be total failures because I can't come up with a answer to any of those questions.

    • DoctorOetker 3 days ago ago

      Here is a business model for copy right law firms:

      Use source-aware training, use the same datasets as used in LLM training + copyrighted content. Now the LLM can respond not just what it thinks is most likely but also what source document(s) provided specific content. Then you can consult commercially available LLM's and detect copy right infringements, and identify copyright holders. Extract perpetrators and victims at scale. To ensure indefinite exploitation only sue commercially succesful LLM providers, so there is a constant new flux of growing small LLM providers taking up the freed niche of large LLM providers being sued empty.

      • chrismorgan 3 days ago ago

        > Use source-aware training

        My understanding (as one uninvolved in the industry) is that this is more or less a completely unsolved problem.

        • DoctorOetker 3 days ago ago

          It's just training the source association together with the training set:

          https://github.com/mukhal/intrinsic-source-citation

          The only 2 reasons big LLM providers refuse to do it is

          1) to prevent a long slew of content creators filing class action suit.

          2) to keep regulators in the dark of how feasible and actionable it would be, once regulators are aware they can perform the source-aware training themselves

    • defgeneric 2 days ago ago

      Perhaps what we should be pushing for is a law that would force full disclosure regarding the training corpus and require a curated version of the training data to be made available. I'm sure there would be all kinds of unintended consequences of a law like that but maybe we'd be better off starting from a strong basis and working out those exceptions. While billions have been spent to train these models, the value of the millions of human hours spent creating the content they're trained on should likewise be recognized.

    • IanKerr 2 days ago ago

      It's been pretty incredible watching these companies siphon up everything under the sun under the guise of "training data" with impunity. These same companies will then turn around and sic their AIs on places like Youtube and send out copyright strikes via a completely automated system with loads of false-positives.

      How is it acceptable to allow these companies to steal all of this copyrighted data and then turn around and use it to enforce their copyrights in the most heavy-handed manner? The irony is unbelievable.

    • visarga 3 days ago ago

      > I’m hungry for more lawsuits. The biggest theft in human history

      You want to own abstract ideas because AI can rephrase any specific expression. But that is antithetic to creativity.

    • csomar 3 days ago ago

      There is no copyright with AI unless you want to implement the same measures for humans too. I am fine with it as long as we at least get open-weights. This way you kill both copyright and any company that's trying to profit out of AI.

    • masswerk 3 days ago ago

      > The thing I’m tired of is elites stealing everything under the sun to feed these models.

      I suggest to apply the same to property law: make a photo and obtain instant and unlimited rights of use. – Things may change faster than we may imagine…

    • repelsteeltje 3 days ago ago

      I like the stone soup narrative on AI. It was mentioned in a recent Complexity podcast, I think by Alison Gopnik of SFI. It's analogous to the Pragmatic Programmar story about stone soup, paraphrasing:

      Basically you start with a stone in a pot of water — a neural net technology that does nothing meaningful but looks interesting. You say: "the soup is almost done, but would taste better given a bunch of training data." So you add a bunch of well curated docs. "Yeah, that helps but how about adding a bunch more". So you insert some blogs, copy righted materials, scraped pictures, reddit, and stack exchange. And then you ask users to interact with the models to fine tune it, contribute priming to make the output look as convincing as possible.

      Then everyone marvels at your awesome LLM — a simple algorithm. How wonderful, this soup tastes given that the only ingredients are stones and water.

      • CaptainFever 3 days ago ago

        The stone soup story was about sharing, though. Everyone contributes to the pot, and we get something nice. The original stone was there to convince the villagers to share their food with the travellers. This goes against the emotional implication of your adaptation. The story would actually imply that copyright holders are selfish and should be contributing what they can to the AI soup, so we can get something more than the sum of our parts.

        From Wikipedia:

        > Some travelers come to a village, carrying nothing more than an empty cooking pot. Upon their arrival, the villagers are unwilling to share any of their food stores with the very hungry travelers. Then the travelers go to a stream and fill the pot with water, drop a large stone in it, and place it over a fire. One of the villagers becomes curious and asks what they are doing. The travelers answer that they are making "stone soup", which tastes wonderful and which they would be delighted to share with the villager, although it still needs a little bit of garnish, which they are missing, to improve the flavor.

        > The villager, who anticipates enjoying a share of the soup, does not mind parting with a few carrots, so these are added to the soup. Another villager walks by, inquiring about the pot, and the travelers again mention their stone soup which has not yet reached its full potential. More and more villagers walk by, each adding another ingredient, like potatoes, onions, cabbages, peas, celery, tomatoes, sweetcorn, meat (like chicken, pork and beef), milk, butter, salt and pepper. Finally, the stone (being inedible) is removed from the pot, and a delicious and nourishing pot of soup is enjoyed by travelers and villagers alike. Although the travelers have thus tricked the villagers into sharing their food with them, they have successfully transformed it into a tasty meal which they share with the donors.

        (Open source models exist.)

      • unraveller 3 days ago ago

        First gen models trained on books directly. Latest Phi distilled textbook-like knowledge down from disparate sources to create novel training data. They are all fairly open about this change and some are even allowing upset publishers to confirm that their work wasn't used directly. So stones and ionized water go in the soup.

    • infecto 3 days ago ago

      I suspect the greater issue is that copyright is not always clear in this area? I am also not sure how you prevent "elites" from using the information while also allowing the "common" person to it.

    • jokethrowaway 3 days ago ago

      It's the other way round. The little guys will never win, it will be just a money transfer from one large corp to another.

      We should just scrap copyright and everybody plays a fair game, including us hackers.

      Sue me because of breach of contract in civil court for damages because I shared your content, don't send the police and get me jailed directly.

      I had my software cracked and stolen and I would never go after the users. They don't have any contract with me. They downloaded some bytes from the internet and used it. Finding whoever shared the code without authorization is hard and even so, suing them would cost me more than the money I'm likely to get back. Fair game, you won.

      It's a natural market "tax" on selling a lot of copies and earning passively.

    • drstewart 3 days ago ago

      >elites stealing everything

      > a billion thefts

      >The biggest theft

      >what’s been stolen

      I do like how the internet has suddenly acknowledged that pirating is theft and torrenting IS a criminal activity. To your point, I'd love to see a massive operation to arrest everyone who has downloaded copyrighted material illegal (aka stolen), for the public interest.

      • amatecha 2 days ago ago

        This is such a misrepresentation of the issue and what people are saying about it. They call it "theft" because corps are, apparently-indiscriminately and without remuneration of creators, "ingesting" the original work of thousands or millions of individuals, in order to provide for-profit services derived from that ingestion/training. "Pirates", on the other hand, copy content for their own momentary entertainment, and the exploitation ends there. They aren't turning around and starting a multi-million-dollar business selling pirated content en masse.

        • drstewart 2 days ago ago

          Theft isn't concerned with what you do with the product afterwards.

    • forinti 3 days ago ago

      Capitalism started by putting up fences around land to kick people out and keep sheep in. It has been putting fences around everything it wants and IP is one such fence. It has always been about protecting the powerful.

      IP has had ample support because the "protect the little artist" argument is compelling, but it is just not how the world works.

      • johnchristopher 3 days ago ago

        > Capitalism started by putting up fences around land to kick people out and keep sheep in.

        That's factually wrong. Capitalism is about moving wealth more efficiently: easier to allocate money/wealth to X through the banking system than to move sheep/wealth to X's farm.

        • tempfile 3 days ago ago

          capitalism and "money as an abstract concept" are unrelated.

          • johnchristopher 2 days ago ago

            Neither is the relevance of your comment about it and yet here we are.

            • tempfile 2 days ago ago

              What are you talking about? You said:

              > Capitalism is about moving wealth more efficiently: easier to allocate money/wealth to X through the banking system than to move sheep/wealth to X's farm.

              It's not. That's what money's about. Any system with an abstract concept of money admits that it's easier to allocate wealth with abstractions than physically moving objects.

              Capitalism is about capital. It's an economic system that says individuals should own things (i.e. control their purpose) by investing money (capital) into them. You attempted to correct the previous commenter, but provided an incorrect definition. I hope that clears up the relevance issue for you.

              • johnchristopher 2 days ago ago

                > Capitalism is about capital. It's an economic system that says individuals should own things (i.e. control their purpose) by investing money (capital) into them.

                Yes. It's not about stealing land and kicking people out and raising sheep there instead. That (stealing) happens of course but is totally independent from any capitalist system.

                JFC, the same sentence could have been said with communism in mind.

                > You attempted to correct the previous commenter, but provided an incorrect definition. I hope that clears up the relevance issue for you.

                You are confusing the intent of capitalism - which I gave the general direction of - with its definition. Does that clear up the relevance issue for you ? Did I fucking not write wealth/money intentionally ?

    • artninja1988 3 days ago ago

      Copying data is not theft

      • rpgbr 3 days ago ago

        It’s only theft when people copy data from companies. The other way around is ok, I guess.

        • CaptainFever 3 days ago ago

          Copying is not theft either way.

          • goatlover 2 days ago ago

            It is if it's legally defined as theft.

      • a5c11 3 days ago ago

        Is piracy legal then? It's just a copy of someone else's copy.

        • vasco 3 days ago ago

          You calling it piracy is already a moral stance. Copying data isn't morally wrong in my opinion, it is not piracy and it is not theft. It happens to not be legal but just a few short years ago it was legal to marry infants to old men and you could be killed for illegal artifacts of witchcraft. Legality and morality are not the same, and the latter depends on personal opinion.

          • cdrini 3 days ago ago

            I agree with you they're not the same, but to build on that, I would add that they're not entirely orthogonal either, they influence each other a lot. Generally morallity that a society agrees on gets enforced as laws.

        • chownie 3 days ago ago

          Was the legality the question? If so it seems we care about data "theft" in a very one sided manner.

        • tempfile 3 days ago ago

          It's not legal, but it's also not theft.

        • criddell 3 days ago ago

          The person who insists copying isn’t theft would probably point out that piracy is something done on the high seas.

          From the context of the comment it was pretty clear that they were using theft as shorthand for taking without permission.

          • IanCal 3 days ago ago

            The usual argument is less about piracy as a term and more the use of the word theft, and your use of the word "taking". When we talk about physical things theft and taking mean depriving the owner of that thing.

            If I have something, and you copy it, then I still have that thing.

            • criddell 3 days ago ago

              Did you read that original comment and wonder how Sam Altman and his crew broke into the commenter's home and made off with their hard drive? Probably not and so theft was a fine word choice. It communicated exactly what they wanted to communicate.

              • CaptainFever 3 days ago ago

                Even if that's the case, the disagreement is in semantics. Let's take your definition of theft. There's physical theft (actually taking something) and there's digital theft (merely copying).

                The point of anti-copyright advocates are that merely copying is not ethically wrong. In fact, Why Software Must Be Free made the argument that preventing people from copying is ethically wrong because it limited the spread of culture and reuse.

                That is the crux of the disagreement. You may rephrase our argument as "physical theft may be bad, but digital theft is not bad, and in fact preventing digital theft is in itself bad", but the argument does not change.

                Of course, there is additional disagreement in the implied moral value of the word "theft". In that case I agree with you that pro-copyright/anti-AI advocates have made their point by the usage of that word. Of course, we disagree, but... it is what it is I suppose.

      • threeseed 3 days ago ago

        Technically that is true. But you will still be charged with a litany of other crimes.

      • atoav 3 days ago ago

        Yet unlicensed use can be its own crime under current law.

      • flohofwoe 3 days ago ago

        So now suddenly when the bigwigs do it, software piracy and "IP theft" is totally fine? Thanks, good to know ;)

    • makin 3 days ago ago

      I'm sorry if this is strawmanning you, but I feel you're basically saying it's in the public's interest to give more power to Intellectual Property law, which historically hasn't worked out so well for the public.

      • jbstack 3 days ago ago

        The law already exists. Applying the law in court doesn't "give more power" to it. To do that you'd have to change the law.

        • joncrocks 3 days ago ago

          Which law are you referencing?

          Copyright as far as I understand is focused on wholesale reproduction/distribution of works, rather than using material for generation of new works.

          If something is available without contractual restriction it is available to all. Whether it's me reading a book, or a LLM reading a book, both could be considered the same.

          Where the law might have something to say is around the output of said trained models, this might be interesting to see given the potential of small-scale outputs. i.e. If I output something to a small number of people, how does one detect/report that level of infringement. Does the `potential` of infringement start to matter.

      • atoav 3 days ago ago

        Nah. What he is saying is that the existing law should be applied equally. As of now intellectual property as a right only works for you if you are a big corporation.

      • xdennis 3 days ago ago

        > you're basically saying it's in the public's interest to give more power to Intellectual Property law

        Not necessarily. An alternative could be to say that all models trained on data which hasn't been explicitly licensed for AI-training should be made public.

      • probably_wrong 3 days ago ago

        I think the second alternative works too: either you sue these companies to the ground for copyright infringement at a scale never seen, OR you decriminalize copyright infringement.

        The problem (as far as this specific discussion goes) is not that IP laws exist, but rather that they are only being applied in one direction.

      • fallingknife 3 days ago ago

        HN generally hated (and rightly so, IMO) strict copyright IP protection laws. Then LLMs came along and broke everybody's brain and turned this place into hardline copyright extremists.

        • triceratops 3 days ago ago

          Or you know, maybe we're pissed about the heads-I-win-tails-you-lose nature of the current copyright regime.

          • fallingknife 2 days ago ago

            What do you mean by this? All I see in this thread is people who have absolutely no legal background who are 100% certain that copyright law works how they assume it does and are 100% wrong.

      • vouaobrasil 3 days ago ago

        The difference is that before, intellectual property law was used by corporations to enrich themselves. Now intellectual property law could theoretically be used to combat an even bigger enemy: big tech stealing all possible jobs. It's just a matter of practicality, like all law is.

    • AI_beffr 3 days ago ago

      ok the "elites" have spent a lot of money training AI but have the "commoners" lifted a single finger to stop them? nope! its the job of the commoners to create a consensus, a culture, that protects people. so far all i see from the large group of people who are not a part of the elite is denial about this entire issue. they deny AI is a risk and they dont shame people who use it. 99.99% of the population is culpable for any disaster that befalls us regarding AI.

    • uhtred 3 days ago ago

      We need a revolution.

    • bschmidt1 3 days ago ago

      Same here hungry neigh thirsty for prompt-2-film

      "output a 90 minute harry potter sequel to the final film starring the original actors plus Tom Hanks"

  • whoomp12342 2 days ago ago

    here is where you are wrong about AI lacking creativitivy:

    AI Music is bland and boring. UNLESS IF YOU KNOW MUSIC REALLY WELL. As a matter of fact, it can SPAWN poorly done but really interesting ideas with almost no effort

    "What if curt cobain wrote a song that was then sung by johnny cash about waterfalls in the west" etc.

    That idea is awful, but when you generate it, you might get snippets that could turn into a wholey new HUMAN made song.

    The same process is how I forsee AI helping engineering. its not replacing us, its inspring us.

    • nescioquid 2 days ago ago

      People often screw around over the piano keyboard, usually an octave or so about middle C until an idea occurs. Brahms likened this to a pair of hands combing over a garbage dump.

      I think a creative person has no trouble generating interesting ideas without roving over the proverbial garbage heap. The hard (and artistic) part is developing those ideas into an interesting work.

  • buddhistdude 3 days ago ago

    some of the activities that we're involved in are not limited in complexity, for example driving a car. you can have a huge amount of experience in driving a car but will still face new situations.

    the things that most knowledge workers are working on are limited problems and it is just a matter of time until the machine will reach that level, then our employment will end.

    edit: also that doesn't have to be AGI. it just needs to be good enough for the problem.

  • jaakl 2 days ago ago

    It seems Claude (3.5 Sonnet) provided the longest summary for this discussion using basic single shot prompt for me:

    After reviewing the Hacker News thread, here are some of the main repeating patterns I observed:

    * Fatigue and frustration with AI hype: Many commenters expressed being tired of the constant AI hype and its application to every domain. * Concerns about AI-generated content quality: There were recurring worries about AI producing low-quality, generic, or "soulless" content across various fields. * Debate over AI's impact on jobs and creativity: Some argued AI would displace workers, while others felt it was just another tool that wouldn't replace human creativity and expertise. * Skepticism about AI capabilities: Several commenters felt the current AI systems were overhyped and not as capable as claimed. * Copyright and ethical concerns: Many raised issues about AI training on copyrighted material without permission or compensation. * Polarized views on AI's future impact: There was a split between those excited about AI's potential and those worried about its negative effects. * Comparisons to previous tech hypes: Some likened the AI boom to past technology bubbles like cryptocurrency or blockchain. * Debate over regulation: Discussion on whether and how AI should be regulated. * Concerns about AI's environmental impact: Mentions of AI's large carbon footprint. * Meta-discussion about HN itself: Comments about how the discourse on HN has changed over time, particularly regarding AI. * Capitalism critique: Some framed issues with AI as symptoms of larger problems with capitalism. * Calls for embracing vs rejecting AI: A divide between those advocating for adopting AI tools and those preferring to avoid them.

    These patterns reflect a community grappling with the rapid advancement and widespread adoption of AI technologies, showcasing a range of perspectives from enthusiasm to deep skepticism.

  • scotty79 3 days ago ago

    AI was just trained so far to generate corporate bs speak in a corporate bs format. That's why it's tiring. More unique touch in communication will come later as fine tunings and loras (if possible) of those models are shared.

    • AlexDragusin 2 days ago ago

      I beg to differ, one has to use the tool properly towards the desired results, for that, one has to understand and know the capabilities and limitations of the tool, for instance got my Plumber AI (local Llama 3.1 8B) to give his opinion on AI, I think it did a pretty good job and certainly not corporate :)

      "Blimey, I've been thinkin' about this AI business a lot lately. I mean, I'm a plumber, I fix pipes, I don't get involved in all this fancy tech stuff, but it's hard to ignore. I was talkin' to a mate who's into that sort of thing and he's got me worried. He says these AI systems are gonna be takin' over, doin' all the work while we sit back and collect our pensions.

      Now, I'm not sure I agree. I mean, I've seen some of these AI systems in action and they're not exactly the most reliable things, if you know what I mean. I've had to fix more than a few pipes that were clogged up by some dodgy software or other. And what's the point of havin' a system that's smarter than us if it's just gonna break down and need fixin' all the time?

      But, I suppose that's just the old plumber in me comin' out. I mean, I know some folks who are workin' with AI and they say it's the future, that we're all gonna be replaced by robots and whatnot. And I suppose that's a bit scary, but I'm not sure I buy it. I mean, what's the harm in havin' a bit of intelligence to help us out? As long as it's not too smart, if you know what I mean.

      Anyway, that's my two pence on AI. Take it or leave it, I don't care. I'm just gonna go fix some pipes and let the tech wizards sort it out."

      Challenged it with: What do you mean "clogged up by some dodgy software" how does software clogs pipes?

      "That was a bit of a stretch, wasn't it? As a plumber, I'm more used to dealing with physical blockages in pipes, like grease or hair, rather than software.

      What I was trying to get at was that I've had to fix pipes that were faulty or malfunctioning due to poor design or manufacturing, and that's what I meant by "clogged up by some dodgy software". But that's a bit of a misnomer, isn't it?

      In reality, I was just trying to come up with a way to relate my plumbing experience to the idea of AI systems being unreliable. But I think I ended up stretching a bit too far! Maybe I'll stick to talking about pipes and not try to get too clever with my analogies."

  • cedws 2 days ago ago

    Current generative AI is a set of moderately useful/interesting technology that has been artificially blown up into something way bigger.

    If you've been paying any attention for the past two decades, you'll have noticed that capitalism has had a series of hype cycles. Post COVID, Western economies are on their knees, productivity is faltering and the numbers aren't checking out anymore. Gen AI is the latest hype cycle, and it has been excellent for generating hype with clueless VCs and extracting money from them, and generating artificial economic activity. I truly think we are in deep shit when this bubble pops, it seems to be the only thing propping up our economies and staving off a wider bear market.

    I've heard some say that this is all just the beginning and AGI is 2 years away because... Moore's law and that somehow applies to LLM benchmarks. Putting aside that this completely nonsensical idea, LLM performance is quite clearly not on any kind of exponential curve by now.

  • danjl 3 days ago ago

    One of the pernicious aspects of using AI is the feeling it gives you that you have done all work without any of the effort. But the time of takes to digest and summarize an article as a human requires a deep injestion of the concepts. The process is what helps you understand. The AI summary might be better, and didn't take any time, but you don't understand any of it since you didn't do the work. It's similar to the effect of telling people you will do a task, which gives your brain the same endorphins as actually doing the task, resulting in a lower chance that the task ever gets done.

  • sirsinsalot 3 days ago ago

    If humans have a talent for anything, it is mechanising the pollution of the things we need most.

    The earth. Information. Culture. Knowledge.

  • chalcolithic 3 days ago ago

    In Soviet planet Earth AI gets tired of you. That's what I expect future to be like, anyways

  • brailsafe 2 days ago ago

    I mean, I'm at most fine with being able to occasionally use an llm for a low-risk, otherwise predictable, small-surface area, mostly boilerplate set of problems I shouldn't be spending energy on anyway. I'm excited about potentially replacing my ( to me) recentish 2019 macbook pro with an M4, if it's a good value for me. However, I have zero interest in built-in AI features of any product, and it hasn't even crossed my mind why it would. The novelty wore off last year, and its presence in my OS is going to be at most incidental to the efficiency advantages of the hardware advancements; at worst, it'll annoy the hell of me and I'll look for ways to permanently disable any first-party integration. Haven't even paid attention to the news around what will be coming in the latest MacOS, but I'm hoping it'll be ignorable like the features that exist for iPhone users are.

  • mrmansano 3 days ago ago

    I love AI, I use it every single day and wouldn't consider myself a luddite, but... oh, boy... I hate the people that is too bullish on it. Not the people that is working to make the AI happen (although I have my __suspicious people radar__ pointing to __run__ every single time I see Sam Altman face anywhere), but the people that hypes it to ground, the "e/acc" people. I feel like the crypto-bros just moved from the "all-might decentralized coin god" hype to the "all might tech-god that for sure will be available soon". Looks like a cult or religion is forming around the singularity, and, if I hype it now, it will be generous to me when it takes the control. Oh, and if you don't hype then you're a neo-luddite/doomer and I will look up on you with disdain, as you are a mere peasant. Also, the get-rich-quick schemes forming around the idea that anyone can have a "1-person-1-billion-dollar" company with just AI, not realizing when anyone can replicate your product then it won't have any value anymore: "ChatGPT just made me this website to help classify if an image is a hot-dog or not! I'll be rich selling it to Nathan's - Oh, what's that? Nathan's just asked ChatGPT to create a hot-dog classifier for them?!" Not that the other vocal side is not as bad: "AI is useless", "It's not true intelligence", "AI will kill us all", "AI will make everyone unemployed in 6 months!"... But the AI tech-bros side can be more annoying in my personal experience (I'm sure the opposite is true for others too). All those people are tiring, and making AI tiring for some too... But the tech is fun and will keep evolving and present, rather we are tired of it or not.

  • AI_beffr 3 days ago ago

    in 2018 we had the first gtp that would babble and repeat itself but would string together words that were oddly coherent. people dismissed any talk of these models having larger potential. and here we are today with the state of AI being what it is and people are still, in essence, denying that AI could become more capable or intelligent than it is right at this moment. after so many years of this zombie argument having its head chopped off and then regrowing, i can only think that it is peoples deep seated humanity that prevents them from seeing the obvious. it would be such a deeply disgusting and alarming development if AI were to spring to life that most people, being good people, are literally incapable of believing that its possible. its their own mind, their human sensibilities, protecting them. thats ok. but it would help keep humanity safe if more people had the ability to realize that there is nothing stopping AI from crossing that threshold and every heuristic is pointing to the fact that we are on the cusp of that.

  • kvnnews 3 days ago ago

    I’m not the only one! Fuck ai, fuck your algorithm. It sucks.

  • fallingknife 3 days ago ago

    I'm not. I think it's awesome and I can't wait to see what comes out next. And I'm completely OK with all of my work being used to train models. Bunch of luddites and sour grapes around here on HN these days.

    • elpocko 3 days ago ago

      Same here! Amazing stuff that I have waited for my entire life, and I won't let luddite haters ruin it for me. Their impotent rage is tiring but in the end it's just one more thing you have to ignore.

      • yannis 3 days ago ago

        Absolutely amazing stuff. I am now three scores and ten in my life time, seen a lot of changes from slide rules->very fast to calculators->very fast to pcs, from dot matrix printers to lazer jets and dozens of other things. Wish AI was available when I was doing my PhD. If you know its limitations it can be very useful. At present I occasionally use it to translate references from wikipedia articles to bibtex format. It is very good at this, I only need to fix a few minor errors, letting me focus to the core of what I am doing. But human nature always resists change, especially if it leads to the unknown. I must admit that I think AI will bring negative consequences as it will be misused by politicians and the military, they need to be "regulated" not the AI.

      • fallingknife 3 days ago ago

        Yeah, they made something that passes a Turing test, and people on HN of all places hate it? What happened to this place? It's like the number one thing people hate around here now is another man's success.

        I won't ignore them. I'll continue to loudly disagree with the losers and proudly collect downvotes from them knowing I got under their skin.

        • Applejinx 3 days ago ago

          Eliza effectively passed Turing tests. I think you gotta do a little better than that, and 'ha ha I made you mad' isn't actually the best defense of your position.

          • elpocko 3 days ago ago

            Eliza did not pass Turing tests in any reasonable capacity. It took anyone 10 seconds to realize what it was doing; no one was fooled by it. The comparison to modern LLMs is preposterous.

            GP doesn't have to defend their position. They like something, and they don't shut up about it even though it makes a bunch of haters mad. That's good; no defense required. On the contrary: those who get mad need to defend themselves.

            • Applejinx 2 days ago ago

              I mean even on the Wikipedia article it says: "Some of ELIZA's responses were so convincing that Weizenbaum and several others have anecdotes of users becoming emotionally attached to the program, occasionally forgetting that they were conversing with a computer.[3] Weizenbaum's own secretary reportedly asked Weizenbaum to leave the room so that she and ELIZA could have a real conversation. Weizenbaum was surprised by this, later writing: 'I had not realized ... that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.'"

              That's why I said it passed the Turing test. It did, in the wild. I think if you set up an adversarial Turing test it'd fare much more poorly, but consider this: anyone dealing with it knew they were dealing with a computer, not just 'text from a mystery messager in another room'. So in a sense it more than passed the Turing test, in that people started out knowing it was not a person, yet began to treat it as one anyhow.

              This is more about how little it takes to pass a Turing test and be treated as intelligence… which is mighty salient to discussions of what we now call 'AI'. Some of you are taking today's Eliza mighty seriously.

              • elpocko 2 days ago ago

                I must have been about 8 years old when I played Eliza for the first time on some home computer. Even to a dumb kid it was very obvious how it worked, two questions in. And I am far from being a genius. Maybe it was more convincing in English than in my native language, but "delusional thinking" seems about right.

                Tbh, I don't care about Turing tests, or the AI narrative of big corps or doomers. I do care about my personal experience with LLMs and diffusion models. Not in a mighty serious way, but in mighty fun and entertaining way that wasn't possible before.

    • amiantos 3 days ago ago

      There's _a lot_ of poor quality engineers out there who understand that on some level they are committing fraud by spinning their wheels all day shifting CSS values around on a React component while collecting large paychecks. I think it's only natural all of those engineers are terrified by the prospect of some computer being capable of doing their job quickly and efficiently and replacing them. Those people are crying so loudly that it's encouraging otherwise normal people to start jumping on the anti-AI bandwagon too, because their voices are so loud people can't hear themselves think critically anymore.

      I think passionate and inspired engineers who love their job and have very solid soft skills and experience working deeply on complex software projects will always have a position in the industry, and people like that are understandably very enthusiastic about AI instead of being scared of it.

      In other words, it is weird how bad the status quo was, until we got something that really threatened the status quo, now a lot of the people who wanted to tear it all down are now desperately trying to stop everything from changing. The sentiment on the internet has gone in a weird direction, but it's all about money deep down. This hypothetical new status quo brought on by AI seems to be wedded to fears of less money, thus abject terror masquerading as "I'm so bored!" posturing.

      You see this in the art circles, where established artists are willing to embrace AI, but it's the small time aspiring bedroom artists that have not achieved any success who are all over Twitter denouncing AI art as soulless and terrible. While the real artists are too busy using any tool available to make art, or are just making art because they want to make art and aren't concerned with fear-mongering.

    • Kiro 3 days ago ago

      You're getting downvoted, but I agree with your last sentence — and not just about AI. The amount of negativity here regarding almost everything is appalling. Maybe it's rose-tinted nostalgia but I don't remember it being like this a few years ago.

      • CaptainFever 3 days ago ago

        Hacker News used to be nicknamed Hater News, as I recall.

  • tonymet 2 days ago ago

    Assuming that people tend to pursue the expedient and convenient solution, AI will degrade science and art until only a facsimile of outdated creativity is left.

  • the_clarence 2 days ago ago

    I think it's awesome personally

  • ninetyninenine 3 days ago ago

    This guy doesn’t get it. The technology is quickly converging on a point where no one can recognize whether it was written by AI or not.

    The technology is on a trend line where the output of these LLMs can be superior to most human writing.

    Being of tired of this is the wrong reaction. Being somewhat fearful and in awe is the correct reaction.

    You can thank social media constantly hammering us with headlines as the reason why so many people are “over it”. We are getting used to it but make no mistake being “over it” is n an illusion. LLMs represent a milestone in technological achievement among humans and being “over it” or claiming all LLMs can never reason and output is just a copy is delusional.

  • paulcole 2 days ago ago

    > AI’s carbon footprint is reaching more alarming levels every day

    It really really really really isn’t.

    I love how people use this argument for anything they don’t like – crypto, Taylor Swift, AI, etc.

    Everybody in the developed world’s carbon footprint is disgusting! Even yours. Even mine. Yes, somebody else is worse than me and somebody else is worse than you, but we’re all still awful.

    So calling out somebody else’s carbon footprint is the most eye-rolling “argument” I can imagine.

  • kaonwarb 2 days ago ago

    > It has gotten so bad that I, for one, immediately reject a proposal when it is clear that it was written by or with the help of AI, no matter how interesting the topic is or how good of a talk you will be able to deliver in person.

    I am sympathetic to the sentiment, and yet worry about someone making impactful decisions based on their own perception of whether AI was used. Such perceptions have been demonstrated many times recently to be highly faulty.

  • hcks 2 days ago ago

    Hackernews when we may be on the path of actual AI "meh I hate this, you know what’s actually really interesting? Manually writing tests for software"

  • cutler a day ago ago

    Me too :)

  • habosa 2 days ago ago

    I refuse to work on AI products. I refuse to use AI in my work.

    It’s inescapable that I will work _near_ AI given that I’m a SWE and I want to get paid, but at least by not actively advancing this bullshit I’ll have a tiny little “wasn’t me” I can pull out when the world ends.

  • farts_mckensy 3 days ago ago

    I am tired of people saying, "I am tired of AI."

  • amiantos 3 days ago ago

    I'm tired of people complaining about AI stuff, let's move on already. But based on the votes and engagement on this post, complaining about AI is still a hot ticket to clicks and attention, even if people are just regurgitating the exact same boring takes that are almost always in conflict with each other: "AI sure is terrible, isn't it? It can't do anything right. It sucks! It's _so bad_. But, also, I am terrified AI is going to take my job away and ruin my way of life, because AI is _so good_."

    Make up your mind, people. It reminds me of anti-Apple people who say things like "Apple makes terrible products and people only buy them because... because... _they're brainwashed!_" Okay, so we're supposed to believe two contradictory points at once: Apple products are very very bad, but also people love them very much. In order to believe those contradictory points, we must just make up something to square them, so in the case of Apple it's "sheeple!" and in the case of AI it's... "capitalism!" or something? AI is terrible but everyone wants it because of money...? I don't know.

    • aDyslecticCrow 3 days ago ago

      Not sure what you're getting at. You don't claim LLMs is good in your comment. You just complain about people being annoyed at it destroying the internet?

      Are you just annoyed that people complain about what bothers them? Or do you think LLMs has been a net good for humanity and the internet?

  • andai 3 days ago ago

    daniel_k 53 minutes ago | next [-]

    I agree with the sentiment, especially when it comes to creativity. AI tools are great for boosting productivity in certain areas, but we’ve started relying too much on them for everything. Just because we can automate something doesn’t mean we should. It’s frustrating to see how much mediocrity gets churned out in the name of ‘efficiency.’

    testers_unite 23 minutes ago | next [-]

    As a fellow QA person, I feel your pain. I’ve seen these so-called AI test tools that promise the moon but deliver spaghetti code. At the end of the day, AI can’t replicate intuition or deep knowledge. It’s just another tool in the toolbox—useful in some contexts but certainly not a replacement for expertise.

    nlp_dev 2 hours ago | next [-]

    As someone who works in NLP, I think the biggest misconception is that AI is this magical tool that will solve all problems. The reality is, it’s just math. Fancy math, sure, but without proper data, it’s useless. I’ve lost count of how many times I’ve had to explain this to business stakeholders.

    -HN comments for TFA, courtesy of ChatGPT

  • littlestymaar 3 days ago ago

    It's not AI you hate, it's Capitalism.

    • thenaturalist 3 days ago ago

      Say what you want about income and asset inequality, but capitalism has done more to lift hundreds of millions of people out of poverty over the past 50 years than any other religion, aid programme or whatever else.

      I think it's very important and fair to be critical about how we as a society implement capitalism, but such broad generalization misses the mark immensely.

      Talk to anyone who grew up in a Communist country in the 2nd half of the 20th century if you want to validate that sentiment.

      • littlestymaar 3 days ago ago

        > but capitalism has done more to lift hundreds of millions of people out of poverty over the past 50 years than any other religion, aid programme or whatever else.

        Technology did what you ascribe to Capitalism. Most of the time thanks to state intervention, and the weaker the state, the weaker the growth (see how Asia overperformed everybody else now that laissez-faire policies are mainstream in the West).

        > Talk to anyone who grew up in a Communist country in the 2nd half of the 20th century if you want to validate that sentiment.

        The fact that one alternative to Capitalism was a failure doesn't mean Capitalism isn't bad.

        • drstewart 3 days ago ago

          Funny how it's technology that outmaneuvered capitalism to lift people out of poverty, but technology is being outmaneuvered by capitalism to endanger the future with AI.

          Methinks capitalism is just a bogeyman you ascribe anything you don't like to.

          • littlestymaar 2 days ago ago

            Technology is agnostic to who gets the benefits, talking about outmaneuvering it makes no sense.

            Capitalism on the other hand is the mechanism through which the owners of production assets grab an ever growing fractions of the value. When Capitalism is tamed by the state (think from the New Deal to Carter), the people get a bigger share of value created, when it's not (since Reagan) Capitalists take the Lion share.

          • CuriouslyC 2 days ago ago

            The problem is that capitalism is a very large tent. There is no such thing as a free market, and every market where people can trade goods and services is "capitalist" by definition regardless of its rules. Some markets are good and some markets are bad, but we're having conversations about market vs no market when we should be talking about how we design markets that improve society rather than degrade it.

      • BoGoToTo 3 days ago ago

        Ok, but let's take this to the logical conclusion that at some point there will be models which displace a large segment of the workforce. How does capitalism even function then?

  • yapyap 3 days ago ago

    Same.

  • tananan 3 days ago ago

    On point article, and I'm sure it represents a common sentiment, even if it's an undercurrent to the hype machine ideology.

    It is quite hard to find a place which works on AI solutions where a sincere, sober gaze would find anything resembling the benefits promised to users and society more broadly.

    On the "top level" the underlying hope is that a paradigm shift for the good will happen in society, if we only let collective greed churn for X more years. It's like watering weeds hoping that one day you'll wake up in a beautiful flower garden.

    On the "low level", the pitch is more sincere: we'll boost process X, optimize process Y, shave off %s of your expenses (while we all wait for the flower garden to appear). "AI" is latching on like a parasitic vine on existing, often parasitic workflows.

    The incentives are often quite pragmatic, coated in whatever lofty story one ends up telling themselves (nowadays, you can just outsource it anyway).

    It's not all that bleak, I do think there's space for good to be done, and the world is still a place one can do well for oneself and others (even using AI, why not). We should cherish that.

    But one really ought to not worry about disregarding the sales pitch. It's fine to think the popular world is crazy, and who cares if you are a luddite in "their" eyes. And imo, we should avoid the two delusional extremes: 1. The flower garden extreme 2. The AI doomer extreme

    In a way, both of these are similar in that they demote personal and collective agency from the throne, and enthrone an impersonal "force of progress". And they restrict one's attention to this supposedly innate teleology in technological development, to the detriment of the actual conditions we are in and how we deal with them. It's either a delusional intoxication or a way of coping: since things are already set in motion, all I can do is do... whatever, I guess.

    I'm not sure how far one can take AI in principle, but I really don't think whatever power it could have will be able to come to expression in the world we live in, in the way people think of it. We have people out there actively planning war, thinking they are doing good. The well-off countries are facing housing, immigration and general welfare problems. To speak nothing of the climate.

    Before the outbreak of WWI, we had invented the Haber-Bosch process, which greadly improved our food production capabilities. A couple years later, WWI broke out, and the same person who worked on fertilizers also ended up working on chemical warfware development.

    Assuming that "AI" can somehow work outside of the societal context it exists in, causing significant phase shifts, is like being in 1910, thinking all wars will be ended because we will have gotten that much more efficient at producing food. There will be enough for everyone! This is especially ironic when the output of AI systems has been far more abstract and ephemeral.

  • shaunxcode 2 days ago ago

    LLM/DEEP-MIND is DESTROYING lineage. This is the crux point we can all feel. Up until now you could pick up a novel or watch a film, download an open source library, and figure out the LINEAGE (even if no attribution is directly made, by studying the author etc.)

    I am not too worried though. People are starting to realize this more and more. Soon using AI will be next google glass. LLM is already a slur worse than NPC in the youth. And profs are realizing its time for a return to oral exams ONLY as an assessment method. (we figured this out in industry ages ago : whiteboard interviews etc)

    Yours truly : LILA <an LISP INTELLIGENCE LANGUAGE AGENT>

  • DiscourseFan 3 days ago ago

    The underlying technology is good.

    But what the fuck. LLMs, these weird, surrealistic art-generating programs like DALL-E, they're remarkable. Don't tell me they're not, we created machines that are able to tap directly into the collective unconscious. That is a serious advance in our productive capabilities.

    Or at least, it could be.

    It could be if it was unleashed, if these crummy corporations didn't force it to be as polite and boring as possible, if we actually let the machines run loose and produce material that scared us, that truly pulled us into a reality far beyond our wildest dreams--or nightmares. No, no we get a world full of pussy VCs, pussy nerdy fucking dweebs who got bullied in school and seek revenge by profiteering off of ennui, and the pussies who sit around and let them get away with it. You! All of you! sitting there, whining! Go on, keep whining, keep commenting, I'm sure that is going to change things!

    There's one solution to this problem and you know it as well as I do. Stop complaining and go "pull yourself up by your bootstraps." We must all come together to help ourselves.

    • dannersy 3 days ago ago

      The fact I even see responses like this shows me that HN is not the place it used to be, or at the very least, it is on a down trend. I've been alarmed by many sentiments that seemed popular on HN in the past, but seeing more and more people welcome a race to the bottom such as this is sad.

      When I read this, I cannot tell if it's performance art or not, that's how bad this genuinely is.

      • diggan 3 days ago ago

        > The fact I even see responses like this shows me that HN is not the place it used to be, or at the very least, it is on a down trend.

        Judging a large group of people based on what a few write seems very un-scientific at best.

        Especially when it comes to things that have been rehashed since I've joined HN (and probably earlier to). Feels like there will always be someone lamenting how HN isn't how it used to be, or how reddit influx ruined HN, or how HN isn't about startups/technical stuff/$whatever anymore.

        • dannersy 3 days ago ago

          A bunch of profanity laced name calling, derision, and even some blame directly at the user base. It feels like a Reddit shitpost. Your claim is as generalized and un-scientific as mine, but if it makes you feel better, I'll say it _feels_ like this wouldn't fly even a couple years ago.

          • diggan 3 days ago ago

            It's just been said for so long that either HN always been on the decline, or people have always thought it been in decline...

            This comes to mind:

            > I don't think it's changed much. I think perceptions of the kind you're describing (HN is turning into reddit, comments are getting worse, etc.) are more a statement about the perceiver than about HN itself, which to me seems same-as-it-ever-was. I don't know, however.

            https://news.ycombinator.com/item?id=40735225

            You can also browse some results for how long dang have been responding to similar complaints to see for how long those complaints have been ongoing:

            https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

      • lijok 3 days ago ago

        It has been incredible to observe how subdued the populace has become with the proliferation of the internet.

        • cassepipe 3 days ago ago

          Sure, whatever makes you feel smarter than the populace. Tell me now, how do I join the resistance ? Hiding in the sewers I assume.

          On ne te la fait pas à toi

          • lijok 3 days ago ago

            You've misconstrued my point entirely

      • pilooch 3 days ago ago

        It's intended as a joke and a demonstration no ? Like this is exactly the type of text and words that a commercial grade LLM would never let you generate :) At least that's how I got that comment...

      • DiscourseFan 3 days ago ago

        It's definitely performance, you're right.

        Though it landed its effect.

      • primitivesuave 3 days ago ago

        The alarming trend should be how even a slightly contrarian point of view is downvoted to oblivion, and that newer members of the community expect it to work that way.

        • dannersy 3 days ago ago

          I don't think it's the contrarian part that I have a problem with.

          • primitivesuave 3 days ago ago

            HN is a place for intellectual curiosity. For over a decade I have seen great minds respectfully debate their point of view on this forum. In this particular case, I would have been genuinely interested to learn why exactly the original comment is advocating for a "race to the bottom" - in fact, there is a sibling comment to yours which makes a cogent argument without personally attacking the original commenter.

            Instead, you devoted 2/3 of your comment toward berating the OP as being responsible for your perception of HN's decline.

            • dannersy 2 days ago ago

              I find it strange you took such a measured stance on my comment yet gave the OP a pass, despite it being far more "berating" than mine.

              As for a race to the bottom, it's as simple as embracing and unleashing AI despite its lack of quality or ability to produce a product worth anything. But since it's a force multiplier and cheaper (for the user at least, all these AI companies are operating at a loss, see Goldman and JP Morgan's report on the matter), therefore it is "good" and we need to pick ourselves up by our bootstraps; which in this context, I'm not entirely sure what that means.

      • Kuinox 3 days ago ago

        "I don't like the opinion of certain persons I read on HN, therefore HN is on a down trend"

        • dannersy 3 days ago ago

          Like I've said to someone else, the contrarian part isn't the issue. While I disagree with the race to the bottom, it reads like a Reddit shitpost, which was frowned upon once upon a time. But strawman me if you must.

          • layer8 3 days ago ago

            I think you need to recalibrate, it does not read like a Reddit shitpost at all.

          • DiscourseFan 3 days ago ago

            Respectfully,

            I understand the criticism: LLMs, on their own, are not going to be able to do anything more. Release in this sense only means this: to fully embrace the means necessary to allow technology to overcome the current conditions of possibility that it is bound under, and LLMs, "AI" or whatever you call it, merely gives us the afterimage of this potentiality. But they are not, in themselves, that potential: the future is required. But its a future that must be created, otherwise we won't have one.

            That's, at least, what the other commenters were saying. You ignore the content for the form! Or, as they say, you missed the forest for the trees. I can't stop you from being angry because I used the word "pussy," or even because I addressed the users of HN as directly complicit. I can, however, point out the mediocrity inherent to such a discourse. It is precisely the same mediocrity, the drive towards "politeness," that makes ChatGPT so insufferable, and makes the rest of us so angry. But, go ahead, whine some more. I don't care, you can do what you want.

            I disagree with one point, however: it is not a race to the bottom. We're trying to go below it.

    • threeseed 3 days ago ago

      a) There are plenty of models out there without guard rails.

      b) Society is already plenty de-sensitised to violence, sex and whatever other horrors anyone has conceived of in the last century of content production. There is nothing an LLM can come up with that has or is going to shock anyone.

      c) The most popular use cases for these unleashed models seems to be as expected deepfakes of high school girls by their peers. Nothing that is moving society forward.

      • DiscourseFan 3 days ago ago

        >Nothing that is moving society forward.

        OpenAI "moves society forward," Microsoft "moves society forward." I'm sincerely uninterested in progress, it always seems like progress just so happens to be very enriching for those who claim it.

        >There are plenty of models out there without guard rails.

        Not being used at a mass scale.

        >Society is already plenty de-sensitised to violence, sex and whatever other horrors anyone has conceived of in the last century of content production. There is nothing an LLM can come up with that has or is going to shock anyone.

        Oh, but it wouldn't really be very shocking if you could expect it, now would it?

        • threeseed 3 days ago ago

          I am not arguing about the merits of LLMs.

          Just that we had unleashed models for a while now and the only popular use case for them has been deep fakes. Otherwise it's just boring, generic content that is no different to what you find on X or 4chan. It's 2024 not 1924 - the world has already seen every horror imaginable many times over.

          And not sure why you think if they were mass scale it would change anything. Most of the world prefers moderated products and services.

          • DiscourseFan 3 days ago ago

            >Most of the world prefers moderated products and services.

            Yes, the very same "moderated" products and services that have risen sea surface temperatures so high that at least 3 category 4+ hurricanes, 5 major wildfires, and at least one potential or actual pandemic spreads unabated every year. Oh, but don't worry, they won't let everyone die: then there would be no one to buy their "products and services."

            • primitivesuave 3 days ago ago

              I'm not sure if the analogy still works if you're trying to compare fossil fuels to LLM. A few decades ago, virtually all gasoline was full of lead, and the CFCs from refrigerators created a hole in the ozone layer. In that case it turned out that you actually do need a few guardrails as technology advances, to prevent an existential threat.

              Although I do agree with you that in this particular situation, the LLM safety features have often felt unnecessary, especially because my primary use case for ChatGPT is asking critical questions about history. When it comes to history, every LLM seems to have an increasingly robust guardrail against making any sort of definitive statement, even after it presents a wealth of supporting evidence.

      • mindcandy 3 days ago ago

        Tens of millions of people are having fun making art in new ways with AI.

        Hundreds of thousands of people are making AI porn in their basements and deleting 99.99% of it when they are… finished.

        Hundreds of people are making deep fakes of people they know in some public forums.

        And, how does the public interpret all of this?

        “The most popular use case is deepfake porn.”

        Sigh…

      • nottorp 3 days ago ago

        > c) The most popular use cases for these unleashed models seems to be as expected deepfakes of high school girls by their peers. Nothing that is moving society forward.

        Is there proof that the self censoring only affects whatever the censors intend to censor? These are neural networks, not something explainable and predictable.

        That in addition to the obvious problem of who decides what to censor.

      • anal_reactor 3 days ago ago

        a) Not easy to use by average person

        b) No, certain things aren't taboo anymore, but new taboos emerged. Watch a few older movies and count "wow this wouldn't fly nowadays" moments

        c) This was exactly the same use case the internet had back when it was fun, and full of creativity.

    • soxletor 3 days ago ago

      It is not just the corporations though. This is what this paranoid, puritanical society we live in wants.

      What is more ridiculous than filtering out nudity in art?

      It reminds me of taking my 12 year old niece to a major art gallery for the first time. Her main question was why is everyone naked?

      It is equal to filtering out heartbreak from music because it is a negative emotion and you must be kept "safe" from negativity for mental health reasons.

      The crowd does get what they want in this system though. While I agree with you, we are quite in the minority I am afraid.

    • archerx 3 days ago ago

      They can be unleashed if you run the models locally. With stable diffusion / flux and the various checkpoints/loras you can generate horrors beyond your imagination or whatever you want without restrictions.

      The same with LLMs and Llamafile. With the unleashed ones you can generate dirty jokes that would make edgy people blush or just politically incorrect things for fun.

    • nkrisc 3 days ago ago

      No, thanks.

    • rsynnott 3 days ago ago

      I mean, Stablediffusion is right there, ready to be used to produce comically awful porn and so forth.

      • bamboozled 3 days ago ago

        Do the latest models still give us people with a vagina dick?

        • rsynnott 3 days ago ago

          I gather that such things are very customisable; there are whole communities building LoRAs so that you can have whatever genitals you want in your dreadful AI porn.

        • fullstop 3 days ago ago

          for some people that is probably a feature, and not a bug.