Gen X translator here. This is a user story complaint that product output is nondeterministic.
Millennial translator here. The user is complaining about having to learn a new skill, which is what 90% of LLM complaints are about.
Interesting spreadsheet example. I have the opposite problem: Gemini insists on bringing up the option of placing stuff in my Google Workspace (most of the time spreadsheets), although I've never told it to.
The worst thing about LLMs is that managers got convinced that LLMs improve performance and calculated it into their spreadsheets; now they're requiring their teams to use LLMs and enforcing the allegedly better productivity results. When it doesn't happen, they blame the people, not the AI.
Was just watching a recap of the Fallout: New Vegas main storyline. Yes Man's cheery demeanour reminded me of ChatGPT. I feel like, done judiciously, gen AI could certainly be incorporated into video games to excellent effect.
OP is not talking about video games, but using them as an analogy for more sober vocational labors.
Anyone else completely confused about what this article is even about?
I actually read the article, so I'm not confused about what it is about.
I also understood that it's about Copilot not doing the thing the author wanted.
According to the article:
> working with [chatbots] feels like groping through a cave in the dark – a horrible game I call "PromptQuest" – while being told this is improving my productivity.
AI bad, AI bad, AI bad. bad bad bad, AI-bad.
I didn't see any complaints in this article about any kind of artificial intelligence, research or otherwise, besides large language models.
Large language models are a single kind of AI, and a particularly annoying kind when you are forced to use them for deterministic or fact-seeking tasks.
or did you read the article? you're probably an LLM. why am I here? fuck this website
True, but LLMs are all that's being sold right now, mainly because people think they're intelligent when they're basically bullshit-artist simulators.
I don't think the future of AI is with LLMs either. Not only LLMs anyway.
It’s The Register. They’re always more about the ‘tude than substance.
This seems to be someone who has no idea how to use LLMs yelling at clouds. Or maybe just someone pretending to have no idea because it makes for good cloud-yelling.
> someone who has no idea how to use LLMs yelling at clouds
How to use LLMs? There’s no “how to use LLMs”: you just tell them what you want, and they give it to you. Or they give it to you sometimes, and sometimes they give you something else. Or they tell you they’re giving you exactly what you want, but don’t. Or they seem like they’re giving you what you want, only it’s got a secret mistake somewhere inside that you need to dig through and search for. Maybe there’s no mistake after all.
Yes, this is clearly a new wonder-technology, and all criticisms of it are just old people, back on their cloud-yelling bullshit.
Vibe coding is absolutely Progress Quest as a service at this point. It will get better, but it's delusional to think it can replace engineers any time in the next few years. However, tech culture demands its overlords oversell it lest the VCs stop believing in it.
From the reactions here, we can already infer we're dealing with user error.
I make no judgement, but I've definitely had the opposite experience to the author's, so the article doesn't resonate with me and I don't even understand it.
Seems kinda like a first world problem to me.
The way I see it, when LLMs work, they're almost magical. When they don't, oh well, it didn't take that long anyway, and I didn't have them until recently, so I can just do things the old boring way if the magic fails.
The problem with Zork is that you don’t have a list of all the options in front of you, so you have to guess. You could have a menu that lists all the valid options, but that changes the game: it no longer requires you to use imagination and open-ended thinking, and it becomes more of a point’n’click storybook.
But for tools, we should have a clear, up-front list of capabilities and menu options. Photoshop and VS Code give you menu after menu of options with explicit, well-defined behaviors, because they are tools used to achieve a specific aim, not toys for open-ended exploration.
An LLM doesn’t give you a menu because the LLM doesn’t even know what it’s capable of. And that’s why I think we see such polarized responses: some people want an LLM that’s a supercharged version of a tool, others want a toy for exploration.
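To make the contrast concrete, here's a minimal sketch (the Tool class and its commands are made up purely for illustration): a conventional tool can mechanically enumerate its own capabilities, while an LLM's only interface is free text.

```python
class Tool:
    """A conventional tool: its capabilities are a finite, inspectable set."""

    def __init__(self):
        # Every behavior is registered explicitly, so a menu can be
        # generated mechanically; the tool "knows" what it's capable of.
        self._commands = {
            "crop":   lambda img: f"cropped({img})",
            "rotate": lambda img: f"rotated({img})",
        }

    def menu(self):
        # The complete, authoritative list of valid operations.
        return sorted(self._commands)

    def run(self, name, arg):
        # Unknown commands fail loudly instead of guessing.
        if name not in self._commands:
            raise ValueError(f"no such command: {name!r}")
        return self._commands[name](arg)


tool = Tool()
print(tool.menu())                    # ['crop', 'rotate'] -- discoverable up front
print(tool.run("crop", "photo.png"))

# An LLM has no equivalent of tool.menu(): the only interface is free text,
# and the set of "valid inputs" isn't enumerable.
# reply = some_llm("rotate the image")   # hypothetical call, for contrast
```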
The only time it ever seems like magic is when you don't really care about the problem or how it gets "solved" and are willing to ignore all the little things it got wrong.
Generative AI is neither magic, nor does it really solve any problems. The illusion of productivity is all in your head.
Like any tool, you need to know how to use it.
For my uses, my rule is "long to research, but easy to verify". I only ask for things I can quickly determine if they're right or not, I just don't want to spend half an hour googling and sorting through the data.
For most of my queries there's an acceptable margin of error, which is generally unavoidable AI or not. Google isn't guaranteed to return everything you might want either.
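For what it's worth, that rule can even be mechanized when the answer is checkable by code. A rough sketch, with a hypothetical ask_llm standing in for whatever client you actually use: ask for a regex, then verify it against cases you already know.

```python
import re

def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in: plug in your actual LLM client here.
    raise NotImplementedError

def verified_regex(prompt, should_match, should_not_match):
    """Accept the model's answer only if it passes cheap local checks."""
    pattern = ask_llm(prompt).strip()
    compiled = re.compile(pattern)   # raises if it isn't even a valid regex
    ok = all(compiled.fullmatch(s) for s in should_match) and \
         not any(compiled.fullmatch(s) for s in should_not_match)
    # None means: verification failed, re-ask or fall back to doing it by hand.
    return pattern if ok else None

# Usage:
# verified_regex("A regex matching ISO 8601 dates; reply with the pattern only",
#                should_match=["2024-01-31"],
#                should_not_match=["31/01/2024"])
```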
The problem it solves is “I need some art-shaped substance, and I don't want to have to interact with, let alone pay, an artist”. It's lorem ipsum.
This article is garbage. I was half expecting or hoping for a nuanced analysis of regressions manifested in a specific leading model as a result of purported "upgrades", but instead found an idiot who doesn't understand how LLMs work, or even seem to care, really.
Idiots like this seem to want a robot that does things for them instead of a raw tool that builds sometimes useful context, and the LLM peddlers are destroying their creations to oblige this insatiable contingent.
A "robot that does things" is the overpromise that doesn't deliver.
I actually agree with the article that non-determinism is why generative AI is the wrong tool in most cases.
In the past, the non-determinism came from the user's inconsistent grammar and the game's poor documentation of its rigid rules. Now the non-determinism comes 100% from the AI no matter what the user does. This is objectively worse!
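The contrast is easy to see in code. A toy sketch of the old model (verbs and wording made up): the parser is rigid, but the same input always produces the same result, so every failure traces back to the player's phrasing, never to the engine rolling dice.

```python
VERBS = {"go", "take", "look"}

def parse(command: str):
    """Deterministic verb-noun parsing, Zork-style (toy version)."""
    words = command.lower().split()
    if not words or words[0] not in VERBS:
        # Rigid, and often frustrating -- but perfectly predictable.
        return ("error", "I don't know that verb.")
    return (words[0], " ".join(words[1:]))

assert parse("TAKE lamp") == ("take", "lamp")   # same input, same answer, forever
assert parse("grab lamp") == ("error", "I don't know that verb.")
# An LLM in the same role samples from a distribution, so even a fixed,
# well-formed prompt can yield a different answer on the next run.
```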
The different flavors of non-determinism are interesting.
There’s chat-vs-API: the same model answers differently depending on the input channel.
There’s also statistical: once in a rare while, a response will be gibberish. Same prompt, same model, same input mode. 70% of the time, sane and similar answers; 0.01% of the time, gibberish. In between, a sliding scale, with a ‘cursed middle’ of answers that are mostly viable except for one poisoned thing that’s hard to auto-detect…
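If you wanted to put numbers on that, one way (a sketch only; query_model is a hypothetical stand-in for a real client, and sampling at nonzero temperature is assumed) is to fire the same prompt many times and see how the answers cluster:

```python
from collections import Counter

def query_model(prompt: str) -> str:
    # Hypothetical stand-in: plug in your chat or API client here.
    raise NotImplementedError

def sample_spread(prompt: str, n: int = 1000):
    """Tally n answers to one fixed prompt and estimate the outlier rate."""
    answers = Counter(query_model(prompt).strip().lower() for _ in range(n))
    bulk = answers.most_common(3)                 # the "sane and similar" majority
    singletons = sum(1 for c in answers.values() if c == 1)
    return bulk, singletons / n                   # long tail: where gibberish hides

# To probe the chat-vs-API gap, run the same tally once per input channel
# and compare the two distributions.
```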