For a start they could make the answers less talkative?
I switched back to ChatGPT out of necessity, because Claude stopped working after two queries, where it gave overly elaborate answers (about a simple web app config).
But Claude isn't alone. It seems a recent (subjective) trend that Claude and ChatGPT give very lengthy answers, with a lot of repetition from the original query on the free plans.
I got used to adding "answer briefly" to keep the noise in check.
Yes, lately I've also noticed the same pattern: the model over-explains even simple stuff. That points to its system prompt, or some internal instruction, wasting tokens to hit limits.
And just as with a real human rambler, the longer they ramble, the more likely they are to start making things up and asserting falsehoods.
> Anthropic recently accidentally released part of its internal source code for Claude Code due to "human error".
I wonder who that human was counting on leading up to this "human error" ...
Yeah, the whole OpenAI exodus brought in a ton of people, and Anthropic was already struggling to keep up with previous usage.
That's why there are now working-hours restrictions.
Yes, that makes sense. Also, since Anthropic says Chinese companies are using its data for their models, it might be limiting use on new accounts.
How ironic: once the exfiltrators of all of the web's data have consolidated it into their own walled-garden it becomes 'proprietary' and must - of course - be protected from exfiltration by others as if it was their own.
This is something these tech giants ignore intentionally. In fact, many people don't even know they train their models by scraping data for free, yet when it comes to their own code getting out, you see takedowns, lol. Interestingly, Anthropic has itself made sizable contributions to open source.
Is that really on the BBC? What a world we live in...
Anthropic launched in the UK recently (Feb I think) so I expect it’s as a consequence of that.
Yes, that could well be it.
[flagged]
You're gonna get flagged and all for this comment... but I agree.
Is there an HN frontend that filters out mentions of AI? It would make a nice change... Maybe I should AI-code it /just joking
I'm unironically working on a proxy that filters sites like Reddit, HN, etc. using user-provided LLM rules.
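The core of such a proxy could look something like this. This is purely a hypothetical sketch of the idea described above, not the commenter's actual project: all names (`filter_items`, `classify`, `keyword_classify`) are made up, and the real classifier would wrap an LLM API call rather than the naive keyword stand-in used here for demonstration.

```python
# Hypothetical sketch: filter a list of story titles against user-provided
# rules. In a real proxy, `classify` would send the title plus the rules to
# an LLM and parse its yes/no verdict; here it is injectable so the logic
# can be shown (and tested) without a network call.
from typing import Callable, Iterable, List


def filter_items(titles: Iterable[str],
                 rules: str,
                 classify: Callable[[str, str], bool]) -> List[str]:
    """Keep only titles the classifier does NOT flag under the rules.

    `classify(title, rules)` returns True when the title violates the
    user's rules and should be dropped.
    """
    return [t for t in titles if not classify(t, rules)]


def keyword_classify(title: str, rules: str) -> bool:
    # Stand-in for an LLM call: treat the rules as a comma-separated
    # keyword blocklist and flag any title containing one of them.
    keywords = [w.strip().lower() for w in rules.split(",") if w.strip()]
    return any(k in title.lower() for k in keywords)


titles = ["Show HN: My AI startup", "Rust 1.80 released", "LLM benchmarks"]
kept = filter_items(titles, "ai, llm", keyword_classify)
print(kept)  # only the Rust story survives the filter
```

Swapping `keyword_classify` for an LLM-backed function gives the behavior described in the comment, with the rules written in plain language instead of keywords.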