I jumped at this HN post, because Claude Code Opus 4 has this stupid habit of never terminating files with a trailing newline.
To test a new hook one needs to restart Claude, so it's better to route the actual processing through a script one can continually edit within a session. My script runs formatters on C files and shell scripts, and just fixes missing trailing newlines in other files.
As usual, Claude and other AIs are poor at breaking problems into small steps, and make up ways to do things. The hook receives JSON on stdin, which I first saved to disk; then I extracted the file path and saved that to disk; then I instead called save-hook.sh on that path. After a couple of edits to save-hook.sh, we're home.
This was all of ten minutes. I wasted far more time letting it flail at bigger steps all at once.
Hooks will be important for "context engineering" and runtime verification of an agent's performance. This extends to things such as enterprise compliance and oversight of agentic behavior.
> Exit Code 2 Behavior
> PreToolUse - Blocks the tool call, shows error to Claude
This is great: it means you can set up concrete rules about which commands CC is allowed to run (and with what arguments), rather than trying to coax compliance via CLAUDE.md.
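For reference, hooks are registered in settings. Something like the following sketch (based on the hooks announcement; the guard script path is hypothetical) routes every Bash tool call through a script that can exit 2 to block it:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": "/path/to/guard.sh" }
        ]
      }
    ]
  }
}
```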
I'm excited to see these improvements, but none of them are enough to make up for the inconvenience of having to start a new conversation (/clear) after every task.
I've been using Gemini Code. The larger context window is big enough to work for a full session without having to /clear. It matters. Having to think so hard and conserve tokens with Claude is problematic.
I've been playing with Claude Code the past few days. It is very energetic and maybe will help me get over the hump on some long-standing difficult problems, but it loses focus quickly. Despite explicit directions in CLAUDE.md to build with "make -j8" and run unit tests with "make -j8 check", I see it sometimes running make without -j or calling the test executable directly. I would like to limit it to doing certain essential aspects of workflow with the commands I specify, just as a developer would normally do. Are "Hooks" the right answer?
The AI definitely gets confused when there is a lot of stuff happening. It helps if you try to make the commands as easy as possible. Like, change 'make' so that '-j8' is default, or add scripts like make-check.sh that does 'make -j check', or add an MCP server that has commands for the most common actions (tell the AI to write an MCP server for you).
Hooks would probably help, I think you could add a hook to auto-reject the bot when it calls the wrong thing.
This closes a big feature gap. One thing that may not be obvious is that because of the way Claude Code generates commits, regular Git hooks won’t work. (At least, in most configurations.)
We’ve been using CLAUDE.md instructions to tell Claude to auto-format code with the Qlty CLI (https://github.com/qltysh/qlty), but Claude is a bit hit and miss in following them. The determinism here is a win.
It looks like the events that can be hooked are somewhat limited at launch, and I wonder if they will make it easy to hook Git commit and Git push.
I never have to reformat. It picks up my indentation preferences immediately and obeys my style guide flawlessly. When I ask it to perfect my JavaDoc it is awesome.
Must be a ton of fabulous enterprise Java in the training set.
I find it a bit odd they didn't model this as an MCP server itself, making hooks just MCP tools with pre-agreed names.
Wouldn't it be nice to have the agent autodiscover the hooks, abstracting their implementation details away under the MCP server, which other agents could then reuse?
So, from my limited understanding, this doesn't take up context; it's something automatic that you can configure per tool use, not an MCP tool that Claude decides when to run?
An abstraction via a script should work, right? They document that it pipes the JSON data to your command's stdin:

```sh
#!/bin/sh
# lint-monorepo.sh - hook entry point

# read the JSON payload the hook pipes to stdin
json_input=$(cat)

# parse out the edited file's path with jq (field name per the hooks docs)
file_path=$(printf '%s' "$json_input" | jq -r '.tool_input.file_path')

# dispatch to the right linter based on where the file lives
case "$file_path" in
  "$dir1"/*) lint_for_dir1 "$file_path" ;;
esac
```
Does it? Claude Code is the product that works the least well for me, mainly because of its tendency to go off and do tons of stuff. I've found LLMs are at their best when they produce few enough lines of code that I can review and iterate, not when they go off and invent the world.
For that reason, I mainly use Aider and Cursor (the latter mostly in the "give me five lines" comment mode).
Add a PostToolUse [0] hook that automatically creates a git commit whenever changes are made.
Then you can either git checkout the commit you want to rollback to...
Or you could assign those git commits an index and make a little MCP server that allows you to /rollback:goto 3 or /rollback:back 2 or whichever syntax you want to support.
In fact if you put that paragraph into Claude I wouldn't be surprised if it made it for you.
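A minimal sketch of that PostToolUse commit step (a hypothetical helper; a real hook would read the event JSON from stdin, which can be ignored here, and call this on the project root):

```python
"""PostToolUse sketch: checkpoint-commit whatever Claude just changed."""
import subprocess

def checkpoint(repo_dir: str, label: str = "claude checkpoint") -> bool:
    """Stage all changes and commit them; return True if a commit was created."""
    subprocess.run(["git", "add", "-A"], cwd=repo_dir, check=True)
    # `git diff --cached --quiet` exits 0 when nothing is staged, so skip empty commits.
    if subprocess.run(["git", "diff", "--cached", "--quiet"], cwd=repo_dir).returncode == 0:
        return False
    subprocess.run(
        ["git", "commit", "-m", label, "--no-verify"], cwd=repo_dir, check=True
    )
    return True
```

Each checkpoint then becomes a rollback target for git checkout or a small rollback tool.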
Claude Code has basically grown to dominate my initial coding workflow.
I was using the API and passed $50 easily, so I upgraded to the $100 a month plan and have already reached $100 in usage.
I've been working on a large project, with 3 different repos (frontend, backend, legacy backend) and I just have all 3 of them in one directory now with claude code.
Wrote some quick instructions about how it was set up, and it's worked very well. If I am feeling brave I can have multiple Claude Codes running in different terminals, each working on one piece, but Opus tends to do better working across all 3 repos with all of the required context.
Still have to audit every change, commit often, but it works great 90% of the time.
Opus-4 feels like what OAI was trying to hype up for the better part of 6 months before releasing 4.5
Just started using Claude (very late to the game), and I am truly blown away. Instead of struggling for hours trying to get the right syntax for a Powershell script or to convert Python to Go, I simply ask Claude to make it happen. This helps me focus on content creation instead of the mind-bending experience of syntax across various languages. While some might call it laziness, I call it freedom as it helps me get my stuff done quicker.
I have been using it for other stuff (real estate, grilling recipes, troubleshooting electrical issues with my truck), and it seems to have a very large knowledge base. At this point, my goal is to get good at asking the right kinds of questions to get the best/most accurate answers.
>before this you had to trust that claude would follow your readme instructions about running linters or tests. hit and miss at best. now its deterministic. pre hook blocks bad actions post hook validates results.
>hooks let you build workflows where multiple agents can hand off work safely. one agent writes code another reviews it another deploys it. each step gated by verification hooks.
This nicely describes where we're at with LLMs as I see it: they are 'fancy' enough to be able to write code, yet at the same time they can't be trusted with things that can be solved by a simple hook.
I feel that current improvement mostly comes from slapping what feels to me like workarounds on top of something that may very well be a local maximum.
Yup, slowing down the AI is a really hard thing to do. I've mostly accomplished it, but I use extensive auto prompting and a large memory bank. All of it is designed explicitly to slow down the AI. I've taught it how to do what I call "Baby Steps", which is defined as: "The smallest possible change that still effectively moves the technology forward." Some of my prompting is explicit about human review and approval of every change including manual testing of the application in question BEFORE the model moves on to the next step.
Given the Anthropic legal terms forbid competing with them, what are we actually allowed to do with this? Seems confusing what is allowed.
No machine learning work? That would compete.
No writing stuff I would train AI on. Except I own the stuff it writes, but I can’t use it.
Can we build websites with it? What websites don’t compete with Anthropic?
Terminal games? No, Claude code is a terminal game, if you make a terminal game it competes with Claude?
Can their “trust and safety team” humans read everyone’s stuff just to check if we’re competing with LLMs (funny joke) and steal business ideas and use them at Anthropic?
Feels like the dirty secret of AI services is, every possible use case violates the terms, and we just have to accept we’re using something their legal team told us not to use? How is that logically consistent? Any safety concerns? This doesn’t seem like a law Asimov would appreciate.
It would be cool if the set of allowed use cases wasn’t empty. That might make Anthropic seem more intelligent
Anthropic's terms typically restrict training competing AI models with their outputs, not building standard applications or websites that simply use their API as a tool.
I tried to make an app in Claude Code, the kind of thing they claim with so much fanfare it can do, and it failed. It was obvious it would fail: I wanted something that I think had not been done before, using the YouTube API. But it failed nonetheless.
I am tired of pretending that this can actually pull off any meaningful work besides being a debugging companion or a slightly enhanced Google/Stack Overflow.
I have a side project Android app. To test Claude Code, I loaded it up in the repo and asked it to add a subscription billing capability to the app. Not rocket science, but it probably would have taken me a day or two to figure out the mechanics of Google Play subscription billing and implement it in code.
Claude Code did it in 30 seconds and it works flawlessly.
I am so confused how people are not seeing this as a valuable tool. Like, are you asking it to solve P versus NP or something?
If you need to do something that's been done a million times, but you don't have experience with it, LLMs are an excellent way to get running quickly.
It's definitely not there yet. You have to babysit it a lot. It's not autonomous.
The utility I find is that it helps _me_ do the real engineering work, the planning and solution architecting. Then it can bang out code once it has rock-solid instructions (in natural language, but honestly one level above pseudocode), and then I have to review it with absolutely zero faith in its ability to do things. Then it can work well.
Interesting. How long ago did you do this? How long did you spend on it?
I was skeptical about Claude Code, and then I spent a week really learning how to use it. Within the week I had built a backend with FastAPI that supported user creation, password reset, email confirmation, a front end, and support for OAuth into a few systems.
It definitely took me some time to learn how to make it work, but I’m astounded at how much work I got done and for so little typing.
I build things that haven't been done before with Cursor all the time. You have to break it down into simple building blocks rather than specifying everything up front.
If you do it right this actually forces good design.
"I wanted something that I think it was not done before"
But you do know that this is what LLMs aren't good at.
So your conclusion is somewhat off, because there is plenty of programming work on things that have been done before and require just tweaking.
I mean, I am also not hooked yet and just occasionally use ChatGPT/Claude for concrete stuff, but I do find it useful and I do see where it can get really useful for me (once it really knows my codebase and the libraries used, and does not jump between incompatible API versions).
So many people yearn for LLMs to be like the Star Trek ship computer, which when asked a question unconditionally provides a relevant and correct response, needing no verification.
A better analogy is that LLMs are closer to the "universal translator", with an occasional interaction similar to[0]:
Black Knight: None shall pass.
King Arthur: What?
Black Knight: None shall pass!
King Arthur: I have no quarrel with you good Sir Knight, But I must cross this bridge.
Black Knight: Then you shall die.
King Arthur: I command you, as King of the Britons, to stand aside!
Black Knight: I move for no man.
King Arthur: So be it!
[they fight until Arthur cuts off the Black Knight's left arm]
King Arthur: Now, stand aside, worthy adversary.
Black Knight: 'Tis but a scratch.
King Arthur: A scratch? Your arm's off!
Black Knight: No, it isn't.
King Arthur: Well, what's that then?
Black Knight: I've had worse.
As an aside, people say AI will eliminate coding jobs, but then who will configure these hooks? Or think about adding such a feature?
These kinds of tooling and related work will still be there unless AI evolves to the point that it even thinks of this and announces this to all other AI entities and they also implement it properly etc.
To misuse a woodworking metaphor, I think we’re experiencing a shift from hand tools to power tools.
You still need someone who understands the basics to get the good results out of the tools, but they’re not chiseling fine furniture by hand anymore, they’re throwing heaps of wood through the tablesaw instead. More productive, but more likely to lose a finger if you’re not careful.
And we may get an ugly transitional period where a lot of programs go from being clearly handmade with some degree of care, with fine details that show the developer's craftsmanship, to awful prefab and brutalist software that feels inhuman and mass-produced, where nothing is really fit for the job but still ships because it kind of works well enough.
People go to museums to admire old hand-carved furniture and travel to cities to admire the architecture of centuries past, made with hand-chiseled blocks. While power tools do let people make things of equal quality faster, they're instead generally used to make things of worse quality much, much faster, and the field has gone from being a craft to simply being an assembly-line job. As bad as software is today, we're likely to hit even deeper lows, and people will miss the days when Electron apps seemed good compared to what's yet to come.
There's already been one step in this direction with the Cambrian extinction of 90s/early 2000s software. People still talk about how soulful Winamp/old Windows Media Player/ZSNES/etc were.
>nothing is really fit for the job but still shipped because it kind of works well enough.
This is true for most of the software these days (except for professional software like Photoshop and the like) without LLMs.
Exactly. As a non-software engineer, people talk about software as some fine art on here while my experience as a user is that most software basically sucks in one way or another.
Your experience is a perfect reflection of reality. Most software is not well done.
In trades I found people were very opinionated about the Right Way to do things, but we tended to cut corners constantly there as well. People who work in a craft seem to like the idea of doing things right more than they actually do things right in practice. We end up with gaps in our flooring, ugly solder joints in our plumbing, creaking decks, cracked concrete, and a cookie disclaimer that returns every time you refresh the page.
> In trades I found people were very opinionated about the Right Way to do things,
My experience of moving from tech to doing a lot of home renovations and dealing with hundreds of tradespeople is that it was just like tech: 90% of people in 90% of environments are just trying to make it work so they can collect their pay-cheque and go home.
High quality output in any domain is a result of stumbling across the 10% of genuinely passionate people, and creating the 10% environment for them to want to be passionate in. If you don't luck out with that, everything will still work, it'll just be a bit rough round the edges.
Amen. As a SWE, I've come to realize that no one pays me to treat code with craftsman quality, so I don't. The whole agile mindset is to get something out that demonstrates value and fix things as you go. And in ten years I've also realized that my ability to sit down and make something really special the first time through is: shit. This is the first time I've been able to meet timelines while still producing a better product.
I think it's going to go the opposite way: we'll get a lot more custom-made software that fits exactly what a small customer needs. The code might be utter crap, the design might not be award-winning, but it will be custom-made to a degree that you can't reach by customizing your average SaaS.
This was the promise of "no code" / "low code" solutions in the past
It has never worked in the past, I'm not entirely convinced that it will work now
Agreed. Trying to live in a high level of abstraction without understanding the components of the system will cause the project to fail after a certain point.
I think the kind of people who in the past constructed extremely useful (if brittle) solutions with Excel will be creating all sorts of bespoke and very useful AI-built tools.
It won't bother them at all what the code looks like under the hood. Not that the code will look worse than what an "average" developer produces: Claude and ChatGPT both write better code than most of the existing code I usually look at.
Beautifully formatted, exquisitely incorrect code that provides the simulacrum of a feature on the happy path and subtly fails in all other scenarios with hard-to-detect, impossible-to-debug errors. Can't wait (it's already here, to be honest).
Better than poorly formatted code, making basic mistakes like SQL query string concatenation, from someone who didn't bother to write any tests. You just have to treat it like code you got from someone else. It would be hard for AI to produce more magical errors that are harder to debug than what humans write. LLMs are one of the best debugging tools out there too.
Yeah, and then you’ll get hundreds of slightly different protocols, formats, and standards, and nothing will talk to anything else anymore without bespoke integration.
LLMs can do that integration :)
But "utter crap" isn't just an aesthetic issue - that can mean all kinds of bugs and malfunctions!
And infinite job security for those who know how to turn on a debugger
Or people will just throw the product away and buy a replacement because it’s cheaper than fixing it, much like we no longer mend clothes
I kinda feel differently - it's more like how nowadays you have access to high-quality power tools at cheap prices, and tons of tutorials on Youtube that teach you how to do woodworking, and even if you can't afford the masterwork furniture made by craftsmen, you don't have to buy the shitty mass produced stuff - sure yours won't be as good, but it will be made to your spec.
Moving on to a concrete software example: thanks to AI productivity, we replaced a lot of expensive and crappy subscription SaaS software with our homegrown stuff. Our stuff is probably 100x simpler (everyone knows the pain of making boxed software for a diverse set of customer needs: everything needs to be configurable, which leads to crazy convoluted code, and a lot of it). It's also much better and cheaper to run, to say nothing of the money we save by not paying the exorbitant subscription fees.
I suspect the biggest losers of the AI revolution will be the SaaS companies whose value proposition was: Yes you can use open source for this, but the extra cost of an engineer who maintains this is more than we charge.
As for bespoke software, 'slop' software using Electron, or Unity in video games exists because people believe in the convenience of using these huge lumbering monoliths that come with a ton of baggage, while they were taught the creed that coding to the metal is too hard.
LLMs can help with that, and show people that they can do bespoke from scratch (and more importantly teach people how to do that). Claude/o3/whatever can probably help you build a game in WebGL you thought you needed a game engine for.
Hence the transitory period.
We went through decades of absolutely hideous slop, and now people are yearning for the past and learning how to make things that are aesthetically appealing, like the things that used to be common.
I think we're looking at at least a decade of absolute garbage coming up because it's cheap to make, and people like things that are cheap in the short term. Then people will look back at when software was "good", and use new tools to make things that were as good as they were before.
And not limited to AI and power tools, it happened with art as well. Great art was made with oil paints, watercolors, and brushes. Then digital painting and Photoshop came around and we had a long period of absolute messes on DeviantArt and a lot of knowledge of good color usage and blending was basically lost, but art was produced much faster. Now digital artists are learning traditional methods and combining it with modern technology to make digital art that can be produced faster than traditional art, but with quality that's just as good.
2005 digital paintings have a distinct, and even in the hands of great artists, very sloppy and amateurish feel. Meanwhile 2020s digital artists easily rival the greats of decades and centuries past.
Don't talk about AI with those digital artists though, they will slaughter you for it. The discourse on GenAI in the digital art world (and gaming world, for that matter) has reached an absolutely deranged fever pitch that far outpaces the original valid points about copyright, compensation and intent that were there before. Now it's just screeching.
Great analogy.
Although I still wonder how long we're in this phase and how ubiquitous it will be, because didn't power tools coincide with improved automation in factories eliminating manufacturing jobs?
Feels more like photography. Everyone will “soon” have a tool that lets you take pretty great photos, surpassing professionals of thirty years ago.
If you want professional work done you’ll still hire someone but that person will also use a lot of professional grade computer tooling with it.
But there definitely won’t be as many jobs as before - especially on the low skill end.
I like this metaphor because power tools didn’t lead to more sophisticated craftspeople despite the increase in efficiency and potential. I think it will be the same with code. More outputs, not necessarily more refined or better in any way, but not innately bad either.
When all you have is a hammer, everything looks like your thumb.
Great metaphor, exactly how it feels to me too!
Yea man, people say combine harvesters will eliminate agriculture jobs, but then who will operate these combine harvesters? Obviously every single manual farm laborer will just switch to being an operator of those.
God, will we never move this discussion past this worthless argument? What value would there be in any of these automation tools, be it in agriculture or AI, if every single worker just switched to being an [automation tool] operator?
Like in the story about the cosmologist and the old lady, you seem to be asking "What is the AI standing on?", and the reply here is of course "You're very clever, young man, very clever, but it's AIs all the way down!"
Many already let Claude Code update its own CLAUDE.md, so I don't see any reason why you couldn't (with --dangerously-skip-permissions) let it edit its own hooks. And as in Jurassic Park, the question of whether we should seems to be left by the wayside.
> unless AI evolves to the point that it even thinks of this
The #1 goal of every AI company is to create an AI that is capable of programming and improving itself to create the next, more powerful AI. Of course, these kind of configuration jobs will be outsourced to AI as soon as possible, too.
If programmers become 10x more productive but demand only grows by 5x, what will happen?
Except that for most people this is not coding; it's administration work, DevOps kind of stuff.
I already do lots of "coding" in SaaS products, that have very little to do with what most HNers think of proper coding.
You can already ask Claude Code to modify its own settings
I generally agree that "we" will still be needed, but OTOH, who needs prettier if no human is ever going to read the code?
> people say AI will eliminate coding jobs
Yes.
> then who will configure these hooks?
It will also create jobs.
> unless AI evolves to the point that it even thinks of this and announces this to all other AI entities and they also implement it properly
Also yes.
---
People think that technology is some sort of binary less jobs/more jobs thing.
Technology eliminates some jobs and creates others.
.claude/settings.local.json fragment:
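A hedged reconstruction of the kind of fragment meant here, wiring a PostToolUse hook to the `save-hook.sh` script mentioned below (the matcher and schema follow the documented hooks format; the script path is an assumption):

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit|MultiEdit",
        "hooks": [
          { "type": "command", "command": "sh .claude/hooks/save-hook.sh" }
        ]
      }
    ]
  }
}
```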
I jumped at this HN post because Claude Code Opus 4 has this stupid habit of never terminating files with a newline. To test a new hook one needs to restart Claude, so it's better to run the actual processing through a script one can continually edit in one session. This script uses formatters on C files and shell scripts, and just fixes missing trailing newlines on other files.
As usual claude and other AI is poor at breaking problems into small steps, and makes up ways to do things. The above hook receives a json file, which I first saved to disk, then extracted the file path and saved that to disk, then instead called save-hook.sh on that path. Now we're home after a couple of edits to save-hook.sh
This was all ten minutes. I wasted far more time letting it flail attempting bigger steps at once.
Really excited to see this implemented.
Hooks will be important for "context engineering" and runtime verification of an agent's performance. This extends to things such as enterprise compliance and oversight of agentic behavior.
Nice of Anthropic to have supported the idea of this feature from a github issue submission: https://github.com/anthropics/claude-code/issues/712
It is indeed. I don't use Claude Code. I use Cline which is a VS Code extension (cline.bot).
This is a pretty killer feature that I would expect to find in all the coding agents soon.
E.g. you can allow some commands but prevent others. You can already do this in .claude/settings.json.
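For reference, a sketch of what such allow/deny rules can look like in `.claude/settings.json` (the `permissions` shape follows the documented schema; the specific command patterns are purely illustrative):

```json
{
  "permissions": {
    "allow": ["Bash(npm run lint)", "Bash(git diff:*)"],
    "deny": ["Bash(rm -rf:*)", "Bash(curl:*)"]
  }
}
```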
A couple of example hooks: https://cameronwestland.com/building-my-first-claude-code-ho...
I'm happy to see Claude Code reaching parity with Cursor for linting/type checking after edits.
I'm excited to see these improvements, but none of them are enough to make up for the inconvenience of having to start a new conversation (/clear) after every task.
I've been using Gemini Code. The larger context window is big enough to work for a full session without having to /clear. It matters. Having to think so hard and conserve tokens with Claude is problematic.
I've been playing with Claude Code the past few days. It is very energetic and maybe will help me get over the hump on some long-standing difficult problems, but it loses focus quickly. Despite explicit directions in CLAUDE.md to build with "make -j8" and run unit tests with "make -j8 check", I see it sometimes running make without -j or calling the test executable directly. I would like to limit it to doing certain essential aspects of workflow with the commands I specify, just as a developer would normally do. Are "Hooks" the right answer?
For the `-j` issue specifically, exporting `MAKEFLAGS=-j8` should work.
Thanks, I'll let Claude know.
or mcp
The AI definitely gets confused when there is a lot of stuff happening. It helps if you try to make the commands as easy as possible. Like, change 'make' so that '-j8' is default, or add scripts like make-check.sh that does 'make -j check', or add an MCP server that has commands for the most common actions (tell the AI to write an MCP server for you).
Hooks would probably help, I think you could add a hook to auto-reject the bot when it calls the wrong thing.
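A minimal sketch of that auto-reject idea, assuming the documented hook conventions: the tool call arrives as JSON on stdin, and exiting with code 2 blocks the call and feeds stderr back to the model. The `tool_input.command` field name follows the docs, but treat the exact wiring as an assumption:

```python
import json
import re
import sys

def should_block(command: str) -> bool:
    """True for any `make` invocation that lacks a -j flag."""
    return bool(re.search(r"\bmake\b", command)) and "-j" not in command

# Real hook wiring (commented out so the helper can be tested standalone):
#   payload = json.load(sys.stdin)
#   command = payload.get("tool_input", {}).get("command", "")
#   if should_block(command):
#       print("Use 'make -j8' as documented in CLAUDE.md.", file=sys.stderr)
#       sys.exit(2)  # exit code 2 rejects the call; stderr goes back to the model
```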
I frequently have to remind Claude Code of the instructions in the CLAUDE.md file, as well as various general aspects of the code base.
Maybe this will enable a fix
> Stop using early returns in void functions! It’s in the clause.md you shouldn’t do that in this project!
Reading CLAUDE.md (22 seconds, 2.6k tokens..)
You’re absolutely right!
Honestly patterns like these that can have a tool fix the LLM quirks seems powerful.
adding a hook to have it push to prod every time baby
We have to do this, otherwise China wins the "AI" race!
This also:
1) Assign coding task via prompt
2) Hook: write a test that proves the prompt's requirements
3) Write code
4) Hook: test the code
5) Code passes -> commit
6) Else go to 3
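Step 4 of the loop above can be sketched as a small gate, again assuming the documented convention that a hook exiting with code 2 blocks the agent and returns its stderr to the model; `make -j8 check` is just a placeholder test command:

```python
import subprocess
import sys

def run_gate(test_cmd):
    """Run the test suite; return 0 to let the agent proceed to the commit
    step, or 2 to block it (exit code 2 feeds stderr back to the model)."""
    result = subprocess.run(test_cmd, capture_output=True, text=True)
    if result.returncode == 0:
        return 0
    sys.stderr.write(result.stdout + result.stderr)  # surface failures to Claude
    return 2

# In the real hook script: sys.exit(run_gate(["make", "-j8", "check"]))
```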
You can just tell it to do that, or put it in your CLAUDE.md. You don't need hooks.
Would love to see this in Cursor. My workaround right now is using a bunch of rules that sort of work some of the time.
As an ex-Cursor user myself, is there any reason that you’re still using it? Genuinely curious.
This closes a big feature gap. One thing that may not be obvious is that because of the way Claude Code generates commits, regular Git hooks won’t work. (At least, in most configurations.)
We’ve been using CLAUDE.md instructions to tell Claude to auto-format code with the Qlty CLI (https://github.com/qltysh/qlty), but Claude is a bit hit and miss in following them. The determinism here is a win.
It looks like the events that can be hooked are somewhat limited to start, and I wonder if they will make it easy to hook Git commit and Git push.
FYI.
Claude loves Java.
I never have to reformat. It picks up my indentation preferences immediately and obeys my style guide flawlessly. When I ask it to perfect my JavaDoc it is awesome.
Must be a ton of fabulous enterprise Java in the training set.
Why is it that regular git hooks do not work with claude code?
Husky and lint-staged worked for me. Pre Commit Hooks did not work for me.
Find it a bit odd they didn't model this as an MCP server itself, making hooks just MCP tools with pre-agreed names.
Wouldn't it be nice to have the agent autodiscover the hooks and abstracting their implementation details away under the mcp server, which you could even reuse by other agents?
So, from my limited understanding, this doesn't take up context; it's something automatic you can configure per tool use, not an MCP tool where Claude decides "when" to run it?
This needs a way to match directories for changes in monorepos. E.g. run this linter only if there were changes in this directory.
An abstraction via a script should work, right? They document that it pipes the JSON data to your command's stdin.
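As a sketch of that script under those assumptions (the `tool_input.file_path` field follows the documented payload; `packages/api` and the eslint call are purely illustrative, and paths are assumed repo-relative):

```python
import pathlib

WATCHED = pathlib.PurePath("packages/api")  # only lint changes under this directory

def should_lint(file_path: str) -> bool:
    """True when the edited file lives under WATCHED."""
    try:
        pathlib.PurePath(file_path).relative_to(WATCHED)
        return True
    except ValueError:
        return False

# Real hook wiring (commented out so the helper can be tested standalone):
#   import json, subprocess, sys
#   payload = json.load(sys.stdin)
#   path = payload.get("tool_input", {}).get("file_path", "")
#   if should_lint(path):
#       sys.exit(subprocess.run(["npx", "eslint", path]).returncode)
```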
Set up pre-commit and call that from a hook? That's what we have
Whatever you run in the hook can check whatever conditions you want.
This can be implemented at the line level if the linter is Git aware
Amazing how there are whole companies dedicated to this and yet Claude Code keeps leading the way.
Does it? Claude Code is the product that works the least well for me, mainly because of its tendency to go off and do tons of stuff. I've found LLMs are at their best when they produce few enough lines of code that I can review and iterate, not when they go off and invent the world.
For that reason, I mainly use Aider and Cursor (the latter mostly in the "give me five lines" comment mode).
Not to take away anything here, but hooks are present in other similar products. At least one example here - https://docs.aws.amazon.com/amazonq/latest/qdeveloper-ug/com...
Only yesterday I was searching for ways to lint live rather than waiting for Claude to do it, or waiting for pre-commit.
Wish it supported rollbacks..
With this you could add support yourself!
Add a PostToolUse [0] hook that automatically creates a git commit whenever changes are made. Then you can either git checkout the commit you want to rollback to... Or you could assign those git commits an index and make a little MCP server that allows you to /rollback:goto 3 or /rollback:back 2 or whichever syntax you want to support.
In fact if you put that paragraph into Claude I wouldn't be surprised if it made it for you.
[0] https://docs.anthropic.com/en/docs/claude-code/hooks#posttoo...
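The auto-checkpoint half of that suggestion could be as small as this settings fragment (schema per the documented hooks format; the matcher and commit message are arbitrary choices, and `|| true` keeps the hook from failing when there is nothing to commit):

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit|MultiEdit",
        "hooks": [
          { "type": "command", "command": "git add -A && git commit -q -m 'claude checkpoint' || true" }
        ]
      }
    ]
  }
}
```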
Claude Code has basically grown to dominate my initial coding workflow.
I was using the API and passed $50 easily, so I upgraded to the $100 a month plan and have already reached $100 in usage.
I've been working on a large project, with 3 different repos (frontend, backend, legacy backend) and I just have all 3 of them in one directory now with claude code.
Wrote some quick instructions about how it was set up; it's worked very well. If I am feeling brave I can have multiple Claude Codes running in different terminals, each working on one piece, but Opus tends to do better working across all 3 repos with all of the required context.
Still have to audit every change, commit often, but it works great 90% of the time.
Opus-4 feels like what OAI was trying to hype up for the better part of 6 months before releasing 4.5
You can’t use it across three repos like a workspace in Xcode?
Just started using Claude (very late to the game), and I am truly blown away. Instead of struggling for hours trying to get the right syntax for a Powershell script or to convert Python to Go, I simply ask Claude to make it happen. This helps me focus on content creation instead of the mind-bending experience of syntax across various languages. While some might call it laziness, I call it freedom as it helps me get my stuff done quicker.
I have been using it for other stuff (real estate, grilling recipes, troubleshooting electrical issues with my truck), and it seems to have a very large knowledge base. At this point, my goal is to get good at asking the right kinds of questions to get the best/most accurate answers.
That’s great. Regardless of the naysayers about AI hype in tech, it was a major development for general society even if this is all it ends up being.
>before this you had to trust that claude would follow your readme instructions about running linters or tests. hit and miss at best. now its deterministic. pre hook blocks bad actions post hook validates results.
>hooks let you build workflows where multiple agents can hand off work safely. one agent writes code another reviews it another deploys it. each step gated by verification hooks.
This nicely describes where we're at with LLMs as I see it: they are 'fancy' enough to be able to write code, yet at the same time they can't be trusted to do stuff which can be solved with a simple hook.
I feel that currently improvement mostly comes from slapping what to me feels like workarounds on top of something that very well may be a local maximum.
I wonder how hard it is to create an alternate user account and have Claude run as that user instead?
[dead]
[flagged]
[flagged]
Yup, slowing down the AI is a really hard thing to do. I've mostly accomplished it, but I use extensive auto prompting and a large memory bank. All of it is designed explicitly to slow down the AI. I've taught it how to do what I call "Baby Steps", which is defined as: "The smallest possible change that still effectively moves the technology forward." Some of my prompting is explicit about human review and approval of every change including manual testing of the application in question BEFORE the model moves on to the next step.
The key phrase is "Do not overengineer."
I say stuff like:
This code must be minimal.
Meet only the stated requirements.
Do not overengineer.
Create a numbered index of requirements.
Verify after you write the code that all requirements are met and no more.
Given that Anthropic's legal terms forbid competing with them, what are we actually allowed to do with this? It seems confusing what is allowed.
No machine learning work? That would compete.
No writing stuff I would train AI on. Except I own the stuff it writes, but I can’t use it.
Can we build websites with it? What websites don’t compete with Anthropic?
Terminal games? No, Claude code is a terminal game, if you make a terminal game it competes with Claude?
Can their “trust and safety team” humans read everyone’s stuff just to check if we’re competing with LLMs (funny joke) and steal business ideas and use them at Anthropic?
Feels like the dirty secret of AI services is, every possible use case violates the terms, and we just have to accept we’re using something their legal team told us not to use? How is that logically consistent? Any safety concerns? This doesn’t seem like a law Asimov would appreciate.
It would be cool if the set of allowed use cases wasn’t empty. That might make Anthropic seem more intelligent
Anthropic's terms typically restrict training competing AI models with their outputs, not building standard applications or websites that simply use their API as a tool.
Would you argue that Cursor (valued at $10B) is breaking Anthropic's terms by making an IDE that competes with their Canvas feature?
Oh come on, your CRUD app is not competing with an LLMaaS
You’re only competing with them if you’re doing something they consider competitive. OpenAI is competitive, you are not
This is nice but I really wish they’d just let me fork the damn thing already.
I tried to make an app in Claude Code, like all the fanfare said it could, and it failed. It was obvious it would fail: I wanted something that I think had not been done before, using the YouTube API. But it failed nonetheless.
I am tired of pretending that this can actually pull off any meaningful work besides being a debug companion or a slightly enhanced Google/Stack Overflow.
I have a side project Android app. To test Claude Code, I loaded it up in the repo and asked it to add a subscription billing capability to the app. Not rocket science, but it probably would have taken me a day or two to figure out the mechanics of Google Play subscription billing and implement it in code.
Claude Code did it in 30 seconds and it works flawlessly.
I am so confused how people are not seeing this as a valuable tool. Like, are you asking it to solve P versus NP or something?
If you need to do something that's been done a million times, but you don't have experience with it, LLMs are an excellent way to get running quickly.
If you do not know the software design, claude code will fail. If you know the software design, you can guide it toward success.
It's definitely not there yet. You have to babysit it a lot. It's not autonomous.
The utility I find is that it helps _me_ do the real engineering work, the planning and solution architecting, and then it can bang out code once it has rock-solid instructions (in natural language, but honestly one level above pseudocode), and then I have to review it with absolutely zero faith in its ability to do things. Then it can work well.
But it's not where these guys are claiming it is.
Interesting. How long ago did you do this? How long did you spend on it?
I was skeptical about Claude Code and then I spent a week really learning how to use it. Within the week I had built a backend with FastAPI that supported user creation, password reset, email confirmation, a front end, and support for OAuth into a few systems.
It definitely took me some time to learn how to make it work, but I’m astounded at how much work I got done and for so little typing.
Use TDD so CC can converge. I see people tuning a prompt; that is fun, not software engineering.
I build things that haven't been done before with Cursor all the time. You have to break it down into simple building blocks rather than specifying everything up front.
If you do it right this actually forces good design.
If you can’t build something without Claude you will probably fail to build it with Claude.
"I wanted something that I think it was not done before"
But you do know that this is what LLMs aren't good at.
So your conclusion is somewhat off, because there is plenty of programming work on things that have been done before and just require tweaking.
I mean, I am also not hooked yet and just occasionally use ChatGPT/Claude for concrete stuff, but I do find it useful and I do see where it can get really useful for me (once it really knows my codebase and the libraries used and does not jump between incompatible API versions).
So many people yearn for LLMs to be like the Star Trek ship computer, which when asked a question unconditionally provides a relevant and correct response, needing no verification.
A better analogy is that LLMs are closer to the "universal translator", with an occasional interaction similar to [0]:
0 - https://en.wikiquote.org/wiki/Monty_Python_and_the_Holy_Grai...