I'd be curious in how well it passes 100th Coin's NES accuracy tests https://github.com/100thCoin/AccuracyCoin
Indeed, that's what I kind of hinted at in https://news.ycombinator.com/item?id=46442195 and, coincidentally, https://news.ycombinator.com/item?id=46437688 shortly after: OK, one can "generate" a "solution", and that's much easier than before... but until we can somehow verify that it actually does what it says it does (and we know about hallucinations, and have no reason to believe that has changed), testing itself, especially of well-known "problems", becomes more and more important.
That being said, it doesn't answer the "why" in the first place, an even more important question. At least it does help somewhat in comparing with existing alternatives.
Isn’t this how all software development works? Folks commit code, it’s tested and reviewed, and then deployed.
Why would this be any different?
That's not how software development works.
Folks think, they write code, they do their own localized evaluation and testing, then they commit and then the rest of the (down|up)stream process begins.
LLMs skip over the "actually verify that the code I just wrote does what I intended it to" step. Granted, most humans don't do this step as thoroughly and carefully as would be desirable (sometimes through laziness, sometimes because of a belief in (down|up)stream testing processes). But LLMs don't do it at all.
They absolutely can do that if you give them the tools. Seeing Claude (I use it with opencode agents) run curl and Playwright to verify and then fix its implementation was a real 'wow' moment for me.
We have different experiences. Often I’ll see Claude et al. find creative ways to fulfill the task without satisfying my intent, e.g., changing the implementation plan I specifically asked for, changing tolerances or even tests, and frequently disabling tests.
Are you a customer?
> LLMs skip over the "actually verify that the code I just wrote does what I intended it to" step.
I'm not sure where this idea comes from. Just instruct it to write and run unit tests and document as it goes. All of the ones I've used will happily do so.
You still have to verify that the unit tests are valid, but that's still far less work than skipping them or writing the code/tests yourself.
> actually verify that the code I just wrote does what I intended it to
That's what the author did when they ran it.
Claude Opus 4.5 will routinely test its own code before handing it off to you, even with zero instruction to do so.
One commercial equivalent to the project I work on, called ProTools (a DAW), has a test "harness" that took 6 people more than a year to write and takes more than a week to execute.
Last month, I made a minor change to our own code and verified that it worked (it did!). Earlier this week, I was notified of an entirely different workflow that had been broken by the change I had made. The only sort of automated testing that would have detected this would have been similar in scope and scale to the ProTools test harness, and neither an individual human nor an LLM is going to run that.
Moreover, that workflow was entirely graphically based, so unless Claude Opus 4.5 (or whatever today's flavor of vibe-coding LLM agent is) has access to a testing system that allows it to inject mouse events into a running instance of our application (hint: it does not), there's no way it could run an effective test for this sort of code change.
I have no doubt that Claude et al. can verify that their carefully defined module does the very limited task it is supposed to do, for cases where "carefully defined" and "very limited" are appropriate. If that's the only sort of coding you do, I am sorry for your loss.
> access to a testing system that allows it to inject mouse events into a running instance of our application
FWIW that's precisely what https://pptr.dev is all about. To your broader point though, designing a good harness itself remains very challenging and requires actually understanding the value for the user, the software architecture (e.g. to bypass user interaction and test the API first), etc.
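To make that concrete, a minimal Puppeteer sketch (the URL and coordinates are placeholders, nothing here is from the project under discussion):

    // Drive a real Chromium instance and synthesize raw mouse events,
    // no element selectors required.
    import puppeteer from "puppeteer";

    (async () => {
      const browser = await puppeteer.launch();
      const page = await browser.newPage();
      await page.goto("http://localhost:8080"); // hypothetical app under test

      // Click at absolute viewport coordinates.
      await page.mouse.click(120, 240);

      // Drag: press, move with the button held down, release.
      await page.mouse.move(120, 240);
      await page.mouse.down();
      await page.mouse.move(300, 240, { steps: 10 });
      await page.mouse.up();

      await browser.close();
    })();

That only helps if the app runs in a browser, of course; a native desktop app needs a different kind of harness.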
I’m sure you can point Claude at that page and have it make the necessary changes to pass.
Or it could loop infinitely, never quite being able to pass all the tests.
https://github.com/willtobyte/NES
Why not use the LLM for more meaningful commit titles & messages as well while you are at it?
Surprised there's no README file at all.
It’s a shame that the source code isn’t commented and documented more. At the very least, it would be helpful to add some documentation for every CPU opcode being emulated.
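For illustration, a hypothetical sketch (TypeScript; the names are invented here, not taken from the repo) of what per-opcode documentation could look like:

    // Minimal CPU state needed for the example.
    interface Cpu {
      a: number;          // accumulator
      pc: number;         // program counter
      zero: boolean;      // Z flag
      negative: boolean;  // N flag
      cycles: number;
      read(addr: number): number;
    }

    // LDA #imm (opcode 0xA9): load the accumulator with the byte that
    // follows the opcode. Sets Z if the result is zero and N if bit 7
    // is set. 2 bytes, 2 cycles.
    function ldaImmediate(cpu: Cpu): void {
      cpu.a = cpu.read(cpu.pc++) & 0xff;
      cpu.zero = cpu.a === 0;
      cpu.negative = (cpu.a & 0x80) !== 0;
      cpu.cycles += 2;
    }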
Forbidding the LLM to write comments and docstrings (preferably enforced by a build or commit hook) is one of the best "hacks" for using these things. An LLM cannot help but emit poisonous comments.
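A rough sketch of what such a hook could look like (assuming a Node project; the file extensions and the runner are illustrative):

    // pre-commit check: fail if any staged source file contains line comments.
    // Hypothetical wiring: call this from .git/hooks/pre-commit via `npx tsx`.
    import { execSync } from "node:child_process";
    import { readFileSync } from "node:fs";

    const staged = execSync("git diff --cached --name-only --diff-filter=ACM")
      .toString()
      .trim()
      .split("\n")
      .filter((f) => /\.(ts|tsx|js|c|cpp|h)$/.test(f));

    let failed = false;
    for (const file of staged) {
      readFileSync(file, "utf8").split("\n").forEach((line, i) => {
        // `//` at line start or after whitespace, so URLs like http:// pass.
        if (/(^|\s)\/\//.test(line)) {
          console.error(`${file}:${i + 1}: comment found`);
          failed = true;
        }
      });
    }
    process.exit(failed ? 1 : 0);

Blunt, and it misses block comments and other languages' docstrings, but blunt is rather the point.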
Or maybe clone the comments from where it cloned the source.
Meh. No human has written the horrors an LLM produces; at least I have yet to see a codebase like that. Let me attempt a theatrical reenactment:

    // Use buffer that is large enough to hold any possible value. Avoid using JSON configuration, this optimizes codebase and prevents possible security exploits!
    size_t len = 32;

    // this function does not call "sort" utility using shell anymore, but instead uses optimized library function "sort" for extreme performance improvement!!!
    void get_permutations() {

... and so on. It basically uses comments as a wall to scribble grandiose graffiti about its valiant conquests in following an explicit instruction after the fifth repeat and not committing egregious violence against common sense.

Probably better to look at a human-authored emulator if you want comments containing accurate information anyway.
If you let it, Claude Code will write a comment for almost every single line of code.
Even if you try to get them not to, they will still overcomment the code, or at least overcomment it from a human's perspective. From the LLM's perspective, I suspect the comments are necessary for it to get the code output correct.
Nice, but an NES emulator is one of the most-written pet projects anywhere, which makes this considerably less impressive.
Heck, when Satya Nadella wanted to demonstrate Copilot coding, he had it emit an Altair emulator. I guess there's little room for creativity in 8-bit emulator design, so LLMs can handle them well. https://thenewstack.io/from-basic-to-vibes-microsofts-50-yea...
And said emulator was open-sourced and tested by third parties, right?
Until that's so, it's just hearsay to me, coming from someone with a multi-billion-dollar horse in the race.
This is a good point. I wonder how much NES emulator code is in Claude's training set? Not to knock what the author has done here, but I wonder if this is more of a softball challenge than it looks.
Somewhere along the line the AI bros stopped separating training and testing sets. It's great for impressing the villagers.
WASM, and the performance seems catastrophically bad (45 ms to render a frame on an M4 laptop)? It would be much more impressive if Claude could optimize it into something someone would actually want to play. Compare this to a random hit from Google, https://jsnes.org/, which has sound, a much smaller payload, and runs really fast (<1 ms to render a frame).
The cost of slop is a >40x drop in performance? Pick any metric you care about for your domain; perhaps that's what you're going to lose. And is the effort to recover it practical with current vibe-coding strategies?
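For the curious, a rough way to get numbers like these in the browser console (emu.frame() is a stand-in for whatever the emulator's per-frame entry point actually is):

    // Time N emulated frames and report the average cost per frame.
    declare const emu: { frame(): void }; // hypothetical emulator handle

    const N = 600; // roughly ten seconds of NTSC video at ~60 fps
    const t0 = performance.now();
    for (let i = 0; i < N; i++) emu.frame();
    const msPerFrame = (performance.now() - t0) / N;
    console.log(`${msPerFrame.toFixed(2)} ms/frame`);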
I will be impressed when a new game console comes to market and it can write the first emulator for it.
How much was grifted from existing emulators?
Git wrote a functional NES emulator for me by simply cloning one of the many publicly available ones!
This is the comment.
Give it copy paste / translate tasks and it’s a no brainer (quite literally)
But same can be said of humans.
The question here is: did it implement it because it read the available online documentation about the NES architecture, or did it just see one too many such implementations?
> But same can be said of humans.
Indeed, the 'cleanroom' standard was always: one team does the RE and writes a spec; another team that has never seen the original (and has signed statements with penalty clauses to prove it) then does the re-implementation. If you were to read the implementation, write the spec, and then write the re-implementation yourself, that would definitely violate the standard for claiming an original work.
Who cares what it did. What did you learn? To live is to learn.
When I consider the utility of a hammer, my first priority is to ask what the hammer can teach me.
Do you think that the use of a hammer is an innate skill, and that woodworkers learn nothing from their craft?
Okay, so let's say the use of a coding agent isn't an innate skill, so the author was gaining experience with the tool.
There are NES emulators aplenty, the only value in writing a new one is pedagogic, for the writer.
This endeavor had negative net value.
It demonstrated the capabilities of an AI to a potentially on-the-fence audience while giving the author experience using the new tools/environment. That's solid value. I also just find it really cool to see that an AI did this.
How about being entertained by the process?
They didn't call it the "Nintendo Entertainment System" for nothing.
If it's a zillion dollar hammerbot the company is offering to your boss for pennies, that had better be your first priority!
Ask not what your hammer can do for you.
Do you like to read posts about what a hammer can do? Especially when it has been done 100 times already.
I'm no carpenter, but I can honestly say I've probably read a hundred articles about vim.
Yeah I think this is the wrong approach. If they were making money out of it, that would be different. But this is pointless.
Is this why you only wrote in machine code until you fully understood the entire compiler front-end/back-end chain?
to live is to build
to build what you don't understand is to suffer in future
Except OP isn't learning or building. He's telling a computer to do the work for him and padding his resume.
How cynical. Just seeing if the current crop of automation systems can do it can be interesting enough for some of us.
A simple git clone is faster.
So is drinking a sip of water, but neither show what an agentic system can cook up.
I learned claude can write a functional NES emulator. I wonder what else it can do?
Trained on 1000s of NES emulators, it's not really impressive.
GitHub alone has 4k+ NES emulator projects: https://github.com/search?q=nes%20emulator&type=repositories
This is more like "wow, it can quote training data".