If you maintain an open source project, you should absolutely run Claude, Codex, and Gemini through your code base looking for security issues. They turned up some surprising vulns in some of my repos, ones so subtle that even when the model pointed them out, I still couldn't see the problem. I chatted back and forth for a bit and finally realized it was right. Fixed the bugs and moved on.
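A minimal sketch of what that sweep can look like, assuming the claude, codex, and gemini CLIs are installed and each supports a one-shot non-interactive mode; the exact flags below are assumptions and vary by version:

    # security_sweep.py - run several coding agents over a repo with a
    # security-review prompt and collect their findings. The CLI names
    # and flags are assumptions; check your installed versions.
    import subprocess
    from pathlib import Path

    PROMPT = (
        "Review this repository for security vulnerabilities: injection, "
        "auth/authz flaws, unsafe deserialization, path traversal, and "
        "secrets in code. Report file, line, and a short explanation."
    )

    # (tool name, argv) pairs; each CLI is assumed to print its answer
    # to stdout when given a prompt in non-interactive mode.
    AGENTS = [
        ("claude", ["claude", "-p", PROMPT]),
        ("codex",  ["codex", "exec", PROMPT]),
        ("gemini", ["gemini", "-p", PROMPT]),
    ]

    def sweep(repo: Path) -> None:
        for name, argv in AGENTS:
            print(f"=== {name} ===")
            # Run from inside the repo so the agent can read the tree.
            result = subprocess.run(
                argv, cwd=repo, capture_output=True, text=True
            )
            out = result.stdout.strip() or result.stderr.strip()
            # Keep the full report on disk, print a preview.
            (repo / f"review-{name}.txt").write_text(out)
            print(out[:2000])

    if __name__ == "__main__":
        sweep(Path("."))

Running it from the repo root leaves one review-<tool>.txt per agent, so you can compare the three sets of findings against each other.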
Exactly! I think it might go deeper than that. Some issues are a result of hosting or configuration, things you can't see just by looking at the code base. It's a combination of a lot of stuff.
Maybe there is some astroturfing going on, as is usually the case, but it's already known that Codex/Claude Code and their ilk have been ruining CTFs for a while.
And well, one can always prompt "review my feature branch" or "review this file for bugs" with these tools; code analysis plays into the strengths of LLMs far more than code generation, since false positives/hallucinations are much less of a problem with the former: a bogus finding costs a few minutes of triage, while bogus generated code ends up shipped.
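For the branch-review case, piping the diff in is usually enough. This sketch assumes the claude CLI's -p print mode reads piped stdin as context, and "main" is a stand-in for whatever base branch your repo uses:

    # review_branch.py - feed a feature branch's diff to an agent for
    # review. Assumes `claude -p <prompt>` also reads the diff on stdin.
    import subprocess

    BASE = "main"  # hypothetical base branch; adjust for your repo

    # Diff of everything on this branch since it forked from BASE.
    diff = subprocess.run(
        ["git", "diff", f"{BASE}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout

    review = subprocess.run(
        ["claude", "-p", "Review this diff for bugs and security issues:"],
        input=diff, capture_output=True, text=True,
    ).stdout

    print(review)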
More generally, AI is enabling predatory use cases more than positive ones. There are more resources and more will behind the former.
Yeah, spears usually win first.
Overhyped.
LMAO, I do genuinely believe it's much easier to hack right now. Just a matter of time before something blows up on the news.