12 comments

  • the_harpia_io 4 hours ago

    the part nobody talks about is that this isn't just a productivity bottleneck - it's a security one. when your team doesn't deeply understand the codebase anymore because AI wrote most of it, code review becomes theater. I've watched PRs go through where the reviewer clearly didn't understand the auth flow well enough to notice the AI introduced a path that bypassed token validation. not malicious, just a plausible looking implementation that happened to be wrong in a way that matters.
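
    for concreteness, the shape of the bug was something like this (a hypothetical python sketch, not the actual diff):

        # looks reasonable in review: decode the token, pull the user id
        import jwt  # PyJWT

        def current_user(token: str) -> str:
            # the problem: verify_signature=False means any forged token is accepted
            claims = jwt.decode(token, options={"verify_signature": False})
            return claims["sub"]

        # what it should have been: verify against the key, e.g.
        # claims = jwt.decode(token, SECRET, algorithms=["HS256"])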

    the contracts argument in this thread sounds nice in theory, but in practice most codebases don't have well-defined contracts for anything security sensitive. they have implicit assumptions that only make sense if you've been in the code long enough to absorb them. AI doesn't absorb those, it pattern-matches around them. and if the humans reviewing the AI output also don't have that context anymore, then honestly, who's catching these things?

  • svpyk 21 hours ago

    Would it really become a bottleneck? Only if we force a human in the loop when it may not really be necessary.

    If there are well-defined contracts for the software, and the software behaves correctly, is it really necessary to understand the code entirely? We already develop on top of many abstractions without any issues, ignoring how the code actually executes on the hardware.
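
    To be concrete about "contract", I mean something executable rather than prose. A toy sketch (hypothetical example, not from any real codebase):

        # contract: a discount never yields a negative price
        # and never exceeds the original price
        from hypothesis import given, strategies as st

        def apply_discount(price_cents: int, percent: int) -> int:
            return max(0, price_cents - price_cents * percent // 100)

        @given(st.integers(min_value=0, max_value=10**9),
               st.integers(min_value=0, max_value=100))
        def test_discount_contract(price_cents, percent):
            result = apply_discount(price_cents, percent)
            assert 0 <= result <= price_cents

    If properties like that hold in CI, I care much less about reading every line of the implementation.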

    Secondly, wouldn't AI help in understanding the codebase and make that easier as well? Debugging must also benefit immensely from AI-assisted tools.

    So I'm less concerned overall with the auto-generated code, as long as the code that's landing is reviewed by an AI bot that's aggressively prompted to ensure the code is as simple as it can be.

    • dodu_ 21 hours ago

      >If there are well defined contracts for the software, and the software behaves correctly [...]

      But how would you ever know if this assumption is true?

      • 10 hours ago
        [deleted]
    • MichaelRo 14 hours ago

      >> Would it really become a bottleneck? Only if we force a human in the loop when it may not really be necessary.

      Honestly, I don't know what crack you people are smoking.

      • 10 hours ago
        [deleted]
    • baijan 20 hours ago

      yeah, this is exactly AI helping you understand the codebase

      but not using plain text -- it uses diagrams/execution flows/animations etc

      it's easier to parse

      i think working at a fast-moving startup makes you understand this problem better

  • baijan a day ago

    Lately I’ve had a contrarian feeling about AI-assisted development.

    If AI is going to write a large percentage of the code, the highest-leverage thing a developer can do might actually be slowing down and deeply understanding the system (not generating more code faster).

    I noticed I was spending more time reconstructing context than actually building:
    – figuring out what changed
    – tracing data flow
    – rebuilding mental models before I could even prompt properly (without breaking other features)
    – debugging slop with more slop

    Better understanding → better prompts, fewer breaking changes, and more real debugging.

    Over the weekend I hacked on a small prototype exploring this idea. It visualizes execution flow and system structure to make it easier to reason about unfamiliar or AI-modified codebases.
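
    To give a sense of the general idea (this is not the prototype's code, just a toy static version of the kind of extraction involved):

        # toy sketch: collect direct call edges in one python file, emit DOT for graphviz
        import ast
        import sys

        def call_edges(path):
            tree = ast.parse(open(path).read())
            edges = []
            for fn in (n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)):
                for node in ast.walk(fn):
                    # only plain-name calls; methods and attribute calls are skipped
                    if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                        edges.append((fn.name, node.func.id))
            return edges

        if __name__ == "__main__":
            print("digraph G {")
            for caller, callee in call_edges(sys.argv[1]):
                print(f'  "{caller}" -> "{callee}";')
            print("}")

    Even a crude static graph like this is easier for me to scan than raw text.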

    Not really a polished “product” — more a thinking tool / experiment.

    I’m curious whether others are running into the same bottleneck, or if this is just a local maximum I’ve fallen into.

    • sorobahn 21 hours ago

      I've had this same thought. Given that humans were the primary authors of code, the tooling has been optimized for writing (IDEs, text editors, LSPs, etc.). Even though we all know we read more code than we write, the tooling space for reading code is rather small; I can only think of stuff like SourceGraph and Github. I'm sure big companies have internal tooling for this, but even then I feel the best experience for exploring a single repo today is your EDITor.

      I'm also experimenting with building a platform that is optimized for reading code, particularly in distributed systems, since they have many non-local dependencies and are harder to explore locally in code editors/viewers. I def fall in the camp that current AI tools can probably help us understand our systems better than they can contribute code to them. One win of using AI to help us analyze large codebases is that if they can extract useful things for us, they can also extract useful things for their own agentic loops.

      Let me know if you wanna chat more about this, would love to bounce some ideas/contribute!

      • terpimost 12 hours ago

        I would like to chat more about it. Let’s connect (vladkorobov.com)

    • zrn900 7 hours ago

      > I noticed I was spending more time reconstructing context than actually building:
      > – figuring out what changed
      > – tracing data flow
      > – rebuilding mental models before I could even prompt properly (without breaking other features)
      > – debugging slop with more slop

      Yep. We literally shifted the workload from writing the code to reviewing the code.

  • millzlane 8 hours ago

    Is this spam?