12 comments

  • Gnobu 31 minutes ago

    Interesting concept! I can see how new developers often get stuck figuring out an organization’s internal frameworks or dependencies. I’m curious: would the AI rely purely on code analysis, or would it also integrate internal docs and examples to provide more complete answers?

  • ccosky 8 hours ago

    This isn't "THE" biggest pain point, but one I dealt with today: we have new devs who ask the whole team, "Can someone do my code review?" This leads to the bystander effect: nobody volunteers to do the code review, because we all think someone else will do it. (Alternately: the team leads get asked to do every code review, because they're the leads and know the most.)

    So today I wrote a Claude skill that does a git diff against master to determine what files were changed, looks at the git history of those files (most recent commits and who committed the most lines of code), filters out the people who don't work here anymore, and suggests 3 devs who could be good matches for their MR. Hopefully that will get some of the load off the team leads and staunch the "can someone do a code review for me?" requests.
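    A minimal sketch of that kind of skill (hypothetical helper names; assumes `git` is on PATH and `master` is the base branch, and ranks by commit count rather than lines changed, for brevity) might separate the git plumbing from the ranking logic:

    ```python
    import subprocess
    from collections import Counter

    # Hypothetical: addresses of people who no longer work here.
    FORMER_EMPLOYEES = {"departed@example.com"}

    def _git(*args):
        """Run a git command and return its non-empty stdout lines."""
        out = subprocess.run(["git", *args], capture_output=True,
                             text=True, check=True).stdout
        return [line for line in out.splitlines() if line]

    def changed_files(base="master"):
        """Files touched relative to the base branch."""
        return _git("diff", "--name-only", base)

    def recent_authors(path, max_commits=50):
        """Author emails of the most recent commits to a file."""
        return _git("log", f"-{max_commits}", "--format=%ae", "--", path)

    def suggest_reviewers(file_authors, excluded=FORMER_EMPLOYEES, n=3):
        """Rank candidate reviewers by how often they touched the changed files.

        `file_authors` maps each changed file to its list of commit authors,
        e.g. {p: recent_authors(p) for p in changed_files()}.
        """
        counts = Counter(
            author
            for authors in file_authors.values()
            for author in authors
            if author not in excluded
        )
        return [author for author, _ in counts.most_common(n)]
    ```

    Wiring it together is then one expression: `suggest_reviewers({p: recent_authors(p) for p in changed_files()})`.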

    So there's my suggestion to you: something that will let new devs know 1) who is the best person to do their code review and maybe even 2) who the SME for a particular area of the system is.

    • buggy6257 7 hours ago

      You should take a look at the CODEOWNERS file spec.
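      For context, a hypothetical sketch of the GitHub/GitLab CODEOWNERS format (paths and handles invented): patterns map paths to default reviewers, the last matching pattern wins, and the platform auto-requests those people on each MR:

      ```
      # Hypothetical CODEOWNERS sketch: the last matching pattern wins.
      *               @team-leads          # fallback for anything unclaimed
      /auth/          @alice @bob          # SMEs for the auth subsystem
      *.sql           @data-team           # all migrations go to the data team
      /docs/          @tech-writers
      ```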

  • ahmed-fathi 11 hours ago

    The problem isn't finding code. It's inheriting judgment. New engineers don't struggle to locate files. They struggle to know which files to trust, which patterns are intentional vs accidental, and which senior engineer silently owns what. That's not searchable. You're building a map. What they need is a compass.

  • causalzap 5 hours ago

    For me, the biggest pain point is the lack of a 'Single Source of Truth' for the tech stack logic. When joining a team that uses meta-frameworks like Astro, if the hydration strategy or the data-scraping scripts (Python) aren't well-documented, I spend more time reverse-engineering the build process than actually shipping features. A clean README and a clear schema for JSON data go a long way.

  • lowenbjer 12 hours ago

    Cursor, Sourcegraph Cody, and honestly just Claude with a well-maintained project README already solve the "ask questions about the codebase" part reasonably well. The actual hard part of joining a new team isn't the code (not anymore, at least). It's getting to know the people. Understanding their strengths, figuring out where you fit in, getting past imposter syndrome, not stepping on toes. No AI for that yet.

  • stephenr 4 hours ago

    After 20 years of freelancing and being asked to join existing projects (I've literally only ever had, I think, one client-paid greenfield project), the biggest challenge is usually finding that project's biggest WTF points.

    There's always something that you look at and question the sanity of the people who wrote it.

    I can't begin to imagine what new levels of ridiculousness will abound in such scenarios with the advent of people relying on spicy autocomplete to write their shitty code for them.

  • PaulHoule 13 hours ago

    1 click at most to install what I need to build, 1 click to build.

    I ask the dev manager how long the build takes and get an answer that is within 20% of the ground truth.

    • KevStatic 13 hours ago

      That's a great point.

      The setup friction is real. Once you're past that... do you find understanding the codebase itself (where things live, why decisions were made) is also painful, or does that come naturally after a few days?

      • PaulHoule 12 hours ago

        I think most places I've worked haven’t had good documentation or particularly good processes for onboarding people. For instance, at some startups we were always chasing a demo for customer B this week and customer C next week, and people were fuzzy about what path got us there. Other places saw software as secondary or tertiary to their main business, or had a lot of turnover, etc. Or maybe we bought a web site from so-and-so and nobody has any idea how it works. Or the guy who made the product was pretty smart but wasn’t a good finisher and his wife just had a baby.

        So I am used to looking at a mysterious code base and gradually figuring out how to safely change it. When I run into something that’s particularly dangerous (e.g. how does the auth system work?) I will document it myself, and I love writing “how do I?” runbook/procedural documentation if it is likely I’ll need it in six months or will need to hand something off to somebody else.

        In the AI age there is a lot to say for just loading up a project in an agent-enabled IDE and asking questions like “When I do X, the system does Y, why is that?” and “How would I do Z?” and having extended conversations: “Well, I like D but I am concerned about E.” Or “What if we did F instead?” Even if you write every line of code yourself, you might find an AI coding buddy is more talkative than your coworkers.

  • AnimalMuppet 13 hours ago

    If I were given an electrical design, I would expect a schematic, a parts list, a board layout, and a "theory of operations" - a prose document explaining why the rest of the stuff is the way it is.

    My last job, there was the code itself, and there were UML class and sequence diagrams. But there wasn't anything like a theory of operations. That made it very difficult to learn, because it was so object oriented that you couldn't tell what anything actually did. Or, more to the point, when you needed to make a modification, you couldn't find where to make it without heroic feats of exploration.

    So I think that's the great need. A human needs to sit down and write out why the code does what it does, and why it's organized the way it is, and where the parts are that are most likely to be needed, and where to make the most likely changes. I'm not sure an AI can write that - certainly AIs at the current level cannot.

  • nonameiguess 12 hours ago

    Totally depends on the org. I've worked as the 3rd employee of a months-old startup, where the biggest pain point was that we owned no infrastructure and I did everything on my own computer. That was easy but limiting, because we were heavy into statistical modeling and needed more powerful compute, which we got, but it took a few months.

    I've worked on a military/intelligence project that had been in operation for over 40 years. The oldest code had no commit history because it predated any modern VCS. Whoever wrote it was long retired if not dead. Pieces were spread across thousands of repos on many different forges.

    With large old-school Java OSGi, and now with Kubernetes (which is basically Java written in Go), it's the abstraction and dependency injection. There is no possible way to tell until runtime what code is actually going to be used. Knowing the codebase means nothing if you can't mentally track where data flows, because you can't know from the code alone what the implementation is actually going to be.

    With extreme HPC, it was damn near literally everything being bespoke. Custom filesystems. Custom network stacks. Expectations from running workloads on regular computers and software everyone else uses were always being subverted.

    With cloud environments, it's not having physical access to the machines. With on-prem labs, it's needing to provision and troubleshoot the physical layer. Either way has its drawbacks.

    With customer-facing software, it's customers not knowing what they want, and not even knowing what they don't want until they have it, which turns out to be what they didn't want. With software for modeling and understanding the real world, it's that physics turns out to be really f'ing complicated.
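    The runtime-binding complaint above can be made concrete with a minimal sketch (hypothetical names; Python standing in for the OSGi/Kubernetes pattern): the call site names only a key, and the class that actually runs is looked up from a registry built at startup, so reading the call site alone tells you nothing.

    ```python
    # Minimal dependency-injection sketch (hypothetical names): the concrete
    # class is picked from a runtime registry, not visible at the call site.

    class MockGateway:
        """One of possibly many implementations registered elsewhere."""
        def charge(self, amount):
            return f"mock charged {amount}"

    # In real DI frameworks this mapping is assembled at startup from config
    # files or annotations scattered across the codebase.
    REGISTRY = {"payments": MockGateway}

    def resolve(key, registry=REGISTRY):
        """Return a fresh instance of whatever is bound to `key` right now."""
        return registry[key]()

    def process_order(amount):
        # Which .charge() runs depends entirely on what was registered at
        # startup - the static code gives no hint.
        return resolve("payments").charge(amount)  # -> "mock charged <amount>"
    ```

    Tracing data flow through such code means reconstructing the registry's contents, which is exactly the runtime-only knowledge the comment describes.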