You can't design software you don't work on

(seangoedecke.com)

288 points | by saikatsg 3 days ago ago

116 comments

  • fogleman 3 days ago ago

    > The kinds of topic being discussed are not "is DRY better than WET", but instead "could we put this new behavior in subsystem A? No, because it needs information B, which isn't available to that subsystem in context C, and we can't expose that without rewriting subsystem D, but if we split up subsystem E here and here..."

    Hmm, sounds familiar...

    Bingo knows everyone's name-o

    Papaya & MBS generate session tokens

    Wingman checks if users are ready to take it to the next level

    Galactus, the all-knowing aggregator, demands a time range stretching to the end of the universe

    EKS is deprecated, Omega Star still doesn't support ISO timestamps

    https://www.youtube.com/watch?v=y8OnoxKotPQ

    • lkglglgllm 3 days ago ago

      Wngman.

      The number of programs that still don't support ISO 8601, TODAY (no pun intended), is appalling. For example, git, which claims compatibility but isn't compliant.
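
      For reference, here is a minimal sketch of a strict check for the common "extended" ISO 8601 timestamp profile (a hedged illustration: the regex covers only that one profile, not the full standard, and the function and variable names are made up):

          #include <iostream>
          #include <regex>
          #include <string>

          // Matches only the common extended profile: YYYY-MM-DDTHH:MM:SS followed
          // by 'Z' or a +HH:MM / -HH:MM offset. Real ISO 8601 allows many more
          // forms (basic format, week dates, fractional seconds, ...), which is
          // part of why "we support ISO 8601" claims are so often only partly true.
          bool looks_like_iso8601_extended(const std::string& s) {
              static const std::regex re(
                  R"(^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(Z|[+-]\d{2}:\d{2})$)");
              return std::regex_match(s, re);
          }

          int main() {
              for (const std::string s : {"2024-06-01T12:30:45Z",
                                          "2024-06-01T12:30:45+02:00",
                                          "2024-06-01 12:30:45"}) {
                  std::cout << s << " -> "
                            << (looks_like_iso8601_extended(s) ? "ok" : "rejected")
                            << "\n";
              }
          }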

      • 2 days ago ago
        [deleted]
    • tormeh 2 days ago ago

      It's an infuriatingly accurate sketch. A team should usually have responsibility for no more than one service. There are many situations where this is not possible or desired (don't force your kafka connect service into your business logic service), but it's the ideal, IMO. More services mean more overhead. But someone read a blog post somewhere and suddenly we have four microservices per dev. Fun times.

    • bitwize 2 days ago ago

      This is the kind of situation you get into when you let programmers design the business information systems, rather than letting systems analysts design the software systems.

      • QuercusMax 2 days ago ago

        I don't think I've ever worked on a project that had "system analysts". You might as well say "this is what happens when you don't allow sorcerers to peer into the future". Best I've ever had are product managers who maybe have a vague idea of what the customer wants.

        • bitwize 2 days ago ago

          Well, that's just the problem, innit. In decades past, systems analysts performed a vital function, viewing the business and understanding its information flows as a whole and determining what information systems needed to be implemented or improved. Historically, in well-functioning information-systems departments, the programmer's job was confined to implementation only. Programming was just a translation step, going from human requirements to machine readable code.

          Beginning in about the 1980s or so, with the rise of PCs and later the internet, the "genius programmer" was lionized and there was a lot of money to be made through programming alone. So systems analysts were slowly done away with and programmers filled that role. These days the systems analyst as a separate profession is, as you say, nearly extinct. The programmers who replaced the analysts applied techniques and philosophies from programming to business information analysis, and that's how we got situations like with Bingo, WNGMAN, and Galactus. Little if any business analysis was done, the program information flows do not mirror the business information flows, and chaos reigns.

          In reality, 65% of the work should be in systems analysis and design—well before a single line of code is written. The actual programming takes up maybe 15% of the overall work. And with AI, you can get it down to maybe a tenth of that: using Milt Bryce's PRIDE methodology for systems analysis and development will yield specs that are precise enough to serve as context that an LLM can use to generate the correct code with few errors or hallucinations.

          • myth2018 2 days ago ago

            I worked for a somewhat large bank that used to do this "systems analysis" job in its early days. I don't recall what they called this process step, but the idea was the same. Besides the internal analysts, they used to hire consultancies full of experienced ladies and gentlemen to design larger projects before coding started.

            Sometimes they were hired only to deliver specifications, sometimes the entire system. The software they delivered was quite stable, but that's beside the point. There sure were software issues there, but I was impressed by how those problems were usually contained in their respective originating systems, rarely breaking other software. The entire process was clear enough, and the interfaces between the fleet of windows/linux/mainframe programs were extremely well documented. Even the most disorganized and unprofessional third-party suppliers had an easier time writing software for us. It wasn't a joy, but it was rational; there was order. I'm not trying to romanticize the past, but, man, we sure unlearned a few things about how to build software systems.

          • majormajor 2 days ago ago

            Nobody wants to wait for those cycles to happen in the sorts of businesses that feature most prominently on HN. That flow works much better for "take existing business, with well defined flows, computerize it" than "people would probably get utility out of doing something like X,Y,Z, let's test some crap out."

            Now, later-stage in those companies, yes, part of the reason for the chaos is that nobody knows or cares to reconcile the big picture, but there won't be economic pressure on that without a major scaling-back of growth expectations. Which is arguably happening in some sectors now, though the AI wave is making other sectors even more frothy than ever at the same time in the "just try shit fast!" direction.

            But while growth expectations are high, design-by-throwing-darts like "let's write a bunch of code to make it easy to AB test random changes that we have no theory about to try to gain a few percent" will often dominate the "careful planning" approach.

            • bitwize a day ago ago

              > Nobody wants to wait for those cycles to happen in the sorts of businesses that feature most prominently on HN.

              Bryce's Law: "We don't have enough time to do things right. Translation: We have plenty of time to do things wrong." Which was definitely true for YC startups, FAANGs, and the like in the ZIRP era, not so much now.

              Systems development is a science, not an art. You can repeatably produce good systems by applying a proven, tested methodology. That methodology has existed since 1971 and it's called PRIDE.

              > That flow works much better for "take existing business, with well defined flows, computerize it" than "people would probably get utility out of doing something like X,Y,Z, let's test some crap out."

              The flows are the system. Systems development is no more concerned with computers or software than surgery is with scalpels. They are tools used to do a job. And PRIDE is suited to developing new systems as well as upgrading existing ones. The "let's test some crap out" method is exactly what PRIDE was developed to replace! As Milt Bryce put it: "do a superficial feasibility study, do some quick and dirty systems design, spend a lot of time in programming, install prematurely so you can irritate the users sooner, and then keep working on it till you get something accomplished." (https://www.youtube.com/watch?app=desktop&v=SoidPevZ7zs&t=47...) He also proved that PRIDE is more cost-effective!

              The thing is, all Milt Bryce really did was apply some common sense and proven principles from the manufacturing world to systems development. The world settled upon mass production using interchangeable parts for a reason: it produces higher-quality goods cheaper. You would not fly in a plane with jet engines built in an ad-hoc fashion the way today's software is built. "We've got a wind tunnel, let's test some crap out and see what works, then once we have a functioning prototype, mount it on a plane that will fly hundreds of passengers." Why would a company trust an information system built in this way? It makes no sense. Jet engines are specced, designed, and built according to a rigorous repeatable procedure and so should our systems be. (https://www.modernanalyst.com/Resources/Articles/tabid/115/I...)

              > Which is arguably happening in some sectors now, though the AI wave is making other sectors even more frothy than ever at the same time in the "just try shit fast!" direction.

              I think the AI wave will make PRIDE more relevant, not less. Programmers who do not upskill into more of a systems analyst direction will find themselves out of a job. Remember, if you're building your systems correctly, programming is a mere translation step. It transforms human-readable specifications and requirements into instructions that can be executed by the computer. With LLMs, business managers and analysts will soon be able to express the inputs and outputs of a system or subsystem directly, in business language, and automatically get executable code! Who will need programmers then? Perhaps a very few, brilliant programmers will be necessary to develop new code that's outside the LLMs' purview, but most business systems can be assembled using common, standard tools and techniques.

              Bryce's Law: "There are very few true artists in computer programming, most are just house painters."

              The problem is, and always has been, that all of systems development has been gatekept by programmers for the past few decades. AI may be the thing that finally clears that logjam.

          • seec 13 hours ago ago

            In the construction world, it's basically the separation between architects and builders.

            Sure, you can definitely build things and figure things out along the way. But for any sufficiently complex project, it's unlikely to yield good results.

          • array_key_first 2 days ago ago

            IMO programs are 90% data or information, and modern software vastly underutilizes that concept.

            If you know what data you need, who needs it, and where it needs to go, you have most of your system designed. If you just raw dog it then stuff is all over the place and you need hacks on hacks on hacks to perform business functions, and then you have spaghetti code. And no, I don't think domain modeling solves it. It often doesn't acknowledge the real system need but rather views the data in an obtuse way.

            • bitwize 2 days ago ago

              This!

              Per Fred Brooks: "Show me your flowcharts, but keep your tables hidden, and I shall continue to be mystified. Show me your tables, and I won't need to see your flowcharts; they'll be obvious."

              It's telling that PRIDE incorporates the concept of Information Resource Management, or meticulous tracking and documentation of every piece of data used in a system, what it means, and how it relates to other data. The concept of a "data dictionary" comes from PRIDE.
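
              To make the "data dictionary" idea concrete, here is a minimal sketch of what one entry might track (the struct and field names are illustrative, not the actual PRIDE artifacts):

                  #include <iostream>
                  #include <string>
                  #include <vector>

                  // One entry per piece of data used in the system: what it is
                  // called, what it means, its type, and what it relates to.
                  struct DataElement {
                      std::string name;
                      std::string meaning;
                      std::string type;
                      std::string related_to;
                  };

                  int main() {
                      std::vector<DataElement> dictionary = {
                          {"customer_id", "unique key for a customer",
                           "integer", "order.customer_id"},
                          {"order_total", "sum of line items, tax included",
                           "decimal(10,2)", "invoice.amount_due"},
                      };
                      for (const auto& d : dictionary)
                          std::cout << d.name << ": " << d.meaning << " [" << d.type
                                    << "] -> " << d.related_to << "\n";
                  }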

      • wavemode 2 days ago ago

        No, this is the situation you get into when you have programmers build a system, the requirements of that system change 15 times over the course of 15 years, and then you never give those programmers time to go back and redesign, so they keep having to stack new hacks and kludges on top of the old hacks and kludges.

        Anyone who has worked at a large company has encountered a Galactus that was simply never redesigned into a simple unified service because doing so would sideline other work considered higher priority.

  • nullorempty 3 days ago ago

    > You can't design software you don't work on

    In 30 years in software dev, I have yet to see any significant, detailed and consistent effort put into design and architecture. Most architects do not design, do not architect.

    Senior devs design and architect and then take their design to the architects for *feedback and approvals*.

    These senior devs make designs for features and only account for code and systems they've been exposed to.

    With an average employment term of 2 years, most are exposed to only a small slice of the system, which affects the depth and correctness of their designs.

    And architects mostly approve, sometimes I think without even reading the docs.

    At most, you can expect the architects to give generic advice and throw a few buzzwords.

    At large, they feel comfortable and secure in their positions and mostly don't give a shit!

    • WillAdams 3 days ago ago

      John Ousterhout has been addressing this one class at a time at Stanford for a while now, and has scaled up to a book:

      https://www.goodreads.com/en/book/show/39996759-a-philosophy...

      Video overview at:

      https://www.youtube.com/watch?v=bmSAYlu0NcY

    • octorian 2 days ago ago

      The last time I worked on a project that actually had all these roles, "architect" basically meant someone who sat in meetings all day and played very little role in the actual software development of the project.

      There were plenty of times where it would have been useful to have someone providing real architecture/design guidance, but no such person functionally existed.

    • mmis1000 2 days ago ago

      > 2 years

      I feel that's already enough time to rewrite a big part of a subsystem, or to turn the whole thing into shit (depending on the maintainer).

      Software today moves quite fast. Two years is sometimes the difference between a new company and a dead one.

    • makeitdouble 3 days ago ago

      > 2 years

      I've been thinking about this a lot. 2~3 years is a long time, long enough to have a pretty good grasp of what a codebase maintained by 50~100 people does in pretty concrete terms, come up with decent improvement ideas, and see at least one or two structural ideas hit production.

      If the person then stays 1 or 2 more years they get a chance to refine further, but usually they will be moved up the ladder Peter Principle style. If they get a chance to lead these architecture changes, that company has a chance to be on a decent path, technically speaking.

      I'm totally with you on the gist of it: architects will usually be a central switch arranging these ideas coming from more knowledgeable places. In the best terms I see their role as guaranteeing consistency and making sure teams don't impede each other's designs.

  • skydhash 3 days ago ago

    "Generic Software Design" as the author called it, is nice for setting the general direction of some implementation. This is why I like to read software engineering books. It's easier to solve a problem if you have some kind of framing to guide you. And it's easier to talk about the solution if everyone share the same terminology.

    But yes, the map is not the territory, and giving directions is not the same as walking the trail. The actual implementation can deviate from the plan drafted at the beginning of the project. A good explanation is found in Naur's "Programming as Theory Building", where he says the true knowledge of the system lives in the heads of the engineers who worked on it. And that knowledge is not easily transferable.

    • DerArzt 2 days ago ago

      Man, I would kill for some direction from my "architects", even if it were a bit wrong. I'm at the point where I can't even get them to review the architectural diagrams my company demands, let alone guide me on what's expected.

  • eviks 3 days ago ago

    > For instance: In large codebases, consistency is more important than “good design”

    But this is exactly the type of generic software design advice the article warns us about! And it mostly results in all the bad software practices we as users know and love remaining unchanged (consistently "bad" is better than being good at least in some areas!)

    • johnfn 3 days ago ago

      I don’t know. At my place a lot of cowboy engineers decided to do things their own way. So now we have the random 10k lines written in Redux (not used anywhere else) that no one likes working with. Then there’s the part that randomly uses some other query library because they didn’t like the one we use in 95% of the code for some reason, so if you ever want to work with that code you need to keep two libraries in your head instead of one. Yes, the existing query library is out of date. Yes, the new one is better— in isolation. But having both is even worse than having the bad one!

      • whstl 3 days ago ago

        GP is talking about "consistently bad" being worse than "inconsistently good". Not defending any inconsistency.

        What you describe just sounds like "inconsistent AND bad".

        • johnfn 3 days ago ago

          I didn’t really get into it, but I think that most decisions which are not consistent are made with some feeling of “I will improve upon the existing state of this ugly codebase by introducing Good Decisions”. I’m sure even the authors of the Redux section of my code felt the same way. But code with two competing standards, only one good, is almost always worse than code with one bad standard. So breaking with consistency must be carefully considered, and the developers must have the drive to push their work forward rather than just leaving behind an isle of goodness.

          • emerent 2 days ago ago

              You're getting a lot of pushback in the comments here and I don't understand why. This is exactly right. Stay consistent with the existing way or commit to changing it all (not necessarily all at once) so it's consistent again, but better.

            • whstl 2 days ago ago

              Nobody is pushing back about "commit to changing all".

              Nobody is denying that "inconsistent" can be bad on its own.

              But you can't say that "inconsistent but good" is bad by providing an example of how "inconsistent and bad" is bad.

          • whstl 2 days ago ago

            I don't know what to say.

            That’s a logic error. The claim was that "inconsistent but good" can exist, not that "inconsistent == good". Responding with one example where "inconsistent" turned out badly is a totally different claim and doesn't refute what GP says.

            • johnfn 2 days ago ago

              Who said that I only had one example? I just listed one so you'd have an idea of what I was talking about. I could give you like a hundred. This is a heuristic I've developed over a lot of time working in codebases with inconsistencies and repeatedly getting burned.

              • whstl 2 days ago ago

                I'm not disagreeing with your example and conclusion, and I've seen many of those.

                I actually agree that half-assing a problem is not the best solution.

                It's just that they are not examples of "inconsistent but good". They are not even "good", just "inconsistent". You said yourself that they're worse overall.

          • 2 days ago ago
            [deleted]
      • yunnpp 2 days ago ago

        The author never really defines "consistency" anyway. Consistency of what?

        I've never seen consistency of libraries and even programming languages have a negative impact. Conversely, the situation you describe, or even going out of the way to use $next_lang entirely, is almost always a bad idea.

        The consistency of where to place your braces is important within a given code base and teams working on it, but not that important across them, because each one is internally consistent. Conversely, two code bases and teams using two DBs that solve the same problem is likely not a good idea because now you have two types of DBs to maintain. Also, if one team solves a DB-specific problem, say, a performance issue, it might not be obvious how the other team might be able to pick up the results of that work and benefit from it.

        So I don't know. I think the answer depends on how you define "consistency", which OP hasn't done very well.

      • roguecoder 2 days ago ago

        This is where an architect is useful, because they can ask "why?"

        Sometimes there is a reason! Sometimes there isn't a reason, but it might be something we want to move everything over to if it works well and will rip out if it doesn't. Sometimes it's just someone who believes that functional programming is Objectively Better, and those are when an architect can say "nope, you don't get to be anti-social."

        The best architects will identify some hairy problem that would benefit from those skills and get management to point the engineer in that direction instead.

        A system that requires homogeneity to function is limited in the kinds of problems it can solve well. But that shouldn't be an excuse to ignore our coworkers (or the other teams: I've recently been seeing cowboy teams be an even bigger problem than cowboy coders.)

      • benoau 3 days ago ago

        Ugh, I remember a "senior" full stack dev coming to me with various ideas for the backend - start using typeorm instead of sequelize and replace nestjs with express, for the tickets they would work on, despite having no experience with any of these. The mess of different libraries and frameworks they left in the frontend will haunt that software for years lol.

      • shayway 3 days ago ago

        It's essentially the same problem as https://xkcd.com/927/ [How Standards Proliferate]

        • eviks 3 days ago ago

          So following that silly comic you'd ban utf-8 because it breaks consistency? (even though in reality it beat most other standards, not just became 15th)

          • 2 days ago ago
            [deleted]
    • jpollock 3 days ago ago

      This isn't really about software quality, it's about the entire organization.

      Consistency enables velocity. If there is consistency, devs can start to make assumptions. "Auth is here, database is there, this is how we handle ABC". Possible problems show up in reviews by being different to expectation. "Hey, where's XYZ?", "Why are you querying the database in the constructor?"

      Onboarding between teams becomes a lot easier, ramp up time is smaller.

      Without consistency, you end up with lots of small pockets of behavior that cause downstream problems for the org as a whole.

      Every team needs extra staff to handle load peaks, resulting in a lot of idle devs.

      Senior devs can't properly guess where the problematic parts of fixes or features would be. They don't need to know the details, just where things will be _difficult_.

      Every feature requires coordination between the teams, with queuing and prioritizing until local staff become available.

      Finally, consistency allows classes of bugs to be fixed once. Fix it once and migrate everyone to the new style.

    • karmakaze 3 days ago ago

      Yeah, that line gave me a twitch. Reading on, though, it's more about the resulting coherence and correctness rather than the kind of consistency in the Ralph Waldo Emerson quote: "A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines."

      • kayo_20211030 3 days ago ago

        I agree. It's only the foolish consistency that's problematic. A sensible consistency does, as you say, provide a coherence. William James, who overlapped Emerson, has a lot to say about positive habits.

    • snoman 3 days ago ago

      My reading of it is that it also violates the Boy Scout Rule. That is to say: if improving some portion of the codebase would make it better but inconsistent, you should avoid the improvement, which is something that I would disagree with.

      I think adherence to “consistency is more important than ‘good design’” naturally leads to boiling the ocean refactoring and/or rewrites, which are far riskier endeavors with lower success rates than iterative refactoring of a working system over time.

      • jpollock 3 days ago ago

        If improving a portion of the codebase makes it better, but inconsistent...

        migrate the rest of the codebase!

        Then everyone benefits from the discovery.

        If that's difficult, write or find tooling to make that possible.

        It's in the "if it hurts, do it more often" school of software dev.

        https://martinfowler.com/bliki/FrequencyReducesDifficulty.ht...

      • CuriouslyC 2 days ago ago

        The problem with small refactors over time is that your information about what constitutes a good/complete model of your system increases over time as you understand customers and encounter edge cases. Small refactors over time can cause architectural churn and bad abstractions. Additionally, if you ever want to do a programmatic rewrite of the code, a bunch of small refactors makes that more difficult; with a single surface you can sometimes just use a macro to change everything all at once.

        This is an example of a premature optimization. The reason it can still be good is that large refactors are an art that most people haven't suffered enough to master. There are patterns to make it tractable, but it's riskier and engineers often aren't personally invested in their codebases enough to bother over just fixing the few things that personally drive them nuts.
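
        One reading of the "single surface" point above, taking "macro" in the C-preprocessor sense (the names here are hypothetical; the author may equally mean editor or codemod macros): route every call site through one definition, so a later rewrite is a one-line change rather than a sweep over the whole codebase.

            #include <cstdio>
            #include <string>

            void log_plain(const std::string& level, const std::string& msg) {
                std::printf("[%s] %s\n", level.c_str(), msg.c_str());
            }

            // Hypothetical single surface: every module logs through this macro
            // rather than calling a concrete logger directly. Swapping the backend
            // later means editing this one definition, not hundreds of call sites.
            #define APP_LOG(level, msg) log_plain((level), (msg))

            int main() {
                APP_LOG("info", "service started");
                APP_LOG("warn", "cache miss rate is high");
            }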

      • strogonoff 2 days ago ago

        If improving some portion of the codebase would make it better, but inconsistent, you should avoid the improvement. Take note, file a ticket, make a quick branch, and get back to what you were working on; later implement that improvement across the whole codebase as its own change, keeping things consistent.

      • johnbcoughlin 3 days ago ago

        if you have some purported improvement to a codebase that would make it inconsistent, then it's a matter of taste, not fact, whether it is actually an improvement.

    • Night_Thastus 2 days ago ago

      Consistency is best, with a slow, gradual, measured movement towards 'better' where possible, when and where the opportunity strikes.

      If you see a massive 50 line if/else/if/else block that can be replaced with a couple calls to std::minmax, in code that you are working on, why not replace it?

      But don't go trying to rewrite everything at once. Little improvements here or there whenever you touch the code. Look for the 'easy wins' which are obvious based on more modern approaches. Don't re-write already well-written code into a new form if it doesn't benefit anything.
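
      As a concrete sketch of the kind of easy win described above (the 50-line original is imagined here; only the shape matters):

          #include <algorithm>
          #include <iostream>

          int main() {
              int a = 7, b = 3;

              // Before: long-hand branching (imagine this repeated across 50 lines
              // for many different pairs of values).
              int lo, hi;
              if (a < b) { lo = a; hi = b; } else { lo = b; hi = a; }

              // After: the same result in one call.
              const auto [lo2, hi2] = std::minmax(a, b);

              std::cout << lo << " " << hi << " vs " << lo2 << " " << hi2 << "\n";
          }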

    • Waterluvian 3 days ago ago

      I feel like “be consistent” is a rule that applies very broadly.

      There are absolutely exceptions and nuances. But I think when weighing trade-offs, program makers by and large deeply underweight being consistent.

      • Sankozi 3 days ago ago

        I have the opposite experience. Consistency is commonly enforced in bigger corporations while its value is not that high (often negative). Lots of strategies/patterns are promoted and blindly followed without a moment's reflection that maybe this is a bad solution for certain problems: TDD, onion/hexagonal architecture, SPA, React, etc.

    • pydry 3 days ago ago

      Moreover, saying that consistency is more important than good design is like saying that eating leafy greens is more important than a good diet.

    • tonyhart7 3 days ago ago

      Yeah, it's about expectations: consistently bad is predictable.

      Software that has "good" and "bad" parts is unpredictable.

      • whstl 3 days ago ago

        > Software that has "good" and "bad" parts is unpredictable.

        Software that has only "bad" parts is also very unpredictable.

        (Unless "bad" means something else than "bad", it's hard to keep up with the lingo)

        • tonyhart7 3 days ago ago

          That's why I wrote the first part of my comment.

          Your example is just bad code that's unpredictable.

          • whstl 2 days ago ago

            And I disagree.

            My assertion is that software that has only bad parts is way more unpredictable than software that has both good and bad.

            For multiple reasons: because "bad" is not necessarily internally consistent. Because it's buggy.

            Unless, again, "bad" here means "objectively good quality but I get to call it bad because it's not in the way I like to write code".

      • redrove 3 days ago ago

        So we should all write bad code to keep it predictable? Raising the quality of the codebase is unacceptable under this premise.

        • evilduck 2 days ago ago

          Possibly. Probably even.

          High quality and consistent > Low quality and consistent > Variable quality and inconsistent. If you're going to be the cause of the regression into variable quality and inconsistent you'd better deliver on bringing it back up to high quality and consistent. That's a lot of work that most people aren't cut out for because it's usually not a technical change but a cultural change that's needed. How did a codebase get into the state of being below standards? How are you going to prevent that from happening again? You are unlikely to Pull Request your way out of that situation.

        • tonyhart7 3 days ago ago

          "So we should all write bad code to keep it predictable?"

          It's true and false at the same time; it depends.

          Here I can give an example: you're maintaining a production system that has been running for years.

          There is a flaw in some part of the codebase that has probably been ignored, either because of

          1. a bad implementation / hacky approach

          2. the system outgrowing the implementation

          So you try to "fix" it, but suddenly other internal tools stop working, customers contact support because it changes the behaviour on their end, some CI randomly fails, etc.

          Software doesn't exist in a vacuum; complex interactions sometimes prevent "good" code from existing, because that's just reality.

          I don't like it either, but it is what it is.

  • jkaptur 3 days ago ago

    There are two extremes here: first, the "architects" that this article rails against. Yes, it's frustrating when a highly-paid non-expert swoops in to offer unhelpful or impossible advice.

    On the other hand, there are Real Programmers [0] who will happily optimize the already-fast initializer, balk at changing business logic, and write code that, while optimal in some senses, is unnecessarily difficult for a newcomer (even an expert engineer) to understand. These systems have plenty of detail and are difficult to change, but the complexity is non-essential. This is not good engineering.

    It's important to resist both extremes. Decision makers ultimately need both intimate knowledge of the details and the broader knowledge to put those details in context.

    0. http://www.catb.org/jargon/html/story-of-mel.html

  • 7402 3 days ago ago

    > if you come up with the design for a software project, you ought to be responsible for the project’s success or failure

    I think this should also apply to people who come up with or choose the software development methodology for a project. Scrum masters just don't have the same skin in the game that lead engineers do.

  • Sankozi 3 days ago ago

    "In large codebases, consistency is more important than “good design”" - this is completely opposite from my experience. There is some value in consistency within single module but consistency in a large codebase is a big mistake (unless in extremely rare case that code base consists entirely of very similar modules).

    Modules with different requirements should not have a single consistent codebase. Testing strategy, application architecture, even naming should be different across different modules.

  • augustk 3 days ago ago

    In the best scenario the developers are also active users of the software they produce. Then a design flaw or an error that affects the users will also affect the developers and will (hopefully) motivate the latter to correct it.

    • octorian 2 days ago ago

      It's also useful for developers to have a way of bypassing customer support to get direct visibility into what issues the actual users are experiencing. This can come in the form of browsing tickets, online forums, or social media.

      Often something that's easily brushed off by a support rep will ring a bell in the mind of a developer who has recently worked in the area of the code related to the issue.

    • roguecoder 3 days ago ago

      XP putting a customer on the team was the best thing in the methodology. Replacing those with business representatives is one of Scrum's original sins.

      • bitwize 2 days ago ago

        > XP putting a customer on the team was the best thing in the methodology.

        Recently my boss said to me: "Customers want something that WORKS. If you deliver something, and it doesn't work, what's the customer going to think?" The huge drawback to putting a customer on the team is that the customer probably doesn't want to know, let alone be involved with, how the sausage is made. They want a turnkey solution unveiled to them on the delivery date, all ready to go, with no effort on their part.

        Generally what you want is a customer proxy in that role, who knows or can articulate what the customer needs better than the customer themselves can. Steve Jobs was a fantastic example of someone who filled this role.

      • augustk 2 days ago ago

        It's also worth noting that a customer is not necessarily a user. As a developer I don't care so much about the customer but I care wholeheartedly about the users.

  • kevinlearynet 2 days ago ago

    The best applications I've ever been a part of building, measured by user satisfaction, are those where the engineers:

    1. Value simple, effective systems

    2. Understand all use cases, because they use it

    3. Have enough freedom to fix small things as they find them

    #3 is controversial sometimes, but I believe this flexibility and creative freedom for devs leads to much happier people and much better products.

  • ilaksh 3 days ago ago

    This is also the type of thing that makes having separate software architects that aren't actually maintaining the software generally a nonsensical idea.

    There are too many decisions, technical details, and active changes to have someone come in and give direction from on high at intervals.

    Maybe at the beginning it could sort of make sense, but projects have to evolve, and more often than not something important is discovered early on in the implementation or when adding "easy" features. If someone is good at software design, you may need them even more at that point, but they may easily be detrimental if they are not closely involved and following the rest of the project's details.

    • roguecoder 2 days ago ago

      The best "architects" serve as facilitators, rather than deciding themselves how software is built. They have to be reading the code, but they don't themselves have to be coding to be effective.

      You don't need one until you've got 30-70 engineers, but a strong group of collaborative architects is the most important thing for keeping software development effective and efficient at the 30-1,000 engineer range.

    • o_nate 2 days ago ago

      I guess I'm lucky not to have worked at a place with a role for software architects who don't actually write code. I honestly don't know how that would work. However, I think I can appreciate the author's point. Any sufficiently complex piece of existing software is kind of like a chess game in progress. There is a place for general principles of chess strategy, but once the game is going, general strategy is much less relevant than specific insights into the current state of play, and a player would probably not appreciate advice from someone who has read a lot of chess books but hasn't looked at the current state of the board.

  • kayo_20211030 3 days ago ago

    > I don’t know if structural engineering works like this, but I do know that software engineering doesn’t.

    Structural Engineering (generally construction engineering) does work like that. Following the analogy, the engineers draw; they don't lay bricks. But all the best engineers have probably been site supervisors at some point, have watched bricks being laid, and have spoken to the bricklayers, etc. Construction methods change, but they don't change as quickly as software engineering methods. There is also a very material and applicable "reality" constraint. Most structural engineers' knowledge/heuristics remains valid over long periods of time. The software engineers' body of knowledge can change 52 times in a year. To completely stretch the analogy - the site conditions for construction engineering are better known than the site conditions for a large software project. In the latter case the site itself can be adjusted more easily, and more materially, by the engineering itself, i.e. the ground can move under your feet. Site conditioning on steroids!

    Ultimately, that's why I agree fully with the piece. Generic advice may be helpful, but it always applies to some generic site conditions that are less relevant in practice.

    • bitwize 2 days ago ago

      My father mentored some engineering college students about 15 years ago. He came away from the experience a bit disappointed: they knew how to model a part, but not how to machine one. When he came up in the world of slide-rule-and-drafting-pencil mechanical engineering, every engineer knew, in principle at least, how to machine a part; such knowledge was necessary for good designs because a design was instructions to shop-floor personnel on how to make the part, including info like materials to be used, tolerances, tools, etc.

      • imtringued 20 hours ago ago

        In CAD you can make an arbitrarily sized hole; in the real world you can only drill holes if you have the corresponding drill bit.

    • glitchc 3 days ago ago

      It sounds like you are making the argument that there is no established way to generate good software. If that's the case, then software isn't engineering, but rather art. The former requires established/best practices to be called a discipline, while the latter is a creative endeavour.

      • kayo_20211030 3 days ago ago

        That's true. I do. I consider it a creative art, with some disciplinary adjacency to engineering. The creative sculptor has to know the material stone in order to make anything good with it. But, construction engineering is creative too; just different.

      • Scarblac 3 days ago ago

        I think we know how to reliably make good software. E.g. NASA manages.

        The problem is that doing it like that is much too expensive and too slow for most businesses.

        • kayo_20211030 3 days ago ago

          NASA is a bit of an outlier. In the 50's through the 70's any failure, particularly a failure involving the loss of a life, would have been a national catastrophe; a blow to national prestige. So, they were super careful that it didn't happen. The spent-cost was irrelevant compared to the reputational value at stake. Honestly, it was a wise investment given the operative quid pro quo in those days. Maybe they still do good software, I don't know, but I suspect that the value at risk today makes them more cost averse, and less sensitive to poor software.

          "Business" runs the same calculations. I'd posit that, as a practical matter, most businesses don't want "good" software; they want "good enough" software.

        • pixl97 3 days ago ago

          >and too slow for most businesses.

          A lot of this is because while a 'good' business is waiting for the 'good' software to be written, some crappy business has already written the crappy software and sold it to all the customers you were depending on. In general customers are very bad at knowing the difference between good and bad software and typically buy what looks flashy or the sales people bribe them the most for.

    • atrettel 3 days ago ago

      Reading that particular section made me think of the tree swing cartoon [1]. I agree that the best engineers have likely been on the ground making concrete changes at some point, watching bricks being laid as you said, but I have encountered quite a few supervisors who seemingly had no idea how things were being implemented on the ground. As the post says, people on the ground then sometimes have to figure out how to implement the plan even if it ignores sound design principles.

      I don't view that as a failure of abstraction as a design principle as much as it is a pitfall of using the wrong abstraction. Using the right abstraction requires on the ground knowledge, and if nobody communicates that up the chain, well, you get the tree swing cartoon.

      [1] https://en.wikipedia.org/wiki/Tree_swing_cartoon

      • kayo_20211030 3 days ago ago

        I agree with you. But, talk too long or too fulsomely about "abstractions" or "principles" and you'll lose the brick layers. They're paid by the course, generally. Trust them to make the site adjustments, but always verify that it's not a bad-bad-thing.

    • whstl 3 days ago ago

      > The software engineers' body of knowledge can change 52 times in a year

      Nah, those changes are only on the surface, at the most shallow level.

      There's always new techniques, materials and tools in structural engineering as well.

      Foundations take a lifetime to change.

      • RaftPeople 3 days ago ago

        > Nah, those changes are only on the surface, at the most shallow level.

        Very strongly disagree.

        There are limitless methods of solving problems with software (due to very few physical constraints) and there are an enormous number of different measures of whether it's "good" or "bad".

        It's both the blessing and curse of software.

        • whstl 2 days ago ago

          Once again, that's only true at the surface level.

          If you dig deeper you'll realize that it's possible to categorize techniques, tools, libraries, algorithms, recipes, whatever.

          And if you dig even deeper, you'll realize that there is foundational knowledge that lets you understand a lot of things that people complain about being too new.

          The biggest curse of software is people saying "no" to education and knowledge.

          • RaftPeople 2 days ago ago

            > Once again, that's only true at the surface level.

            Can you provide concrete examples of the things that you think are foundational in software? I'm thinking beyond "be organized so it's easier for someone to understand", which applies to just about everything we do (e.g. modularity, naming, etc.)

            Every different approach, like OOP, functional, relational DBs, object DBs, enterprise service bus + canonical documents, microservices, cloud, on prem, etc., is just an option with pros and cons.

            With each approach, the set of trade-offs depends on the context the approach is applied in; it's not an absolute set of trade-offs, it's relative.

            A critical skill that takes a long time to develop is to see the problem space and do a reasonably good job of identifying how the different approaches fit in with the systems and organizational context.

            Here's a real example:

            A project required a bunch of new configuration capabilities to be added to a couple systems using the normal configuration approach found in ERP systems (e.g. flags and codes attached to entities in the system controlling functional flow and data resolution, etc.). But for some of them a more flexible "if then" type capability made sense when analyzing the types of situations the business would encounter in these areas. For these areas, the naive/simple approach would have been possible but would have been fragile and difficult to explain to the business how to get the different configurations in different places to come together to produce the desired result.

            There is no simple rule you can train someone on to spot when this is the right approach and when it is not. It's heavily dependent on the business context and takes experience.

            • whstl 2 days ago ago

              > Can you provide concrete examples of the things that you think are foundational in software?

              Are you really expecting an answer here? I'll answer anyway.

              • A big chunk of the CompSci curriculum is foundational.

              • Making wrong states unrepresentable, either via type systems or via code itself, using invariants, pre/post-conditions, etc. (see the sketch at the end of this comment). This applies to pretty much every tool or every language you can use.

              • Error handling is a topic that goes beyond tools and languages, and even beyond whether you use try/catch, algebraic objects or values. It seeps into logging and observability too.

              • Reasoning about time/space and tradeoffs of algorithms and structures, knowing what can and can't be computed, parsed, or recognized at all. Knowing why some problems don’t scale and others do.

              • Good modeling of change, including ordering: immutability vs mutation, idempotency, retry logic, concurrency. How to make implicit timing explicit. Knowing which choices are cheap to undo and which are expensive, and design for those.

              • Clear ownership of responsibilities and data between parts of the system via design of APIs, interfaces and contracts. This applies to OOP, FP, micro-services, modules and classes, and even to how one deals with third party-services beyond the basic.

              • Computer basics (some of which go back to the 60s/70s or even earlier): processes, threads, green threads, scheduling, caches, instructions, the memory hierarchy, data races, deadlock, and ordering.

              • Information theory (a lot of it goes back to Claude Shannon, and earlier): compression, entropy, noise. And logic, sets, relations, proofs.

              I never said there is a "simple rule" only foundational topics, but I'll say again: The biggest curse of software is people saying "no" to education and knowledge.
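
              As a tiny sketch of the "wrong states unrepresentable" bullet above (the domain and names are made up): instead of a bool flag plus an optional token that can disagree with each other, the type only admits the two legal shapes.

                  #include <iostream>
                  #include <string>
                  #include <variant>

                  // The illegal combination ("logged in but no token") cannot be built.
                  struct Anonymous {};
                  struct LoggedIn { std::string session_token; };
                  using SessionState = std::variant<Anonymous, LoggedIn>;

                  void greet(const SessionState& s) {
                      if (const auto* u = std::get_if<LoggedIn>(&s))
                          std::cout << "hello, token=" << u->session_token << "\n";
                      else
                          std::cout << "hello, guest\n";
                  }

                  int main() {
                      greet(Anonymous{});
                      greet(LoggedIn{"abc123"});
                  }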

              • RaftPeople 2 days ago ago

                > Are you really expecting an answer here? I'll answer anyway.

                Yes, and thanks for the examples, it's now clear what you were referring to. I agree that most of those are generally good fundamentals (e.g. wrong states, error handling, time+space), but some are already in complex territory like mutability. Even though we can see the problem, we have a massive amount of OOP systems with state all over the place. So the application of a principle like that is very far from settled or easy to have a set of rules to guide SE's.

                > The software engineers' body of knowledge can change 52 times in a year

                > Nah, those changes are only on the surface, at the most shallow level.

                I think the types of items you listed above are the shallow layer. The body of knowledge about how to implement software systems above that (the patterns and approaches) is enormous and growing. It's a large collection of approaches each with some strengths and weaknesses but no clear cut rule for application other than significant experience.

                • whstl a day ago ago

                  > I think the types of items you listed above are the shallow layer

                  They are not, by definition. You provided proof for it yourself: you mention the "body of knowledge [...] above that", so they really aren't the topmost layer.

                  > is enormous and growing

                  That's why you learn the fundamentals. So you can understand the refinements and applications of them at first glance.

                  • RaftPeople a day ago ago

                    > They are not, by definition. You provided proof for it yourself: you mention the "body of knowledge [...] above that", so they really aren't the topmost layer

                    I said "shallow", not "topmost".

                    > That's why you learn the fundamentals. So you can understand the refinements and applications of them at first glance.

                    Can you explain when (if ever) a person should use an OOP approach and when (if ever) he/she should use a functional approach to implement a system?

                    I don't think those fundamentals listed above help answer questions like that and those questions are exactly what the industry has not really figured out yet. We can see both pros and cons to all of the different approaches but we don't have a body of knowledge that can point to concrete evidence that one approach is preferred over the many other approaches.

                    • whstl 17 hours ago ago

                      I'm really sorry, but if you think those topics above are "shallow", I don't think we have much to talk about and should probably agree to disagree.

                      > Can you explain when (if ever) a person should use an OOP approach and when (if ever) he/she should use a functional approach to implement a system?

                      I can, and have done several times, actually, for different systems.

                      > I don't think those fundamentals listed above help me

                      The list I gave was not exhaustive. You asked yourself for "concrete examples" and I gave examples.

                      The reason I can't answer hard questions in a simple message is exactly because those foundations are not "shallow" at all.

                      • RaftPeople 13 hours ago ago

                        > I can, and have done several times, actually, for different systems.

                        The reason I asked that question isn't to be argumentative, it's because, IMO, the answer to those types of questions are exactly what does not exist in the software engineering world.

                        And talking through the details of our different opinions is how we can understand where each one is coming from and possibly, maybe, incorporate some new information or new way of looking at things into our mental models of the world.

                        So, if you do think you have an answer, I am truly interested in when you think OOP is appropriate and when functional is better suited (or neither).

                        If someone asked me that question, I would say "If we're in fantasy land and it's the first system ever built and there are no variables related to existing systems and supportability and resource knowledge, etc., then I really can't answer the question. I've never built a system that was significantly functional, I've only built procedural, OOP and mixtures of those two with sprinklings of functional. I know there are significant pros to functional, but without actually building a complete system at least once, I can't really compare"

                        • whstl 9 hours ago ago

                          You asked whether one should use OOP or FP to implement a system.

                          I can answer that, and did in the past, as I have done projects in both OOP and FP. But before I answer, I ask follow-up question about the system itself, and I will be giving lots of "it depends" and conditions.

                          There is no quick and dirty rule that will apply to any situation, and it's definitely not something I can teach in a message board.

      • kayo_20211030 3 days ago ago

        Respectfully, I disagree. You're correct on the facts, but any "new techniques, materials and tools" need to be communicated to the brick layers. That takes time and effort i.e. it all needs to be actively managed. The brick layers have to be able to work with those new techniques and materials. I don't want some of them using method #1 over here, and method #2 over there, unless I'm wholly conversant with the methods, and fully confident that it'll all mesh eventually. The system i.e. the whole shebang has to work coherently to serve its purpose.

        • whstl 2 days ago ago

          > Respectfully, I disagree. You're correct on the facts, but

          I'm fine with the disagreement if you say I'm correct. ¯\_(ツ)_/¯

          > any "new techniques, materials and tools" need to be communicated to the brick layers

          Same for software.

          > That takes time and effort i.e. it all needs to be actively managed. The brick layers have to be able to work with those new techniques and materials.

          Same for software.

          > I don't want some of them using method #1 over here, and method #2 over there, unless I'm wholly conversant with the methods, and fully confident that it'll all mesh eventually. The system i.e. the whole shebang has to work coherently to serve its purpose.

          Same for software.

          Virtually every profession has a body of knowledge that's constantly getting updated. Only software engineers seem to have this faulty assumption that they must apply it all immediately. Acknowledging it's a false assumption leads to a better life.

          • kayo_20211030 2 days ago ago

            Then we're in violent agreement.

            The challenge would be to control the pace of the evolution of the body of knowledge, but more importantly, its application, to a pace that's consistent with the pace of the system you're building.

            > faulty assumption that they must apply it all immediately

            No truer word was ever said. Everyone is attracted to shiny things.

            • whstl 2 days ago ago

              Ah nice, then yes, we agree!

    • wheelinsupial 2 days ago ago

      > software engineers' body of knowledge can change 52 times in a year

      I understand I’m replying against the spirit of your point, but the IEEE has actually published one and it seems to get updated very slowly.

      https://www.computer.org/education/bodies-of-knowledge/softw...

    • aidenn0 2 days ago ago

      And when changes are made at the site, bad things can happen: https://en.wikipedia.org/wiki/Hyatt_Regency_walkway_collapse...

  • MagicMoonlight 3 days ago ago

    There is real irony in a blog post saying you can’t trust generic advice, which is itself a generic advice blog post, and links to other generic advice they have written.

  • dcre 3 days ago ago

    Very impressed at the rate of high-quality interesting posts from this author.

  • guywithahat 2 days ago ago

    As a personal anecdote in agreement with the article, I've seen design consultants come in with no industry experience and ruin projects. Friends of the VP with no satellite experience, who don't understand threading or type systems outside of Java, saying we need to introduce random exceptions everywhere. It's frustrating to see potential bugs introduced at the design level with no regard for how the end product operates.

  • wewewedxfgdf 2 days ago ago

    I'm wary of absolute statements about programming.

  • 3 days ago ago
    [deleted]
  • lunias 2 days ago ago

    In my experience an "architect" is someone that used to write code (maybe 10-20 years ago - maybe only for a few years to begin with) but followed incentives away from the practice. Now they read blogs written by other architects about technologies that none of them have used. They champion the iterative process, but have 20 hard requirements up front that "make sure we can scale" (there are 5 users of this intranet app). They have 30 minutes free per day to produce any deliverables, all other time is spent in meetings. The deliverables produced are diagrams which have a few boxes and arrows pointing between them. Engineers are quick to ask, "Aren't those arrows backwards? We pull data from that service, right?". To which the architect replies without thought, "Ah, yeah, you're right. This is more of a work in progress, something to work off of." The next slide in their presentation is about database normalization and shows some data model that they hallucinated without looking at the data. The engineers chime in again, "Do we care more about the size of the data on disk or read times here?". The architect doesn't understand why you would challenge "best practices". You don't understand why the person presumably leading the initiative is unaware that computer science is fundamentally about trade offs.

    It's not always like this, but the bigger the company, the more statistically probable it is.

  • roguecoder 3 days ago ago

    The problem isn't programmers: it is cheap-ass executives obsessed with compliance.

    Good software designers are facilitators. They don't tell people how to build software, but say "not like that" by making the technical requirements clear. They enable design to constantly change as the needs change.

    It has been a long time since I've been at a company willing to actually employ someone in that role. They require that their most senior engineers be focused on writing code themselves, at the expense of the team and skill-building necessary for quality software.

    Instead we get bullshit like "team topologies" or frameworks that are more about how the company wants to manage teams than they are about how well the software works. We get "design documents" that are considered more important than working code. Even the senior engineers that are around aren't allowed to say "no" if it is going to interfere with some junior project manager's imagined deadline.

    Software companies are penny-wise and pound foolish, resulting in shittastic spaghetti messes with microservice meatballs.

  • watters 3 days ago ago

    The second footnote acknowledges that the post is largely tautological.

  • nik282000 2 days ago ago

    And software engineers can't design cars that they don't drive, as evidenced by the millions of tons of e-waste rolling around on city streets these days.

  • mashally 2 days ago ago

    True, in a way: I see that a lot of people who don't understand the business well absolutely cannot design good software.

  • bitwize 2 days ago ago

    The job of the big-picture software architect is not to give "generic software design advice". It's precisely to see the big picture: understand the information needs and flows of the business and determine WHAT needs to be built in order to serve those precise needs. Let the programmers worry about the details. That's their job and their strength: they are detailists who are fluent in the language of the machine, but their biggest drawback is, they tend to have difficulty seeing the big picture and understanding how those details fit into a greater whole.

    One does not need to be a programmer in order to be a great systems analyst/architect. Matter of fact it's the opposite: great analysts are good with people, and have a strong intuitive grasp of what people need in order to effectively run the business. Leaving that to programmers is a recipe for disaster, as without documentation of existing business systems and requirements and a solid design, programmers will happily build the wrong thing.

  • neopointer 2 days ago ago

    Didn't read the article yet, but came here to say that I couldn't agree more with that sentence in the sense that the industry should stop making people "software architects", especially those who do not code at all but want to make design decisions. Software architect should be a role and not a position.

    • chanux 2 days ago ago

      I don't want to generalize this to everyone, but based on my experience I see it like this: the material architects build with is hollow Lego blocks. Engineers have to build with blocks that have complex machinery inside them.

  • narag 3 days ago ago

    The article falls into the very same problem it's describing. It's generic advice that might not be applicable to specific situations.

  • austin-cheney 3 days ago ago

    I completely disagree with almost the entirety of the article. It’s all about prior experience building large things many times yourself, not using some framework or other external abstraction.

    When you have done this many times you absolutely can design a large application without touching the code. This is part planning and risk-analysis experience and part architecture experience. You absolutely need the experience of creating large applications multiple times and going through that organizational grind, but prior experience in management and writing high-level plans is extremely helpful.

    • wduquette 3 days ago ago

      You’re speaking of implementing yet another system of a familiar kind, i.e., a new project. The OP says that generic design works for new projects. He’s mostly talking about designing new features to be added to an existing system, in which case the design has to be contingent on the existing system.

      • austin-cheney 3 days ago ago

        Software developers like to think they are special. They aren't. Software, from a planning perspective, is not much different than physical construction.

        When it comes to extending an existing application it really comes down to how well the base application was planned to begin with. In most cases the base application developers have no idea, because they either outsourced the planning to some external artifact or simply pushed through it one line at a time and never looked back. Either way the person writing the extension will be more concerned with the corresponding service data and accessibility than conformance to the base application code if it is not well documented and not well tested in a test automation scheme.

    • roguecoder 3 days ago ago

      Why are you rewriting the same application a second time?

      I've personally yet to have a situation where that comes up. And every application I've ever worked on has its architecture evolve over time, as behavior changes and new domain concepts are identified.

      There are recurring patterns (one might even call them Design Patterns), but by the time we've internalized them we have even less need for up-front planning. Why write the doc when you can just implement the code?

    • austin-cheney 2 days ago ago

      I have watched this comment bounce up and down in votes all day, from 0 earlier in the day up to a max of 4 upvotes and now back to 0. I really think this controversy speaks to the extreme Dunning-Kruger effect in software. There are so many developers who are just unqualified button pressers who cannot see what they don't know, but for people who have enough experience to write original software there is common knowledge that most lesser developers cannot accept.

  • michaelt 3 days ago ago

    I've always felt it's unrealistic to separate upfront architecture from implementation, because in my experience many systems turn out to have requirements far more complex in reality than they seem at first, even if you think quite hard about them.

    Imagine if you worked for an online retailer like Amazon, and you were assigned to architect a change so you can add free sample items into customers' orders. Take a moment to think about how you'd architect such a system, and what requirements you'd anticipate fulfilling. In the next paragraph, I'll tell you what the requirements are. Or you can skip the next paragraph, the size of which should tell you the requirements are more complex than they seem.

    The samples must be items in the basket, so the warehouse knows to pick them. They must be added at the moment of checkout, because that's when the order contents and weight can change. Often a customer should receive a sample only once, even if they check out multiple orders - so a record should be kept of which customers have already been allocated a given sample. It should be possible to assign a customer the same sample multiple times, in which case they should receive it once per order until they've received the assigned number. Some samples go out of stock regularly, so the sample items should not be visible to the customer when they view their order on the website, but if shipped it should appear on their receipt to assure them they haven't been charged for it. Samples should never be charged for, even if their barcode is identical to something we normally charge for. If the warehouse is unable to ship the sample, the customer should not receive a missing-item apology or a separate shipment, and the record saying that customer has had that sample already should be decremented. If the warehouse can't ship anything except the sample, the entire order should be delayed/cancelled, never shipping the sample alone. If a customer ordered three of an item and was assigned one sample item with the same barcode but the warehouse only had three items with that barcode in stock, something sensible should happen. One key type of 'sample' is first-time-customer gifts; internal documentation should explain that if the first order a customer places is on 14-day delivery and their second order is on faster delivery and arrives first, the first-order gift will be in the second order to arrive but that's expected because it's assigned at checkout. If the first-order-checked-out is cancelled, either by the customer or the warehouse, the new-customer gift should be added to the next order they check out. Some customers will want to opt out of free samples; those who do should not be assigned any samples. But the free sample system is also used by customer services to give out token apology gifts to customers whose orders have had problems; customers who've been promised a gift should receive it even if they've opted out of free samples.

    No reasonable person can design such a system upfront, because things like 'opt-out mechanism sometimes shouldn't opt you out' and 'more than one definition of a customer's first order' do not occur to reasonable people.
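
    Just to make the statefulness concrete, here is a minimal sketch (hypothetical names, not anyone's actual system) of the one small piece implied by "a record should be kept and decremented". It deliberately ignores the stock, cancellation and first-order edge cases, which is exactly where a design like this starts to unravel:

      interface OrderLine {
        sku: string;
        price: number;               // always 0 for samples, even if the barcode matches a paid item
        hiddenOnWebsite: boolean;    // out-of-stock samples shouldn't show in the order view
        showOnReceipt: boolean;      // but shipped samples should appear on the receipt at a price of 0
      }

      interface Order { lines: OrderLine[]; }

      // One record per (customer, sample) allocation; a guess at the shape, not a spec.
      interface SampleAllocation {
        customerId: string;
        sampleSku: string;
        remaining: number;                  // decremented at checkout, re-incremented if the warehouse can't ship it
        forcedByCustomerServices: boolean;  // apology gifts ignore the opt-out
      }

      // Runs at the moment of checkout, after the basket is final but before it goes to the warehouse.
      function attachSamples(order: Order, allocations: SampleAllocation[], optedOut: boolean): void {
        for (const allocation of allocations) {
          if (allocation.remaining <= 0) continue;
          if (optedOut && !allocation.forcedByCustomerServices) continue;  // the opt-out that sometimes doesn't opt you out
          order.lines.push({ sku: allocation.sampleSku, price: 0, hiddenOnWebsite: true, showOnReceipt: true });
          allocation.remaining -= 1;
        }
      }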

    • karmakaze 3 days ago ago

      I would hope that a large, successful online retailer got that way by factoring their implementation so that many aspects can be dealt with mostly as a configuration matter. This is mistaking quantity for difficulty. First of all, separate the domains: fulfillment doesn't care about pricing, but it does care about grouping items in a shipment, so it should already have grouping rules; apply the 'do not ship this item alone' rule there, etc. The other pattern to apply repeatedly here is to separate the decision-making from effecting a change, i.e. separation of policy from mechanism. So you can have a library of mechanisms (e.g. add item to order at checkout) vs. the policies which decide who, which item, and what to charge. If you don't conflate all these separate concerns into a single 'thing' to begin with, then none of the individual pieces is complicated; they just have to be the right things in the right places.

      This thought process does use some knowledge of online retail, but not really that much. It's mostly patterns of system decomposition and good engineering.
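
      As a rough sketch of that split (hypothetical names, not any particular codebase): the mechanism only knows how to put a zero-priced line on an order, while the policies decide who gets which sample:

        interface Order { lines: { sku: string; price: number }[]; }
        interface Customer { id: string; optedOutOfSamples: boolean; }

        // Mechanism: adds a zero-priced line to an order at checkout; it knows nothing about why.
        interface AddSampleMechanism {
          addToOrder(order: Order, sku: string): void;
        }

        // Policy: decides which samples this customer gets on this order; it never touches the order itself.
        interface SamplePolicy {
          chooseSamples(customer: Customer, order: Order): string[];  // SKUs to add
        }

        // Checkout wires the two together. New business rules (first-order gifts, apology
        // gifts, opt-outs) become new policies rather than changes to the mechanism, and
        // fulfillment keeps its own grouping rule for "never ship this item alone".
        function applySamples(order: Order, customer: Customer,
                              policy: SamplePolicy, mechanism: AddSampleMechanism): void {
          for (const sku of policy.chooseSamples(customer, order)) {
            mechanism.addToOrder(order, sku);
          }
        }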

      Edit: the point of the article itself stands: if the codebase is in no shape to have these free samples built as I described, then my input is useless, other than to suggest working toward that architectural goal.