Good hierarchical documentation
A laptop computer is extremely complex, but it is actively developed and maintained by a small number of people, built on parts developed by a small number of people, many of which are in turn built on parts developed by a small number of people, and so on.
This works well in electronics design because everything is documented and tested to comply with the documentation. You'd think this would slow things down, but developing a new generation of a laptop takes fewer person-hours and less calendar time than developing a new generation of any software of similar complexity running on it, despite the laptop skirting the limits of physics. Technical debt adds up really fast.
The top-level designers only have access to what the component manufacturers have published, not to their internal designs, but that doesn't matter because the publications contain correct and relevant data. When a component manufacturer comes out with something new, it uses its own suppliers' documentation to design the new product.
As long as each component's documentation is complete and accurate, it will meet all the needs of anyone using that component. Digging deeper would only be necessary if something is incomplete or inaccurate.
Every company I've worked with has started with an ER diagram for their primary database (and insisted on it, in fact), only to give up when it became too complex. You quickly hit the point where no one can understand it.
You then eventually have that same pattern happen with services, where people give up on mapping the full thing out as well.
What I've done for my current team is to list the "downstream" services, what we use them for, who to contact, etc. It only goes one level deep, but it's something that someone can read quickly during an incident.
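A minimal sketch of that one-level list as checked-in data, readable in one screen during an incident. Service names, owners, and contacts below are made-up examples, not anything from a real system:

```python
# Hypothetical one-level map of direct downstream dependencies.
# Deliberately shallow: what we use it for and who to wake up, nothing more.
DOWNSTREAM = {
    "payments-api": {
        "used_for": "charging cards at checkout",
        "owner": "#team-payments",
        "oncall": "payments-oncall@example.com",
    },
    "search-index": {
        "used_for": "product search results",
        "owner": "#team-search",
        "oncall": "search-oncall@example.com",
    },
}

def incident_card(service: str) -> str:
    """One-line summary someone can read quickly mid-incident."""
    info = DOWNSTREAM[service]
    return (f"{service}: {info['used_for']} "
            f"(owner {info['owner']}, page {info['oncall']})")
```

Keeping it as plain data rather than a diagram means it stays greppable and cheap to update when a dependency changes.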
Sorry what is an ER diagram?
First hits on DDG, anonymous Google, Bing
ERD/ Entity Relationship Diagram https://www.lucidchart.com/pages/er-diagrams
ERM / Entity-Relationship Model https://en.wikipedia.org/wiki/Entity%E2%80%93relationship_mo...
(same-same, ERD is the more common acronym)
That is what I figured it would be, but you never know anymore with the amount of acronyms thrown around nowadays.
I use Nix (NixOS) with AI agents. It's everything I ever dreamed of and a bit more. Makes all other distros and build systems look old and outdated :D
Woah what are you doing?
Yeah, I'm curious too. Is this because most of your system can be explained by the NixOS configuration, so the LLM can easily fetch context?
I don't think OP is looking for context from the AI model perspective but rather a process for maintaining a mental picture of the system architecture and managing complexity.
I'm not sure I've seen any good vendors but I remember seeing a reverse devops tool posted a few days ago that would reverse engineer your VMs into Ansible code. If that got extended to your entire environment, that would almost be an auto documenting process.
Context rots when it stays implicit. Make the system model an explicit artifact with fixed inputs and checkpoints, then update it on purpose. Otherwise you keep rebuilding the same picture from scratch.
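One minimal way to make that model explicit, sketched under the assumption that you keep a checked-in list of services and can separately enumerate what is actually deployed: diff the two at a fixed checkpoint (e.g. in CI) so drift forces a deliberate update instead of silent rot. The names here are hypothetical:

```python
def model_drift(documented: set[str], deployed: set[str]) -> dict[str, set[str]]:
    """Compare the explicit system model against observed reality.

    Returns services that exist only in the doc (stale entries) and
    services that exist only in production (undocumented).
    """
    return {
        "stale": documented - deployed,
        "undocumented": deployed - documented,
    }

# Example checkpoint: fail the build if either set is non-empty,
# so the artifact gets updated on purpose rather than forgotten.
drift = model_drift({"api", "worker", "legacy-batch"},
                    {"api", "worker", "reports"})
```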
I'm honestly looking for both. I haven't found a vendor that does this well even just for humans, nor have I seen something that can expose this context, read-only, to all of the AI agent coding models.
I will check that tool out.
Monitoring tools (APM) will show dependencies (web calls, databases, etc) and should contain things like deployment markers and trend lines.
All of those endpoints should be documented in an environment variable or similar as well.
The breakdown is when you don't instrument the same tooling everywhere.
Documentation is generally out of date by the time you finish writing it, so I don't really bother with much detail there.
This has been my experience as well. imo documentation feels like one of the few areas that AI can be good at today.
It's okay but it often lies. At an SRE level you need a pretty zoomed-out view of the world until you are trying to zoom-in to a problem component.
Always start at the head (what a customer sees -- actually load the website) and work down into each layer.
If something is breaking way downstream and customers don't see it then it doesn't actually matter right now.
One thing that's clearly helped: using CLAUDE.md / agent instructions as de facto architecture docs. If the agent needs to understand system boundaries to work effectively, those docs actually get maintained.
But how do you ensure the .md file is able to see all of the details of the infra?
You don't; it's a map of intent, not infra state. What exists, why, and what talks to what. Live state still needs IaC and observability. The .md captures the 'why' that Terraform can't.
If the system is so good, why constantly change the context?
but think about the shareholders!
I think it is because of the continuous improvement mindset.
Continuous improvement is essential, but we must distinguish between progress and mere decoration. If an old car runs perfectly and a new one offers the same speed but with a different shell, why replace the entire vehicle? It’s a waste of time and resources. Why not focus on upgrading the 'shell' instead of reinventing the wheel?