Well… if you look at pure functions without any state, then that's a whole class of computing you can refer to. The problem is that it's not efficient to calculate state from arguments for everything. We end up saving to disk, writing packets over the network, etc. In a purely theoretical environment you could avoid state, but the real world imposes constraints that you need to operate within or between.
Additionally, depending on how deep down you go, there's state stored somewhere to calculate against. Values are stored in some kind of register, and they're passed into operations with a target register as an additional argument.
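To make that contrast concrete, here is a minimal Python sketch, purely illustrative and not from the discussion above: one function recomputes everything from its arguments, the other persists an intermediate result, which is cheaper per call but introduces state that can drift from the caller's view.

```python
# A pure function derives its result entirely from its arguments, while
# the persisted variant saves an intermediate result so it does not have
# to recompute it on every call.
import json

def running_total_pure(values: list[int]) -> int:
    # All "state" is in the arguments; recomputed from scratch each time.
    return sum(values)

def running_total_persisted(new_value: int, path: str = "total.json") -> int:
    # State lives on disk; cheaper per call, but the file and the
    # caller's view of the world can now drift apart.
    try:
        with open(path) as f:
            total = json.load(f)
    except FileNotFoundError:
        total = 0
    total += new_value
    with open(path, "w") as f:
        json.dump(total, f)
    return total
```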
I agree, and I think this is where the distinction matters.
I’m not claiming that state disappears, or that computation can be purely stateless all the way down. There is always state somewhere - registers, buffers, disks, networks. The question is where authority lives and whether correctness depends on reconstructing history.
The inefficiency you point out is real: recomputing everything from arguments is often worse than persisting state. That’s why the pattern I’m aiming at isn’t “no state,” but no implicit, negotiated state. State can exist, be large, and even be shared — but it should be explicit, bounded, and verifiable, not something the system has to infer or reconcile in order to proceed.
At the lowest levels, yes, registers hold values and operations mutate targets. But those mutations are local, immediate, and enforced by hardware invariants. Problems tend to appear higher up when systems start treating historical state as narrative, as something to reason about, merge, or explain, rather than as input with strict admissibility rules.
So I see this less as a theoretical purity claim and more as a placement problem: push state to places where enforcement is cheap and local, and keep it out of places where it turns into coordination and recovery logic.
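As a rough illustration of "explicit, bounded, verifiable" rather than implicit, here is a small Python sketch; the names (`CarriedState`, `handle`) are hypothetical stand-ins, not anything from the thread. The core never consults hidden history: whatever state it needs arrives with the request and must pass an admissibility check before anything runs.

```python
# Explicit, bounded, verifiable state: the handler acts on (request, state)
# alone, and refuses inadmissible state instead of trying to repair it.
import hashlib
import json
from dataclasses import dataclass

@dataclass(frozen=True)
class CarriedState:
    payload: dict      # the state itself, explicit and bounded
    checksum: str      # lets the core verify instead of reconcile

def admissible(state: CarriedState) -> bool:
    digest = hashlib.sha256(
        json.dumps(state.payload, sort_keys=True).encode()
    ).hexdigest()
    return digest == state.checksum

def handle(request: dict, state: CarriedState) -> dict:
    if not admissible(state):
        raise ValueError("inadmissible state: refuse, don't repair")
    # No lookup, no merge, no negotiation with whatever happened before.
    return {"balance": state.payload.get("balance", 0) + request["amount"]}
```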
Rigidity in inputs locks down your system's evolution. The whole system needs to evolve in lockstep if you need to change what the system processes.
In practice, you either end up with an ever-growing monolith or introduce state evolution (either explicitly, or by adding incremental input types that the system processes and expanding its API surface).
Beyond a certain inflection point of complexity, flexibility in introducing change becomes necessary.
I think that’s a real tradeoff, but I’d frame it slightly differently.
The rigidity here is intentional at the gate, not across the whole system. The constraint is that admissibility rules must be explicit and versioned, not that they never change. Evolution happens by introducing new admissible inputs (new proofs, new schemas, new validators), while the old ones continue to fail or succeed deterministically.
If the system requires coordinated internal evolution to handle change, then yes, you drift toward a monolith. But if evolution is pushed to the edges (new request types, new validators, new execution paths) while the gate remains simple, the core doesn’t need to evolve in lockstep.
I see this less as “rigidity vs flexibility” and more as where change is allowed to accumulate. If change accumulates inside the core, complexity grows superlinearly. If it accumulates at the boundary as new admissible forms, the core stays boring even as the surface evolves.
There’s definitely an inflection point where negotiated state becomes unavoidable, but the goal is to push that point as far out as possible, not pretend it doesn’t exist.
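A minimal sketch of what "evolution at the gate" could look like in Python, with hypothetical names (`VALIDATORS`, `gate`, `core`): each input version gets its own validator, the dispatch table grows at the edge, and the core handler never changes shape.

```python
# Versioned admissibility at the gate: adding "v3" later means registering
# one more validator; the gate and the core stay untouched.
from typing import Callable

Validator = Callable[[dict], bool]

VALIDATORS: dict[str, Validator] = {
    "v1": lambda req: isinstance(req.get("amount"), int),
    "v2": lambda req: isinstance(req.get("amount"), int)
                      and isinstance(req.get("currency"), str),
}

def gate(request: dict) -> dict:
    validator = VALIDATORS.get(request.get("version", ""))
    if validator is None or not validator(request):
        raise ValueError("not admissible: unknown version or bad shape")
    return request  # deterministically accepted or rejected, never patched

def core(request: dict) -> dict:
    # Boring on purpose: the core only ever sees requests the gate admitted.
    return {"ok": True, "amount": request["amount"]}
```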
It depends on what kind of system you're talking about.
If you have no memory, that memory can't get corrupted.
If the memory is carried by the request, it can't get desynchronized from the request.
You can use cryptographic techniques to prevent tampering and even reuse of states, though reuse can be a feature instead of a bug. Sometimes the state is too big to pass around like a football, but even then you can access it with a key and merge it back in a disciplined way.
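One common way to do this is request-carried state that the server can trust without remembering it. A small Python sketch under that assumption (the signing key and time-to-live are illustrative, not prescriptive): the state is HMAC-signed when issued, and verification plus an expiry check stand in for any server-side session.

```python
# Signed, request-carried state: tampering fails verification, and the
# expiry bounds how long a token can be reused.
import base64, hashlib, hmac, json, time

SECRET = b"server-side signing key"  # hypothetical key for illustration

def issue(state: dict, ttl_seconds: int = 300) -> str:
    body = dict(state, exp=int(time.time()) + ttl_seconds)
    raw = base64.urlsafe_b64encode(json.dumps(body, sort_keys=True).encode())
    sig = hmac.new(SECRET, raw, hashlib.sha256).hexdigest().encode()
    return (raw + b"." + sig).decode()

def verify(token: str) -> dict:
    raw, _, sig = token.encode().rpartition(b".")
    expected = hmac.new(SECRET, raw, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("tampered state")
    body = json.loads(base64.urlsafe_b64decode(raw))
    if body["exp"] < time.time():
        raise ValueError("expired: reuse window closed")
    return body
```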
I agree, and I think you’ve named the core constraint cleanly.
The distinction I’m trying to draw isn’t “no memory ever,” but no implicit memory required for correctness. If there’s no memory, there’s nothing to corrupt. If memory is carried by the request, it can’t desynchronize from the request. That’s really the invariant I care about.
I also agree that cryptographic techniques make this tractable in practice. Signed tokens, capabilities, idempotency keys, and replay protection let you move state to the edge, while also keeping the core enforcement logic stateless. In that model, reuse can be a feature rather than a bug, as long as it’s explicit and verifiable.
Where I’ve seen things break down is when state is large or shared and gets merged implicitly. As you say, sometimes you can’t pass it around like a football, but even then accessing it by key and merging it in a disciplined and bounded way preserves the same principle: the system shouldn’t need to remember in order to act correctly.
So for me it’s less “stateless vs stateful” and more “enforced state vs negotiated state.” Once the system starts negotiating with history, entropy creeps in very fast.
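For the "too big to pass like a football" case, here is a rough Python sketch of keyed, verified access followed by a bounded merge. The store name (`BLOB_STORE`) and the whitelisted fields are hypothetical stand-ins for whatever storage and schema a real system would use.

```python
# The request carries only a content hash; the blob is fetched by that key,
# verified against the hash, and only whitelisted fields are merged in.
import hashlib
import json

BLOB_STORE: dict[str, bytes] = {}  # stand-in for an external blob store

def put_blob(payload: dict) -> str:
    raw = json.dumps(payload, sort_keys=True).encode()
    key = hashlib.sha256(raw).hexdigest()
    BLOB_STORE[key] = raw
    return key  # the request carries this key, not the blob itself

def merge_by_key(request: dict, key: str) -> dict:
    raw = BLOB_STORE[key]
    if hashlib.sha256(raw).hexdigest() != key:
        raise ValueError("blob does not match the key the request carried")
    state = json.loads(raw)
    # Bounded merge: only whitelisted fields from the referenced state
    # are allowed to influence the result.
    allowed = {k: state[k] for k in ("balance", "limits") if k in state}
    return {**request, **allowed}
```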