Loss of Agency Is a Scaling Failure in Modern Software Systems

(traulmen.blogspot.com)

3 points | by Traumen 6 hours ago

5 comments

  • Traumen 6 hours ago

    As systems scale, control increasingly shifts from users to opaque layers: policy engines, algorithms, and now LLM-based agents. This isn’t an anti-AI argument, but an engineering one: collapsing policy, logic, and execution creates systems that are harder to reason about, override, or trust. This post examines loss of agency as a recurring failure mode in modern software architectures.

    • jruohonen 6 hours ago

      These two are spot on:

      > Outputs are probabilistic but treated as deterministic.

      > [Systems that] replace explicit mechanisms with probabilistic ones.

      In other words, many things should be deterministic, not probabilistic. That's why the notion of probabilistic programming never really took off for most application domains.

      • Traumen 5 hours ago

        Mostly agree, with one nuance.

        Probabilistic systems do make sense at the edges — perception, ranking, recommendation, search, fuzzy matching. The problem starts when we let probabilistic outputs cross into domains that used to have hard contracts: policy enforcement, state transitions, or irreversible actions.

        What feels new isn’t probabilistic programming itself, but treating probabilistic inference as if it were a deterministic control layer. Once probability collapses into authority, you lose debuggability and guarantees.

        So the failure mode isn’t “probabilistic vs deterministic” per se, but where the probabilistic boundary is drawn — and whether it’s explicit.
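
        To make that boundary concrete, here is a minimal sketch of what I mean (the refund scenario, names, and thresholds are made up for illustration, not a real implementation): the model contributes a score, but the rule that authorizes anything irreversible stays explicit and deterministic.

            from dataclasses import dataclass

            @dataclass
            class RefundRequest:                      # hypothetical domain object
                amount: float
                reversible: bool

            def model_score(req: RefundRequest) -> float:
                # Probabilistic edge: a classifier or LLM would sit here.
                # Stubbed with a constant; the point is that it returns a confidence, not a decision.
                return 0.97

            def decide(req: RefundRequest) -> str:
                score = model_score(req)              # probabilistic input
                # Deterministic policy layer: the hard contract lives here and stays auditable.
                if req.amount > 500:                  # explicit rule, beats any score
                    return "escalate_to_human"
                if score >= 0.99 and req.reversible:  # the model only gates reversible actions
                    return "auto_approve"
                return "escalate_to_human"            # default is the safe, inspectable path

            print(decide(RefundRequest(amount=120.0, reversible=True)))   # -> escalate_to_human

        If the score is garbage, the worst case is an escalation, not an irreversible action; that is roughly what I mean by keeping the boundary explicit.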

        • jruohonen 5 hours ago

          > Probabilistic systems do make sense at the edges

          Sure, and that's why I used the wording "most application domains".

          > So the failure mode isn’t "probabilistic vs deterministic" per se, but where the probabilistic boundary is drawn -- and whether it’s explicit.

          If you take a look at

             https://arxiv.org/pdf/2512.22418
          
          it is difficult to draw any boundaries because a die is rolled at so many stages. Unpredictability is acknowledged, yet verification is itself probabilistic and unreliable; requirements and specifications are prompted on the fly in response to unpredictable outputs; debugging is psychologically stochastic, and deterministic tools are used to try to control stochastic outputs; and so on.
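
          To put rough numbers on that compounding (the per-stage success rates below are made up, not taken from the paper):

              # Made-up per-stage success rates for a chained pipeline:
              # spec prompting, generation, probabilistic verification, stochastic debugging.
              stages = [0.95, 0.90, 0.85, 0.90]

              end_to_end = 1.0
              for p in stages:
                  end_to_end *= p           # assumes independent failures, itself optimistic

              print(round(end_to_end, 3))   # -> 0.654; four "mostly reliable" dice rolls compound fast

          Every added stage pushes the boundary you would want to audit further out of reach.
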
  • Traumen 6 hours ago

    I’m not arguing against scale or automation. I’m arguing that many modern systems optimize for throughput and engagement while quietly removing inspectability, reversibility, and human interruptibility. Curious how others here think about “agency” as a system requirement, not a UX concern.
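
    One way to make "agency as a system requirement" concrete, as a minimal sketch (the names and the approval rule are hypothetical, not a prescription): every action leaves an audit entry, and anything irreversible is held for an explicit human decision.

        from dataclasses import dataclass, field
        from typing import Callable, List, Optional

        @dataclass
        class Action:                                  # hypothetical, for illustration only
            name: str
            execute: Callable[[], None]
            undo: Optional[Callable[[], None]]         # reversibility: None means irreversible

        @dataclass
        class Runner:
            audit_log: List[str] = field(default_factory=list)    # inspectability

            def run(self, action: Action, human_approved: bool = False) -> None:
                # Interruptibility: irreversible actions need an explicit human decision.
                if action.undo is None and not human_approved:
                    self.audit_log.append(f"BLOCKED {action.name}: awaiting human approval")
                    return
                self.audit_log.append(f"RAN {action.name}")
                action.execute()

        r = Runner()
        r.run(Action("delete_account", execute=lambda: None, undo=None))
        print(r.audit_log)    # -> ['BLOCKED delete_account: awaiting human approval']

    Whether the thing proposing the action is a rules engine or an LLM agent doesn't change that contract.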