1 comment

  • DanielKenessy 9 hours ago

    OP here. This is an experiment to replace standard RNN/Transformer attention mechanisms with a geometric control-theory approach.

    The core idea is a "Pilot" (a pointer) that physically navigates a 1D Riemann Helix, driven by gradient flux. We treat learning as a physics problem involving Inertia, Friction (a Deadzone), and a Stochastic Walk.
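
    Roughly, the update rule looks like this. This is a minimal Python sketch of the description above, not the real code (which is CUDA); every name and constant here is a hypothetical placeholder:

    ```python
    import math
    import random

    def pilot_step(pos, vel, grad_flux, *,
                   inertia=0.9, deadzone=1e-3, noise=0.01,
                   helix_len=2 * math.pi):
        """One step of the Pilot along the 1D helix coordinate.

        All parameter names and defaults are illustrative placeholders.
        """
        # Inertia: velocity carries over between steps, driven by gradient flux.
        vel = inertia * vel + grad_flux
        # Friction (deadzone): forces below the threshold do not move the Pilot.
        if abs(vel) < deadzone:
            vel = 0.0
        # Stochastic walk: a small random perturbation of the velocity.
        vel += random.gauss(0.0, noise)
        # Advance along the helix, wrapping around its period.
        pos = (pos + vel) % helix_len
        return pos, vel
    ```

    The deadzone is what makes this different from plain momentum SGD: sub-threshold gradient flux is absorbed entirely rather than accumulated.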

    It behaves like a quantum system in which the particle's location (the training state) and the wave function (the inference state) have fallen out of sync.

    The code is raw, hand-written CUDA. I'm looking for feedback on the inertia logic, and on whether anyone has seen this specific "Shell vs. Core" divergence before.