1 comments

  • amthorn 12 hours ago

    This demo uses standard transformer weights with a very small attention/KV component, but most temporal memory is handled by a stateful operator rather than a growing context window.
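    The idea of a fixed-size stateful operator replacing a growing KV cache can be sketched roughly as follows (purely illustrative; the demo's actual operator, state size, and decay scheme are not specified here -- this is just a generic linear-recurrence example):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    d = 8  # hypothetical hidden size

    # Hypothetical stateful operator: a decayed linear recurrence whose
    # state is one fixed-size (d, d) matrix, so memory stays constant
    # no matter how long the sequence gets.
    decay = 0.9
    state = np.zeros((d, d))

    def step(state, k, v):
        # Fold the new (key, value) pair into the fixed-size state.
        return decay * state + np.outer(k, v)

    def read(state, q):
        # Query the state instead of attending over a growing KV cache.
        return q @ state

    for t in range(1000):
        k, v = rng.standard_normal(d), rng.standard_normal(d)
        state = step(state, k, v)

    q = rng.standard_normal(d)
    out = read(state, q)
    print(state.shape, out.shape)  # state is still (8, 8) after 1000 steps
    ```

    With a standard KV cache the memory after 1000 steps would be proportional to 1000; here it stays at one d-by-d matrix, which is why this kind of operator runs cheaply on CPU.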

    Outputs are similar to a transformer's, while running super fast on CPU with much lower memory use.