My motivation for building Hekate is simple: I am done watching well-funded teams with 50+ people and a busload of PhDs produce engineering trash.
There is a massive, widening gap between academic brilliance and silicon-level implementation. You can write the most elegant paper in the world, but if your prover requires 100GB of RAM to execute a basic trace, you haven't built a protocol, you've built a research project that collapses under its own weight.
I don't have "strategic planning" committees or HR-mandated consensus. If Hekate's core doesn't meet my performance standards, I rewrite it in 48 hours. This agility is a weapon. I want to prove that a single engineer, driven by physics and zero-copy principles, can wreck the unit economics of a multi-million dollar venture-backed startup.
Disrupting inefficient financial models is more than fun—it's necessary. The current "safe" hiring meta (US-only, HR-compliant, resume-padded candidates) is a strategic failure. While industry leaders focus on compliance, state-sponsored actors like Lazarus are eating their lunch.
You don't need "safe" candidates. You need predators. You need the difficult, inconvenient outliers who don't need a visa to outcode your entire department. Hekate is a reminder that in deep-tech, capital is noise, but performance is the only signal that matters.
You should probably write this as a blog post or readme and submit the link instead. I can't provide any technical feedback since I don't even understand what a row is in this context.
I don't have "strategic planning" committees or HR-mandated consensus...
Look, if your code is better, just say it's better. But this kind of LinkedIn-slop conspiracist virtue signaling isn't a good look. It's fine to believe that, but you should never say it out loud.
Fair point on the tone. I'll trade the rhetoric for physics.
A "row" in this context is a single step of the Keccak-f[1600] permutation within the AIR (Algebraic Intermediate Representation) table. Most engines materialize this entire table in RAM before proving. At 2^24 rows, that’s where you hit the "Memory Wall" and your cloud bill goes parabolic.
Hekate is "better" because it uses a Tiled Evaluator to stream these rows through the CPU cache (L1/L2) instead of saturating the memory bus. While Binius64 hits 72GB RAM on 2^20 rows, Hekate stays at 21.5GB for 16x the workload (2^24).
The "committees" comment refers to the gap between academic theory and hardware-aware implementation. One prioritizes papers; the other prioritizes cache-locality. Most well-funded teams choose the easy path (more RAM, more AWS credits) over the hard path (cache-aware engineering).
If you want to talk shop, tell me how you'd handle GPA Keys computation at 2^24 scale without a zero-copy model. I’m genuinely curious.
lol, post the prompt that generated this
The prompt was: RUST_LOG=debug RUSTFLAGS="-C target-cpu=native" cargo run --release --example keccak --no-default-features --features "std parallel blake3"
The completion took 88s and 21.5GB RAM.
I don’t think it was.
Since the edit window is closed, I want to clarify the AIR structure for those asking about the "row" definition.
In Hekate's Keccak AIR, the relationship is ~25 trace rows per 1 Keccak-f[1600] permutation.
2^24 Rows = The raw size of the execution trace matrix (height).
~671k Permutations = The actual cryptographic workload (equivalent to hashing ~90MB of data).
The benchmark compares the cost to prove the same cryptographic work, regardless of internal AIR row mapping.
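For anyone who wants to check the arithmetic (the 136-byte rate assumes a SHA3-256-style sponge, which is an assumption for the estimate, not a quoted parameter):

```rust
// Sanity-check of the row-to-workload mapping above.
// rows_per_perm comes from the post; rate_bytes = 136 assumes a 1088-bit
// SHA3-256-style sponge rate.
fn main() {
    let rows: u64 = 1 << 24;
    let rows_per_perm: u64 = 25;
    let rate_bytes: u64 = 136;

    let perms = rows / rows_per_perm;             // 671,088 permutations
    let data_mb = perms * rate_bytes / 1_000_000; // ~91 MB absorbed

    println!("{perms} permutations, ~{data_mb} MB of hashed data");
}
```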
THE MANIFESTO: https://github.com/oumuamua-corp/hekate
Interesting work. This seems highly relevant for ZK systems that need to generate large proofs on commodity hardware. Streaming-first proving could be a key enabler for permissionless ZK infrastructure.
Exactly. If we can't prove 2^24 rows on a laptop, ZK will stay centralized forever. Hekate is my answer to the memory wall that forces teams into $2+/hour AWS instances. Proving should be a commodity, not a luxury.
Agreed. The scary part is that memory requirements quietly define who is allowed to be a prover. If ZK infra assumes 64–128GB RAM by default, decentralization is already lost, regardless of the cryptography. Streaming-first designs feel like a prerequisite for permissionless proving, not just an optimization.