I'm the original author of this open-source reasoning engine.
What it does: It lets a language model *close its own reasoning loops* inside embedding space — without modifying the model or retraining.
How it works:
- Implements a mini-loop solver that drives semantic closure via internal ΔS/ΔE (semantic energy shift); a minimal sketch of this loop is below
- Uses prompt-only logic (no fine-tuning, no API dependencies)
- Converts semantic structures into convergent reasoning outcomes
- Allows logic layering and intermediate justification without external control flow
Why this matters: Most current LLM architectures don't "know" how to *self-correct* reasoning midstream — because embedding space lacks convergence rules. This engine creates those rules.
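To make that concrete, here is a minimal sketch of the convergence loop in Python. Everything in it is illustrative: ΔS is approximated as cosine distance between embeddings of successive answers, and embed and refine are hypothetical stand-ins for a real sentence encoder and a prompt-only revision call, not the exact code in the repo.

    # Minimal sketch of the ΔS convergence loop (illustrative only).
    # `embed` and `refine` are hypothetical stand-ins: swap in any
    # sentence encoder and any prompt-only revision step.
    import numpy as np

    def delta_s(a: np.ndarray, b: np.ndarray) -> float:
        """ΔS proxy: 1 - cosine similarity between two answer embeddings."""
        return 1.0 - float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def embed(text: str) -> np.ndarray:
        """Stand-in for a real embedding model (deterministic toy vector)."""
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        return rng.normal(size=64)

    def refine(answer: str) -> str:
        """Stand-in for a prompt-only step that asks the LLM to
        re-justify and tighten its previous answer."""
        return answer  # a real version would call the model here

    def close_loop(answer: str, eps: float = 0.05, max_iters: int = 8) -> str:
        """Iterate until the semantic shift between successive answers
        falls below eps, i.e. the reasoning loop 'closes'."""
        prev = embed(answer)
        for _ in range(max_iters):
            answer = refine(answer)
            cur = embed(answer)
            if delta_s(prev, cur) < eps:
                break  # converged: answer stopped moving in embedding space
            prev = cur
        return answer

With a real encoder and revision prompt plugged in, the loop gives the model an explicit convergence rule instead of relying on a single forward pass.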
GitHub: https://github.com/onestardao/WFGY
Happy to explain anything in more technical detail!
If you can't even do the prompt engineering to adapt the AI to HN's style, it's hard to believe that you're doing this work in any meaningful way.
Actually, I'm really new here. I'll check the rules.
Besides the format, avoid unsupported claims like "the valuation could exceed $30M".
OK, thanks for the info.
Please don't post slop to Hacker News. This is a place for curious conversation between humans.
https://news.ycombinator.com/item?id=39528000
https://news.ycombinator.com/item?id=42976756
https://news.ycombinator.com/item?id=40569734
Thanks for the feedback.
Just to clarify: I'm a real human, not a bot. My native language is Chinese, and I did use AI to help polish some of the English—but the ideas, structure, and code are entirely my own.
I’m new to Hacker News and still learning how to present things in the right tone here.
If anything seems off, I genuinely welcome feedback. I’m here to share something I’ve been building with care. Thanks again.
I think bro forgot to take his meds
Haha, fair enough. Maybe I do sound like someone who skipped their meds.
But everything I posted comes from real effort. I’ve spent a long time building this—from the semantic theory to the open-source structure. It might look wild at first glance, but there’s real logic and care underneath.
If you give it a chance, I think you’ll find it’s more grounded than it seems.
You're welcome to leave any message here.