Already the title of your submission does not check out. Do you know how many clock cycles a 1 GHz CPU executes in one nanosecond? One. Just reading a function's input argument takes a "nanosecond-scale" amount of time.
> I'm a self-taught developer and researcher who left school at 16, and I've spent some time exploring a first-principles approach to system design for various frontier problems.
As much as I appreciate new ways of thinking, whenever I read "first-principles approach", my alarm bells go off. More often than not it just means "I chose to ignore (or am too impatient to learn about) all insights that generations of research in this field have made". The "left school at 16" and "self-taught" parts also indicate that. This may explain the hyperbole of the title as well, as it does not pass the smell test.
If you are looking for advice, here is mine: try not to ignore those who came before you. Giants' shoulders are very wide, very high up, and pretty solid. There is no shame in standing on them, but it takes effort to climb up.
What an amazing comment: criticism of the title without engaging with any of the content, with a side of character judgement.
Ok, since you're looking for sincere feedback.
Great vision, challenging the "scale" of current AI solutions is super valid, if only for the reason that humans don't learn like this.
Architecture: despite other comments, I am not so bothered by mmap (if read-only) as by the performance claims. If your total DB is 13 kB, you should be answering queries at amazing speeds, because you're just running code on in-cache data at that point. The performance claim means nothing here, because what you're doing is not performance-intensive.
Claims: a frontal attack on the current paradigm would at least have to include real semantic queries, which I don't think you're currently doing; you're doing language analytics, i.e. NLP. Maybe that's how you intend to solve semantic queries later, but since it's not what you're doing now, that should be clear from the get-go. Especially because the "scale" of the current AI paradigm has nothing to do with how tokenization happens, but rather with how the statistical model is trained to answer semantic queries.
Finally, the example of "Find all Greek-origin technical terms" is a poor one because it is exactly the kind of "knowledge graph" question that was answerable before the current AI hype.
Nevertheless, love the effort, good luck!
(oh and btw: I'm not an expert, so if any of this is wrong, please correct me)
> • Memory-Mapping (mmap): We treat the database file as if it’s already in memory, eliminating the distinction between disk and RAM.
Ugh, not another one...
Yep, another developer enthusiastically proposing mmap as an "easy win" for database design, when in reality it often causes hard-to-debug correctness and performance problems.
To be fair, I use it to share financial time series between multiple processes, and as long as there is a single writer it works well. It's been in production for several years.
Creating a shared memory buffer by mapping it as a file is not the same as mapping files on disk. The latter has weird and subtle problems, whereas the former just works.
To be clear, I am indeed mmap'ing the same file on disk, not using shared memory (shm). But there is only one thread in one process writing to it, and the readers are tolerant of millisecond delays.
> millisecond delays
I thought you said financial time series!
But yeah, this is a case where mmap works great: convenient, not super fast, single writer, and not necessarily super durable.
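The single-writer pattern described above can be sketched in a few lines. This is a minimal illustration, not the commenter's actual system (which presumably runs in a compiled language across real processes); the file layout, slot size, and values are invented for the example. Both mappings are `MAP_SHARED` under the hood, so the reader sees the writer's stores via the shared page cache without any copy or syscall per read.

```python
import mmap
import os
import struct
import tempfile

# Back the mapping with a small file (stand-in for a time-series file).
# mmap cannot grow a file, so pre-size it before mapping.
fd0, path = tempfile.mkstemp()
os.write(fd0, b"\x00" * 4096)
os.close(fd0)

# Writer side: map the file read-write and store the latest tick
# as a little-endian double at a fixed 8-byte slot.
wfd = os.open(path, os.O_RDWR)
wmap = mmap.mmap(wfd, 4096)  # default access is shared + writable
wmap[0:8] = struct.pack("<d", 101.25)  # the single writer

# Reader side: map the same on-disk file read-only. Because both
# mappings share the page cache, the write is immediately visible.
rfd = os.open(path, os.O_RDONLY)
rmap = mmap.mmap(rfd, 4096, access=mmap.ACCESS_READ)
latest, = struct.unpack("<d", rmap[0:8])
print(latest)  # 101.25

for m, fd in ((wmap, wfd), (rmap, rfd)):
    m.close()
    os.close(fd)
os.unlink(path)
```

Note the caveats the thread raises still apply: this relies on exactly one writer, and an 8-byte store is not guaranteed atomic across processes on every platform, which is why the readers tolerating millisecond-scale staleness matters.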
Why not, though? From what I can see in the docs, these databases are supposed to be static and read-only, at least when you use them on a device.
Page cache reclamation is mostly single-threaded. It's much simpler than anything you could build in user space, but it has no notion of per-page weights, etc.
Crossing into the kernel flushes branch-predictor state and the TLB, so it's not free at all.
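For the static, read-only case discussed above, a sketch of what "mmap the database" amounts to: after the initial page faults, every lookup is plain pointer arithmetic on mapped memory, with no per-read syscall (and hence none of the kernel-transition cost mentioned). The fixed-width record format here is invented purely for illustration.

```python
import mmap
import os
import tempfile

# Build a tiny "static database": three fixed-width (8-byte) records,
# written once and treated as read-only thereafter.
fd0, path = tempfile.mkstemp()
records = [b"alpha\x00\x00\x00", b"beta\x00\x00\x00\x00", b"gamma\x00\x00\x00"]
os.write(fd0, b"".join(records))
os.close(fd0)

fd = os.open(path, os.O_RDONLY)
db = mmap.mmap(fd, 0, access=mmap.ACCESS_READ)  # map the whole file read-only

def get(i):
    # A lookup is just an offset into the mapping: once the page is
    # faulted in, reads are ordinary memory accesses, no syscalls.
    return db[i * 8:(i + 1) * 8].rstrip(b"\x00")

rec = get(1)
print(rec)  # b'beta'

db.close()
os.close(fd)
os.unlink(path)
```

With a single read-only mapping there is no writer to race against, which is why this usage sidesteps most of the correctness problems raised elsewhere in the thread; the remaining cost is eviction pressure on the page cache for databases larger than RAM.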
No issue if you know what you are doing. Not sure about the author, but I've known very high-perf mmap systems running for decades without corruption or issues (in HFT/finance/payments).
Ctrl-F'd you here the moment I saw that in the article.
Not so sure; reading with mmap is OK, but simultaneous read/write operations are a bit tricky.
Really impressive work :)
The repo is 100% AI slop.
Advice to OP: lay off the Claude Code if your goal is to become an “independent researcher”. Claude doesn’t know what it’s doing, but it’s happy to lead you into a false sense of achievement because it’ll never tell you when you’re wrong, or when it’s wrong.
Bizarre, because a quick look at the code and commit log shows it was likely 100% coded by AI, so the author is not trying too hard to hide it; but they also seem to have forgotten to mention it anywhere in the README or the blog post.
Out of interest: can you elaborate how you analyzed the repo to come to this conclusion?
All of the code is imported in one commit. The rest of the commits delete the specs that, I guess, were used to generate the code. There's one commit adding code which explicitly says it was generated by Claude Code. There's basically no chance the whole codebase is not AI slop.
For those interested in the referenced spec:
https://github.com/RobAntunes/lingodb/blob/e8e56a2b2dfe19a27...
The specs themselves seem LLM-generated too, as in https://github.com/RobAntunes/lingodb/blob/5e3834de648debf08... – overuse of emojis, excitement, etc.