If you're looking for a 'click-through' experience, the JKU vis lab has long provided some awesome tools and visuals (ca. 2009) for interacting with the human genome[0][1].
[0]: https://jku-vds-lab.at/tools/
[1]: https://jku-vds-lab.at/publications/2009_bioinformatics_cale...
Amazing, and it's open source. Thanks for bringing this up.
Some sophisticated tools have come out of that lab. I wish we could put more effort towards developing tools similar to their embedding explorer[0].
[0]: https://youtu.be/yBCe8SqGwK8
Any grounding in medical truth? Is anything sourced to legitimate references, or is this entirely pulled from the model's general training on human anatomy?
100k lines in one day of “coding”? I think you already know the answer.
Exactly. I mean, I can attach a research agent to each file or commit to validate and confirm the values, though I guess that would just lend it the validity of internet information.
I am pulling from the model, and was thinking of attaching a research agent per file to validate it and add sourced, verified information.
It seems sound for getting the structure, but the actual values need source grounding and validation.
Just a PoC.
I wasn't sure what to expect, so I opened a random bit of the code.
Now this seems to mix a couple of things in the same module: I would suggest separating out dietary views from a model of the human body and its genetic heritage.

Scientific views may change over time based on new results, and even body properties like blood pressure or BMI are not constant per person but bound to vary; so perhaps a Body should be modeled as a view or snapshot of a set of time series?
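A minimal sketch of what I mean, with illustrative names (nothing here is from the repo): each property is a time series of measurements, and the "body" at any moment is just a query over those series.

```rust
use std::collections::BTreeMap;

// A measured value at a point in time, e.g. blood pressure or BMI.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Sample {
    value: f64,
}

// Each property is a time series keyed by timestamp (seconds, say).
type TimeSeries = BTreeMap<u64, Sample>;

// The Body is not a bag of constants but a set of time series.
struct Body {
    properties: BTreeMap<String, TimeSeries>,
}

impl Body {
    fn new() -> Self {
        Body { properties: BTreeMap::new() }
    }

    fn record(&mut self, property: &str, t: u64, value: f64) {
        self.properties
            .entry(property.to_string())
            .or_default()
            .insert(t, Sample { value });
    }

    // A "view" of the body at time t: the most recent sample of each
    // property at or before t.
    fn snapshot_at(&self, t: u64) -> BTreeMap<String, f64> {
        self.properties
            .iter()
            .filter_map(|(name, series)| {
                series
                    .range(..=t)
                    .next_back()
                    .map(|(_, s)| (name.clone(), s.value))
            })
            .collect()
    }
}

fn main() {
    let mut body = Body::new();
    body.record("bmi", 100, 24.1);
    body.record("bmi", 200, 23.6);
    body.record("systolic_bp", 150, 128.0);

    // At t=180 we see the bmi sample from t=100 and the bp from t=150.
    println!("{:?}", body.snapshot_at(180));
}
```

Revised scientific interpretations can then be layered on top as further views, without mutating the underlying measurements.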
I would like to encourage you to take a scientist's view: if you had not just one model (your own) but two, how would you evaluate which is "better"? In other words, the evaluation question. You could set a particular task and perhaps find that something works better with your model than with a full-text index of the textbook you used and a simple Lucene search interface.
Are you planning to connect your model to any kind of visualization? That should be useful.
Hah. Ashkenazis marked as not being lactose-intolerant? Interesting stuff.
The entire project is AI-coded. Thought of this level was not put into developing it.
This project reminds me of Matt Might's work (predating LLMs) on using techniques from Precision Medicine to help his son, who had a rare disease.
https://www.youtube.com/watch?v=Rt3XyeFHvt4 (poorly transcribed here: https://www.janestreet.com/tech-talks/algorithm-for-precisio...)
If I recall correctly, he used miniKanren along with formalized, structured data extracted from medical research. Unfortunately, his son has since passed away.
This is an incredible story; my heart goes out to him. He has done some amazing work from this information.
There's nothing that is not actionable; you can always do science.
How do we know this is accurate and not some big hallucination? Is the data sourced anywhere? Has anyone with a relevant background even skimmed through the code? It seems like a great idea in theory, but this execution is worrying.
Yes, it's very much an "I wonder if this would work," and it kinda did. For it to be taken with any sort of seriousness, the next step would be attaching a research agent to each commit.
I did spot checks on random files with research agents, and it seems to be OK for a Claude Code loop.
I'm not a Dr.
Please! Is anyone a doctor?
I mean, it is a pretty cool idea, but trusting an LLM to correctly implement an entire human body in software is a recipe for disaster. There's bound to be tons of hallucinations and errors.
Absolutely agree, but for the purposes of working out what ALDH2 deficiency is and clicking through it, it was successful.
It should absolutely have a research agent or human eyes on it for hallucinations and errors.
> for the purposes of working out what ALDH2 deficiency is and clicking through it, it was successful
Does your code model acetaldehyde metabolism?
The exercise is an interesting proof of concept for a click-through model of a biological system. But it's also a warning about trusting LLMs for understanding.
No, it didn't do click-through for this metabolism at first, but it read your comment and then added it, I guess: "examples/acetaldehyde_metabolism.rs". It's about to push this in a moment.
> It's about to push this in a moment
The point is that acetaldehyde metabolism is at the heart of your question: why do some people flush red with alcohol?
Reading the first reference in Wikipedia's article about alcohol flushing [1][2] would, I believe, have generated more understanding of the biochemistry involved. (Along with the fact that ALDH2 deficiency simply exacerbates something we all do: acetaldehyde is a big part of what causes hangovers.)
What that would not have done is demonstrate (a) a genuinely interesting way to "step through" a physical system and (b) the ease with which a biochemist might be able to do so. As a hack, a project, and a mode of communicating a model, I love this. What I'm objecting to is pitching it per se as a mode for understanding a phenomenon, in this case "what ALDH2 deficiency is."
[1] https://en.wikipedia.org/wiki/Alcohol_flush_reaction#cite_no...
[2] https://pmc.ncbi.nlm.nih.gov/articles/PMC2659709/
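For what it's worth, the core two-step pathway is simple enough to sketch in toy Rust (illustrative rates, not real kinetics, and none of this is the repo's code): ADH oxidizes ethanol to acetaldehyde, ALDH2 oxidizes acetaldehyde to acetate, and the deficient ALDH2 variant slows the second step, so acetaldehyde accumulates.

```rust
// Toy two-step model: ethanol -> acetaldehyde -> acetate.
// Rates are made up; only the qualitative behavior matters.
#[derive(Clone, Copy)]
enum Aldh2 {
    Normal,
    Deficient, // ALDH2*2 carriers have sharply reduced enzyme activity
}

struct State {
    ethanol: f64, // arbitrary concentration units
    acetaldehyde: f64,
    acetate: f64,
}

fn step(s: &mut State, aldh2: Aldh2, dt: f64) {
    // ADH: ethanol -> acetaldehyde (same for everyone in this toy model).
    let adh_rate = 1.0;
    // ALDH2: acetaldehyde -> acetate (much slower for the deficient variant).
    let aldh2_rate = match aldh2 {
        Aldh2::Normal => 1.0,
        Aldh2::Deficient => 0.05,
    };
    let oxidized = adh_rate * s.ethanol * dt;
    let cleared = aldh2_rate * s.acetaldehyde * dt;
    s.ethanol -= oxidized;
    s.acetaldehyde += oxidized - cleared;
    s.acetate += cleared;
}

fn peak_acetaldehyde(aldh2: Aldh2) -> f64 {
    let mut s = State { ethanol: 1.0, acetaldehyde: 0.0, acetate: 0.0 };
    let mut peak: f64 = 0.0;
    for _ in 0..1000 {
        step(&mut s, aldh2, 0.01);
        peak = peak.max(s.acetaldehyde);
    }
    peak
}

fn main() {
    // The deficient variant accumulates far more acetaldehyde -- the flush.
    println!("normal peak:    {:.3}", peak_acetaldehyde(Aldh2::Normal));
    println!("deficient peak: {:.3}", peak_acetaldehyde(Aldh2::Deficient));
}
```

The deficiency doesn't introduce a new process; it just throttles a clearance step everyone relies on, which is exactly the point about hangovers above.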
I think it's time we learned to stop publishing clearly, fully AI-generated "projects." Anyone can pull up Claude Code and type "model the human body in Rust," or whatever.
The old "Look what I built" thread has really bifurcated into "here's what I painstakingly crafted and maybe some lessons learned" and "look what I asked AI to make and it worked".
The latter feels a bit less hacker, akin to saying "I got someone on Fiverr to mod this game, look how cool it is." Sure, ideas are something, but as AI gets better this is less hacker and more just "tool worked."
AI will replace more of what we are doing, but I had fun with this, so I thought I would share.
It's more about the framing: if you posted AI imagery in a "cool imagery" sub, you'd probably be fine. If you posted it in a painting sub, that's probably no bueno.
I came to HN for "cool software" and "software as a craft/art", but they are being muddled here with blanket Show HN-type demos.
The AI bubble will pop, nobody will be able to afford AI code completion without VC money subsidizing companies like Anthropic, and it will all go the way of UML tools: a thing one old guy at the company, in short-sleeved plaid shirts, cargo khakis, and a George Lucas haircut, keeps insisting is the future 20 years later.
I think this is an outdated view. People are already running local models for code completion, and it will not be much longer until you can run code agents locally as well.
That was my first impression too, but not my conclusion. A project of this scale would take years if not for AI assistance, and OP is absolutely not trying to pass this off as a medical tool developed by professionals, but as a fun learning tool and interesting application of type systems and agents to solve a problem.
nomilk gets it.
This is 100% a hack and a fun learning tool. It is an experiment to see if we can model biological processes with Rust (specifically leveraging its strong type system), and to see what agents can do. In fact, the agent is listening to this thread, taking feedback, and changing the repo.
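As one concrete example of what the type system buys you (a hypothetical sketch with made-up names, not the repo's actual code): units become distinct types, so mixing them is a compile error rather than a silent bug.

```rust
// Units as zero-cost newtypes: you can't pass mmol/L where mg/dL is expected.
#[derive(Clone, Copy, Debug, PartialEq)]
struct MgPerDl(f64);

#[derive(Clone, Copy, Debug, PartialEq)]
struct MmolPerL(f64);

impl MmolPerL {
    // Conversion must be explicit. For glucose, 1 mmol/L = 18.016 mg/dL
    // (molar mass of glucose is ~180.16 g/mol).
    fn to_mg_per_dl(self) -> MgPerDl {
        MgPerDl(self.0 * 18.016)
    }
}

fn is_hyperglycemic(fasting_glucose: MgPerDl) -> bool {
    fasting_glucose.0 > 125.0 // common fasting threshold in mg/dL
}

fn main() {
    let reading = MmolPerL(7.5);
    // is_hyperglycemic(reading); // <- would not compile: wrong unit
    println!("{}", is_hyperglycemic(reading.to_mg_per_dl()));
}
```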
I appreciate this I really do.
But how will you see whether modelling biological processes with Rust actually benefits from a strong type system? What have you learned from this, versus what have you read from the promises of the chatbot?
I don't mean to be dismissive, but I think the real goal is "make something cool with an AI agent," and that's fine. But be honest about it.
This is hackernews, this is the place for people to hack stuff, and share.
Yes this is 100% a hack.
Maybe putting "AI" as a tag might help? But this is 100% a hack and an experiment for fun: testing out what agents can do and whether we can model biology with Rust.
For a physical spin on this sort of thing in OpenSCAD, see:
https://github.com/davidson16807/relativity.scad/wiki/Human-...
I wonder how much it would cost to pay a domain expert to review 95k lines of code. As a domain expert who codes for fun and loves rust, I can only say the answer is, "A lot."
Clickable? Like a... Hyperlink?
After "this meeting could have been an email," we get "this code could have been HTML."
The absurd things people come up with to meet their own needs are usually good indicators of products and services that want to exist.
True, but to me it seems the product is halfway there with org-roam/logseq/obsidian and that Rust code is the wrong way to start building it.
I'd try generating markdown to be rendered in logseq by teaching the AI how to link and whatnot in my AGENT.md (or whatever people call their project-local instruction/context file).
From outside, I'd not trust hallucinated stuff, but it'd be neat to start a project where knowledgeable humans did oversee all the proposed changes.
well yeah this is not itself the product, this is a demonstration of the need
Obsidian/etc really isn't it either, though; clearly OP wants to be able to do calculations with this stuff. They want both the knowledge graph AND an executable code environment. (I imagine Emacs can do both.)
But think more broadly. Imagine just
```
import <established knowledge>.anatomy
import <established knowledge>.high_energy_physics
import <established knowledge>.microeconomics
...
```
into a notebook-like environment, with good intellisense and completions. But not quite as a programming language—somewhere between that and a wiki.
Similar: for years I've been lugging around the idea of making a game like Civilization but where all of the different theories of history can be turned on/off as modules. Maybe going back to prehistory:
- did fire lead to cooking lead to big brains lead to tools lead to agriculture?
- or was it ice ages ending that lead to agriculture?
- or did oxygen levels change leading to more efficient brains?
- or were we Born to Run?
- or did women's hips change shapes to allow bigger brains?
- or perhaps 2001: A Space Odyssey occurred as written
- or Ancient Aliens...
Repeat for every other highly-debated period of history.
Somehow having all of these in the same modular system feels like it would metabolize them in a way that reading a bunch of separate theories can't really do. Same for OP's anatomy.
I like this idea. Add "tech trees": path dependence can be arbitrary. What if we had kept going with vacuum tubes and no transistors?
Yeah, ahaha. I mean, I just needed 2 pieces of information and got carried away. But it would be awesome to have a runnable human emulation.
Maybe this is the future. But I dread looking at a perfectly formatted yet sterile README with too many emojis for comfort.
It's one emoji
I mean, literally not true. There are 7. The problem is that most of the emojis there don't do anything for the content.
Emojis are not the core problem. Mindlessly letting Claude do the work and then farming karma on HN is.
Your reply mentioned "perfectly formatted yet sterile", which could just be someone paying more than 10 minutes of attention to the damn thing, and the emoji. The way you made it sound, it was full of smileys and trees and rocket ships. It's one check-mark emoji, used in a list of 5 items and at the end of 2 headers. And you didn't say anything about Claude.
It seems like you think the author wrote this by hand and paid a great deal of attention.
What do you think the chances are that Claude Code wrote the README?
I have an Asian friend who gets flushed when drinking, but I didn't know it was an Asian thing. TIL.