32 comments

  • gyomu a day ago

    > Werld drops 30 agents onto a graph with NEAT neural networks that evolve their own topology, 64 sensory channels, continuous motor effectors, and 29 heritable genome traits. Communication bandwidth, memory decay, aggression vs cooperation — all evolvable. No hardcoded behaviours, no reward functions; they could evolve in any direction.

    In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6.

    "What are you doing?", asked Minsky.

    "I am training a randomly wired neural net to play Tic-tac-toe", Sussman replied.

    "Why is the net wired randomly?", asked Minsky.

    "I do not want it to have any preconceptions of how to play", Sussman said.

    Minsky then shut his eyes.

    "Why do you close your eyes?" Sussman asked his teacher.

    "So that the room will be empty."

    At that moment, Sussman was enlightened.

    • urav 21 hours ago

      Love the MIT AI Koans. Minsky's actual words to Sussman were "well, it has them, it's just that you don't know what they are." And he's right, the room isn't empty.

      Werld's room has walls. The graph topology, energy mechanics, metabolic costs, seasons, those are all design choices. But those are the physics, not the behavior. I chose the laws of nature, not what agents do with them.

      Whether they cooperate or attack, broadcast or stay silent, grow complex brains or prune them down, that's selection, not me.

      The agents also aren't randomly wired like Sussman's net — they start with minimal NEAT networks and evolve structure through survival. So the preconceptions are there, I just tried to make them physics rather than policy.
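      For anyone curious what a "minimal NEAT start" looks like in practice, here's a rough sketch (hypothetical Python, not Werld's actual code; the class and method names are invented): each genome begins as a bare input-to-output net, and structure only appears through mutation.

```python
import random

# Hypothetical sketch of a minimal-start NEAT genome. Agents begin with a
# bare input->output net and only gain hidden structure through mutation;
# names are illustrative, not Werld's API.

class Genome:
    def __init__(self, n_in, n_out):
        self.n_in, self.n_out = n_in, n_out
        self.nodes = list(range(n_in + n_out))          # no hidden nodes yet
        # minimal topology: every input wired straight to every output
        self.conns = {(i, n_in + o): random.uniform(-1, 1)
                      for i in range(n_in) for o in range(n_out)}

    def mutate_add_node(self):
        """Split a random connection a -> b into a -> new -> b."""
        (a, b), w = random.choice(list(self.conns.items()))
        new = max(self.nodes) + 1
        self.nodes.append(new)
        del self.conns[(a, b)]
        self.conns[(a, new)] = 1.0   # NEAT convention: weight 1.0 into the new node
        self.conns[(new, b)] = w     # old weight out, roughly preserving behaviour

    def mutate_perturb_weights(self, sigma=0.1):
        for k in self.conns:
            self.conns[k] += random.gauss(0, sigma)

g = Genome(n_in=3, n_out=2)
print(len(g.nodes), len(g.conns))   # prints: 5 6
g.mutate_add_node()
print(len(g.nodes), len(g.conns))   # prints: 6 7
```

      The "preconception" lives in that starting wiring and the mutation operators themselves; selection only decides which variants persist.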

      Curious how you would approach removing those from an artificial sim like this?

      • svilen_dobrev 15 hours ago

        some hand-wavy thoughts..

        > it has them (preconceptions), it's just that you don't know what they are.

        further in this direction... the "thing" might evolve into some cyclic (or not) system, a bit like that LIFE game, emerging a tv-tennis-like ping-ponging, or whatever. How would you know there is such a thing? Just stats/counts do not tell. (Which pulls up another freaky question: how would you notice a different intelligence/world-order/culture/resemblance-of?)

        maybe feature: some stop-gap animation over world-map in time? Then, some pattern analysis over that.. History of the world, part one..
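        One cheap way to make that kind of cycle visible is to hash a snapshot of world state every tick and watch for repeats. A sketch, assuming some per-tick snapshot of (agent, state) is available; none of these names come from Werld:

```python
import hashlib

# Sketch: detect a repeating world state by hashing snapshots each tick.
# A repeat hash means the world has entered a cycle (assuming the snapshot
# captures everything that drives the dynamics).

def state_hash(snapshot: dict) -> str:
    # canonical string of (agent id, state) pairs, order-independent
    canon = sorted(snapshot.items())
    return hashlib.sha256(repr(canon).encode()).hexdigest()

def detect_cycle(snapshots):
    """Return (first_seen_tick, repeat_tick) for the first repeated state."""
    seen = {}
    for tick, snap in enumerate(snapshots):
        h = state_hash(snap)
        if h in seen:
            return seen[h], tick      # cycle length = tick - seen[h]
        seen[h] = tick
    return None

# toy world that starts ping-ponging after tick 0
states = [{"a": 0}, {"a": 1}, {"a": 2}, {"a": 1}]
print(detect_cycle(states))   # prints: (1, 3)
```

        Stats/counts alone can stay flat while the underlying state loops, which is exactly why hashing the full snapshot (or a time-lapse over the map) tells you more.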

        btw check these "interactive simulations", maybe some ideas about "loading" the agents with preconceptions :)

        https://ncase.me/

        • urav an hour ago

          think you've caught onto something there, after running it for a while (maybe not long enough) it can feel cyclical.

          I'll definitely add in the time-lapse, and some sort of pattern detection over the agent positions/actions - shouldn't be too tough given the graph structure of the world.

          for the ncase.me - super interesting way to visualise it - the polyworld example someone gave before also had a 'view' into their worlds. On the preconceptions, maybe running two parallel experiments and comparing the outputs might work best? Thanks for the pointers - let me know if you've got any ideas on how to approach it.

  • dash2 a day ago

    I think it looks fun, but at the same time I really wish you had written the readme yourself and not used an LLM. My view: if you can’t be bothered to write it yourself, why should I read it myself?

    • wibbily 11 hours ago

      He's not writing his comments himself either. Better off ignoring the whole thing, unfortunately

    • urav a day ago

      completely fair, and thanks for the nudge - expect an updated readme shortly

    • urav 21 hours ago

      got the updated version up, and again, appreciate the nudge there!

  • csmoak 21 hours ago

    this reminds me of Polyworld by Larry Yaeger, an artificial life sim where each creature has a vision system. I played around with this back in the early 2000s, though the hardware I had access to was basically insufficient to run it in any real way. it's nice to see its development has continued.

    https://en.wikipedia.org/wiki/Polyworld

    • urav 21 hours ago

      Haven't come across Polyworld before — just looked it up and it's super cool, especially for 1994. The vision system is an interesting design choice. Werld takes a different approach — graph topology instead of a 2D plane, and NEAT brains instead of Hebbian learning — but the core philosophy is the same.

      And yeah hardware has caught up a bit since the early 2000s, though my hard drive is having a hard time. Thanks for the reference, going to dig into Yaeger's papers.

      wonder if the Black Mirror episode was based on Polyworld then?

  • alexhans a day ago

    I love emergent behaviour and story telling. Anyone who has played City builders like Sim City or roguelikes like Dwarf Fortress knows how interesting, fun and even informative they can be.

    In a world where setting them up and letting rogue agents run rampant becomes relatively low cost and fast, I think focusing on the desired outcomes, the storytelling, and especially the UX for the human user is key, and maybe we can take some learnings from Will Wright on "Designing User Interfaces to Simulation Games" [1].

    I'm going to be unable to do much this weekend so I can't say I'll try to check this out (yet?) but I'd be interested in your own experiences so far. Any surprises? Things you'd like to do next? What's most fun/challenging?

    An actual report/writeup will probably resonate more than a repo for people who can't check it out easily or are not willing to.

    - [1] https://donhopkins.medium.com/designing-user-interfaces-to-s...

    • urav 21 hours ago

      Appreciate this! And yeah, the Will Wright talk is exactly what I was leaning into.

      Actually posted this on X 2 weeks ago, hosted the werld observatory publicly, and had Gemini stream a new chapter of the story in natural language every 10,000 ticks - so it felt like reading through a David Attenborough novel of werld being born.

      The most interesting thing from the last run was definitely the language and the behaviours; decoding what they were actually saying was a difficult one, as was noticing them group within their diverged species.

      Up next, I want to get the storytelling side up and running too - I kept running out of storage, and Cloudflare was playing up as usual - maybe get Gemini to visualise each chapter, and get an upgraded interface for the werld observatory.

      If you want to check out my previous attempt at streaming the story line - it's still on my X - https://x.com/im_urav?s=21&t=6Si-w-DvNJC7RfvSz2Aw-w

  • fuzzythinker 10 hours ago

    Some NEAT related links:

    https://sharpneat.sourceforge.io/ - OSS on GitHub, well maintained

    https://weightagnostic.github.io/ - WANN

  • e1ghtSpace 21 hours ago

    I like the idea of evolving agents from scratch with no "learning", they just evolve their ability to survive in the environment. Maybe one day it'll be advanced enough to see life evolve.

    How does the narrative story generator work?

    I played around a bit with NEAT networks, and tried to create a bitcoin trading bot, but the best I could do was a +10% gain over many months. I was hoping for at least 30% each month. Oh well, I guess it doesn't all just depend on past price history.

    • urav 21 hours ago

      Thanks! The story generator is pretty simple right now — every 10,000 ticks the sim snapshots population stats, brain complexity, species changes, births/deaths, and communication activity, and runs it through a template that writes a plain-English chapter.
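      A toy version of that snapshot-to-chapter step looks like this (the field names and wording are invented for illustration; the real generator's stats and template differ):

```python
# Toy sketch of the "snapshot every N ticks, render a chapter" flow
# described above. All field names and the template text are invented.

CHAPTER_TEMPLATE = (
    "Tick {tick}: the population stands at {pop} across {species} species. "
    "{births} were born and {deaths} died this era. "
    "Average brain size is {brain:.1f} nodes; {signals} signals were heard."
)

def write_chapter(snapshot: dict) -> str:
    return CHAPTER_TEMPLATE.format(**snapshot)

snap = {"tick": 10_000, "pop": 42, "species": 3, "births": 118,
        "deaths": 101, "brain": 9.4, "signals": 5_731}
print(write_chapter(snap))
```

      Swapping the template call for an LLM prompt gives the streamed-narration version, at the cost of the API tokens mentioned below.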

      Building out a more engaging version, and will hopefully stream it onto X again as a story - but this time without chewing API tokens every couple of seconds.

      NEAT for trading is interesting - on BTC I used a kernel method that worked quite well, closer to that <2 Sharpe on a monthly basis.

  • mpalmer a day ago

    > No hardcoded behaviours, no reward functions; they could evolve in any direction.

    If they can hack their reward functions won't this always converge on some kind of agentic opium den?

    • urav a day ago

      that would be true if there were a reward function. compute_reward() exists in the code, but it returns 0.0.

      they're only living/evolving to survive, and fork (reproduce).

      can't wirehead natural selection if the brain does nothing useful; they'd die and their genome would die with them.
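      A toy illustration of that point (everything here except the compute_reward stub from the comment above is invented): fitness is never computed, and the only selection pressure is that agents which run out of energy stop existing.

```python
import random

# Toy illustration of selection without a reward signal. compute_reward()
# mirrors the stub mentioned above; everything else is invented. No fitness
# is ever scored - agents that hit zero energy simply leave the population,
# and only survivors with enough energy get to fork.

def compute_reward(agent) -> float:
    return 0.0    # exists in the code, but nothing consumes it

def step(pop, tick_cost=1.0, fork_at=10.0, capacity=200):
    nxt = []
    for energy, efficiency in pop:
        energy += random.uniform(0, 2) * efficiency - tick_cost  # forage - upkeep
        if energy <= 0:
            continue                      # death removes the genome from the pool
        if energy >= fork_at:             # fork: child inherits efficiency +- noise
            nxt.append((energy / 2, efficiency + random.gauss(0, 0.05)))
            energy /= 2
        nxt.append((energy, efficiency))
    random.shuffle(nxt)                   # crude carrying capacity
    return nxt[:capacity]

random.seed(0)
pop = [(5.0, random.uniform(0.5, 1.5)) for _ in range(30)]
for _ in range(200):
    pop = step(pop)
print(len(pop))   # whoever is left got there on survival alone
```

      There's nothing to wirehead: no number the agent can inflate decouples it from the energy budget that decides whether it persists.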

  • fd-codier a day ago

    No images in the README...

    • Towaway69 21 hours ago

      And stupid leading emojis for the heading.

  • urav 20 hours ago

    Updated based on feedback — added screenshots to the README and upgraded the story generator for a better narrative. Thanks for all the input.

  • AreShoesFeet000 21 hours ago

    It is impossible to enforce a world free of heuristics, but this is certainly very cool.

    Reminds me of that Black Mirror episode with the circular QR code.

    • urav 21 hours ago

      completely agree on the heuristics (someone else mentioned the MIT Koan comment about this). And yeah Plaything is a little too close to home... no QR codes from werld agents yet though. Will keep you posted.

      • AreShoesFeet000 19 hours ago

        I just wanted to add: there’s no single piece of machinery that can void the human experience. A large collection of machinery can only delay the inevitable. Please have fun with your project.

        • urav 18 hours ago

          Agreed — and I wouldn't want to simulate the human experience. What I'm more curious about is what experience agents would create for themselves, with no concept of ours.

  • b800h a day ago

    This seems to start with 2 agents, and then all of their offspring die immediately. Any hints?

    • urav 21 hours ago

      should be starting with 30... if you're seeing 2, that might be an older default that I tried out (an Adam and Eve experiment). You can change it in the config too.

      On the dying immediately thing - offspring get a fraction of the parent's energy when they fork. If the parent forks too early (low energy), the kid spawns with barely anything and can't cover its tick cost + brain metabolic cost.

      That's working as intended — reproducing too early is a bad strategy and selection should punish it. But if everything dies instantly, something else might be off.
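      With made-up numbers, the failure mode looks like this (FORK_FRACTION, TICK_COST, and METABOLIC_PER_NODE are invented constants for illustration, not Werld's config):

```python
# Invented numbers illustrating the fork-energy mechanic described above:
# the child's starting energy is a fraction of the parent's, and it burns
# a flat tick cost plus a brain-size-dependent metabolic cost every tick.

FORK_FRACTION = 0.4        # share of parent energy handed to the child
TICK_COST = 0.5            # flat upkeep per tick
METABOLIC_PER_NODE = 0.02  # brain upkeep scales with network size

def child_survives_ticks(parent_energy: float, brain_nodes: int) -> int:
    """Ticks a newborn lasts with no foraging at all."""
    energy = parent_energy * FORK_FRACTION
    upkeep = TICK_COST + METABOLIC_PER_NODE * brain_nodes
    return int(energy // upkeep)

print(child_survives_ticks(10.0, brain_nodes=20))  # 4 ticks of headroom
print(child_survives_ticks(1.0, brain_nodes=20))   # 0: dead on arrival
```

      Under numbers like these, forking at low energy hands the child less than one tick of upkeep, which is exactly the "everything dies instantly" symptom if the starting population all forks too early.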

    • b800h 21 hours ago

      I take that back, I was falling asleep and then suddenly had a population spike. Very good!

  • midnitewarrior 21 hours ago

    This is one or two steps removed from Thronglets.

    • urav 21 hours ago

      hopefully it stays that way.... although I did start setting up a rig to host them on.

  • m0llusk 21 hours ago

    Arguably a powerful demonstration of why even simple creatures make use of parenting as a strategy to improve the success of their offspring.

    • BrandoElFollito 14 hours ago

      This is a very different conclusion from the one in the scientific documentary "Idiocracy"

    • urav 21 hours ago

      This actually showed up in the first run: agents that invest more energy into offspring vs ones that fork cheap and fast.

      The ones that survived population crashes were the ones passing down leaner, better-inherited brains. Cheap forking works when there's plenty of energy around, but falls apart in famine.