Waypoint-1: Real-Time Interactive Video Diffusion from Overworld

(huggingface.co)

91 points | by avaer 3 days ago

9 comments

  • ecmulli 2 days ago

    I don't have a big enough GPU, but I was able to play around with the model using this plugin https://github.com/daydreamlive/scope-overworld via RunPod - very cool!
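
    A minimal sketch of that route, assuming the weights live under a repo id like Overworld/waypoint-1-small (inferred from the Space name linked later in the thread; check the model card for the real id):

      # Pull the model snapshot into the local cache on a rented GPU box.
      # The repo id below is an assumption, not confirmed in the thread.
      from huggingface_hub import snapshot_download

      local_dir = snapshot_download(repo_id="Overworld/waypoint-1-small")
      print(f"weights cached at {local_dir}")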

  • roskelld 2 days ago

    The context seemed to last only a few seconds. I started from a mock-up screenshot of a fantasy video game, complete with a first-person weapon. As I moved forward, the weapon became part of the scenery and the whole world blurred and blended until it became some sort of abstract sci-fi space. Spinning the camera completely changed the look and style.

    I ended up with a UI that closely resembled the Cyberpunk 2077 one, complete with a VO modal popup. I guess it must have featured heavily in the training data.

    Really not sure what to make of this; it seems to have no constraints on concept despite the prompt (I specifically used the word "fantasy"), no spatial memory, no collision, and no understanding of landscape features needed to maintain a sense of place.

  • lcastricato 2 days ago

    BTW, there is a Gradio space here:

    https://huggingface.co/spaces/Overworld/waypoint-1-small

    And our streamed version:

    https://overworld.stream
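
    For scripted access, the Space can also be driven from Python with gradio_client; a minimal sketch (the Space's endpoint names aren't documented in the thread, so view_api() is used to discover them rather than assuming any):

      # Connect to the public Space and list its callable endpoints.
      from gradio_client import Client

      client = Client("Overworld/waypoint-1-small")
      client.view_api()  # prints each endpoint's name, inputs, and outputs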

  • avaer 2 days ago

    If you think this is cool, you might also be interested in https://github.com/MineDojo/NitroGen, which is kind of the opposite (and complementary).

  • Plankaluel 2 days ago

    An RTX 5090 for 20-30 fps with the small model: that is not as unreasonable as I had feared :D

  • dsrtslnd23 2 days ago

    10,000 hours of training data seems quite low for a world model?

  • khimaros 2 days ago

    This is like an open-weights version of DeepMind's Genie.

  • lcastricato 2 days ago

    Hi,

    Louis here, CEO of Overworld. Happy to answer questions :)