Video models are zero-shot learners and reasoners

(video-zero-shot.github.io)

103 points | by meetpateltech 4 days ago

22 comments

  • liuliu 4 days ago

    Very interesting read. I first learned of this method from a random Reddit post a while ago, and I'm very happy to see a systematic study of it (wish I had saved the original post somewhere to reference!).

  • ricardobeat 4 days ago

    Is it possible to use a model trained on video to output single frames?

    • efskap 4 days ago

      Yup, people have been using local video models like Wan2.2 to generate stills, finding that for some things, like human anatomy, they can outperform image generation models. Very cool how video training data helps build spatial understanding that applies even to still images.

      [0] https://www.reddit.com/r/StableDiffusion/comments/1mcm7qm/wa...

      • xnx 3 days ago

        > for some things like human anatomy

        Pursuit of prurient interests has pioneered so many technologies: photography, videography, AI

  • mallowdram 4 days ago

    What is specific about this model? These categories aren't what defines intelligence in animal life. Segmentation is a post-hoc assertion into visual science, not necessarily an inside-out process inherent to perception.

    These models aren't the path, they're cheap workarounds that exclude the senses.

    • marcellus23 3 days ago

      I don't understand what point you're making exactly. What categories do you mean? What do you mean by segmentation not necessarily being "an inside-out process inherent to perception"?

      • mallowdram 3 days ago

        The criteria for learning in this model have nothing to do with biological intelligence.

        Segmentation is a cog-sci holdover of vision science and Marr, and isn't how brains perceive scenes/objects/events.

        There is a relatively new approach to perception that ML has ignored that's integrative, coordinated, holistic. These new approaches, affective, coordinated-dynamical, ecological (optic-flow), are the likely routes to consciousness.

        What ML does with images like these is a retrofit; it's imposed ad hoc on imagery as a pretend form of intelligence.

        • mallowdram 3 days ago

          Inside-Out is a reversal of the stimuli model and a reversal of the cog-sci cognition model.

          https://academic.oup.com/book/35081

          The senses can't be excluded from consciousness or intelligence; otherwise the notion of intelligence is reduced to an arbitrary set of tests/criteria.

          Robotics and trained analogies, arbitrary ideas of affordance (which are not affordances) are definitely interesting, but they're not paths to intel. They're paths to homogenization posing as intelligence.

          This is the classic robotics idea of computer vision backing itself into a corner.

          https://docs.google.com/presentation/d/1Wkno8pKzWiav1a7c8IOr...

    • ACCount37 4 days ago

      [flagged]

      • mallowdram 4 days ago

        You again? Learn some manners, engineer.

        • ACCount37 4 days ago

          [flagged]

          • mallowdram 4 days ago

            [flagged]

            • mallowdram 4 days ago

              And, it will be easy. You've made junk tech from pseudoscience, classic houses of cards.

              • ACCount37 4 days ago

                It's either a bot or a random internet schizophrenic.

                • the_af 3 days ago

                  I think it's someone playing a prank, based on their comments history here (almost everything cryptic and full of non sequiturs), and also... look at their username: "mallowdram". Say it out loud ;)

                • mallowdram 4 days ago

                  [flagged]

  • pvillano 4 days ago

    To train an AI to solve problems, you train it to extrapolate the future from a starting state of having a problem and the intention to solve it.

    So much falls out of that reframing.

    • pvillano 4 days ago

      Training is first done as a general predictive model: situation => result

      Then it's fine-tuned on: situation + intent => action => result
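      The two-phase framing above can be sketched as a data-shaping step. This is a hypothetical toy (all names here are illustrative, not from any real library): it only shows how pretraining examples (situation => result) and fine-tuning examples (situation + intent => action => result) would be packaged, not any actual training.

```python
from dataclasses import dataclass

@dataclass
class Example:
    inputs: str   # what the model conditions on
    target: str   # what it learns to predict

def make_pretrain_example(situation: str, result: str) -> Example:
    # Phase 1: a general predictive model, situation => result.
    return Example(inputs=situation, target=result)

def make_finetune_example(situation: str, intent: str,
                          action: str, result: str) -> Example:
    # Phase 2: condition on an intent; the model must produce the
    # action taken and the resulting state.
    return Example(inputs=f"{situation} | intent: {intent}",
                   target=f"{action} => {result}")

pre = make_pretrain_example("door is closed", "door stays closed")
ft = make_finetune_example("door is closed", "open the door",
                           "push handle", "door is open")
print(pre.inputs, "->", pre.target)
print(ft.inputs, "->", ft.target)
```

      Under this framing, the fine-tuning stage only changes what the model conditions on (intent) and what it must emit (action plus outcome); the underlying predictive objective stays the same.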

  • ThouYS 4 days ago

    maybe we really are headed to The One Model that can do it all

  • miguel_martin 4 days ago

    This is incredible.