SHRDLU

(en.wikipedia.org)

59 points | by chistev a day ago

6 comments

  • Legend2440 a day ago

    The trouble with SHRDLU and all the other early "natural language" parsers is that they weren't really parsing natural language. They parsed a formal language with syntax designed to look kinda like English.

    At the time it was believed (by Chomsky, etc) that natural language could be described in formal terms and parsed by fixed rules. But after several decades of failed parsers (and the success of statistical methods like LLMs), it is clear that formal and natural are fundamentally different types of languages.
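The point above can be made concrete with a toy sketch. The function below (illustrative only; SHRDLU itself was written in Lisp/MicroPlanner, and these names and the template are my own) accepts commands of exactly one rigid shape that happens to read like English, and rejects everything else, however natural it sounds:

```python
# A toy grammar in the spirit of early blocks-world parsers: a fixed
# template that looks like English but only accepts one rigid shape.
# Illustrative sketch only -- not SHRDLU's actual grammar.
COLORS = {"red", "green", "blue"}
SHAPES = {"block", "cube", "pyramid"}

def parse_command(text):
    """Parse 'put the <color> <shape> on the <color> <shape>'.
    Returns a structured action dict, or None if the sentence
    falls outside the fixed grammar."""
    words = text.lower().strip(".!").split()
    # Rigid template: PUT THE ADJ NOUN ON THE ADJ NOUN
    if (len(words) == 8 and words[0] == "put" and words[1] == "the"
            and words[4] == "on" and words[5] == "the"
            and words[2] in COLORS and words[3] in SHAPES
            and words[6] in COLORS and words[7] in SHAPES):
        return {"action": "put",
                "object": (words[2], words[3]),
                "target": (words[6], words[7])}
    return None  # anything off-template fails, however natural it sounds

print(parse_command("Put the red block on the green cube"))
# → {'action': 'put', 'object': ('red', 'block'), 'target': ('green', 'cube')}
print(parse_command("Could you maybe stack that red one on top?"))
# → None
```

The second sentence is perfectly good English but falls outside the formal language, which is exactly the brittleness the comment describes.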

    • mcphage 9 hours ago

      When I was in college (back in the early 2000s) I took a course on computational linguistics as part of a linguistics degree. And we studied formal language style parsers, and statistical style parsers. And I really didn't like the statistical style parsing—coming from a math background, the formal language style parsing seemed a lot more elegant to me. But damn if the statistical parsing didn't work a lot better :-(

  • Liftyee a day ago

    Neat little example of what's possible even using a restricted and standardised language. One could imagine using this as an interface layer for humans to interact with robots or industrial systems today. Of course, it would still be slower than an old-fashioned control panel with tactile, individual controls - but there may be some niches in which this language-based contextual control method has advantages.

    • graypegg a day ago

      For industrial systems, PLC controllers programmed visually [0] are an alternative to text-based programming. It's surprisingly capable! I think this sort of fits the situation better, since every state the program can be in is visible all at once (each horizontal line is a pattern match case for the current state of the machine), and your inputs and outputs are immediately clear. In text, you're going to have to somehow introspect what nouns are available and what verbs they can do. That starts to feel like Smalltalk or something, with an object browser, [1] in which case, why not just use something general?

      Trying to handle a text-based programming language with an implicitly English subject/verb/object order also feels like it makes it a bit harder to grok for the Average Person (worldwide). For English speakers this is natural, but for people used to different grammar, it's nearly as difficult as learning a general-purpose programming language already.

      [0] https://en.wikipedia.org/wiki/Ladder_logic

      [1] https://en.wikipedia.org/wiki/Smalltalk#Browser
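The "every rung is visible at once" property can be sketched in a few lines. Below is a hypothetical Python model of a ladder-logic scan cycle (real PLCs use IEC 61131-3 languages, and the rung names and inputs here are invented for illustration): each rung pairs a condition over the current inputs with an output coil, and one scan evaluates every rung against the input image:

```python
# Illustrative sketch of a ladder-logic scan cycle (hypothetical names;
# real PLCs use IEC 61131-3 languages, not Python). Each "rung" pairs a
# condition on the current inputs with an output coil to energize.
rungs = [
    # (rung name, condition over inputs, output coil)
    ("motor_on", lambda i: i["start"] and not i["stop"], "motor"),
    ("alarm_on", lambda i: i["overheat"],                "alarm"),
]

def scan(inputs):
    """One PLC scan: evaluate every rung against the input image,
    producing a fresh output image. All rungs are laid out side by
    side, which is the visibility property described above."""
    return {coil: cond(inputs) for _, cond, coil in rungs}

print(scan({"start": True, "stop": False, "overheat": False}))
# → {'motor': True, 'alarm': False}
```

Because the whole rung list is the whole program, there is nothing hidden to introspect -- unlike a textual "noun/verb" interface where the available objects and actions have to be discovered.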

  • mcphage a day ago

    > I very carefully worked through, line by line. If you sat down in front of it, and asked it a question that wasn't in the dialogue, there was some probability it would answer it. I mean, if it was reasonably close to one of the questions that was there in form and in content, it would probably get it. But there was no attempt to get it to the point where you could actually hand it to somebody and they could use it to move blocks around. And there was no pressure for that whatsoever. Pressure was for something you could demo. Take a recent example, Negroponte's Media Lab, where instead of "perish or publish" it's "demo or die." I think that's a problem. I think AI suffered from that a lot, because it led to "Potemkin villages", things which - for the things they actually did in the demo looked good, but when you looked behind that there wasn't enough structure to make it really work more generally.

    Ah, that makes sense. I've always wondered why SHRDLU seemed so powerful—and yet nothing ever followed from it: you couldn't run it, and there was no "we took SHRDLU and improved it". Just the same couple of bits of example dialogue. I've wondered if maybe it was fake? But I guess it makes more sense that it was just a very brittle demo. Like the software equivalent of a genetic sport.

    • Peteragain a day ago

      I'm not sure the situation has changed for AI. For a real scientific agenda we need to at least attend to the things a demo can't do, not just what it can. The trouble with such analysis of current AI systems is that the negative examples are instantly included in the training data.