2 comments

  • jqpabc123 11 hours ago

    In other words, answers derived from statistical processes are not very reliable.

    Who knew?

    In some ways, LLMs are anti-computers. They negate much of the utility that made computing popular --- instead of reliable answers at low cost, we get unreliable answers at high cost.

  • richrichie 11 hours ago

    It is wild how humanised neural networks have become! The use of terms like “lying” or “hallucination”, even in research settings, is going to be problematic. I can’t articulate it well, but it is going to restrict our ability to problem-solve.