> individuals rarely pause to consider what information they may be missing, they assume that the cross-section of relevant information to which they are privy is sufficient to adequately understand the situation.
Yes, this is a chronic dysfunction.
I like to say people do not reason; they look for reasons, satisfying stories to fill the void of ignorance.
Voltaire rolls in his grave as his dream for an age of reason devolves into an orgy of reasons.
Would love to see how LLMs fare on this type of study. Paradoxically, it may be comforting to learn they are at most as overconfident as humans in situations with limited information. Perhaps we could assess models on an overconfidence scale as a proxy for their proclivity to hallucinate.