One interesting angle here is that the whole engine is built around explicit structural rules (AST → NNF via De Morgan) rather than black-box search.
I’d be curious how people here would compare this kind of tree-based, fully inspectable reasoning with today’s neural approaches to “explainability”.
Happy to go into details about the algorithms or the GUI design if anyone’s interested.
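For concreteness, here's a minimal sketch of what I mean by the NNF step (not the actual engine code, just an illustrative version over a toy propositional AST): negations are pushed down to the literals using De Morgan's laws plus double-negation elimination, so every rewrite is a visible structural transformation on the tree.

```python
from dataclasses import dataclass

# Toy propositional AST: variables, negation, conjunction, disjunction.
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Not:
    child: object

@dataclass(frozen=True)
class And:
    left: object
    right: object

@dataclass(frozen=True)
class Or:
    left: object
    right: object

def to_nnf(node):
    """Push negations down to the literals (negation normal form)."""
    if isinstance(node, Var):
        return node
    if isinstance(node, And):
        return And(to_nnf(node.left), to_nnf(node.right))
    if isinstance(node, Or):
        return Or(to_nnf(node.left), to_nnf(node.right))
    if isinstance(node, Not):
        inner = node.child
        if isinstance(inner, Var):
            return node                       # negated literal: already NNF
        if isinstance(inner, Not):
            return to_nnf(inner.child)        # double-negation elimination
        if isinstance(inner, And):            # De Morgan: ¬(A ∧ B) → ¬A ∨ ¬B
            return Or(to_nnf(Not(inner.left)), to_nnf(Not(inner.right)))
        if isinstance(inner, Or):             # De Morgan: ¬(A ∨ B) → ¬A ∧ ¬B
            return And(to_nnf(Not(inner.left)), to_nnf(Not(inner.right)))
    raise TypeError(f"unknown node: {node!r}")

# ¬(p ∧ ¬q) rewrites to ¬p ∨ q
print(to_nnf(Not(And(Var("p"), Not(Var("q"))))))
```

Because every step is a pure tree rewrite like this, the full derivation can be logged and replayed, which is where the inspectability claim comes from.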