A user digests diverse information to make sense of a concept. Making sense of this information involves a pattern of questions: What is the concept? Why does it work this way? How does it work? What changes can I make to it? Why does it not work another way?
This investigation examines the kind of information users require to reasonably understand and reason with an AI’s explanation. Specifically, do users find counterfactual information useful in the sensemaking process? Do they require it?