What do users need to understand an AI’s explanation?

Imagine you’re Pat. You’re a pilot at an aviation company called FlyAir, and there has recently been a plane crash. You’re part of the committee tasked with finding out what went wrong. The twist is that one of the plane’s components is an AI system, and it’s your job to determine whether it bears any responsibility for the crash.

That’s the problem I’m trying to solve: How do we help domain experts like Pat understand and successfully debug artificially intelligent systems when they may have no knowledge of the underpinnings of AI?

To begin the investigation, let’s first establish the specific domain and the AI system involved.

The AAR/AI process that helped users navigate the AI's explanation and domain

The explanation from the AI (a reinforcement learning agent)