Activity: Talk or presentation › Oral presentation › science to science / art to art
Description
This paper presents a brief overview of requirements for the development and evaluation of human-centered explainable systems. We propose three perspectives on evaluation models for explainable AI: intrinsic measures, dialogic measures, and impact measures. The paper outlines these perspectives and examines how this separation might be used for explanation-evaluation benchmarking and for integration into design and development. We conclude by proposing several avenues for future work.
Period
2021
Event title
Modeling and Reasoning in Context Workshop: Human-Centric and Contextual Systems