Discussion about this post

Nick Diakopoulos

So we let AI systems plead the 5th? Hmmm. IMO we can’t just talk about a narrow explanation from an ML system — agreed that that alone won’t clarify much. Rather, we should look at explanations from a systems-based perspective; how else could we establish any (however imperfect) narrative of causality? This also reminds me that I need to write more on distinguishing retrospective vs. prospective accountability, because prospective accountability doesn’t rely so heavily on explanation… more to come.

Christopher Riesbeck

Very nice broad summary of the key issues. I do wonder, though, why we need to grapple with the question of how AI systems can explain and justify their behavior. Explanations for decisions and predictions are as suspect coming from an AI system based on machine learning models as they are coming from people. Such introspections are at best guesswork, even if we exclude prevarication. Someone's account of why they did something has ultimately had little value in constructing the causal chains of accountability. Why should it be different for AI?
