The full 10,000-word document can be downloaded from my GitHub here.

Abstract

As algorithmic decisions play increasingly consequential roles in our lives, it is reasonable that we are offered explanations in some cases. First, then, there must be an account of algorithmic explanation, so that we can evaluate whether something is an explanation at all, and then assess its quality. Most accounts of explainable artificial intelligence offer a somewhat narrow view: a specific explanation type or mechanism, which by itself doesn't seem to be a general account. I argue for a pragmatic account of explanation, based on Bas van Fraassen's work. Crucially, this makes explanation context-dependent and gives wide scope to what can count as a good explanation, depending on the context in which it is requested. This approach recognizes the social environment in which explanation questions are asked, and it may unify many different existing accounts.

Overview

In the following dissertation, I will defend an account of algorithmic explanation that clarifies how algorithmic explanations ought to be defined, and gives normative guidelines for making less ambiguous requests for explanations.

In Part I, I will motivate why explanations are needed in some cases. Without making a strong claim about the specific circumstances in which someone ought to be given an explanation for an algorithmic decision, I will make a weaker argument: that there seem to be at least some cases where an explanation is desirable. If so, there ought to be a definition that attempts to unify the vast array of possible explanations. I will consider a variety of question/answer pairs that I've devised to show the wide range of explanations that may be requested about an algorithmic decision.

After motivating the need for a more general account of algorithmic explanation that can accommodate my examples, in Part II I will turn to a variety of accounts of algorithmic explanation found in the literature on Explainable AI (XAI). Crucially, I think not enough attention is paid to the philosophy literature on scientific explanation. I will examine some accounts from philosophers of science and ask how well they could serve as accounts of algorithmic explanation. The verdict is that none of them can accommodate the wide variety of question/answer pairs! I don't claim that they are deeply wrong, just that they are all too narrow in focus and can't serve as the general definition we're after in the first place.

In Part III, I will explain Bas van Fraassen's account of pragmatic explanation in detail, expanding on his examples and weaving it together with the question/answer pairs from earlier. His framing of an explanation as an answer to a question, with a focus on the 'context' in which it arises, is very powerful: it can account for the same question receiving quite different, yet seemingly correct, answers. This account of explanation, without much modification, can serve as a general account of algorithmic explanation, provided the topic, contrast class, and relevance relation are all made clear from the outset.

Finally, in Part IV, I suggest a framework for formulating why-questions about algorithmic decisions that forces the asker to be clear about what they're asking. I claim that in the specific case of pragmatic algorithmic explanation there are four central features (algorithmic system, action task, input, and output) that are needed to specify the question and narrow down the context; a sketch of how these pieces fit together follows below. I suggest that if we insist that requests for algorithmic explanations take this form, then we have taken a substantial step toward clarifying what is being asked.
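To make this framework concrete, here is a minimal sketch in Python of what a fully specified why-question might look like. The class and field names are my own illustration, not the dissertation's notation: the four central features named in Part IV appear alongside van Fraassen's contrast class and relevance relation.

```python
from dataclasses import dataclass

# Hypothetical sketch, not code from the dissertation: the first four
# fields mirror the four central features (algorithmic system, action
# task, input, output); the last two come from van Fraassen's account.

@dataclass
class WhyQuestion:
    system: str                 # the algorithmic system, e.g. a scoring model
    action_task: str            # the task the system was performing
    input_: str                 # the input (case) under discussion
    output: str                 # the decision actually produced (the topic)
    contrast_class: list[str]   # the alternatives the output is contrasted with
    relevance_relation: str     # what kind of answer would count as relevant

    def render(self) -> str:
        """Phrase the request as an explicit contrastive why-question."""
        alternatives = ", ".join(self.contrast_class)
        return (
            f"Why did {self.system}, performing {self.action_task} "
            f"on {self.input_}, produce {self.output} "
            f"rather than {alternatives}? "
            f"(relevant answers: {self.relevance_relation})"
        )


# Example: the same decision yields different questions depending on
# the contrast class and relevance relation supplied by the context.
q = WhyQuestion(
    system="a credit-scoring model",
    action_task="loan approval",
    input_="applicant A's file",
    output="a rejection",
    contrast_class=["an approval"],
    relevance_relation="features of the file that drove the score",
)
print(q.render())
```

Requiring all six pieces before a question counts as well-formed is the point of the framework: two askers can share the same system, input, and output and still be asking different questions, because their contrast classes or relevance relations differ.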