Explaining Explainable AI

Ella Hafermalz and Marleen Huysman

KIN Center for Digital Innovation

Vrije Universiteit Amsterdam


When Google Maps tells you to turn right in 200 metres, you don’t wonder ‘why’ it is giving you that instruction. The application has delivered you to your destination in the past, and there is no reason to assume it will not do so again. However, if an algorithmic system denies your bank loan or rejects your job application, you are likely to want an explanation. Yet often there is no clear answer available. Not even an engineer can ask a deep learning algorithm: why did you do that?

Today even algorithms ‘know more than they can tell’. There is increasing awareness that this algorithmic ‘opacity’ (Burrell, 2016) is problematic. So-called ‘black box’ algorithms that work via deep learning draw on large sets of data to create their own models of reality in a way that generates remarkably accurate predictions. These models, for example neural nets, become so complicated as they optimize themselves that even the scientists ‘in charge’ cannot say exactly why a model can identify a picture of a cat with such high accuracy. Black box algorithms can also conceal biases that emerge from training data, as when Amazon was forced to abandon its talent selection algorithm because it had learned from past hiring patterns to discriminate against female applicants (Dastin, 2018). There is further concern that these systems are given too much autonomy, especially because they cannot be questioned about their actions when a mistake is made. An extreme example is an autonomous weapon mistakenly firing on a civilian: can the validity of the action be assessed when it is not possible to interrogate the rationale that underpinned it (Russell et al., 2015; Schulzke, 2013)? Such scenarios are prompting a response from various communities, including regulators, ethicists, and computer scientists. All recognise that developments in Artificial Intelligence (AI) are unlikely to slow down, but that there is a need to ensure that these developments remain human-centric (Michal et al., 2009; Ohsawa and Tsumoto, 2006; Rosenberg, 2016), in line with social values, and, to this end, explainable (Santiago and Escrig, 2017; Doran et al., 2017). While this conversation is multi-disciplinary, the work and organizational perspective is often missing from public discourse on how to make AI more responsible. This is surprising, given that organizations are a prime application context for new AI technologies (Faraj et al., 2018; Orlikowski, 2016; Lee, 2018).

In this discussion paper, we show that the notion of ‘explanation’ is emerging at the core of multi-disciplinary responses to the problem of opaque ‘black box’ deep learning algorithms (Burrell, 2016). We critically inspect this notion of explanation as it appears in a much-discussed EU Commission text, the Ethics Guidelines for Trustworthy AI, and in publicly available documents outlining a major research project funded by DARPA on “Explainable AI”. Through our initial analysis of these texts and surrounding discourses, we first show that there is an apparent disconnect between an ethically motivated understanding of Explainable AI and a technically motivated one.

In particular, we ask: Who is the user of an explanation in the context of AI at work, and how does this relation change the nature of what an explanation is? And, relatedly, what is or could be the purpose of an explanation in these contexts, and what transformations do varied purposes enact upon the nature of explanation? As these unresolved questions are identifiable as such from a relational and processual perspective, we wish to point to the potential that scholars of work and organizing have to contribute productively to the explanation-driven response to ‘inscrutable’ AI (Introna, 2016).


Burrell, J. (2016). “How the machine ‘thinks’: Understanding opacity in machine learning algorithms”, Big Data & Society, 3, 2053951715622512.


Dastin, J. (2018). “Amazon scraps secret AI recruiting tool that showed bias against women”, Reuters, 10 October.

Doran, D., S. Schulz and T. R. Besold (2017). “What does explainable AI really mean? A new conceptualization of perspectives”, arXiv preprint arXiv:1710.00794.


Faraj, S., S. Pachidi and K. Sayegh (2018). “Working and organizing in the age of the learning algorithm”, Information and Organization, 28, 62-70.


Introna, L. D. (2016). “Algorithms, governance, and governmentality: On governing academic writing”, Science, Technology, & Human Values, 41, 17-49.


Lee, M. K. (2018). “Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management”, Big Data & Society, 5, 2053951718756684.


Michal, P., D. Pawel, S. Wenhan, R. Rafal and A. Kenji (2009). “Towards context aware emotional intelligence in machines: computing contextual appropriateness of affective states”, In Proceedings of the Twenty-First International Joint Conference on Artificial Intelligence (IJCAI-09), pp. 1469-1474.


Ohsawa, Y. and S. Tsumoto (2006). Chance discoveries in real world decision making: data-based interaction of human intelligence and artificial intelligence, Springer.


Orlikowski, W. J. (2016). “Digital work: a research agenda”. In B. Czarniawska (Ed.), A Research Agenda for Management and Organization Studies. Cheltenham: Edward Elgar.


Rosenberg, L. (2016). “Artificial Swarm Intelligence, a Human-in-the-loop approach to AI”, In AAAI, pp. 4381-4382.


Russell, S., S. Hauert, R. Altman and M. Veloso (2015). “Ethics of artificial intelligence”, Nature, 521, 415-416.


Schulzke, M. (2013). “Autonomous weapons and distributed responsibility”, Philosophy & Technology, 26, 203-219.


Santiago, D. and T. Escrig (2017). “Why explainable AI must be central to responsible AI”, Accenture. Available at: https://www.accenture.com/us-en/blogs/blogs-why-explainable-ai-must-central-responsible-ai (Accessed: 1 June 2019).