Passive and active responsibility for AI in practice

Paul Kuyer (Dublin City University) and Otto Koppius (Erasmus University Rotterdam)

Extended abstract, submitted to the Reshaping Work conference 2020 (Organization and Management division)



The ethics of artificial intelligence (AI) as a field aims to formulate principles and practices to govern prosperous, humane, fair, trustworthy and beneficial AI. Multiple scholars have proposed core principles and values to guide AI (Floridi et al., 2014; EU, 2019). Responsibility is considered an integral part of ethical AI. However, responsibility is often defined too broadly, making it unclear how responsibility is understood in everyday practice by the designers, developers, data scientists, managers and policy officers who work with AI. 

The aim of this investigation is to develop a more nuanced theoretical conceptualisation of responsibility that does justice to the empirical reality, by investigating the question: how do organisations view their responsibility for AI and advanced analytics? Bridging the gap between theory and empirics matters because it is a crucial first step towards the creation of responsible and ethical AI. 

This study investigates two organisations in a multiple case study design. The first is a Dutch bank that uses AI and advanced analytics in multiple projects. The second is a medium-sized hospital that implements AI to analyse and construct medical images (chest CT scans). The hospital works closely with a multinational technology producer to implement the AI. 

Across the two case studies, 16 semi-structured interviews were held with developers (data scientists, designers, engineers) as well as controllers (managers, policy officers). Based on the collected material, the situations of various stakeholders are described. These situations are then interpreted to extract empirically validated constructs. 

Our first main finding is that when asked about responsibility, most respondents understand this to be about who will be blamed for (unintended) harmful consequences. This type of responsibility is called passive responsibility. Passive responsibility concerns who will be blamed or held liable for harmful consequences (Van de Poel & Royakkers, 2011). It is a backward-looking concept, linked to roles within an organisation. Respondents find it easy to state who was responsible in this passive sense for a given AI project or tool. 

However, when asked about the larger implications of AI beyond the immediate work application, most respondents recognise a second form of responsibility, called active responsibility. Active responsibility is about anticipating possible harmful consequences in order to achieve positive results (Van de Poel & Royakkers, 2011): a person does not merely act to fulfil the strict duties of his or her professional role, but acts with care for the possible consequences of the work. Active responsibility is thus both a moral and a forward-looking concept.

Following this analysis, four types of agents can be distinguished. 1) Leaders, who are both actively and passively responsible. Leaders take proper measures to prevent algorithmic harms, and take the blame should they occur. 2) Hired guns, who are neither actively nor passively responsible. Hired guns have no responsibility in either sense of the word; they simply create what is asked of them. 3) Helpers, who are responsible only in the active sense. Helpers will not be (formally) blamed, but still anticipate and try to mitigate possible negative consequences. 4) Ignorants, who are passively but not actively responsible. Ignorants will be blamed for the negative consequences of the AI tools, yet are not actively involved in creating them and do not anticipate these consequences. 

Our second main finding is that all stakeholders, when asked about active responsibility, interpret it solely with respect to their own expertise. This implies that what it means to be actively responsible varies per role in the organisation. For example, for a data scientist, active responsibility means faithfully translating project goals into code while having regard for the consequences of his design choices (methods, data inclusion and filtering, etc.). For a radiologist who purchases an AI-driven CT scanner, active responsibility means making sure she has sufficient knowledge of the tool and its future consequences before implementing it. Conversely, for the vendor of the CT scanner, active responsibility means anticipating the consequences that may arise from the use of the machine and clearly formulating the intended-use policy. 

This analysis provides an empirically grounded and nuanced way to speak about responsibility in the field of AI ethics. It shows that responsibility is not solely about assigning blame (passive responsibility), and that stakeholders can actively take responsibility, even without being involved in the design of an algorithm and without referring to procedures and liabilities. However, this active responsibility is anchored to the stakeholder's own expertise, creating potential coordination issues when stakeholders with different expertise have to develop a shared mental model of responsibility for AI. Our proposed typology of agents can be instrumental in facilitating such a discussion about what each stakeholder in an organisation can do to anticipate the consequences of AI. In this way, the proposed concepts will contribute to a better understanding of responsibility for AI within organisations.