Doubting the Diagnosis: How Artificial Intelligence Increases Ambiguity during Professional Decision Making

Sarah Lebovitz, Natalia Levina, and Hila Lifshitz-Assaf New York University, Stern School of Business 

Division: Organizations and Management 

EXTENDED ABSTRACT

Technological developments in artificial intelligence (AI) promise continuous improvement in decision-making, problem-solving, and reasoning capabilities that edge ever closer to human capabilities. In this context, AI generally refers to a wide range of emerging technologies employing algorithms and computer-aided systems that detect patterns in data sets of past behaviors and link these patterns to potential future outcomes. Particular characteristics of AI have elicited an intense response from practitioners and scholars alike. Yet amidst this exploding discourse, we lack a deep understanding of how AI impacts individuals, groups, and organizational outcomes in practice. This study addresses the growing need for phenomenon-based examination of AI adoption and use in organizational life by investigating how AI tools are used in professional judgement work by diagnostic radiologists.

Given the increasing ability of AI to mimic human reasoning, studying contexts in which actors use such tools to accomplish work involving high degrees of human input and professional judgement may reveal particularly meaningful insights. Some scholars predict and debate how such technologies will potentially eliminate occupational categories, guided by assumptions that AI tools will be capable of performing myriad work processes in which professional judgement is required. However, prior research on technology in organizations documents a consequential gap between the expectations for new technology and its eventual use. Therefore, it is imperative to move the scholarly discussion beyond predictions towards understanding what is happening in work contexts where AI adoption and use are already underway.

The field of radiology is expected to undergo dramatic transformation in the age of AI. This field is at the cutting edge of adopting emerging technologies, including AI, which makes this setting particularly relevant for learning about the unfolding nature of AI use in professional knowledge work. This paper is based on an inductive field study within three sections of a Department of Radiology in the United States. Across all three sections, AI tools were implemented to assist radiologists in determining diagnoses, which involved high-stakes decision making that had the potential to dramatically impact patients’ lives as well as radiologists’ professional reputations. We analyze the use and impacts of three specific AI tools, which varied along numerous dimensions, including length of time in use, cost of error and margin for acceptable error in the decision-making context, financial incentive structure, technical architecture, decision complexity, and so forth. Despite these differences, each tool similarly lacked transparency as to how it made its diagnostic determinations, which had implications for the degree to which professionals incorporated AI in their final decisions.

We reveal how, despite expectations that AI would automate or dramatically speed up decision making, AI induced increased ambiguity and doubt as radiologists struggled to reconcile conflicting information while producing time-sensitive medical diagnoses. Lacking crucial information about how the tool arrived at its diagnostic determinations, radiologists often overruled the tool during their decision-making process. They weighed the potential for AI to introduce new errors against its potential to help them avoid their own errors. The costs of errors in this context were high – missing cancer diagnoses, exposing patients to additional costly procedures, facing legal blowback – and professionals also bore sole legal accountability. This combination led to radiologists’ reluctance to alter their professional judgement towards AI results, which they were unable to interrogate, understand, or justify.

This study contributes to the nascent understanding of the adoption and use of emerging intelligent technologies in professional decision making, especially that involving professional judgement. The implementation of AI in this context did not result in a tidy story of work automation or complete resistance; instead, the use of such tools led to new configurations of professionals and AI working in partnership to make decisions with life-or-death consequences. In particular, this study illuminates how issues of transparency are particularly salient and consequential in decision-making tasks that involve information generated by opaque AI tools. This lack of information intensified the degree of ambiguity radiologists faced and was critical to their limited integration of the tools’ assessments into their decisions. Opacity was also critical to how perceptions and use of the AI tools stabilized over time, as ongoing, situated practices accumulated. One of the hallmark features of modern AI tools is their ability to continually learn and dynamically improve their performance. However, we find that users’ perceptions crystallized early on, even though the quality of the tools significantly improved over time. This study expands our understanding of the relationship between intelligent tools’ transparency characteristics and the degree of ambiguity in decision-making processes. We show that the resulting ambiguity led actors to generally overrule opaquely generated outputs and rely on their own familiar professional judgement processes, despite the underlying quality of the AI tool.