AI and Work Quality: Prospects and Promising Practices
M. Lynne Markus, the John W. Poduska, Sr. Professor of Information and Process Management
Bentley University, firstname.lastname@example.org
Organization and Management Track
What are the prospects for high-quality work in the era of AI, robotic process automation, and algorithms? And what, if anything, can be done to improve them? Of particular interest in this paper is the work of professionals and knowledge workers such as health care professionals, engineers, financial and marketing analysts, human resource development specialists, public safety officials, and many others, who are expected to use algorithms in the course of their employment. These workers may use algorithms to make their own decisions or to advise other people about appropriate courses of action.
High-quality work in this context can mean various things. First, it can mean that the lived experience of the workers is, if not actually improved relative to current conditions, at least not worsened. For instance, algorithms are predicted to automate undesirable tasks—repetitive, boring, dangerous, dirty, etc.—increasing the challenge, creativity, and clear contribution of the work people do. Second, it can mean that the results of the work are, if not materially improved, at least not degraded. For instance, it is desirable that algorithmic decisions and advice are not more erroneous, inappropriate, unlawfully discriminatory, or vulnerable to breaches and abuse than those arrived at by unaided workers. The relationship between these aspects of high-quality work has not been conclusively established. They may be complementary, or they may be contradictory, depending on the circumstances.
It is a widely held belief that, if algorithms and knowledge workers' job responsibilities are effectively co-designed, the outcomes for workers, employers, and other stakeholders will all be positive. Effective co-design of work and algorithms is said to mean augmenting humans instead of automating their work, designing for human-machine collaboration, and enabling each party to perform the tasks it does best. People are said to need and deserve the ability to second-guess and override the decisions made by machines, or at the very least the opportunity to provide input to developers for improving algorithm design. A great deal is riding on this theory, so it is important to consider where it might break down and what this could mean for better work and algorithm design.
The first point to consider is that people do not always respond positively to algorithmically inspired changes in their work. Professional translators, for example, are said to dislike the task of cleaning up machine translations. Furthermore, the ways of knowing embedded in many AI systems are distinctly different from those in traditional expert work. Analytics in health care often results in epidemiological insights that clinicians struggle to apply to individual patients. And similar cognitive worldview conflicts have been observed between bioinformaticists and bench scientists and between quantitative and qualitative researchers, among others. An unanswered sociotechnical question is whether it is desirable for data-based ways of knowing to displace cause-effect type reasoning or whether the two knowledge types can and should coexist harmoniously in organizations.
Second, any allocation of tasks to humans and machines on the basis of what each does best is inherently unstable. Under normal research and engineering practice, algorithms can improve rapidly in both task performance and the number of tasks performed. Humans are generally slower to improve on these dimensions. As a result of this and the previously mentioned point, even positive initial experiences of algorithmically assisted work might decline over time. Disaffected experts might leave their positions; vacant positions might not attract expert applicants; and organizations might have to fill jobs with less-skilled individuals.
The workers newly hired to take the place of experts may experience the work as more fulfilling than those who previously performed it did. However, as non-experts, they may not become more expert in the task over time. Rather, they are likely to become expert in technology use, which is not the same thing as being expert in the task. Accordingly, they may not be able to recognize algorithmic errors, to intervene to correct them, or to recommend appropriate system enhancements. Furthermore, the changes that they do make (because people routinely make changes to their work processes over time) may not be in line with the intent, spirit, or design of the algorithms they use. Worsened organizational or client outcomes might result, even when experienced job quality and importance are high.
Third, organizations routinely change the conditions of work over time as experience with technology accumulates. A common pattern is increased production pressure on workers, imposed in two primary ways. One is deliberate deskilling of jobs: tasks are reengineered into two (or more) levels, with routine cases delegated to lower-skilled workers and more complex cases referred to experts. (This change can exacerbate the tendency for experts to exit organizational positions.) The second is increased output quotas (e.g., reduced time for physician visits). The consequence is that workers will be less able or willing than before to take the time and effort to override algorithmic recommendations. (Organizations may not allow them to do so or may impose onerous conditions if they do.)
Performance pressures exacerbate the well-documented tendency of humans to defer to machine outputs, even when the outputs contradict the evidence of human eyes, especially when machine outputs are generally reliable. Thus, regardless of experienced work quality, there is a very real possibility that organizational and client outcomes will degrade over time, unless there is a formalized organizational control loop on a much larger scale than that of day-to-day work.
As noted before, pernicious cycles can result when loss of the opportunity to make autonomous decisions weakens both professionals' lived experience of work quality and the reality of work quality for organizations and clients. Uncaught errors and unempowered workers can increase organizational pressures for full automation, taking humans completely out of the work. That would leave it up to the data scientists, who (one hopes) are routinely monitoring for "concept drift" (and other faults in algorithm performance), to control the processes and outcomes of organizational "work."
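To make the "concept drift" monitoring mentioned above concrete, the following is a minimal sketch of one simple approach: comparing a model's recent error rate against its baseline error rate and flagging when the recent rate rises well above it. The class name, window size, baseline length, and tolerance threshold are illustrative assumptions, not part of any particular system described in this paper; production drift detectors are considerably more sophisticated.

```python
from collections import deque

class DriftMonitor:
    """Illustrative concept-drift check (hypothetical parameters):
    flags drift when the recent error rate substantially exceeds
    the error rate observed during an initial baseline period."""

    def __init__(self, window=100, tolerance=2.0, baseline_size=500):
        self.window = deque(maxlen=window)  # recent outcomes (1 = error)
        self.baseline_errors = 0
        self.baseline_n = 0
        self.baseline_size = baseline_size  # cases used to set the baseline
        self.tolerance = tolerance          # multiplier treated as drift

    def observe(self, prediction, actual):
        """Record one prediction/outcome pair; return True if drift is flagged."""
        err = int(prediction != actual)
        if self.baseline_n < self.baseline_size:
            self.baseline_errors += err
            self.baseline_n += 1
            return False
        self.window.append(err)
        if len(self.window) < self.window.maxlen:
            return False  # not enough recent data yet
        baseline_rate = self.baseline_errors / self.baseline_n
        recent_rate = sum(self.window) / len(self.window)
        # Guard against a zero baseline with a small floor rate.
        return recent_rate > self.tolerance * max(baseline_rate, 0.01)
```

The design choice here mirrors the paper's larger point: a detector like this only works if someone closes the loop, i.e., labeled outcomes must keep flowing back to whoever runs the monitor, which is exactly the organizational control loop discussed above.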
In short, sources of vulnerability in the optimistic theory of AI-augmented work occur at every step of a long chain of distributed organizational responsibility: 1) AI developers, who may work for vendors or client organizations, 2) internal organizational redesign experts (e.g., IT professionals and "lean engineers" or "process blackbelts") who perform system integration or redesign work flows, 3) organizational managers, who oversee the work and the workers affected by AI, and 4) the workers themselves. At the current stage of sociotechnical evolution, no codified body of knowledge exists about how to design high-quality AI-augmented work. Meanwhile, the downward spiral of worker task skill and organizational knowledge retention is currently being accelerated by baby boomer retirements and tight labor markets.
Three promising areas for future research are: 1) analyses of organizational processes and choices (and their evolution over time) about work flows, job redesign, and quality control when algorithms are applied, 2) enhanced methods for work redesign that address not only process reengineering but also the experienced quality of work, and 3) strategies for designing algorithms that teach novices task skill, rather than simply enabling them to push buttons without it.