Artificial Intelligence, Gender and the Future of Work
Submitted to the AI@Work 2020 Conference, 5–6 March, Amsterdam.
Author: Huon Curtis, Ph.D., University of Sydney Business School Australia.
From the nineteenth-century mathematical work of Ada Lovelace through to the prominent role of mid-twentieth-century ‘female computers’, women have been involved in computer science and Artificial Intelligence (AI). Yet in academic contexts, in government policy, and in the labour market itself they are under-recognised, under-valued, under-paid and under-represented. This paper examines the gendered implications of AI in five areas: 1) the work of computer scientists; 2) employment polarisation; 3) AI as an organisational technology; 4) AI as a mechanism for workforce enhancement; and 5) AI as a component of contemporary innovation policy discussions.
Section one foregrounds the practical and political issues that emerged during the 1980s, at the tail-end of the last generation of AI technologies, including the under-representation of women in computing professions and gender bias in data. In recent years, AI has been described as the most significant technological innovation since the steam engine, and to this extent the concerns of 30 years ago take on new relevance. The technological determinism that propagates masculinist approaches to technology and validates male voices must be challenged, so that exclusionary versions of AI that operate to the detriment of women are not developed.
Section two uses the case of data labellers and computer scientists to examine the long-term trend towards concurrent employment and wage polarisation, which will invariably have gendered effects. The latest generation of AI technologies, namely deep neural networks, allows computers to ‘learn the way that children do’, and this technique depends on accurately labelled training data. As such, the latest advancements were built on the hidden labour of data labellers, who are organised through labour platforms such as Amazon Mechanical Turk (AMT), data factories and prisons. This case study demonstrates how the development of AI is dependent upon hidden pools of exploited labour.
Section three emphasises the need to set organisational priorities when deploying AI systems to ensure gender equity in the labour market. The occlusion of gender to date has been largely a definitional problem, caused by a thin conception of a computer’s functionality from within computer science, with little regard for organisational, political and economic context. In contrast, this paper adopts a three-pronged definition showing that the development and deployment of AI systems is shaped by distinctly human decisions. AI systems are socially, organisationally and economically contingent; they are not a historical inevitability. For instance, the tendency to equate more surveillance and tracking with more productive workplaces and higher profits has obvious exploitative potential for workers. Just as previous technological innovations have changed the organisation of bureaucratic production, so too does AI change the parameters and expectations of what firms do, and how production and services are formulated and delivered. In this respect, those who approach work and organisations with a gender lens are instrumental in defining the problems that AI-enabled systems are deployed to solve.
Section four outlines the gendered effects of the deployment of AI by organisations, particularly in recruitment and compensation. It suggests that a failure to consider gender can expose organisations to legal and ethical risks. Examples of the use of AI in hiring and recruitment show mixed results for gender and diversity. The failure to use AI technologies to solve coordination problems between firms may be a missing link in the successful deployment of these technologies.
Section five explores how public sector agencies across the globe are using the term AI to advance often ambitious re-orientations of industry, innovation, education and training, health, environment, security/military, and workplace policies. Many governments and organisations are proposing national strategies and principles on AI ethics, and some of these include references to gender; however, many of the strategies are too broad and lack substantive commitments to gender equity. For example, the idea that AI should be developed for the advancement of the “social good” or for “the benefit of humanity” is common to many principles and strategies. Yet the notion of social good can be interpreted in multiple ways: many ideological standpoints could stake a claim to the good, or to benefiting humanity. Although a human rights frame is occasionally invoked, regulation of any sort is generally viewed as antithetical to innovation within the Silicon Valley mindset. And although many AI principles advocate for ‘fairness’ or ‘transparency’, how these intersect with existing legal protections such as anti-discrimination law remains an open question. While it is encouraging that the ethics of technology is being discussed in national and global forums, it remains to be seen how this will translate into gender equality.