Expanding and Increasingly Inscrutable University Audit Cultures: Data, Analytics, and Rising Quantification in Professors’ Work 

Diane E. Bailey, Cornell University, diane.bailey@cornell.edu
Ingrid Erickson, Syracuse University, imericks@syr.edu
Susan S. Silbey, Massachusetts Institute of Technology, ssilbey@mit.edu
Stephanie D. Teasley, University of Michigan, steasley@umich.edu

The university has evolved over the last millennium from a sheltered enclave preparing primarily Christian religious ascetics and a minority of secular aesthetes to a self-governing community of teachers and students, where administrative autonomy and curricular control remained within the province of the professoriate. All along, it has been distinguished from other institutions by its relative autonomy from external control of its internal processes and standards of performance, whether in pedagogy, curriculum, student achievement, or faculty research. However, the increasing role of public authorities as university funders has generated targeted efforts to pierce the sanctity of university self-governance, substituting accountability through public audit systems for self-governance and peer review. In response, we have seen the rise of “audit cultures” in universities over the past three decades, bolstered by rhetorical justifications of increased transparency, efficiency, and responsibility. These audit cultures not only run counter to long-held governance traditions within universities, but also increase the potential for control of the subset of knowledge workers known as the professoriate (Gill, 2019).

Audit cultures arise when accounting techniques are applied in non-accounting settings; they reflect consequent new systems of beliefs, norms, behaviors, attitudes, and interrelationships (Shore, 2008; Sewell, 2005; Power, 1997). Like almost all digitized accountability systems, audit cultures rely upon the quantification of performance, which operates as a form of output control (Snell, 1992), in this case as the means by which professors’ work is monitored and evaluated. Underlying quantification is the process of commensuration, or “the transformation of different qualities into a common metric” (Espeland & Stevens, 1998).
Commensuration is no simple technical matter; throughout the steps of converting qualities into quantities, it relies on assumptions and distinctions that are more often than not unstated and tacit and that, when subject to review and analysis, prove to be incoherent reproductions of historic inequalities and prejudices antithetical to innovation and creativity (Espeland & Stevens, 1998; Espeland & Sauder, 2007). Numbers are attractive substitutes for the messy ambiguities of language and qualitative judgment because they are, according to Porter (1995), a “technology of distance”; that is, numbers both create and overcome distance, physical and social (Espeland & Stevens, 2008).

In its most basic instantiation, the quantified output of the university is rendered twofold within most university audit cultures: students educated (the output of teaching) and knowledge produced (the output of research). How is this output quantified? Metrics such as graduation rates and time to graduation are used to measure success at delivering educational programs. Similarly, knowledge production is reduced to publication counts, along with citation counts that consider the extent to which a publication is used; these metrics serve as rough proxies for nuanced assessments of the strength of a professor’s intellectual contributions. Bureaucratic mechanisms such as individual teaching assessments and performance reviews provide the typically annual events through which data for these and related metrics can be tabulated, evaluated, and discussed for each professor before aggregation to department, school, and university levels (Burrows, 2012; Ogbonna & Harris, 2004). At the individual level, these metrics and mechanisms often prompt professors to alter their behavior to achieve higher quantified performance (Haddow & Hammarfelt, 2019; Kalfa, Wilkinson, & Gollan, 2018).
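To underscore how rudimentary these measures are, the counts and formulas that current systems compute fit in a few lines of code. The following sketch uses fabricated citation figures for a hypothetical professor to compute a publication count, the average citations per publication, and the h-index; the numbers are ours, not drawn from any real case.

```python
# Fabricated citation counts for a hypothetical professor,
# one entry per publication.
citations = [45, 12, 8, 5, 3, 1, 0]

# Simple count: number of publications.
publication_count = len(citations)

# Basic formula: average citations per publication.
avg_citations = sum(citations) / publication_count

# h-index: the largest h such that h publications each have
# at least h citations. Rank publications by citations (descending)
# and count how many ranks the condition holds for.
ranked = sorted(citations, reverse=True)
h_index = sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

print(publication_count)          # 7
print(round(avg_citations, 2))    # 10.57
print(h_index)                    # 4
```

With these fabricated figures, the professor has 7 publications, roughly 10.6 citations per publication, and an h-index of 4. Nothing in the computation records the judgments, exclusions, and classifications that produced the underlying counts, which is precisely the commensuration problem at issue.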
When aggregated across departments, schools, and universities, these metrics serve to modify the behavior of academic units in attempts to improve their ranks in international “league tables” of top institutions (Burrows, 2012; Espeland & Sauder, 2007).

Despite the focus on, and abundant critique of, the quantified metrics and bureaucratic mechanisms of output control per se, scholars who study university audit cultures have given scant attention to the information technology systems that increasingly support those cultures (and, in so doing, transform university administration) by supplying the data “assemblages” (Burrows, 2012: 354) that enable output control. Although the current literature discusses the techniques of control, it largely ignores these systems. In this paper, we explore how emerging data and analytics systems may enable rising quantification that measures, monitors, and controls professors’ work performance. While these systems rely mostly on simple counts (e.g., number of publications) and basic formulas (e.g., average number of citations per publication) today, their capacity to employ machine learning techniques in the future will doubtless produce new metrics whose logic is likely to be less comprehensible. Machine learning techniques, for example, do not rely on the programmer’s allocation of weights in a prescribed algorithm for reading and perhaps regressing the input variables; rather, they ultimately depend on reinforcement learning, by which the program automatically and independently adjusts the weights as its inputs vary. Often, such machine learning produces no clearly discernible rule, just a set of weights and an output. For this reason, emerging data and analytics systems may prompt expanding (given the all-encompassing digital infrastructure) and increasingly inscrutable (given the use of machine learning and similar techniques) university audit cultures.

References

Burrows, R. (2012). Living with the h-index? Metric assemblages in the contemporary academy. Sociological Review, 60(2), 355–372.
Espeland, W. N., & Sauder, M. (2007). Rankings and reactivity: How public measures recreate social worlds. American Journal of Sociology, 113(1), 1–40.
Espeland, W. N., & Stevens, M. L. (1998). Commensuration as a social process. Annual Review of Sociology, 24(1), 313–343.
Espeland, W. N., & Stevens, M. L. (2008). A sociology of quantification. European Journal of Sociology/Archives Européennes de Sociologie, 49(3), 401–436.
Gill, M. J. (2019). The significance of suffering in organizations: Understanding variation in workers’ responses to multiple modes of control. Academy of Management Review, 44(2), 377–404.
Haddow, G., & Hammarfelt, B. (2019). Quality, impact, and quantification: Indicators and metrics use by social scientists. Journal of the Association for Information Science and Technology, 70(1), 16–26.
Kalfa, S., Wilkinson, A., & Gollan, P. J. (2018). The academic game: Compliance and resistance in universities. Work, Employment and Society, 32(2), 274–291.
Ogbonna, E., & Harris, L. C. (2004). Work intensification and emotional labour among UK university lecturers: An exploratory study. Organization Studies, 25(7), 1185–1203.
Porter, T. M. (1995). Trust in Numbers: The Pursuit of Objectivity in Science and Public Life. Princeton, NJ: Princeton University Press.
Power, M. (1997). The Audit Society: Rituals of Verification. Oxford: Oxford University Press.
Sewell, G. (2005). Nice work? Rethinking managerial control in an era of knowledge work. Organization, 12(5), 685–704.
Shore, C. (2008). Audit culture and illiberal accountability. Anthropological Theory, 8(3), 278–298.
Snell, S. A. (1992). Control theory in strategic human resource management: The mediating effect of administrative information. Academy of Management Journal, 35(2), 292–327.