
Algorithmic expertise: How practical theories about technology shape the deployment of knowledge about art 

Extended Abstract | AI@Work Sarah E. Sachs 

This paper is from my dissertation research, in which I examined a particular type of algorithmic system: a similarity matching algorithm for art. Interested in articulation work (Star & Strauss 1999), the “hundreds of hands” (Seaver 2013) involved in making data analytic technologies useful and relevant, I asked: How do “nontechnical” experts account for a technology that is inherently inscrutable and yet so central to their work practices? I took as a given that, as the STS literature demonstrates, algorithms such as the one at the center of my field site are opaque and inscrutable, even to those with the technical skills to build and implement them.

My field site, DNArt, was an art data classification system run by a for-profit start-up. The organization was configured around: 1) a growing database of more than 1 million images of art; 2) a classification scheme for the description of art; 3) metadata, or data annotations, based on that classification scheme; 4) an algorithm for matching similarity between artists and art works; and 5) a user interface, or website, for browsing and searching artists and art works. 

Together, these technologies served as a “trust device,” mobilizing image data to help end-users discover, and ultimately shape, their tastes in art (Karpik 2010). As users navigated shifting webs of relations in the data, they learned to place their tastes within a context of symbolic value, as sketched by DNArt’s team of art experts. The goal was to channel end-users into the emerging online market for art by helping them develop the confidence necessary to embrace consumer risk. To that end, DNArt hired a team of part-time, remote art experts—art historians, art librarians, and working artists—to classify and annotate its art image data on a continuous basis, rendering the data legible for similarity matching by the algorithm and rendering the algorithm’s output legible to those with knowledge about art.

In this ethnography, I focused on the interactions of the team of art experts at work. I observed 13 months of weekly meetings and new-hire trainings; conducted multiple rounds of semi-structured interviews with members of the team and supporting teams; and analyzed two years of interaction data and documentation from the team’s discussion forum, Slack channels, Trello boards, Dropbox folders, and Google Docs, all of which were subjected to temporal mapping, iterative coding, and synthesis. I also attended professional conferences and saw the team through a massive restructuring that revealed shifting orders of worth within the organization.

My 2019 article, “The algorithm at work? Explanation and repair in the enactment of similarity in art,” explored how the art experts, in their everyday data practices, learned to recognize breakdowns: moments when their expectations about what was similar in art diverged from the “most similar” relations produced by the algorithm. I found that much of what they did as art experts was distributed repair work, realigning their expectations and the algorithm’s output. Despite their remote status, this repair work relied heavily on their explanations of breakdowns in in-person and online interaction.

The current paper unpacks the “practical theories” developed by the team, how such knowledge came to be attributed as expertise, and how that expertise shaped collaboration and conflict between team members. Specifically, I argue that explanations played such a pivotal role in repair precisely because they were pragmatic (Pentland 1997). That is, although an idea of the algorithm’s technical functioning underpinned the entire setting of repair (the very idea of repair, the division of labor around repair, and so on), these explanations had little to do with how a software engineer might argue an algorithm ought to work and everything to do with how it actually worked in practical contexts.

I argue that pragmatic explanations arose in individual situations of breakdown, in which team members deployed evolving practical theories about how the algorithm worked in order to inform repair. This worked as a feedback loop: the team’s collective theories developed over time as it implemented successful and failed repairs. The theories informed explanations for existing breakdowns and were transformed by the outcomes of attempts to repair those breakdowns. Once refined, they were carried into the future, informing explanations for new breakdowns.

Although repair was a distributed process, any pragmatic explanation was made explicit in the group by an individual who, in team interactions, gained recognition for her ability to produce explanations that informed successful repairs. Repeated over time, such recognition marked that team member as an algorithmic expert, defined by an ease in deploying practical theories as explanations that led to repair. In practice, this ease appeared as an individual’s ability to recognize the relevance of a given theory to a given situation, to judge its timing in the context of the group dynamics, and to present it within the constraints of the group’s interaction rituals and the sociotechnical context.

Ultimately, I argue that algorithmic expertise was necessary in order to deploy art expertise in art data classification and annotation work. Both forms of expertise were visible as different scripts that team members adopted for explaining their expectations of the algorithm’s output and for collaborating on, or conflicting over, repair.

The paper raises interesting questions about the status, value, and worth of particular forms of expertise, and about the changing nature of knowledge work in an era of datafication. More importantly, it demonstrates the value of team-level interactional analyses, both in-person and online, for exploring these questions. Interactions continue to render explicit both expectations of and explanations for action, particularly in moments of technological breakdown.