In today’s fast-paced business environment, platforms have emerged as powerful drivers of innovation, growth, and market disruption. Crafting a well-defined platform strategy is the key to staying competitive and thriving in this transformative
landscape. Our “Platform Strategy Development” workshop is a deep dive into the art and science of shaping your platform vision.
1. Platform Fundamentals
Start with a comprehensive understanding of what
platforms are and why they are reshaping industries. Gain insights into successful
platform business models and real-world case studies.
2. Value Proposition
Define your platform’s unique value proposition. Understand how to create and deliver value for different user segments, fostering engagement and loyalty.
3. Ecosystem Building
Delve into the concept of ecosystems and network effects. Discover strategies for attracting and retaining participants, partners, and developers.
4. Monetization Models
Explore various revenue-generation strategies used by
successful platforms. Choose the right pricing and monetization models for your platform venture.
5. Risk Mitigation
Identify potential challenges and risks associated with platform
development and adoption. Develop mitigation strategies to ensure platform success.
Workshop duration: 2.5 hours
Availability: In-person (in the Netherlands and abroad) and online
1. Platform Awareness
Gain a deep understanding of platform business models and why they are reshaping industries worldwide. Explore real-world examples and the potential benefits for your organisation.
2. Platform Strategy
Learn how to craft a platform strategy that aligns with your business goals and leverages your existing assets. Identify opportunities for ecosystem development and revenue generation.
3. Ecosystem Building
Explore the concept of ecosystems and how to foster partnerships, collaborations, and network effects that drive platform success.
4. Customer-Centric Design
Shift your focus towards customer-centric design principles. Understand how platforms thrive by creating value for users, participants, and stakeholders.
5. Monetization Strategies
Explore various monetization models and pricing strategies used by successful platforms. Determine the right approach for your platform venture.
In today’s AI-driven world, ethical considerations are paramount. Our workshops empower organisations to navigate AI technology responsibly.
1. AI Ethics Awareness:
Understand the importance of AI ethics for your organisation. Explore relevant ethical dilemmas and grasp the significance of ethical maturity.
2. AI Ethics Management:
Address ethical challenges introduced by AI. Learn to manage issues across use-cases, guidelines, development, and monitoring.
3. AI Ethics Maturity:
Develop a maturity plan tailored to your organisation. Assess your current AI ethics maturity level, identify strengths, areas for improvement, and set goals for the future.
Workshop duration: 3 hours
Availability: In-person (in the Netherlands and abroad) and online.
Employee retention is becoming increasingly important, given the scarcity of high-skilled employees. Thus, practitioners and scholars are trying to understand and predict employee turnover as accurately as possible (Ben-Gal et al., 2021; Choudhury et al., 2021; Farrell and Rusbult, 1981; Oswald, 2020; Wang and Zhi, 2021; Yuan et al., 2021; Zhao et al., 2018). Complex machine learning (ML) models can offer powerful support in this context, as they enable highly accurate predictions (Hong et al., 2007). However, these models are often black boxes and provide limited interpretability of their results (Shrestha et al., 2021). Hence, to use black-box models, more interpretable approaches (Ben-Gal et al., 2021; Mitchell et al., 2001), i.e., explainable AI (XAI) methods (Guidotti et al., 2018; Hamm et al., 2021; Barredo Arrieta et al., 2020), are necessary. Understanding the models for turnover prediction – and the underlying patterns – is particularly important for Human Resource Management (HRM) to take appropriate countermeasures (Oswald, 2020). Therefore, we evaluate the use of three XAI methods in the context of turnover. As the basis of our exemplary illustration, we use the publicly available IBM HR Analytics Employee Attrition & Performance data set (Subhash, 2017), a synthetic data set created by IBM that includes administrative data, performance data, data on job satisfaction, and data on individual characteristics (e.g., age and gender) of 1470 fictional employees (see Appendix for an exhaustive list of features). The goal of our exemplary use of XAI was to predict turnover (0 = no turnover, 1 = turnover) based on these characteristics and to make the prediction interpretable.
For this purpose, we selected a sample of seven frequently used ML models (e.g., Hastie et al., 2009; Wu et al., 2007) and evaluated their accuracy using 10-fold cross-validation (Berrar, 2018). The results regarding the achieved mean accuracy are shown in Figure 1. The Random Forest (RF) model achieves the highest accuracy. After the final model optimization, we achieve an F1-score of 84%. Given this comparatively high accuracy, the model can provide a solid basis for predicting employee turnover.
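The comparison step can be sketched with plain k-fold cross-validation. The majority-class fit/predict pair below is only a stand-in for the seven candidate models (which the abstract does not list individually), so the numbers are illustrative, not the paper's results.

```python
import random

def kfold_indices(n, k=10, seed=42):
    """Shuffle n sample indices and split them into k roughly equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_val_accuracy(X, y, fit, predict, k=10):
    """Mean accuracy of a fit/predict pair over k held-out folds."""
    accs = []
    for fold in kfold_indices(len(y), k):
        test = set(fold)
        X_tr = [x for i, x in enumerate(X) if i not in test]
        y_tr = [t for i, t in enumerate(y) if i not in test]
        model = fit(X_tr, y_tr)
        hits = sum(predict(model, X[i]) == y[i] for i in fold)
        accs.append(hits / len(fold))
    return sum(accs) / k

# Majority-class baseline standing in for a real classifier.
fit_majority = lambda X, y: max(set(y), key=y.count)
predict_majority = lambda model, x: model
```

In practice each candidate model gets its own fit/predict pair, and the one with the highest mean fold accuracy is carried forward to final tuning.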
However, two fundamental questions remain. First, what are the general reasons for turnover in the organization? Second, how can the turnover of an identified employee be prevented? Technically, these questions address two different types of explanations: global interpretability (the average model behavior) and local interpretability (the model's prediction for an individual observation – here, a particular employee) (Molnar, 2020). To answer these questions, we employ Local Interpretable Model-Agnostic Explanations (LIME) (Ribeiro et al., 2016), Shapley Additive Explanations (SHAP) (Lundberg and Lee, 2017), and Partial Dependence Plots (PDP) (Kamath and Liu, 2021).
For global interpretability, we use a SHAP summary plot (Molnar, 2020), as shown in Figure 2. The plot ranks the features according to their influence on the model output. Since the influence of a feature value differs for each instance, depending on the values of the other features, SHAP displays this with a scatter plot. For example, the feature StockOptionLevel has the highest average impact on the model, with a low level of stock options in particular linked to a turnover prediction. In comparison, the feature TotalWorkingYears has a lower impact on the model. By analyzing these features and their model impact, the key drivers of turnover in the organization can be detected and used by HRM for data-driven decisions (Oswald, 2020). In this case, the following question could be derived: What level of stock options is appropriate to prevent potential turnover?
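To illustrate how such a summary ranking is produced, the sketch below uses the closed-form SHAP values of a linear model with independent features, phi_j = w_j * (x_j - E[x_j]); the weights, data, and feature values here are invented for illustration and are not taken from the IBM data set.

```python
def linear_shap(weights, x, background_mean):
    """Exact SHAP values for a linear model with independent features:
    phi_j = w_j * (x_j - E[x_j])."""
    return [w * (xj - mj) for w, xj, mj in zip(weights, x, background_mean)]

def shap_summary(weights, X, feature_names):
    """Rank features by mean absolute SHAP value across all instances,
    the quantity a SHAP summary plot orders features by."""
    means = [sum(col) / len(col) for col in zip(*X)]
    phis = [linear_shap(weights, x, means) for x in X]
    impact = {name: sum(abs(p[j]) for p in phis) / len(phis)
              for j, name in enumerate(feature_names)}
    return sorted(impact.items(), key=lambda kv: -kv[1])
```

A constant feature gets a mean absolute SHAP value of zero, while a feature with a large weight and large spread dominates the ranking, mirroring how StockOptionLevel tops the plot in Figure 2.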
Therefore, we take a more granular look with PDP (Kamath and Liu, 2021; Molnar, 2020) and investigate the dependence between the feature StockOptionLevel and the model output (Figure 3). The plot shows that the curve flattens significantly from a StockOptionLevel of 1 onwards; thus, a further increase in the level of stock options barely changes the average response of the model.
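The PDP computation itself is conceptually simple: fix the feature of interest to each grid value, predict for every row, and average. A minimal sketch, with a toy predict function standing in for the trained Random Forest (the flattening beyond level 1 is hard-coded here to mirror the pattern reported for Figure 3, not derived from data):

```python
def partial_dependence(predict, X, feature, grid):
    """Average model response when `feature` is forced to each grid value."""
    curve = []
    for value in grid:
        total = 0.0
        for row in X:
            modified = list(row)
            modified[feature] = value  # intervene on one feature only
            total += predict(modified)
        curve.append(total / len(X))
    return curve

# Toy stand-in for the trained model: turnover risk falls with stock options
# but flattens beyond level 1 (an assumption echoing the reported curve).
toy_predict = lambda r: max(0.0, 0.6 - 0.5 * min(r[0], 1))
```

Reading off where the resulting curve flattens answers the practical question of how far raising stock options is still expected to move the average prediction.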
Besides PDP, we used LIME and SHAP, as they offer additional insights. Both XAI methods allow the analysis of individual observations (i.e., employees in our case) and thus enable local interpretability (Molnar, 2020). They display, in descending order, the strongest influences of the features on an individual model prediction (Figure 4). Both methods yield fairly similar results and thus could be used to improve our understanding of the potential reasons for an individual turnover decision.
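LIME arrives at such local attributions by fitting a simple surrogate to the model's predictions on perturbed copies of the instance (Ribeiro et al., 2016). A stripped-down single-feature version, without the proximity weighting of full LIME and purely for illustration:

```python
import random

def lime_slope(predict, x, feature, n=500, scale=0.5, seed=0):
    """Local linear surrogate for one feature: perturb the instance around
    its current value, query the model, and fit an OLS slope."""
    rng = random.Random(seed)
    zs, ys = [], []
    for _ in range(n):
        z = list(x)
        z[feature] += rng.gauss(0.0, scale)  # small local perturbation
        zs.append(z[feature])
        ys.append(predict(z))
    mz, my = sum(zs) / n, sum(ys) / n
    return (sum((a - mz) * (b - my) for a, b in zip(zs, ys))
            / sum((a - mz) ** 2 for a in zs))
```

The fitted slope is the local "weight" of that feature for this particular employee; full LIME does the same over all features at once, with distance-based sample weights.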
Finally, we demonstrated the use of XAI applications for decision making in HRM. XAI makes the decisions of complex ML models understandable. This insight can be used in two ways: first, to gain a global understanding of turnover in the organization, and second, to understand possible reasons for the predicted turnover of a particular employee. This leads to the limitations of the presented XAI methods. In addition to the methodological challenges of correct application (Choudhury et al., 2021), these methods do not provide any information on whether the actions taken actually change the decision of the model and thus of the employees (Slack et al., 2020; Fernández-Loría et al., 2022). Hence, it is not possible to determine what magnitude of change – for example, in the level of stock options – would ultimately change the decision. Nevertheless, the presented XAI approaches are a first step towards gaining insights into complex AI models and towards a data-driven assessment of which actions are most likely to be successful.
As the scope of implementation of artificial intelligence (AI) in organizations widens, the question of how AI will change the role of managers and their perceived power among employees becomes more relevant. While people-focused tasks, such as emotional support, conflict management, and mentoring, have always been important to managers' performance, their importance might increase significantly once AI takes over analytical tasks (Huang & Rust, 2018).
To perform people-focused tasks well, managers require specific soft skills, such as political skill, defined as the “ability to effectively understand others at work and to use such knowledge to influence others to act in ways that enhance one’s personal and/or organizational objectives” (Ahearn et al., 2004, p. 311). Managers with well-developed political skill can facilitate a higher level of subordinates' job performance than managers with less-developed political skill. This facilitation is partially induced by the manager's perceived power (Treadway, 2011). We focus on two sources of a manager's perceived power: reward power, based on the employee's perception that the manager can control one's rewards (French & Raven, 1959), and social power, defined as “the global perception by a follower of his/her supervisor’s potential to influence important organizational actors and the organizational decision-making process” (Chénard-Poirier et al., 2021). We suggest that employees' behavior depends on the combination of these types of power.
Assigning core managerial tasks, such as employee performance assessment, to AI may significantly change managers' autonomy and, as a result, their perceived power (Jarrahi et al., 2021). We aimed to explore the extent to which allocating employees' performance evaluation to AI impacts the different types of managers' perceived power driven by their political skill. Our findings demonstrate the importance of managers' social power and political skill in the era of algorithmic decision-making.
Technology is changing the way entrepreneurs manage their human resources. Many employers have already started to abandon the entirely human exercise of their managerial prerogatives, totally or partially delegating them to more or less smart machines. Data collected through people or workforce analytics practices are the fuel that fills the tank of algorithmic management tools, which are capable of taking automated decisions affecting the workforce. Notwithstanding the advantages in terms of increased labour productivity, resorting to technology is not always risk-free. It has already happened, also in the HR management context, that algorithms have revealed themselves to be biased decision-makers. This problem has often been exacerbated by the lack of transparency characterising most automated decision-making processes. Moreover, this issue is worse in the employment context because it increases the already existing information asymmetries between entrepreneurs and workers. These are the main reasons why it has been underlined that workforce analytics and algorithmic management practices may entail an augmentation of managerial prerogatives unheard of in the past. It has also been stressed that this should entail an update – or even a rethinking – of employment laws that, as they stand today, may be inadequate to address the issues posed by the technological revolution.
This paper thus tries to understand, looking mainly at the Italian and other EU civil-law-based legal systems, whether there are rules that may foster transparency and prevent abuses of employers' managerial prerogatives potentially arising from the increasing recourse to algorithmic management practices. In other words, this article examines whether any existing regulatory techniques may help alleviate the issues of lack of transparency and augmentation of managerial prerogatives. To perform this task, I analyse three case studies of algorithmic management devices developed and deployed by Amazon in the US, to understand whether the implementation of these specific tools in the EU would have been legally feasible from an employment and data protection law perspective, analysing three discrete legal issues that are often at stake in employment litigation:
All these regulatory techniques strongly incentivise employers to resort only to those algorithmic tools whose decision-making process can potentially be made transparent to their employees and, in case of a trial, to employment judges. Therefore, the employment legal system already knows how to foster transparency in the workplace and, consequently, how to uncover violations of rules that already limit abuses of managerial prerogatives by employers. In light of the pervasive use of new technological tools to manage human resources, a broader recourse to these regulatory antibodies can constitute an effective policy recommendation to better face the challenges posed by the algorithmic revolution.
The rise of artificial intelligence (AI) is a potential source of competitive advantage for firms to shape and re-design their organizations. However, the introduction of AI within firms has raised the usual question: “Will these machines substitute or complement humans in the workforce?”. Once an organization decides to introduce an AI machine, it wants to optimize its usage in such a way that performance is maximized and costs are minimized. However, this optimization problem is more complicated than one might initially expect, because an organization's internal processes often consist of different jobs that require different skills from employees. Routinized jobs consist of tasks that can be codified and hence automated. By contrast, non-routinized jobs can be classified into two broad sets of tasks, which have proven challenging to automate. The first set concerns creative tasks that are ‘abstract’ and require problem-solving capabilities, intuition, creativity, and persuasion. These tasks are typically allocated to workers with high levels of education and analytical capability, and they place a premium on inductive reasoning and communication ability. The second set includes manual tasks requiring situational adaptability, visual and language recognition, and in-person interactions. Specifically, manual jobs consist of both routine and non-routine tasks.
To the best of our knowledge, no existing research compares the micro-level performance and cost impacts of introducing an AI machine on different types of jobs. The different types of jobs will result in different human behaviors, incentives, and effort, because the more a job consists of routine tasks, the more the worker feels her job is at risk of being automated. Thus, managers face a tradeoff in their decision making. On the one hand, they want to introduce the machine to improve firm performance and reduce costs. On the other hand, the introduction of the machine may lead employees to sabotage it, thus increasing the organization's costs. In this work, we provide an answer to these organizational problems by showing how the different types of jobs perform after the introduction of the machine. We investigate our research question by means of an agent-based model that simulates the actions and interactions of two autonomous agents (i.e., the machine and the workers within the organization) and estimates the impact of AI on different types of jobs. Agent-based models are suitable for our research purpose, as large-scale longitudinal data that trace interactions between AI machines and humans are not available.
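The sabotage feedback loop at the heart of this tradeoff can be sketched as a toy simulation; the parameter values and update rule below are illustrative assumptions, not the calibrated model used in the paper:

```python
import random

def simulate(routine_share, n_workers=50, steps=100, seed=1):
    """Toy loop: the more routinized the job, the likelier each worker is
    to sabotage the machine in a given period; the machine only learns
    (gains performance) in periods where fewer than half of the workers
    sabotage it. All numbers are illustrative assumptions."""
    rng = random.Random(seed)
    machine_perf = 0.1  # assumed initial machine performance
    for _ in range(steps):
        saboteurs = sum(rng.random() < routine_share for _ in range(n_workers))
        if saboteurs < n_workers / 2:
            machine_perf = min(1.0, machine_perf + 0.01)  # learning step
    return machine_perf
```

With a high routine share the machine rarely gets an unsabotaged period and stays near its initial performance level, echoing the stalled machine intelligence the results below describe for routinized jobs.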
Our simulation results show that, after the introduction of the AI machine, manual jobs outperform routinized and creative jobs in terms of both performance and costs. First, in the case of routinized jobs, since around 50% of humans at any simulation time aim to sabotage the machine, the AI machine cannot increase its intelligence and performance. Thus, the manager cannot fire any worker, because the performance levels of the humans remain higher than the machine's performance level. Second, in manual jobs there is a peak in costs just after the introduction of the machine, caused by recruitment costs; however, long-term costs are the lowest because of the relatively easy replacement of low-performing humans. Compared with routinized jobs, workers are less likely to feel their jobs are at risk, and thus to sabotage the machine. The overall human-machine performance is thus driven by human incentives and human labor quality. Third, AI machine performance in creative jobs is the highest in the long run. However, the joint human-machine performance is lower than in the case of manual jobs because of less machine assistance to humans (i.e., the machine cannot perform a large part of the job). This lower complementarity translates into very low chances of sabotage but less replacement of low performers.
This study contributes to the emerging AI literature by showing how managers can cope with tradeoffs faced when deciding whether to introduce AI in their organization. Our work provides managers with guidelines on which job types benefit more from the introduction of an AI machine, and how synergies (conflicts) between AI and jobs positively (negatively) influence an organization’s overall performance.
Agriculture is making leaps in digitalization and artificial intelligence (AI) systems with autonomous machines, sensor data, and decision support systems (Liakos et al., 2018; Smith, 2020). Understanding and improving how farmers interact with AI requires research that looks beyond AI in laboratory settings and into the application of AI in the field (Huysman, 2020; Jussupow et al., 2021). One key issue is explainability, which paves the way for successful AI deployments (Gregor & Benbasat, 1999; Thiebes et al., 2021). Explainability refers to the effectiveness of AI's explanations (e.g., user interfaces, documentation, or manuals). This study focuses on the comprehensibility of explanations, and specifically user interfaces, for end-users. End-users often cannot comprehend how AI systems reach their decisions (Waardenburg et al., 2020). However, explainability is crucial for using AI in joint decision-making (Asatiani et al., 2021).
Human-AI joint decision-making happens through configurations of human-AI agency, which are continuously and mutually shaped (Suchman, 2007, 2012). Recent research found that a translator role is required to mediate between the end-user and the AI system (Gal et al., 2020; Jussupow et al., 2021; Waardenburg et al., 2022). The translator role addresses comprehensibility in domain-specific contexts. What remains unclear is how human-AI joint decision-making occurs when explanations influence it. Research into how AI explanations are embedded in the organization and integrated into decision-making procedures is lacking. How humans engage with AI systems and make sense of explanations in the domain context has seen little empirical work until now (Abdul et al., 2018; Benbya et al., 2021). These issues are urgent for small businesses, where human actors rely on AI explanations. Therefore, this study asks: How do configurations of human-AI joint decision-making emerge, and how do explanations influence these configurations?
In this policy paper, we investigate the impact of Artificial Intelligence (AI) in the workplace on the quality of jobs and the wellbeing of workers. Job quality is a multidimensional concept that includes all features of jobs that impact workers' objective and subjective wellbeing (Nurski & Hoffmann, 2022). While labour regulation focusses mainly on physical and contractual working conditions, two other job quality dimensions – job content (or job design) and the social environment of work – are the main determinants of workers' behaviour, attitude, and wellbeing at work (Humphrey et al., 2007). As we show in this paper, AI will likely impact exactly these two dimensions of job quality, necessitating closer attention from policymakers.
Jobs and their characteristics are shaped by institutional antecedents (features of the labour market and the welfare state) and organisational antecedents (features of the organisation’s structure and culture). While AI has some impact on the functioning of the labour market through its role in the matching process, most of its impact will take place inside organisations. Therefore, in this contribution, we analyse the impact of AI on job quality by investigating how it acts on the organisational antecedents at the firm-level.
We argue first that job design originates from the division of labour and specialisation in the firm, both horizontally and vertically, in the production process and the governance process (Mintzberg, 1979). Job design then further shapes the other dimensions of job quality such as the social environment and the contractual and physical working conditions. Next, we construct a framework for assessing the impact of AI on job quality through its effect on the functions of the organisation, based on six AI use cases. Besides the traditional use case of automation of production or service tasks, we find five more use cases for the automation of management activities, also known as algorithmic management: (1) algorithmic work method instructions, (2) algorithmic task coordination, (3) algorithmic scheduling, (4) algorithmic surveillance of effort and performance, and (5) algorithmic staffing (including selection and recruitment).
Through an extensive literature review, we collect existing empirical evidence on these six AI use cases and show how they impact each dimension of job quality. Using the job demands–control/resources model (Karasek, 1979; Demerouti et al., 2001), we first assess how AI either increases or decreases job demands (like work intensity and complexity) and job resources (like autonomy over planning and method, and skill discretion) for each use case (see also Nurski, 2021). We then show how these changes in job design spill over to the social and physical environment of work and finally put pressure on contractual employment conditions as well. We exemplify each use case by building personas that bring together empirical evidence from different sources into an illustrative story, easily understandable for both policymakers and business managers.
We finish this contribution by discussing how the previously described effects of AI on job quality are not technologically predetermined but are the result of choices by technology designers (AI developers) and job designers (managers). Certain features of technology design might moderate the job quality impact, namely transparency, fairness, and human influence (Parent-Rocheleau and Parker, 2021). We illustrate why and how technology design might fail, either through incompleteness of data or because of the designer's intention when specifying the algorithm's objective function. We examine how technology design and implementation can be improved through worker participation. We briefly discuss the potential pitfalls and shortcomings of the proposed AI Act (European Commission, 2021a) and the proposed platform work directive.
Middle management in organizations has been acknowledged as pivotal for stability and for legitimizing change initiatives. Research so far has built on the assumption of vertical layers in organizations and elaborated middle management in terms of intra-organizational sensemaking, politics, and information processing. In the digital era, these assumptions, and the role of current theories, are likely to change. Data lakes and analytics capabilities give all layers of the organization unlimited access to information. And the pace of change leads to major challenges in terms of intra-organizational digital practices and extra-organizational networking. Research so far has paid premium attention to strategic change and to the change of work at the operational level. Middle management requires more attention to keep up with major changes.
Our objective in this abstract is to explore new directions for understanding the reshaping of middle management in the digital era.
This work demonstrates an example of how artificial intelligence, and specifically natural language processing (NLP), may be applied to automate aspects of legal practice. After demonstrating this application, I consider the implications of legal automation more generally.
In common law jurisdictions, like the U.S., U.K. and Australia, judges and lawyers construct their arguments by drawing on judicial precedent from prior opinions. Judges cite precedent in their opinions and apply it to the facts of a case to build incrementally towards a final judgement. Lawyers use precedent in their legal briefs to argue why one party to the case should prevail.
U.S. case law currently consists of around 6.7 million published judicial opinions, written over 350 years. The process of extracting the correct precedent from this daunting corpus is a fundamental part of legal practice. It is estimated that American law firm associates spend one-third of their working hours conducting legal research (Lastres, 2015). Lawyers rely on legal research platforms to access and search legal precedent, which charge $60 to $99 per search, a cost that is ordinarily passed on to clients (Franklin County Law Library, 2020). Access to justice continues to be a serious global problem. For example, “86% of U.S. civil legal problems reported by low-income Americans received inadequate or no legal help” (Legal Services Corporation, 2017). Meanwhile, attorney fees continue to rise and are approaching $300 per hour in the U.S. (CLIO, 2020). Thus the price of legal advice is becoming increasingly unaffordable, and access to justice is diminishing accordingly.
This paper presents a novel NLP approach to predicting judicial precedent relevant to a given legal argument by training NLP models on legal arguments made by U.S. federal judges. Given the time, expertise, and costs associated with identifying relevant precedent, this task represents a major barrier for widespread access to justice. The goal of this work is to aid attorneys in drafting legal briefs, reducing time and money spent on legal research while increasing access to justice and improving the quality of legal services.
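A common baseline for the retrieval side of this task is TF-IDF similarity between a query argument and candidate opinions; the three-document corpus below is invented for illustration, and the trained NLP models the paper describes would use far richer representations than bag-of-words:

```python
import math
from collections import Counter

def tfidf(docs):
    """Bag-of-words TF-IDF vectors (smoothed IDF) as sparse dicts."""
    tokens = [d.lower().split() for d in docs]
    df = Counter(w for toks in tokens for w in set(toks))
    n = len(docs)
    return [{w: c * math.log(1 + n / df[w]) for w, c in Counter(toks).items()}
            for toks in tokens]

def cosine(a, b):
    dot = sum(v * b.get(w, 0.0) for w, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_precedents(query, corpus):
    """Indices of corpus documents, most similar to the query first."""
    vecs = tfidf(corpus + [query])
    q = vecs[-1]
    return sorted(range(len(corpus)), key=lambda i: -cosine(q, vecs[i]))
```

Even this crude ranking surfaces lexically related opinions first; the point of learned models is to go beyond shared vocabulary to the argumentative relevance of a precedent.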
Historically, mechanisation of production has always been accompanied by questions about its impact on the incentive to reallocate resources, with a natural focus on the substitutability of labour (Mokyr et al. 2015). However, labour substitution is only one of the effects of automation. In this paper, we study whether the adoption of robot technology influences the rate and direction of innovative activities.
In essence, robots are capital goods. However, contemporary robots are depicted as increasingly ‘malleable’, or flexible, capital goods – multi-purpose equipment capable of executing different tasks with little re-programming. Growing robot flexibility is a clear trend, as robot technology is augmented by other technologies characterising the fourth industrial revolution (Benassi et al. 2022; Martinelli et al. 2021), both hardware (e.g., sensors or additive manufacturing technologies) and software (e.g., artificial intelligence algorithms). Robots become a component in larger systems, such as cyber-physical systems and advanced digital production technologies (UNIDO, 2019). As such, it is possible to hypothesise that robot adoption will induce changes in firms' behaviours that go beyond the well-known replacement and productivity effects on employment (Autor, 2019) and that are more ‘enabling’ in nature. At the same time, current robots are “the most recent iteration of industrial automation technologies that have existed for a very long time” (Fernandez-Macias et al. 2021) and continue to operate in specific and constrained environments. Hence, their enabling capability might be limited if firms are unable (or do not plan) to exploit it. We shed new light on this by measuring how product innovation and R&D expenditure change when robots are adopted at the firm level. In doing so, the paper contributes to the growing, yet nascent, strand of studies analysing firm-level data on robot adoption, with a unique perspective on the nexus between the adoption of industrial robots and product innovation performance.
We exploit a unique dataset of Spanish firms from the Survey on Firm Strategies (Encuesta Sobre Estrategias Empresariales, or ESEE) and implement an event-study approach (a generalised diff-in-diff model) to relate different indicators of product innovation to robotisation. We show that robot adoption is negatively associated with product innovation in the long term. We isolate the effect of large- vs small-scale mechanisation investments and find that the negative association with product innovation disappears for large-scale investments. Firms in the top quartile of the investment distribution experience an increase in R&D expenditure (but not innovation), while firms in the bottom quartile display a negative relationship with both product innovation and R&D. We interpret the findings along a few lines of reasoning and converge on the idea that a conditional (on the scale of investment) substitutability exists between robotisation (process change) and the introduction of new products. In particular, implementation costs and the returns to learning-by-doing in process technology following robot adoption can divert resources away from product innovation. Furthermore, robots – even when flexible – might display enabling capabilities only when introduced into flexible production processes. More ‘classic’ and standardised mass production processes might not benefit from robots' full potential.
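The generalised diff-in-diff logic can be illustrated in its simplest 2x2 form (one treated group, one pre/post period); the event-study specification used in the paper additionally includes unit and time fixed effects and adoption-relative leads and lags, which this sketch omits:

```python
def diff_in_diff(y, treated, post):
    """Classic 2x2 DiD estimate: the change in the treated group's mean
    outcome minus the change in the control group's mean outcome.
    `treated` and `post` are parallel 0/1 indicator lists."""
    groups = {}
    for outcome, t, p in zip(y, treated, post):
        groups.setdefault((t, p), []).append(outcome)
    mean = lambda vals: sum(vals) / len(vals)
    return ((mean(groups[(1, 1)]) - mean(groups[(1, 0)]))
            - (mean(groups[(0, 1)]) - mean(groups[(0, 0)])))
```

With product innovation as the outcome and robot adoption as treatment, a negative estimate corresponds to the negative long-term association the results report; the event-study version traces this effect period by period around adoption.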
We take a step further by discussing whether the types of robots under analysis are the ‘right’ robots to induce innovation. In fact, not all instances of process mechanisation and robotic equipment may be malleable enough to shape technological opportunities and to affect the incentive to engage in new product discovery, design, and development. The specific type of robots adopted does matter. In particular, innovation-inducing robots are those characterised by being research tools, invention machines, or IMIs. These types of robots are used to aid the search process over, for example, the space of materials to be employed or the space of designs to be trialled and prototyped. Industrial robots, such as the majority of those captured by our data, might not completely lack the capability to enable new activities; however, they are not IMIs and have less scope for facilitating innovation-related search. New IMIs, such as certain types of AI algorithms, are mainly software technologies, which are used in knowledge-intensive domains and are not yet seamlessly integrated into the architecture and functionalities of industrial robots. By contrast, robots are employed in the manufacturing sector to increase the rate of execution and the precision of factory-floor tasks under specific conditions.
To our knowledge, this paper is the first to extend the literature on automation to the microeconomics of innovation and firms’ strategic decision making. While exploratory in nature, our results suggest that non-linear mechanisms are at work within companies when robots are used to re-organise production activities. We conclude the paper by discussing the implications of our findings for policy.
Big Data (BD) has the potential to help firms overcome the subjective limitations of humans and lead to ambidexterity and sustainable financial results. At the same time, firm ambidexterity is a unique precursor to adopting BD. This research-in-progress aims to reconcile these differing views by identifying the interplay between an incumbent firm’s ambidexterity, BD, and top management roles. We adopt an explorative case study design to conduct semi-structured interviews with data scientists, top managers, and decision-makers within an incumbent firm. The interviews will be transcribed and coded in qualitative data analysis software (NVivo) to uncover richer findings. We aim to lay the foundation for future research avenues around BD, top management’s roles, and ambidexterity in incumbent firms.
When you try to build an IKEA cabinet, how often are you bothered by its instruction manual? While holding a wooden plank in one hand and a screwdriver in the other, you still have to find a way to flip the pages of the paper-based manual, only to discover that you have absolutely no idea which steps to follow. No matter how experienced you are with the work, the inconvenience associated with paper-based instructions and the limited information this format provides constrain the way we perform tasks.
Now consider the industrial context. Frontline workers, such as maintenance inspectors, face the same issue in their daily work. Workers are usually in the field with their hands full of tools, communication devices, and paper-based instruction manuals, but they still need their hands to be available for physical tasks (Coon, 2018). In addition to the inconvenience of carrying paper-based manuals, workers have to divert their attention from the task to read and reread the instructional steps, which disrupts their workflow. Moreover, while reality is three-dimensional, the information provided by paper-based instructions is limited to two-dimensional pages; thus, extra effort is required to interpret and apply them to the working environment (Wuttke et al., 2022). These issues may seriously hinder workers’ performance. To reduce the inconvenience posed by paper-based manuals, augmented reality (AR) smart glasses (AR for short) are being introduced in companies to improve workers’ performance. However, due to the novelty of AR technology, current streams of literature focus more on the technical side of AR than on the behavioral or managerial sides (Kim et al., 2016). Therefore, in this study, we aimed to bridge this gap by investigating the effect of AR on workers’ performance and exploring its underlying mechanism as well as its implications.
Nowadays, firms increasingly use AR to improve business (Wuttke et al., 2022). For example, Boeing has reported that AR reduces by 35% the training time new employees need to learn how to assemble aircraft wings, while the logistics giant DHL has increased productivity by 25% through AR-guided picking (Porter & Heppelmann, 2017). From the perspective of Industry 4.0, digitalization with technologies like AR inherently offers new ways to address the future of work (Olsen & Tomlin, 2020). In these industrial cases, it is obvious that AR has the potential to benefit business. However, seldom has any research provided empirical evidence to support the effect of AR. There are some exceptions. For example, Schein and Rauschnabel (2021) studied AR in a manufacturing context, but they mainly focused on barriers to AR adoption from the perspective of technology resistance using a survey method, while our study uncovers the short-term effect of actual AR usage in a field experiment during the AR implementation phase. Moreover, Wuttke et al. (2022) and Gürerk et al. (2018) examined the learning effect of AR on workers’ performance, while our study reveals the underlying mechanism of AR usage on performance from the perspective of cognitive processing.
Essentially, AR enables futuristic ways of information delivery and transforms analytical data into a virtual layer on the real world. For example, in the industrial maintenance context, AR improves how workers visualize and consequently access all the information, how they perceive and follow guidance and instructions, and how they interact with the working environment. Based on information processing theory and cognitive load theory, AR-based instructions require less attention splitting to access information than paper-based instructions, implying a significantly lower extraneous cognitive load. More importantly, AR can provide information within the workers’ immediate field of vision while freeing their hands, which may enhance their ability to process information and facilitate their work (Coon, 2018).
We collaborated with China Southern Airlines and conducted a field study to investigate the use of AR in an airplane maintenance context during the AR implementation phase. We designed a within-subject experiment and tested the inspection performance of 80 workers before and after using AR. All the participants were first observed during their routine inspections with paper-based instructions. After proper training in AR usage and pilot inspections (at least three, to ensure familiarity with AR-based instructions), all the participants were observed again during inspections using AR-based instructions. By collecting third-person video recordings and survey data, our objective was to examine the impact of short-term AR usage on workers’ information processing efficiency and their inspection effectiveness.
The results of our experiment suggest that, after short-term AR usage, there is a significant increase in the effectiveness of the inspections. The effect of AR on this improvement is mediated by the efficiency of information processing. This means that inspectors process information more efficiently when using AR-based instructions (versus paper-based instructions), which improves their inspection effectiveness. This mediation effect is stronger when inspectors perceive the instructions to impose a lower extraneous cognitive load, because the way inspectors perceive how instructions are presented (i.e., via AR or paper) significantly moderates the effect of using AR on the efficiency of information processing.
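A minimal sketch of this mediation logic on simulated data, using Baron-Kenny-style regressions with inspector fixed effects; all variable names, effect sizes, and the data itself are invented for illustration and are not the study’s measures or analysis:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative within-subject data: 80 inspectors observed twice
# (paper-based vs AR-based instructions).
rng = np.random.default_rng(1)
n = 80
base = rng.normal(0, 0.5, n)  # inspector-specific baseline ability
rows = []
for i in range(n):
    for ar in (0, 1):
        # AR raises information processing efficiency (the mediator),
        # which in turn raises inspection effectiveness.
        efficiency = base[i] + 0.6 * ar + rng.normal(0, 0.3)
        effectiveness = base[i] + 0.5 * efficiency + rng.normal(0, 0.3)
        rows.append({"insp": i, "ar": ar,
                     "efficiency": efficiency,
                     "effectiveness": effectiveness})
df = pd.DataFrame(rows)

# Path a: AR -> mediator; path b: mediator -> outcome, controlling for AR.
a = smf.ols("efficiency ~ ar + C(insp)", data=df).fit().params["ar"]
b = smf.ols("effectiveness ~ efficiency + ar + C(insp)",
            data=df).fit().params["efficiency"]
print(f"indirect effect a*b = {a * b:.2f}")
```

Modern mediation analyses would typically add bootstrapped confidence intervals for the indirect effect and an interaction term for the moderator; this sketch shows only the core decomposition.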
Our study provides managers with insights into how AR improves workers’ performance. Our results indicate an immediate short-term performance gain after AR has been used only a few times. Specifically, by presenting information with a lower extraneous cognitive load, AR enhances workers’ information processing efficiency, consequently improving inspection effectiveness.
Research Context and Objective
In this study, we examine why AI-based algorithms frequently fail to fulfil their intended tasks coherently. Artificial intelligence (AI) has become a promising field of innovation, representing broadly “a system’s ability to interpret external data correctly, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation”. AI includes machine learning (ML), defined as the use of large datasets by machines to solve various problems. ML relies on algorithms, that is, formal sets of logical rules to process data.
The rapid adoption of AI applications in many social and economic domains stems from their strength in efficiently recognizing patterns in large datasets and predicting future trends and trajectories. However, as AI-based applications increasingly pervade business, personal, and social life, repeated incidents of algorithmic failure continue to manifest. For instance, “GPS provides faulty directions, or biometric systems misrecognize people”. Moreover, the increasing power and pervasiveness of AI-based algorithms risk amplifying the frequency and severity of failures. This evolution of AI could have considerable implications for business and wider society [5, 6].
AI, ML, and algorithms originate in the computer science discipline, which primarily addresses algorithmic failure as a technical topic. Most computer science contributions examining AI failure focus on failure detection mechanisms and technical approaches to improving algorithms. For example, prior work classifies failures of AI applications for autonomous weapons, employment automation, control failures, and self-replicating machines. Technical approaches to AI failures are algorithm-centric, assuming that improved algorithms and technical mechanisms provide solutions and safeguard against future failures. However, research shows that the success or failure of technology is not merely a technical issue but has a broader management angle.
Against this background, we investigate the failures of AI-based algorithms from a management perspective. Relatedly, prior work argues that technology projects fail because they do not correspond to the problems they were supposed to solve or the needs they were supposed to fulfil. Scholars commonly represent AI and ML as the next step of technology evolution, enabling AI-based algorithms to learn from data autonomously. In management terms, it is crucial to understand whether AI failures occur due to an incompatibility between given problems and AI-based algorithms. To do so, we need to view algorithmic failure in its entirety, including managerial problems and their situating context. The managerial perspective helps develop a better understanding of the non-technical aspects of AI failure and, in turn, positions AI-based algorithms more effectively within an organizational technology architecture.
Consequently, we leverage the problem-solving literature in our study to discern common problems for which AI-based algorithms are deployed. This theoretical lens helps us understand the distinct dimensions of the problems that AI-based algorithms address, such as their formulation, situating context, and temporal characteristics. In parallel, we analyze the constituent features of algorithmic problem-solving capabilities, such as how their inherent rules have been formulated and how they seek to solve problems. We specifically focus on identifying compatibilities and incompatibilities between managerial problems and algorithmic problem-solving approaches.
We bring perspectives from the intelligence literature to examine whether an algorithm exhibits the intelligence type specific to the problem. In so doing, we develop a well-grounded understanding of why algorithms fail. It is increasingly recognized that algorithms, being sets of rules, are well suited to formally well-defined problems and seek optimal solutions. However, despite the prevalent use of algorithms for well-defined problems, failures are common. Through our study, we highlight that problems’ formulation, situating context, and temporal nature can make them dynamic and evolutionary; such problems then require intelligence types distinct from those exhibited by algorithmic capabilities, which leads to failures.
We aim to make several potential contributions to research and practice. First, we bring a managerial perspective to the algorithm failure literature by highlighting that problem formulation, often discussed in management, plays a central role in the frequent failures of AI-based algorithms. We also contribute to the problem-solving literature by conceptualizing algorithmic failures as a context and highlighting specific problem characteristics that are not relevant for traditional problems but become relevant with the emergence of AI. Second, we highlight that algorithm failures cannot be effectively managed if treated as mere technical problems. Awareness of problem characteristics and their incompatibilities with algorithm capabilities can help develop management practices to avoid failures. Third, our study highlights the non-technical aspects that need to be considered when designing better AI-based algorithms for practice.
We will use secondary data sources to compile documented AI-based algorithmic failures and follow a two-step method to analyze the data. First, we will use topic-modelling techniques to identify failure themes. Second, we will code all information regarding problem dimensions, algorithm capabilities, the intelligence required for specific problems, and the intelligence exhibited by specific algorithms. Finally, we will synthesize insights from these two steps to develop a framework for algorithm failures and offer detailed propositions for research and practice.
Introduction and Background
The ongoing war for talent shapes the future of organisations and is a major challenge for the entire field of talent management (TM). TM involves (1) recruitment, staffing, and succession planning, (2) training and development, and (3) retention management. Nonetheless, organisations struggle substantially to manage the scarcity of talent effectively. The growing capabilities of artificial intelligence (AI) show potential for creating a strategic advantage in this war for talent (Brock & von Wangenheim, 2019). Despite the rapidly increasing number of studies examining how AI impacts TM, the research lacks a comprehensive review. To address this void and thus support scholars and practitioners by providing an overview of extant knowledge on this topic, we conducted a systematic literature review (SLR) on the use of AI in TM.
We based our SLR on the PRISMA approach (Page et al., 2021). To identify relevant literature, we searched several electronic databases, which provided us with 3,714 publications. We took a stepwise approach to select a conclusive final sample (see Figure 1): based on previously defined criteria, we excluded non-fitting articles, with each step reducing the initial sample, eventually yielding a preliminary sample of 37 relevant articles. Based on these articles, we conducted a backward and forward search, which resulted in an additional 10 relevant articles. In sum, we obtained a final sample of 47 articles published between 2015 and 2021 on the influence of AI on TM.
Findings and Conclusion
We identified three overarching streams in the literature regarding AI in TM: AI use for recruitment, for training and development, and for retention management, with recruitment being the most prominent. Regarding recruitment, the literature emphasized prerequisites for and barriers to AI adoption. Cost-effectiveness of AI use, employee readiness, top-management support, and technology vendor support (Pillai & Sivathanu, 2020) were found to be crucial prerequisites for AI adoption. In contrast, security risks, privacy concerns, and high in-house technological complexity hinder AI adoption in recruitment (Pan et al., 2021; Pillai & Sivathanu, 2020). Further, the literature showed how AI can improve recruitment efficiency across various TM activities, such as reaching out to candidates, job-candidate matching, screening of candidates, and candidate assessment and evaluation (Black & van Esch, 2020). For instance, machine learning (ML) algorithms trained on video interviews have been successfully employed to predict applicants’ personality traits (Hickman et al., 2021). However, the literature also showed the importance of both AI expertise and domain expertise for successful AI application (van den Broek et al., 2021). Despite possible use cases in training and development, such as using AI to reveal potential skill gaps (Malik et al., 2020) or introducing AI-based animated characters to provide feedback on learning progress (Vrontis et al., 2021), research in this area is scarce. In relation to retention management, the prediction of employee turnover through AI applications (e.g., ML for pattern detection and neural networks) is a well-researched phenomenon (e.g., Choudhury et al., 2021; Teng et al., 2021; Yuan et al., 2021). Additionally, AI has been employed to understand employees’ mood swings, enabling targeted countermeasures to prevent turnover (Kshetri, 2021).
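To make the turnover-prediction use case concrete, the following minimal sketch trains a small neural network on synthetic HR data; every feature, effect size, and parameter choice here is invented for illustration and is not drawn from the cited studies:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic employee records: three standardized features, e.g.
# tenure, satisfaction, salary percentile (all invented).
rng = np.random.default_rng(2)
n = 1000
X = rng.normal(size=(n, 3))

# Simulate turnover as more likely with low satisfaction (col 1)
# and low salary (col 2).
logits = -0.5 - 1.5 * X[:, 1] - 1.0 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

# Train a small feed-forward network and check holdout performance.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                    random_state=0).fit(X_tr, y_tr)
print(f"holdout accuracy: {clf.score(X_te, y_te):.2f}")
```

Real turnover models, as the reviewed studies suggest, would draw on far richer behavioural features and would need careful attention to class imbalance and fairness before deployment.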
The second stream comprises literature on individuals’ perception of AI applications in TM. Research has addressed user acceptance of, and willingness to use, AI applications. Literature on user acceptance showed that AI in the application process is accepted when applicants are informed about its use early in the process. However, AI evaluation and selection decisions are more likely to be accepted by applicants, recruiters, and managers when matched with human decision-making (Laurim et al., 2021). Literature on users’ willingness to use AI showed contradictory results, as known AI use reduced willingness to apply for a position (Mirowska, 2020), whereas incorporating trendiness, fair treatment, and intrinsic rewards for using the AI mitigated resentment and increased users’ willingness to use AI (van Esch & Black, 2019).
The third stream targets algorithmic decision-making in TM, particularly addressing AI bias and fairness and the effects of AI decisions on users. Extant literature showed how biased decisions, for example regarding gender, manifested in ML training data (Köchling & Wehner, 2020). In particular, algorithmic decisions were perceived as less fair and more reductionist than human decisions (Newman et al., 2020). Regarding algorithmic decision-making, the literature identified a potential overconfidence in the algorithms’ objectivity, potentially even devaluing human decision-making, which was perceived as subjective and deficient compared to the algorithms (Giermindl et al., 2021).
With our SLR, we offer a comprehensive overview of extant studies on AI in TM. We thus enable scholars to identify possible future research options and practitioners to base decisions on the adoption and implementation of AI for TM on a scientific foundation. Thereby, we contribute to both research and practice.