BACK TO WHAT TRULY MATTERS:
Platforms, AI, and Youth in the Workplace
Editors: Jovana Karanovic and Jelena Sapic

Executive Summary
To inspire policy making by providing an all-round view, this report presents a concise overview of the major challenges, and of the points of agreement and disagreement, in relation to three timely future-of-work topics: (i) the impact of the platform work directive; (ii) AI in the workplace; and (iii) youth employment and workplace well-being.
Resulting from constructive discussions facilitated among 32 organisations representing different viewpoints and expertise, this report highlights areas that require further deliberation as well as areas of consensus. In this way, the report provides a practical guide for policy makers, as well as for union and business leaders who value a multitude of perspectives.
The impact of the platform work directive
The European Commission's draft proposal for a new Directive is a bold move that has generally been well received by the majority of stakeholders, from companies to unions. The critical piece of the draft proposal that ignited debate is the legal status of platform workers.
While some platform workers are genuinely self-employed, the European Commission states that many others are bogusly self-employed; as the Commission proposes, they should be reclassified as employees, enjoying the same rights.
Furthermore, there is disagreement about the tools to achieve increased workers' protection. Generally, the criteria provided would help judges, regulators, and interested parties to properly classify platform workers; in this sense, the proposed Directive increases certainty and facilitates business planning.
Yet, while the proposed Directive increases legal certainty, labour protection coverage, transparency, and predictability in platform work, it will not solve all the problems encountered in this area of work. As the discussions revealed, there is considerable room for improvement. Below we list key policy recommendations that can improve upon the proposed instrument:
- Clarify employment status indicators without undermining the achievements in case law at the national level.
- Provide a "rulebook" with criteria that platforms can use to rebut the presumption.
- Ensure that genuinely self-employed individuals can maintain that status if they so wish.
- Extend social protection to all workers, regardless of their legal status.
- Adopt a dedicated legal instrument to extend the provisions on algorithmic management to cover all workers subject to automated monitoring and decision-making systems (not just those working in the platform economy).
AI in the workplace
Data-intensive technologies such as Artificial Intelligence (AI), augmented reality, the Internet of Things, and face-recognition systems can potentially increase organisations' productivity and efficiency. At the same time, they may challenge the fundamental rights of workers (e.g., anti-discrimination, dignity, privacy, collective rights) and what constitutes the notion of decent work.
AI is used across sectors, from ride-hailing and food delivery to healthcare and the public sector. Moreover, the use of remote monitoring and surveillance tools surged 108% in the United States and the EU in the first months of the pandemic, leading to a spike in digital monitoring that has reduced worker autonomy and job control.
The pressing question now is whether the main stakeholders of employment relations (e.g., workers, trade unions, employers, and institutions) can transition to a future in which technological developments breed higher productivity as well as sustainability, equity, and fairness. Some of the recommended actions include:
- Encourage workers to exercise their data protection (privacy) rights as envisioned in the GDPR (right to be informed; right of access; right to rectification; right to erasure) and non-discrimination rights.
- Ensure that the exercise of these rights is presented in understandable and explainable formats, including through toolkits and blueprints.
- Support AI design that takes into account gender, race, ethnicity, etc. Although "datasets free of errors" are not feasible, biases and prejudices embedded in datasets should be exposed and minimised.
- Establish public registries of AI systems used in the world of work. These registries would enable the inspection, auditing, and dissemination of a better understanding of applied AI solutions.
Youth employment and workplace well-being
The COVID-19 pandemic has had particularly negative effects on youth, as well as on the emotional well-being of workers in general. Global youth employment dropped almost three times more than adult employment in 2020 (decreases of 8.7 and 3.7 per cent, respectively) (ILO, 2021). The chances of young people (re)entering the labour market following the pandemic were hampered as employers focused on retaining, not recruiting, workers.
The pandemic also led to one in four youth not being engaged in employment, education, or training (NEET), the highest rate in the last 15 years (ILO, 2022). When it comes to emotional well-being, the pandemic caused a 25 per cent increase in anxiety and depression globally, affecting youth and women in particular (WHO, 2022).
We recommend the following policy actions to support youth and ensure workplace well-being:
- Create public-private partnership programmes for upskilling and reskilling.
- Foster the uptake of apprenticeships to support the transition from education to work.
- Promote and support "participative entrepreneurship", a broad concept for strategies that create collective ownership along three axes: (i) a voice in operational decisions, (ii) strategic co-ownership, and (iii) financial co-ownership, which ranges from profit-sharing to shareholding.
- Promote and support cooperative entrepreneurship as a model of solidarity, cooperation, and mutualisation of risks stemming from a wider social, economic, political, and ecological landscape.
- Enable access to mental health support programmes for all workers, regardless of their status (e.g., employee or self-employed).
Introduction
What is the impact of new forms of work on businesses and the way we work? How are data-driven technologies such as artificial intelligence (AI) changing work as we know it? What can be done to create better working conditions for current and future generations?
New technological advancements such as AI have undeniably introduced a number of efficiencies in the workplace, as well as novel ways of conducting work. However, they have also brought new challenges, including the proliferation of short-term work arrangements that appear more flexible and efficient yet may be infused with insecurity and precariousness.
The years 2021 and 2022 were marked by significant policy efforts on the side of the EU. In particular, efforts have been made to address the regulatory challenges posed by platform work (e.g., the Proposal for a Directive of the European Parliament and of the Council on improving working conditions in platform work), AI deployment at work (e.g., the Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts), youth unemployment (e.g., initiatives within the European Year of Youth), and post-COVID challenges to well-being in the workplace (e.g., the EU Strategic Framework on Health and Safety at Work 2021-2027). These initiatives are at different stages of the policy cycle (see Table 1 below), which determines the scope of the next actions that may have a decisive impact on the future of work.
Table 1. Policy cycle (based on the Oxford Policy Management approach; for more, see: https://www.opml.co.uk/our-approach/working-across-the-policy-cycle)

| Research | Policy options | Policy developments | Organisational reform | Capacity building | Monitoring and evaluation |
|---|---|---|---|---|---|
| Provides new knowledge and context-specific information that inform the actions of policy makers. | Analyses of different policy responses that lead to the most effective reform, simultaneously benefiting the maximum number of people. | Selected policy options are further developed taking into account the specific political, economic, social, and environmental context. | Changes in governmental structures and organisations are necessary to ensure increased usability of existing resources for better impact. | Sharing knowledge and experience with national and international actors through partnerships contributes to the sustainable outcomes of policy reforms. | Monitoring and evaluation of both activities and results are crucial in feeding new initiatives and actions on the ground. |
Resonating with the topics at the top of the EU agenda, the Reshaping Work Dialogue facilitated constructive discussions among 32 organisations representing different viewpoints and expertise, to further inspire policy-making and provide concrete solutions to pressing challenges, specifically regarding: (i) the impact of the platform work directive; (ii) AI in the workplace; and (iii) youth employment and workplace well-being. Policy recommendations are discussed and tailored considering the current stage of the policy cycle for each topic.
Multistakeholder approach
The key added value of this report is its multistakeholder approach, a methodology pioneered by the Reshaping Work foundation, which gathers a multitude of stakeholders from companies and start-ups to unions and research institutes, ensuring that every party has a say in the debate that concerns them. As such, this approach has inclusivity and diversity at its heart.
To ensure that all views are represented and that the report has a strong factual and scientific grounding, we relied on three sources of input. First, we facilitated four roundtable discussions (between April and October 2022) among 32 organisations that represented different perspectives and/or had extensive expertise on the topics discussed. The roundtable discussions focused on addressing the major challenges pertaining to platform work, AI in the workplace, and youth employment and well-being, the three topics addressed by this report. The goal was to expose this diverse set of stakeholders to different opinions, creating a safe place where views could be exchanged, positions could be negotiated, and novel solutions could come to light. Meeting minutes were taken and shared following each roundtable discussion, giving participating organisations a chance to review them and provide comments.
Second, we conducted a systematic literature review of the topics discussed to provide a comprehensive overview of the major challenges faced, breaking new ground for policy interventions. Finally, we were informed by discussions and speeches at the Reshaping Work 2022 conference in Amsterdam, as well as by the knowledge of three independent experts who peer-reviewed the report. Taken together, these steps ensured that the information and policy recommendations presented in this report reflect balanced views as well as organisational and sectoral perspectives, building upon an array of studies and reports that precede it.
How to use this report
The report covers three topical areas: the impact of the platform work directive (Section 1), AI in the workplace (Section 2), and youth employment and well-being (Section 3). It is primarily intended for policy-makers, as it outlines points of convergence and divergence between different stakeholders and provides a resourceful repository of potential solutions. The report may also be useful to companies, unions, researchers, and international organisations that value a multitude of perspectives and want to familiarise themselves with the debate surrounding these timely future-of-work topics. Below, we provide a summary of what each section covers and to whom it may be useful:
- Section 1 on the impact of the platform work directive ("The EU Directive Clearing New Paths to Decent Platform Work") is useful for anyone working on public policy and legal issues concerning diverse (non-standard) forms of employment and worker rights more generally. It is also useful for policy makers who strive to create a healthy competitive environment in Europe.
- Section 2 on AI in the workplace ("AI in the World of Work: Still No Silver Bullet Solution for Decent Work") is useful for policy makers looking to regulate the use of AI at work. It may also be of use to data scientists, unions, and legal practitioners, as it points to the major challenges faced with the introduction of AI at work, as well as possible ways forward to ensure decent working conditions in light of data-driven disruptions.
- Section 3 on youth employment and workplace well-being ("Youth Employment and Workplace Well-Being: Pursuing a Brighter Future") is useful for policy makers and civil society organisations with youth integration in the labour market as a strategic objective. The section also covers well-being at the workplace and emphasises the importance of devoting attention to mental health risks and prevention, which may be informative for HR professionals and companies more generally.


THE EU DIRECTIVE CLEARING NEW PATHS TO DECENT PLATFORM WORK
Despoina Georgiou and Jovana Karanovic

Introduction
In December 2021, the European Commission published its proposal for a new Directive on improving the working conditions in platform work (European Commission, 2021). The proposal is considered the boldest regulatory move at the international level surrounding platform work to date, possibly setting a precedent for the rest of the world to follow.
Delivering on the European Pillar of Social Rights, the European Commission's proposal for a new Directive aims to (i) tackle employment status misclassification; (ii) ensure fairness, transparency, and accountability in algorithmic management; and (iii) improve enforcement of the applicable rules.
Platform work can have a positive impact on the EU labour market. It can boost productivity, unleash creativity, and promote more flexible and efficient working conditions. Moreover, it can facilitate inclusivity and labour market integration by providing a source of income for people who are disadvantaged or traditionally excluded from the labour market (Andjelkovic, Jakobi, and Kovac, 2021).
At the same time, platform work can be associated with long and unsociable working hours (ILO, 2021; Eurofound, 2021) and low and unstable incomes (Piasna et al., 2022; Pulignano et al., 2021; Vandaele, 2018), leaving many platform workers exposed to physical and psychosocial health risks (Möhlmann, 2021).
Platform workers are usually not covered by labour and social protection legislation to the extent employees are. Platform workers are often classified as "self-employed" and enjoy a limited number of rights compared to "employees" or "workers" for the purposes of EU law. However, some categories of platform workers are under the de facto control of the platforms to which they provide their services (Aloisi & De Stefano, 2022). The Commission believes a number of these workers to be bogusly self-employed (European Commission, 2021). As the proposal for the new Directive estimates, over 28 million people in the EU work through digital labour platforms, 5.5 million of whom are at risk of employment status misclassification.
Some Member States (e.g., France, Italy, and Spain) have already adopted protective legislation for certain categories of platform workers. Furthermore, employment tribunals in Spain, Italy, France, and the UK have litigated on the issue, reclassifying bogus self-employed platform workers as "workers" for the purposes of national labour and social law provisions (Hießl, 2022; Aloisi, 2022). Other countries, such as Estonia and France, provide national frameworks for self-employment.
Organisations that took part in the Reshaping Work Dialogue mostly agree that the adoption of a legislative instrument in this area of work is of paramount importance to facilitate business planning, promote development through fair competition, and provide greater certainty, clarity, and coverage, thus preventing a race to the bottom that could lead to the erosion of workers' rights. At the same time, most concerned parties took the view that there is significant room for improvement, yet there is disagreement on which provisions should be altered and how.
Reflecting on the latest research results, expert opinions, and the roundtable discussions facilitated by the Reshaping Work Dialogue, this section critically analyses the proposed Directive's impact and limitations, suggesting ways it can be improved to provide greater protection to vulnerable platform (and other categories of) workers.
Introducing a rebuttable presumption of "worker" status
One of the main aims of the proposed Directive is to tackle employment status misclassification in the platform economy. To address this issue, the proposed Directive defines a "platform worker" as "any person performing platform work who has an employment contract or employment relationship as defined by the law, collective agreements, or practice in force in the Member States with consideration to the case-law of the Court of Justice ('CJEU')". Hence, unlike some laws introduced in certain Member States that cover only platform workers engaged in a particular sector (e.g., the "Riders Law" in Spain, which covers only food- or grocery-delivery platform workers), the proposed Directive covers platform workers engaged in all forms of work, whether online or on-location (for the difference between online and on-location work, see Reshaping Work, 2021 and 2022). It would thus cover ride-hailing workers (e.g., Uber, Bolt) but also graphic designers and household workers who work via a platform (e.g., Upwork and Helpling, to name a few).
To help tribunals make the assessment, the proposed Directive introduces a legal presumption of "worker" status for persons controlled by a platform. The requisite "control" is considered to exist when at least two of the following five criteria apply:
- The platform effectively determines or sets upper limits for the level of remuneration;
- The platform requires the person performing platform work to respect specific binding rules regarding appearance, conduct towards the recipient of the service, or performance of the work;
- The platform supervises the performance of work or verifies the quality of the results of the work, including by electronic means;
- The platform effectively restricts the freedom, including through sanctions, to organise one's work, in particular the discretion to choose one's working hours or periods of absence, to accept or to refuse tasks, or to use subcontractors or substitutes;
- The platform effectively restricts the possibility of building a client base or performing work for any third party.
At the time of writing (this could change as a result of the final Platform Work Directive), individuals who fulfil two of these criteria will be presumed to be under the platform's control and will be granted "worker" status for the purposes of the Directive. The presumption, however, is rebuttable. Platforms can refute the presumption by proving that the contractual relationship is not an employment relationship according to national laws and the case law of the CJEU (Article 5). In making the assessment, attention should be paid to how the person actually carries out the assignment/task in question rather than to his/her formal classification in the contract (the "primacy of facts" principle).
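To make the trigger mechanism concrete, the minimal sketch below models the two-of-five test in Python. The threshold of two and the five indicators mirror the criteria listed above, while the data structure, field names, and example values are purely illustrative and not part of the proposed Directive.

```python
from dataclasses import dataclass

@dataclass
class ControlIndicators:
    """Illustrative flags mirroring the five control criteria in the proposed Directive."""
    sets_remuneration: bool            # determines or caps the level of remuneration
    binding_conduct_rules: bool        # binding rules on appearance, conduct, or performance
    supervises_performance: bool       # supervises work or verifies quality, incl. electronically
    restricts_work_organisation: bool  # limits hours, absences, task refusal, or substitutes
    restricts_client_base: bool        # limits building a client base or working for third parties

def presumption_of_employment(indicators: ControlIndicators, threshold: int = 2) -> bool:
    """Return True if at least `threshold` criteria apply, triggering the rebuttable presumption."""
    met = sum([
        indicators.sets_remuneration,
        indicators.binding_conduct_rules,
        indicators.supervises_performance,
        indicators.restricts_work_organisation,
        indicators.restricts_client_base,
    ])
    return met >= threshold

# Hypothetical example: a courier whose pay is capped and whose work quality is verified electronically
courier = ControlIndicators(True, False, True, False, False)
print(presumption_of_employment(courier))  # True: two criteria met, so the presumption applies
```

Note that the rebuttal stage (Article 5) and the "primacy of facts" assessment sit outside such a mechanical check: even where the presumption is triggered, the platform may still prove that no employment relationship exists under national law.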
Overall, most stakeholders who partook in the Reshaping Work Dialogue welcomed the proposed Directive as a step in the right direction. However, there is disagreement about the tools used to achieve increased worker protection. Generally, the criteria provided would help judges, regulators, and interested parties to properly classify platform workers; in this sense, the proposed Directive increases certainty and facilitates business planning. If adopted, the legal presumption of "worker" status, the reversal of the onus of proof, and the recognition of algorithmic management as a new form of control could lead to the reclassification of several bogus self-employed platform workers. However, the exact classification would be left to national provisions, given the way EU labour law is set up.
Potential loopholes and recommendations for improvement
At the same time, most stakeholders note that there is room for improvement. The breadth of the Directive's coverage will depend, to a large extent, on how it is transposed into domestic jurisdictions and interpreted by the CJEU, which could be activated via requests for preliminary rulings to assess the consistency between EU law and the domestic law that transposes it. The following aspects of the proposed Directive could prove problematic and hence require further elaboration.
Unions and advocacy groups that partook in the Dialogue argue that some of the indicators used to trigger the presumption might reinforce a stricter notion of "control" than the dynamic notion of control that has gradually been accepted at the domestic level by employment tribunals in several Member States (e.g., France, Germany, Italy, and Spain). This would reduce workers' chances of being reclassified by a court and thus limit the level of protection afforded to platform workers. For example, the freedom to choose one's working hours is used to signify autonomy, even though the person might not have any real choice over these matters (e.g., food delivery riders have to work during peak hours or in bad weather conditions to earn a decent living). For this reason, these stakeholders suggest re-wording or dropping these indicators of control so that they do not make access to protection more burdensome and challenging for vulnerable platform workers.
Unions and advocacy groups also fear that requiring two out of five indicators to be present for the legal presumption to apply might prove too restrictive. For this reason, they suggest that any one indicator should be enough to trigger the presumption. The European Confederation of Independent Trade Unions (CESI), for instance, points out that the issue may lie in the fact that platforms would self-categorise and assess themselves. For this reason, some unions advocate for dropping the five indicators of control altogether and instead adopting a broad presumption of employment status for all persons who perform work under the control of a platform.
An expansive approach has also been put forward by Member of the European Parliament Elisabetta Gualmini. In her report, Gualmini suggests expanding the indicators of "control" from five to eleven and moving them to the Preamble of the Directive (European Parliament, 2022). This way, the list of indicators becomes non-exhaustive and non-binding. According to her proposal, platforms would have to declare workers' contracts to the relevant social security bodies and labour inspectorates. If the declared contracts indicate that the individuals who provide services through them are "self-employed", the relevant authorities will have to assess, by applying the "primacy of facts" principle, whether the legal presumption of employment status applies.
On the other hand, platforms oppose broadening the presumption. As they argue, the legal presumption of "worker" status as it currently stands is too strong and would lead to the reclassification of many genuinely self-employed individuals as "workers" for the purposes of the proposed Directive. For platforms, most individuals who provide services through them prefer the "self-employed" status for the flexibility and autonomy it provides. If these persons were to be classified as "workers" for the purposes of the Directive, platforms say, they would be required to work specific shifts, restricting their freedom to choose their hours of work. Furthermore, platforms would have to pay them a wage regardless of whether work is available, which in turn would result in higher prices for final customers (paying for unused capacity) and a lower level of service (less capacity during peak hours). For these reasons, platforms advocate against expanding the legal presumption of "worker" status and, more generally, for a more balanced approach.
The proposed Directive does not specify the criteria platforms can use to rebut the presumption; this would likely be left to the Member States. The problems with access to justice in these cases are well documented, as the fight to obtain "worker" status can be a lengthy and costly process. Furthermore, unions fear that platforms may exploit the unclear framework and use it as leverage to avoid providing labour and social law protections to platform workers, for instance, by tweaking their terms and conditions. To avoid this, the criteria for rebuttal need to be clarified, though such clarification will likely be at the discretion of Member States. This would help (i) provide legal certainty; (ii) increase coverage of labour protection; (iii) level the playing field across platforms; and (iv) avoid lengthy and costly disputes.
According to the proposed Directive, "the party assuming the obligations of the employer shall be identified in accordance with national legal systems". The reference to the national definition of "employer" allows for divergent national approaches, opening the door to regulatory arbitrage but also hindering the scaling of platforms across Europe. Furthermore, this provision might be construed to mean that only one party can assume the obligations of the employer while, in reality, platform workers often work for multiple platforms (multi-homing) and might be dependent on some or even all of them to make a living. Thus, the final instrument must clarify (i) who can act as the employer and (ii) whether multiple platforms can assume that role.
Research has shown that some platforms use complex subcontracting networks to avoid employer responsibilities and make it difficult for platform workers to claim their rights (Fairwork, 2021; Pulignano, 2022). As the proposed Directive currently covers only contractual relationships between the digital labour platform and the individual, the danger is that platform workers hired through an intermediary agency might not be covered by the Directive, as has occurred in countries such as Spain when a similar presumption was introduced. In this instance, the contractual relationship would be between the individual and the agency, not between the individual and the platform. To prevent this from happening, the final instrument should clarify that platforms cannot absolve themselves of liability by using subcontracting chains. In this respect, guidance can be drawn from Article 12 of the Posted Workers Enforcement Directive (2014/67/EU) to ensure that platforms and subcontracting agencies are held jointly and severally liable for the provision of the rights provided by the proposed Directive.
The proposed Directive states that digital labour platforms will be covered only if they "involve, as a necessary and essential component, the organisation of work performed by individuals". In contrast, platforms whose primary purpose is to exploit or share assets (e.g., Airbnb) might not be covered by the proposed Directive. Legal scholars (e.g., Countouris, De Stefano, Prassl, Aloisi) have pointed out that these provisions need to be clarified so that they do not unjustifiably remove protection from vulnerable online workers.
Adequate access to social protection is important to improve the working conditions in platform work. A key tenet of the proposed Directive is that adequate access to social protection can be achieved domestically by correctly classifying the employment status of individuals performing platform work. This means that those who are granted "worker" status for the purposes of the Directive will likely be given access, at the national level, to social security rights such as healthcare, insurance against work accidents and occupational diseases, and (potentially) unemployment and pension benefits. Admittedly, whether adequate social protection is achieved will also depend on the national system in place and whether workers meet the locally applicable minimum thresholds (of income or working hours) to be in scope. Therefore, it is likely that there will still be social protection gaps.
Baseline social protection should be granted to all individuals engaged in platform work, regardless of their employment status.
In contrast, individuals deemed "genuinely self-employed" will not be covered by the proposed Directive and will not be fully entitled to EU labour and social protection. Unlike "workers", individuals classified as "self-employed" for the purposes of EU labour and social law provisions cannot benefit from protective EU legislation regarding equal pay, working time (holidays, rest periods, overtime pay), maternity and paternity leave and pay, health and safety, information and consultation, collective redundancies, and other benefits; that does not mean, however, that Member States cannot introduce more favourable provisions that provide social security to the self-employed, as the Directive simply sets a floor of rights, not a ceiling (Article 20). This lack of coverage could be particularly problematic, especially considering the precarious position of many persons engaged in platform work. As has already been noted, individuals who perform platform work are often exposed to financial, physical, and psychosocial risks (Georgiou, 2022; ILO, 2021; Piasna et al., 2022), not to mention unpredictable schedules and the shortcomings associated with output-related remuneration schemes. Taking into account the vulnerability of self-employed persons, especially in the case of exogenous events beyond their control (e.g., the pandemic), it becomes clear that they also need social protection.
The idea that the proposed Directive should be amended to provide social protection to the self-employed also raised several concerns. Malt and TaskRabbit expressed the view that their responsibilities lie solely with ensuring a meaningful income that is at least the minimum wage, providing fair access to work, and engaging in social dialogue. Concerns have also been raised about the EU's lack of legal competence to regulate social security matters (see, however, the Council Recommendation on access to social protection for workers and the self-employed).
On the other hand, many stakeholders who partook in the Dialogue (including certain platforms) criticised the Commission's approach for focusing too much on the "worker vs self-employed" question while failing to raise the standard of social protection for all persons performing platform work. Indeed, most interested parties (unions, researchers, advocacy groups, businesses, and some platforms) agreed that baseline social protection should be granted to all individuals engaged in this line of work, regardless of their employment status (on this issue, see Freedland and Kountouris, 2011).
While there is overall consensus regarding the need to provide baseline social protection to all persons performing platform work, disagreement exists vis-à-vis how such a system could be financed. More particularly, platforms and insurers note that the development of social protection systems in the platform economy is not without challenges due to the specific characteristics of platform work (e.g., the diversity of the workforce across platforms, multi-homing, cross-border activity, etc.). Industry-wide protection would need a mechanism to collect contributions from multiple platforms and centralise the benefits for each worker. Insurers stated that they are confident they could support the creation of a mechanism for basic industry-wide protection. Box 1 presents alternative solutions to the current proposal.
Box 1. Alternative solutions to the platform work directive proposal
- Decoupling social protection from employment status.
- A multi-stakeholder model to finance social protection schemes could be imagined, in which all the parties concerned (i.e., platforms, consumers, workers) participate in financing the minimum level of social protection that would ensure decent living conditions for all persons performing platform work.
- Insurers recommend incentivising the uptake of private protection and pension products and schemes that provide some level of insurance coverage, for instance, via increased investment in financial education, fostering insurance innovation, and providing tax benefits.
- Broadening collective bargaining so that all persons performing platform work (regardless of their employment status) can negotiate the types of social security coverage that should be provided (the French ordinances on social dialogue for platform workers provide an example of how this could be achieved).
Enhanced algorithmic management rights
It is well known that platforms use customer ratings, electronic monitoring, AI systems, and other technologically advanced tools (e.g., GPS, dynamic pricing software, etc.) to ensure matching efficiency and assess workers' performance. These methods, commonly referred to as "algorithmic management" (De Stefano & Taes, 2021), can constitute a form of control that can be just as, if not more, invasive than stricto sensu control exercised by managers in a traditional work setting. Albeit novel and innovative, these methods can generate negative outcomes for workers, including increased workload and stress, potentially unfair evaluations and discipline, data privacy concerns, and transparency issues (Aloisi & De Stefano, 2022; Acemoglu, 2021). For another perspective, emphasising the importance of the context in which algorithmic management is applied, see: https://dl.acm.org/doi/10.1145/3492823. For a more extensive analysis of issues regarding the use of algorithmic management and AI at work, see the following section, "AI in the World of Work: Still No Silver Bullet Solution for Decent Work".
Recognising the potentially pervasive effects of algorithmic management, the proposed Directive strives to increase transparency and accountability in the use of automated monitoring and decision-making systems. More particularly, it stipulates that platforms must disclose to all persons who provide services through them the types of activities monitored and the types of decisions taken through automated decision-making systems. Platforms must also disclose the parameters factored into such automated decision-making, the relative weight attributed to them, and the grounds for decisions taken to restrict, suspend, or terminate individuals' accounts (some companies have taken initiatives in this regard; for instance, see Wolt's Algorithmic Transparency report at https://assets.ctfassets.net/23u853certza/5G5O7KFnwzDGWzE1JFwCN/8afadac22e5666af2d5a83a1f50214e3/Wolt_Algorithmic_Transparency_Report.pdf). Furthermore, the proposed Directive provides for human oversight of decisions taken using automated systems and introduces a right to request their review. If a decision is found to impinge upon the individual's rights, the platform has to rectify it without undue delay or provide adequate compensation. In addition to broad access to information and review rights, the proposed Directive also provides that platforms must not process any personal data that are not strictly necessary for the performance of the contract. These provisions specify and complement existing rights in relation to the protection of personal data prescribed in the General Data Protection Regulation (GDPR).
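As a rough illustration of what such a disclosure might contain in machine-readable form, the sketch below defines a minimal record covering monitored activities, decision parameters and their relative weights, and the grounds for account restrictions. The field names and example values are hypothetical and are not prescribed by the proposed Directive.

```python
from dataclasses import dataclass

@dataclass
class AlgorithmicManagementDisclosure:
    """Illustrative record of the information a platform could disclose under the proposal."""
    monitored_activities: list[str]         # types of activities monitored
    automated_decision_types: list[str]     # types of decisions taken by automated systems
    decision_parameters: dict[str, float]   # parameters considered and their relative weights
    account_restriction_grounds: list[str]  # grounds for restricting, suspending, or terminating accounts
    human_review_contact: str               # channel for requesting human review of a decision

example = AlgorithmicManagementDisclosure(
    monitored_activities=["task acceptance rate", "customer ratings"],
    automated_decision_types=["task allocation", "account suspension"],
    decision_parameters={"acceptance_rate": 0.6, "average_rating": 0.4},
    account_restriction_grounds=["repeated fraud flags", "rating below a platform-set threshold"],
    human_review_contact="reviews@platform.example",
)
```

Published in a structured form like this, such disclosures would also be easier for labour inspectorates, researchers, and worker representatives to compare across platforms.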
When it comes to algorithmic management, regulatory clarity is needed as to what constitutes control to prevent unintended outcomes.
What is particularly important is that, with certain exceptions, these rights are afforded not only to workers but also to the self-employed. The recognition of the potentially pervasive effects of algorithmic management and the introduction of new material rights for all individuals who provide services through platforms are welcomed by all stakeholders who took part in the Reshaping Work Dialogue. These provisions will not only increase transparency and accountability vis-à-vis the use of algorithms to manage and control workers but will also level the playing field by requiring platforms to provide some basic rights to all persons who offer services through them, regardless of their employment status (some of these rights are already part of the GDPR).
Provisions on algorithmic management should be extended to cover all individuals who are subject to such systems and not just those working in the platform economy.
At the same time, the following criticisms have been raised. First, researchers have criticised the proposed Directive for adopting an ambiguous approach vis-à-vis control. For instance, Aloisi and Georgiou note: "the text risks being self-defeating: how can it be acceptable that platform workers are subject to intense algorithmic management and yet still be classified as self-employed when control is the key trigger of the presumption of employment status?" (Aloisi & Georgiou, 2022). The final text, thus, should clarify what constitutes control to prevent unintended outcomes.
Second, unions such as UNI Europa have criticised the provisions on algorithmic management for capturing only fully automated monitoring and decision-making systems. As they argue, platforms could use semi-automated systems (currently not captured by the proposed Directive or the GDPR) to avoid legal obligations, even though such systems might still be damaging to workers' rights (on the view that the proposed Directive does capture semi-automated systems, see Aloisi and Potocka-Sionek, 2022).
Third, various stakeholders (including platforms, the business sector, unions, and advocacy groups) have criticised the proposed Directive for creating a divide between platform workers and other categories of workers. Nowadays, digital features and AI control and monitoring systems are increasingly used not only in the platform economy but also in traditional work settings (e.g., brick-and-mortar businesses). The provisions on algorithmic management could lead to a situation in which persons performing platform work would, in principle, be protected to a much greater extent than workers and data subjects in conventional sectors. For this reason, they argue that the provisions on algorithmic management should be extended to cover all individuals who are subject to such systems, not just those working in the platform economy (the AI Act aims to address some of these issues; see Section 2, "AI in the World of Work: Still No Silver Bullet Solution for Decent Work"). A similar proposal was also made by Member of the European Parliament Gualmini in her report on the proposed Directive.
Fourth, while most stakeholders welcome the introduction of access-to-information rights, they note that these rights will fall short of achieving the desired outcomes if they are not supplemented by provisions to increase digital literacy. Without specialised knowledge, most individuals cannot understand the function or assess the impact of algorithms on their work (e.g., technical aspects, bias). Notably, the proposed Directive also provides for the possibility of assistance by an expert, chosen by platform workers or their representatives, to examine the matter subject to information and consultation and formulate an informed opinion (Article 9). To mitigate these issues, the following measures have been suggested by different stakeholders:
- The inclusion of software engineers into policy debates;
- The establishment of a standard prescribing what aspects of algorithmic management need to be explained and to what extent;
- The strengthening of capacities and resources of labour inspectorates and data protection authorities to assess the information and enforce the regulation;
- The promotion of social dialogue on algorithmic management and the empowerment of unions to negotiate the adoption of technological tools with monitoring and decision-making potential (see below).
Finally, unions argue that the provisions concerning health and safety at work (Article 7(2)), which are currently directed only at platform workers, should be extended to cover all persons who provide services through platforms, regardless of their employment status. Aloisi and Georgiou (2022) uphold this view in their commentary on the proposed Directive.
Information and consultation and other collective rights
The proposed Directive also introduces collective rights regarding information and consultation on adopting automated monitoring and decision-making systems and implementing substantial changes related to their use. More particularly, it stipulates that platforms must inform and consult with workers' representatives if they wish to introduce new automated monitoring and decision-making systems or make substantial changes to those systems (Article 9). This way, the proposed Directive increases transparency and fosters social dialogue on algorithmic management, precipitating better working conditions.
Adequate frameworks that enable genuinely self-employed platform workers to bargain collectively must be implemented within the Member States.
Other collective rights introduced include (i) a provision that enables representatives of persons performing platform work to bring claims and engage in judicial and administrative procedures on their behalf and (ii) an obligation that platforms must create communication channels for persons performing platform work. These provisions have been put in place to help platform workers overcome geographical, procedural, and cost-related obstacles in defending their rights and to communicate with each other with a view to defending their interests.
Unions, researchers, and advocacy groups welcome the introduction of collective rights but emphasise the need to further amplify workers' voices by strengthening the provisions on social dialogue, a position also supported in the research literature (De Stefano & Aloisi, 2019; Vandaele, 2021). Social dialogue helps mitigate imbalances of power in the workplace, thus increasing democracy at work.
Furthermore, it has been suggested that a provision should be added requiring digital labour platforms to refrain from any act that could directly or indirectly undermine the right to unionise or to join a trade union, or that discriminates against individuals performing platform work who participate or wish to participate in collective bargaining. Finally, the Directive should clarify that workers' representatives must be designated through elections open to all persons performing platform work.
It should also be noted that the Commission has recently adopted new Guidelines that reduce the tension between competition law and collective bargaining for many solo self-employed individuals, including persons who provide services through platforms. Genuinely self-employed platform workers can now bargain collectively for the amelioration of their working conditions without fear of breaching competition law provisions. However, adequate frameworks that enable this must be implemented within the Member States. Most stakeholders welcome the adoption of the new Guidelines and suggest that they could be used as a basis for the proposed Directive. More particularly, researchers and advocacy groups have suggested that the proposed Directive would benefit from introducing protective provisions for the categories of self-employed persons covered by the Guidelines, such as (i) economically dependent self-employed workers and (ii) those who work "side-by-side" with employed workers. These categories of genuinely self-employed individuals could also be recognised in the proposed Directive as vulnerable and be afforded a similar level of protection as platform workers.
Declaration of platform work and access to relevant information
Difficulties in enforcement and lack of traceability and transparency are also thought to exacerbate poor working conditions and inadequate access to social protection. National authorities do not always have sufficient access to data on digital labour platforms and the people working through them. The problem of traceability is especially relevant when platforms operate in several Member States, making it unclear where platform work is performed and by whom.
To address this issue, the proposed Directive stipulates that platforms must declare platform work to the relevant national authorities and make available to them, and to representatives of persons performing platform work, key information regarding the number of people working through digital labour platforms, their employment status, and their standard terms and conditions. Most stakeholders welcome these provisions, as they will help national authorities ensure compliance with labour rights and collect social security contributions, improving working conditions in platform work.
Policy recommendations
Overall, while the proposed Directive increases legal certainty, labour protection coverage, transparency, and predictability in platform work, it will not solve all the problems encountered in this area of work. As the discussions revealed, there is considerable room for improvement. The final text is currently being discussed in the European Parliament and the Council. Below are some policy pointers that can improve upon the proposed instrument, providing more robust protection to a greater number of platform workers:
- Clarify employment status indicators without undermining the achievements in case law at the national level.
- Provide a "rulebook" with criteria that platforms can use to rebut the presumption.
- Clarify whether multiple platforms can act as the employer and address possibilities of the use of sub-contracting networks.
- Ensure that genuinely self-employed individuals can maintain that status if they so wish.
- Extend social protection to all workers, regardless of their legal status.
- Adopt a dedicated legal instrument to extend the provisions on algorithmic management to cover all workers subject to automated monitoring and decision-making systems (not just those working in the platform economy).
- Promote social dialogue on algorithmic management and amplify workers' voices by strengthening the provisions on information and consultation.
- Facilitate collaboration between labour inspectorates and administrative bodies to address cross-country issues and strengthen enforcement.


AI IN THE WORLD OF WORK: STILL NO SILVER BULLET SOLUTION FOR DECENT WORK
Tiago Vieira and Jelena Sapic

Introduction
It is predicted that by 2030, data production will be ten times greater than it is today (Balnojan, 2020). This exponential growth of data as an essential resource for the twenty-first-century economy raises a plethora of questions, such as what it will be used for, which entities will control it and ensure its ethical and lawful application, and how it will impact work organisation. These questions touch on the green transition as well: the upsurge in data production requires new data storage centres, consuming additional energy and creating a larger carbon footprint, which may altogether slow sustainable development. The AI Now Institute (Crawford et al., 2019) reported that training one data-based (AI) model produced carbon emissions approximately equal to 125 round-trip flights between New York and Beijing.
In the context of work, data-intensive technologies such as AI, augmented reality, the Internet of Things, and face-recognition systems can potentially increase productivity and efficiency in organisations; commonly quoted examples include AI-powered solutions processing billions of transactions in a secure, timely, and safe manner, or AI-enabled systems automating tedious tasks. At the same time, they may challenge the fundamental rights of workers (e.g., anti-discrimination, dignity, privacy, collective rights) and what constitutes the notion of decent work. Several media and research reports document AI applications across sectors, from ride-hailing and food delivery to healthcare and the public sector (for a review, see Wood, 2020). Moreover, the proliferation of remote monitoring and surveillance tools, whose use surged 108% in the US and the EU in the first months of the pandemic (Ball, 2021), has led to a spike in digital monitoring, reducing worker autonomy and job control (Aloisi & De Stefano, 2022; Baiocco et al., 2022).
AI relies on processing vast amounts of data through machine (self-)learning, which results in automated or semi-automated predictions, suggestions, and decisions (Waardenburg et al., 2021); the term "data processing" is borrowed from the legal field (e.g., the GDPR), denoting any action executed in relation to personal data. Although AI systems increasingly shape the work experience, a recent study on algorithmic management tools in the context of employment (Holubová, 2022) found that one-third of surveyed workers in the EU Member States did not know whether such tools were deployed or not (the study was conducted in all EU Member States from January to February 2022; in total, it covered 1,395 employees, of whom 94% are trade union members), highlighting new power imbalances.
Surprising or not, the transformational wave spurred by the deployment of AI systems in the workplace is still in its early stages (McKinsey, 2022). If it continues to stimulate disruptions impeding worker integrity and productivity, AI may, in the long run, undermine the UN 2030 Agenda for Sustainable Development, in particular Sustainable Development Goals 8 ("Decent Work and Economic Growth"), 5 ("Gender Equality"), and 10 ("Reduced Inequalities").
Therefore, the pressing question now is whether the main stakeholders of employment relations (e.g., workers, trade unions, employers, and institutions) can transition to a future in which technological developments breed higher productivity as well as sustainability, equity, and fairness. A successful transition depends on:
- the capacities of workers and trade unions to keep ensuring decent work and protect labour and data rights in the AI-powered workplace;
- companies' readiness to generate economic value, expand to new markets, and sustain competitive advantages while complying with existing regulations and upholding the AI ethical principles;
- policy makers' ability to generate trust and reliability, align regulatory regimes, prevent regulatory arbitrage, and balance the interests of workers, consumers, and users (these points are based on a key lecture delivered by Peter Bloom at the ILO Workshop, October 24, 2022, on "Proposal for an ILO Policy Observatory on Work in the Digital Economy").
The Reshaping Work Dialogue hosted a series of discussions about AI in the workplace. Drawing upon these insights and the literature in the field, this section explores the challenges posed by AI deployment through the lenses of fair treatment, autonomy, occupational safety and health, and collective bargaining and representation (these dimensions are adapted from the International Labour Organisation's Decent Work Agenda and have been broadened to include emerging AI-induced risks to worker well-being; see Table 2). In doing so, the section aims to contribute to ongoing debates on how to ensure that data-driven transformations, specifically those related to AI, are aligned with prosperous societies and workers' well-being, as envisioned by the UN 2030 Agenda.
Table 2. Four main dimensions in assessing AI's impact in the workplace
Fair Treatment | Work Autonomy | Occupational Safety and Health | Social Dialogue |
---|---|---|---|
Everyone should be given equal rights and opportunities, irrespective of gender, age, ethnicity, religion, or ideological beliefs. Discriminatory practices disadvantage individuals, organisations, and, eventually, societies as a whole. | Thriving working environments are characterised by a sense of reciprocal fair treatment and, in the particular case of workers, autonomy, self-determination, creativity, etc. This favours commitment, which normally brings better results to companies. | Safe and healthy workplaces preserve workers' physical and mental health, which, in turn, breeds security and trust, leading to higher productivity rates. | Social dialogue is recognised as a key mechanism in achieving a human-centred future of work. It is also established as the means to democratise employment relations, echoing ILO recommendations on the importance of workers' participation. |
AI can be designed and used to mitigate biases and prejudices ingrained in human decision-making processes, which stem from gender, race, ethnicity, religion, or ideological beliefs (Li et al., 2020; Pisanelli, 2022). However, in some cases AI applications have exacerbated discriminatory practices by perpetuating existing patterns. For instance, Amazon's now-retired CV-ranking algorithm systematically favoured applicants whose profiles displayed more "male" characteristics over comparable applicants whose profiles displayed allegedly feminine features (Dastin, 2018).
The question of how to identify and correct bias in humans before it spills into AI deployment and data processing is one of the most urgent matters. One non-exhaustive pathway to do so is hiring diverse coding teams.
Such unethical practices can be traced to several factors: (i) biased datasets, (ii) biased trainers and programmers, and (iii) algorithmic blindness to workersâ individual traits.
In the jargon of programmers, it is the data fed to algorithms that allows them to formulate predictions and decisions (Jarrahi et al., 2021). Historical training data dictates the outcomes, which can reflect and entrench societal shortcomings (Reshaping Work, 2022a). In that sense, algorithms will always be, to some extent, a by-product or an exacerbation of ex-ante conditions.
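One simple way to expose this kind of inherited bias is to compare selection rates across demographic groups in the historical data before it is used for training. The sketch below, assuming a hypothetical labelled hiring dataset with invented column names, computes per-group selection rates and the disparate-impact ratio between the lowest and highest rates.

```python
from collections import defaultdict

def selection_rates(records, group_key="gender", outcome_key="hired"):
    """Compute the share of positive outcomes per group from historical decisions."""
    totals, positives = defaultdict(int), defaultdict(int)
    for record in records:
        totals[record[group_key]] += 1
        positives[record[group_key]] += int(record[outcome_key])
    return {group: positives[group] / totals[group] for group in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest selection rate; values well below 1 flag potential bias."""
    return min(rates.values()) / max(rates.values())

# Hypothetical historical hiring decisions
history = [
    {"gender": "male", "hired": 1}, {"gender": "male", "hired": 1},
    {"gender": "male", "hired": 0}, {"gender": "female", "hired": 1},
    {"gender": "female", "hired": 0}, {"gender": "female", "hired": 0},
]
rates = selection_rates(history)
print(rates, disparate_impact_ratio(rates))  # roughly {'male': 0.67, 'female': 0.33} and a ratio of 0.5
```

Such a check is only a first-pass diagnostic: it says nothing about the causes of the gap, but training a ranking model on data like this without correction would tend to reproduce it.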
If previous datasets are not available, AI trainers (the humans in charge of training algorithms and generating training data) may transpose their own prejudices into a brand-new AI model. Echoing these concerns, the Reshaping Work Dialogue participants highlighted the necessity of discussing and evaluating (i) what data is used as an input for software and (ii) what consequences it has on workers as a result.
AI developers decide what matters and how much relative weight each variable carries. Their biases or preferences (Zeide, 2023), particularly in non-diverse coding teams, are likely to be reproduced and hence determine the end results. Such circumstances create a risk of embedding that bias into technology. Companies that took part in the Reshaping Work Dialogue considered the question of how to identify and correct bias in humans, before it spills into AI deployment and data processing, to be one of the most urgent matters. One non-exhaustive pathway to do so is hiring diverse coding teams.
Last but not least, algorithmic blindness is associated with the inner workings of the systems themselves. Algorithms are programmed to reward work performance above a computationally defined threshold and to discipline any conduct below it. They do not consider individual specificities that may lead to lower performance. Such AI deployment has at least two consequences: (i) people with disabilities, pregnant women, or workers with health conditions who do not meet predefined quantified requirements and averages may face unfavourable outcomes (Moore, 2020); and (ii) the pressure to meet quantifiable targets disregards dimensions of organisational culture that are crucial for success (Evans & Kitchin, 2018; Leicht-Deobald et al., 2019).
Beyond bias-related factors, discrimination may stem from assessing workers' performance through customer ratings. Customer ratings are utilised by an increasing number of firms (mainly digital labour platforms) to make managerial decisions (Aloisi & De Stefano, 2020). In some instances, shifts are allocated to workers based on their ratings; in other cases, workers' ratings determine how high up they appear on a list of potential task/service providers, which affects their probability of being hired for a given order. By playing a role in work distribution, ratings outsource middle-manager powers to customers (Rosenblat, 2018).
The challenge here is slightly different from the points discussed above. It does not entail asking AI to solve structurally and socially embedded forms of prejudice, but rather recognising that automated, unsupervised decision-making mechanisms that sanction workers based solely on clients' assessments are highly likely to harm those who are already victims of discrimination due to their race or gender (Rosenblat, 2018).
The complexity brought by the deployment of AI systems in the workplace presents a risk to ensuring fair treatment of workers. What makes it even more complex than in the past is the apparently neutral façade of these technologies, which, combined with the opacity and complexity of the processes at play, makes discrimination much harder to discern and repel (Kelly-Lyth, 2021) and makes it more difficult to hold anyone accountable.
Along these lines, participants in the Reshaping Work Dialogue agreed that AI systems offer possibilities to enhance quality and productivity at work. WEC-Europe reports that the recruitment industry is interested in applying AI, seeing it as an opportunity to increase fairness, especially by removing unconscious bias, and to automate tedious processes. Other benefits include improvements in user experience: thanks to chatbots, applicants receive a reply outlining deadlines and next steps (which would otherwise not be possible with hundreds of submitted applications). For this to happen, however, all the Reshaping Work Dialogue participants agree that co-creation and human supervision throughout the process are available means to mitigate the risks of AI opaqueness and unaccountability.
Reshaping Work Dialogue participants also find it essential to provide software developers, engineers, and data scientists with guidelines to develop and deploy AI systems with potential socio-economic (and environmental) risks in mind. Taking things a step further, companies could ensure continuous human supervision. For instance, the Fairwork Foundation worked with Amazon Flex on a model where humans work hand in hand with algorithms. However, the Fairwork interviews show that one of the challenges platforms face in securing human supervision is finding shift workers available in the evenings, at night, or during the weekends.
AI systems process vast amounts of data and, depending on their design, can either support or cripple work autonomy and flexibility.
One way this can occur is through gamification. As the expression suggests, gamification refers to the introduction of game-like elements into the labour process. Surpassing obstacles, attaining goals, and collecting rewards are all manifestations of gamification. The ability of games to elicit consent and increase productivity has long been recognised. In recent years, however, AI appears to have greatly expanded the use and complexity of gamification practices. For example, workers' access to shifts may be determined by the points they earn across different parameters that assess their performance (Van Doorn & Chen, 2021; Vieira, 2020).
Research suggests that these game-like designs of labour processes may coerce workers into investing more effort than the accomplishment of their tasks would otherwise require (Gandini, 2019). Moreover, rewards are often meaningless (e.g., digital badges), if not perverse (e.g., the possibility to work the following week) (Ivanova et al., 2018), which may lead to self-exploitation, with workers taking on risks for the sake of satisfying the criteria of the game (Vieira, 2020).
Parallel to gamification techniques, AI has been used to prolong working days through what has become widely known as 'nudging' (Thaler & Sunstein, 2009). For example, companies send messages (i.e., nudges, often in the form of app notifications) designed to tempt workers into continuing to work when they intend to stop, or into returning to work when they are inactive (Shapiro, 2020). Nudging gives firms a semi-automated mechanism for expanding labour supply to meet customer demand. All this occurs in an apparently non-disciplinary way, as nudges often take the form of economic incentives rather than explicit commands (Parth & Bathini, 2021).
On the other hand, companies that took part in the Reshaping Work Dialogue point to good practices in which the safety of both workers and customers is prioritised. For example, some ride-hailing companies have a mandatory log-off after ten hours of driving, requiring drivers to rest for at least six hours before getting back on the road. This example shows that 'nudging' can also be used to encourage workers to take actions that promote their physical and mental well-being.
While a number of positive developments are under way, there are a few risks in the context of platform work that need to be taken into consideration.
First, workers largely operate as independent contractors, i.e., they earn per task accomplished or per hour; thus, waiting time in platform work is generally not accounted for. While workers might seem free to ignore nudges, there is an obvious underlying incentive to follow them in order to maximise income (Van Doorn, 2017). For instance, ride-hailing and food delivery platforms offer workers higher compensation if they work during periods of high demand (e.g., peak hours, or poor weather conditions when workforce supply drops).
Second, intensive data collection allows the deployers of nudges to estimate how quickly workers with similar behavioural patterns will be inclined to answer positively to an appeal to work extra hours. In other words, the fares paid by companies are individually customised according to each worker's profile. This results in unequal and potentially discriminatory remuneration for the execution of similar tasks (on the notion of wage discrimination, see Dubal, 2023).
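To make the personalisation concrete, the following sketch shows how a deployer might pick the smallest bonus predicted to secure an extra shift from a given worker. Both the toy response model and the bonus ladder are assumptions made purely for illustration.

    # Illustrative sketch of individually customised nudges (hypothetical model).

    def predicted_acceptance(profile: dict, bonus: float) -> float:
        """Toy model: workers who historically accept nudges need smaller bonuses."""
        return min(1.0, profile["historical_acceptance_rate"] + 0.05 * bonus)

    def cheapest_effective_bonus(profile: dict, ladder=(0.5, 1.0, 2.0, 4.0)) -> float:
        """Offer the smallest bonus predicted to win the extra shift with >= 80% probability."""
        for bonus in ladder:
            if predicted_acceptance(profile, bonus) >= 0.8:
                return bonus
        return ladder[-1]

    # Two workers performing identical tasks can be offered different incentives:
    print(cheapest_effective_bonus({"historical_acceptance_rate": 0.75}))  # 1.0
    print(cheapest_effective_bonus({"historical_acceptance_rate": 0.40}))  # 4.0

The unequal remuneration discussed above follows directly from this kind of logic: the price of an hour of labour depends on the worker's predicted willingness rather than on the task performed.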
Third, nudges rely on an information asymmetry between the nudger and the nudged. This means that, when they are nudged, workers cannot assess the extent to which they will actually be able to collect the promised rewards. There may well be a massive response to the nudge, and whatever bonus mobilised workers in the first place may be pulverised by an oversupply of labour (Rosenblat, 2018).
Box 2. Voluntary initiatives addressing AI challenges
International organisations (e.g., OECD, UNESCO, WEF), companies (e.g., IBM, Microsoft, Wolt), and unions (e.g., UNI Global) developed voluntary (e.g., self-regulatory, non-governmental) initiatives in the context of AI deployment in the world of work. These include AI ethical principles, guides, and codes of conduct. The AI principles usually include explainability, fairness, robustness, transparency, and accountability. Codes of conduct, on the other hand, stipulate desirable organisational behaviour in certain domains, i.e., AI development and deployment, following shared values.
These voluntary, self-regulatory initiatives are welcomed as (complementary) means to mitigate risks and encourage informed debate within and between organisations. They are appreciated in light of the lack of existing standards and long regulatory processes, which often lag behind the rapid advancement of data-driven technologies.
That said, there are several ways in which they can be improved (Ponce Del Castillo, 2020). Because AI ethical principles are broadly defined, in practice they may create misinterpretations, weakening the necessary actions. In addition, principles should be ranked in order of importance, making it clear how potential conflicts between principles should be resolved (e.g., what matters more, fairness or accountability). Next, guidance is needed on how to translate these principles into concrete courses of action within an organisation. Lastly, it would be important to develop criteria to evaluate the impact of such initiatives.
The deployment of data-driven technologies such as AI in the world of work aims at ensuring better work organisation, for instance, by increasing efficiency and productivity (e.g., through automation of task allocation), improving decision-making (through people analytics, or AI-powered prediction models), and advancing worker health, safety, and overall well-being (by data processing to establish risks and concerns or by developing digital well-being tools) (EU OSHA, 2022). However, at the same time, AI deployment paves the path to emerging occupational safety and health (OSH) risks. These new risks include, but are not limited to, the intensification of work and the dehumanisation and datafication of workers (for the full list of new risks, see EU OSHA, 2022).
The intensification of work is a result of AI systems optimising and speeding up work performance. The systems instruct and direct workers on how to complete tasks most efficiently, or they measure break time and track worker activity. The techniques applied can induce extensions of working hours or changes in working rhythms that infringe upon workers' well-being (Delfanti, 2019; Levy, 2015).
A typical example comes from warehouses where workers are equipped with hand-held 'scan guns' that combine barcode scanners with motion and location tracking. Such devices automatically assign items to workers and direct them around the warehouse based on their location, increasing productivity and efficiency. As with any other work equipment, risks arise when it is used excessively (e.g., causing ergonomic issues) or malfunctions unexpectedly (e.g., causing stress due to the inability to meet pre-set targets).
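The assignment logic behind such devices can be pictured with a very small sketch: direct the worker to the outstanding item closest to their current position. The shelf names and coordinates below are invented; real routing systems optimise over many constraints, but proximity-based dispatch is the core idea.

    # Illustrative sketch: directing a warehouse worker to the nearest outstanding item.
    # Shelf names and coordinates are hypothetical.
    from math import dist

    open_items = {"shelf_A": (2, 3), "shelf_B": (10, 1), "shelf_C": (4, 8)}

    def next_pick(worker_position, items):
        """Choose the closest item to minimise walking time between picks."""
        return min(items, key=lambda name: dist(worker_position, items[name]))

    print(next_pick((3, 4), open_items))  # shelf_A

Because the system always minimises idle movement, the pace it sets leaves little slack, which is precisely where the ergonomic and stress-related risks noted above come from.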
Widespread usage of wearable devices instructing and directing workers may strip away any cognitive or emotional engagement in the workplace. AI deployment may further lead to de-skilling, turning work into excessively repetitive tasks with no room for creativity or autonomy (Wood et al., 2019).
Technological developments have allowed for unprecedented data processing, including sensitive data such as workers' biometrics (Kellogg et al., 2020; Moore, 2020). The boundary between data necessary to carry out tasks and details from private lives has become blurry. This may lead to breaches of workers' privacy and intrusions into their private lives. In such datafied workplaces, workers are simultaneously data producers, data subjects, and technology users.
Nevertheless, there have been some efforts aimed at empowering workers through data collection and ownership. Following sets of principles for worker data rights and ethical AI, the UNI Global Union's Young Workers' Lab, together with the Guardian Project and OkThanks, launched an app called WeClock, which provides workers with an opportunity to collect and own their work-related data, putting them in control of who has access to it. The app then enables workers to 'quantify' their work (e.g., working and commuting time, fees, etc.) for the purpose of collective bargaining.
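As a rough illustration of what 'quantifying' self-logged work data can look like, the sketch below aggregates a worker's own shift records into figures that could feed into bargaining. The record format and numbers are invented for illustration and are not taken from the app itself.

    # Illustrative sketch: aggregating self-logged shifts into bargaining-relevant figures.
    # The record format and values are hypothetical.

    shifts = [
        {"worked_hours": 6.5, "commute_hours": 1.0, "fee_eur": 78.0},
        {"worked_hours": 8.0, "commute_hours": 1.5, "fee_eur": 92.0},
    ]

    total_time = sum(s["worked_hours"] + s["commute_hours"] for s in shifts)
    total_fees = sum(s["fee_eur"] for s in shifts)

    # Effective hourly earnings once unpaid commuting time is counted.
    print(round(total_fees / total_time, 2))  # 10.0

Even this trivial aggregation shifts the information balance: the worker, and not only the platform, holds a quantified picture of time worked and income earned.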
Other examples include the development of toolkits that inform workers about their rights. Uber, for instance, has launched a privacy centre enabling workers to check how their data is being used, limit the scope of data collection, and download their data. The centre also provides information about the algorithms and an explanation of other features of the application.
Using technology in a way that does not serve the best interests of workers and companies may diminish trust between the two. This may challenge the equilibrium that underlies the employment relationship as we have known it over the last decades (Moore, 2020; Upchurch, 2018).
Discussions within the Reshaping Work Dialogue pointed towards extending workplace democracy: current debates surrounding, for instance, algorithmic management should include drivers and riders (irrespective of contractual status) as well as managers and other white-collar employees.
There are solid reasons for supporting the demand for more democracy in workplaces. From both normative and empirical standpoints, democratising work seems to ensure fairer, safer, and more productive work environments (Frega et al., 2019; Landemore & Ferreras, 2016). However, as insightfully noted by Aloisi and Gramano (2019), 'feedback mechanisms, surveillance, and data – now perceived as essential organisational components – are altering the balance of power in the workplace' (p. 106).
Consequently, as highlighted by several stakeholders, the positive steps taken in consolidating social dialogue as a cornerstone of contemporary employment relations are now confronted with new challenges (Senatori, 2020). For instance, Unionen pointed out that, in the Swedish context, introducing AI systems without negotiating such organisational changes in accordance with collective bargaining agreements in all likelihood constitutes a failure to uphold those agreements. Further risks to social dialogue are clustered into three groups below.
First, AI systems collect and process vast amounts of data, such as geolocation and movement tracking, voice recordings, or remotely taken snapshots of workers' screens. Even if systems are not intentionally designed for this purpose, they may be, and have already been, used to identify workers' efforts to organise or unionise (Aloisi & De Stefano, 2022). This usage of data-driven technologies undermines fundamental worker rights.
Second, these are systems of extreme complexity, which demand a significant degree of understanding of computational processes in order to make meaningful assessments and contribute to improvements. In other words, even if workers experience some of the risks inherent in AI systems described earlier, their representatives may not be able to discern which technical changes could address those experiences.
In 2022, the Reshaping Work Dialogue hosted a workshop gathering public policy officers, legal experts, worker representatives, and data scientists to cross-fertilise perspectives and needs in promoting ethical AI deployment. (For a detailed discussion of the proposed platform work directive, see section 1 of this report, 'The EU Directive Clearing New Paths to Decent Platform Work'.) As a result, a guide summarising the actions required for enhancing ethical principles at the organisational level was published (Reshaping Work, 2022b).
The necessity of support in addressing the technical side of AI systems is tacitly confirmed in the proposed EU directive on improving working conditions in platform work, which enshrines workers' right to expert assistance, at the expense of employers, to assess the deployment of algorithms (Aloisi & Potocka-Sionek, 2022). A complementary approach can include a blueprint for collective bargaining regarding AI deployment. Such a step was taken by the Spanish Government when it launched a guide to support the implementation of the recently enshrined duty of algorithmic information in work settings (MITES, 2022).
Third, the previous point notwithstanding, AI systems may be of such complexity that human scrutiny is impossible from the start or becomes so after self-learnt transformations (Burrell, 2016). This can occur, for example, when code comes from multiple sources (Kitchin, 2017) and becomes hard to interpret. This reinforces the call for audits and certifications of AI systems in highly sensitive or high-risk areas.
In the context of the EU, the General Data Protection Regulation (GDPR), an omnibus legislative act applicable irrespective of area (Voss, 2021), appears to be a useful mechanism in the world of work. (Legal documents that tackle certain aspects of AI systems, such as the Directive on the Security of Network and Information Systems or the EU Cybersecurity Act, are left out of the discussion below. The GDPR was enacted in May 2018 and represents an updated version of the 1995 Data Protection Directive, fit for what was then an emerging data-driven ecosystem.) The GDPR places an obligation on companies (as data controllers) to inform workers (as data subjects) about data processing and its legal basis. Workers, in turn, have the right to access their data and, in the case of automated data processing, to obtain human intervention (Article 22). As data subjects, they can also ask that personal information be corrected or deleted.
In addition, Article 88 of the GDPR allows EU Member States to 'provide for more specific rules to ensure the protection of the rights and freedoms in respect of the processing of employees' personal data in the employment context'. In line with this is Article 20 of the Commission's proposal for a platform work directive, which empowers Member States to apply or introduce laws, regulations, administrative provisions, or collective agreements that are more favourable to platform workers.
The expansion of worker data processing has opened discussions as to whether this framework is the best fit for the so-called 'second machine age' (Brynjolfsson & McAfee, 2014). At the time of writing, two new legislative pieces were being negotiated at the EU level: (i) a proposed directive to improve working conditions in platform work; and (ii) the Artificial Intelligence Act (hereafter: the AI Act). (A third one, the proposal for a regulation on machinery products, has been left out of the scope of this section.)
The Commission's initial proposal outlines the enforcement of transparency, fairness, and accountability in the algorithmic management of platform work as one of its aims. (For more details on the impact of the platform work directive, see section 1 of this report.) It further envisions informing workers and their representatives about activities subject to fully and semi-automated decision-making. Commentaries see the initial proposal not only as complementary to the GDPR but as providing greater protection to workers.
The AI Act focuses on AI systems being placed on the market or put into service (Ponce Del Castillo, 2021) and adopts a risk-based approach, differentiating between solutions encompassing minimal, limited, high, and unacceptable risk. (Given the complexity of AI systems, regulatory efforts could address AI development, usage, and the technology per se, or sectoral deployment. The grounds for the EU AI Act lie in Article 114 of the Treaty on the Functioning of the European Union, which provides for the adoption of measures to ensure the establishment and functioning of the internal market, meaning the EU approach assesses AI as a product.) The draft AI Act classifies the usage of AI systems in the context of work and employment as high-risk to fundamental worker rights, health, and safety (for an overview, see Cefaliello & Kullmann, 2022). It establishes high risk for AI products applied in recruitment (e.g., vacancy promotion, application screening and evaluation) and in decision-making (e.g., ending contractual relationships, task allocation, promotion).
High-risk AI systems can be used subject to an ex-ante assessment and provided certain criteria are met (e.g., high-quality datasets that minimise discrimination and risks, users informed in a clear and appropriate manner, adequate human oversight, and a high level of robustness, security, and accuracy). These developments (subject to further change at the time of writing as part of the EU trilogue) are moving in a direction that will drive safer usage of AI in the world of work.

Policy recommendations
We have divided policy interventions into two groups: (i) the protection of worker data and (ii) good governance of AI systems in the world of work.
When it comes to the protection of worker data, we recommend:
- Encourage workers to exercise their data protection (privacy) rights as envisioned in the GDPR (right to be informed, right of access, right to rectification, right to erasure) and their non-discrimination rights. This can be achieved through media campaigns, companies informing workers about their data privacy rights as part of onboarding, ongoing trade union education programmes, and joint stakeholder efforts to map (i.e., make visible) AI deployment in workplaces.
- Ensure that workers' rights and the ways to exercise them are presented in understandable and explainable formats, including through toolkits and blueprints. For these purposes, an EU-wide working group could be established with the competency to define standards on how technical products (e.g., data, algorithms) should be organised and presented to non-technical actors.
- Develop data and algorithmic literacy programmes targeting workers and other key stakeholders to stimulate informed decision-making at different levels and encourage discussion that can stimulate responsible and sustainable innovation.
When it comes to good governance of AI systems in the world of work, we recommend:
- Raise awareness of the societal, economic, and psychological consequences of AI deployment in the world of work. So far, AI systems have predominantly been designed to improve efficiency and productivity; yet their deployment has (re)created inequalities and OSH risks that could be mitigated at a technical level.
- Support AI design through public-private partnerships and initiatives so that the design takes into account gender, race, ethnicity, etc. Although 'datasets free of errors' are not feasible, biases and prejudices embedded in datasets should be exposed and minimised.
- Establish public registries of AI systems used in the world of work. These registries would enable inspection, audits, and a better understanding of applied AI solutions in the workplace, yet they would not include confidential company information or any information that could jeopardise companies' competitive position.
- Ensure the right to information and co-determination when introducing or updating new data-driven technologies (otherwise, workers are mere technology users). As part of the right to information and the promotion of workers' involvement, regulatory provisions should require that any AI technology be validated within the framework of social dialogue prior to its deployment (even if approved by state agencies) and throughout its life cycle.


YOUTH EMPLOYMENT AND WORKPLACE WELL-BEING: PURSUING A BRIGHTER FUTURE
Jelena Sapic and Jovana Karanovic

Introduction
The future of work depends not only on the deployment and regulation of digital technologies, such as digital labour platforms and AI, but also on advancing youth employment and workplace well-being. The active participation of young people in the labour market contributes to a sense of dignity and better integration into society, while overall worker well-being promotes productivity and job satisfaction. Both of these important topics are currently being challenged by a wider economic, political, environmental, and social landscape.
The COVID-19 pandemic has had particularly negative effects on the youth, as well as on the emotional well-being of workers in general. Global youth employment dropped almost three times more than adult employment in 2020 (decreases of 8.7 and 3.7 per cent, respectively) (ILO, 2021). The chances of young people (re)entering the labour market following the pandemic were hampered, as employers focused on retaining, not recruiting workers. The pandemic also led to one in four youth not being engaged in employment, education, or training (NEET), the highest rate in the last 15 years (ILO, 2022).
When it comes to emotional well-being, the pandemic caused a 25 per cent increase in anxiety and depression globally, affecting youth and women in particular (WHO, 2022). Indeed, people under 25 have a higher risk of suffering from depression than any other age group. According to the Eurofound 2021 report, among the youth, NEETs are the most exposed to depression and social exclusion. On the other hand, women are vulnerable to stressors such as loneliness, financial worries, and work-life imbalance. Prolonged workplace stress leads to burnout, manifesting in feelings of low energy and depletion, work disengagement, and lack of purpose and achievement.
This already complex situation concerning youth employment and workplace well-being is further exacerbated by the conflict in Ukraine, the twin transition (an integrative approach aimed at aligning digital and green agendas to achieve sustainability and spur digital innovation), and demographic and migration changes. Together, these challenges will continue to have an impact at the micro, meso, and macro levels:
- Workers face a number of uncertainties stemming from the ever-evolving labour market (micro level). Young people are concerned with the availability of good-quality jobs. Furthermore, an uncertain economic situation may lead to the feeling of limited or no control over work and life prospects.
- Companies and societies grapple with a protracted period of recovery, which may increase or create new inequalities (meso level). Without supportive programmes and structures, workplace stress creates a wider cost in society through increased unemployment, loss of skilled labour, and reduced tax revenue.
- The prospect of meeting goals set by broader policy frameworks, such as the UN's 2030 Agenda for Sustainable Development, appears increasingly out of reach (macro level). The global landscape has affected efforts to achieve the Sustainable Development Goals (SDGs), in particular SDG 3 ('Good Health and Well-being'), SDG 5 ('Gender Equality'), and SDG 8 ('Decent Work and Economic Growth').
Considering the recent turn of events, it comes as no surprise that human-centred recovery is at the top of the policy agenda.
This section will address how youth can best be integrated into the labour market, as well as how workplace well-being, with a specific focus on mental health as the prevailing issue at the moment, can be achieved.
Building upon the Reshaping Work Dialogue 2022 discussions, the section is structured as follows: we start with an overview of youth employment from two perspectives: (i) youth participation in diverse (non-standard) forms of work; and (ii) youth resilience in the labour market. Then, our focus shifts to workplace well-being, exploring (i) how burnout affects workers and (ii) what preventive measures can be promoted at various levels to reverse current high levels of exposure to stressors.
Youth and diverse forms of work
Newcomers to the labour market are likely to begin with some type of diverse (non-standard) form of work, such as part-time and/or temporary work. These forms of work have lower entry barriers, provide more flexibility, and offer an opportunity to combine paid work with studying. Nevertheless, they are characterised by lower levels of employment and social protection than standard (i.e., permanent) employment, which may lead to long-term precarity.
Prior to the recent labour market disruptions caused by the COVID-19 pandemic, among other factors, youth were prominent in temporary employment. An ILO study (2016) found that 54% of young people aged 15-24 held fixed-term contracts (compared to 15% of those aged 25-49). Although widespread, temporary employment does not protect employees from unforeseen circumstances such as the COVID-19 pandemic. In addition, it does not provide access to the necessary social protection during health crises or to unemployment insurance in case of sudden job termination.
Reshaping Work Dialogue participants emphasised another issue: youth often do not stay long enough at jobs to obtain permanent contracts. De Werkvereniging, a Dutch organisation for modern workers, questioned whether young people truly strive to transition into standard employment, considering that their values and career visions may differ vastly from those of preceding generations. De Werkvereniging further notes that the youth in the Netherlands usually start their career with an employer to gain practical knowledge and skills, with the end goal being self-employment.
Other participants of the Reshaping Work Dialogue agreed that traditional companies are being challenged to reinvent themselves to remain desirable employers. Box 3 summarises some of the inputs on what employers can do to attract and retain young workers.
Box 3. What do the youth want?
Media coverage often discusses youth aspirations through the lens of generational preferences. In that vein, new entrants to the labour market, born between the late 1990s and early 2010s, are referred to as Gen Z. Their perspective on the future of work is crucial, as they will soon (by 2025) constitute approximately a quarter (27 per cent) of the total labour force in OECD countries (OECD, 2022).
According to Forbes (Staglin, 2022), Gen Zers emphasise flexibility, remote work, shared values, support for mental health, inclusivity, diversity, and higher pay. They also welcome opportunities for learning and professional growth. A Gen Zer's open letter to future employers, published by the Harvard Business Review, additionally underlined the importance of taking a stand for social justice, of a work environment that supports Gen Zers in expressing themselves and their identities, and of having a positive impact on society (Greene, 2021).
The COVID-19 pandemic has reinforced the pre-existing shift from long-term employment towards more short-term, task-based jobs, also referred to as platform work or gig work. In the EU, the percentage of young workers undertaking these kinds of jobs is around twice as high as that of older workers (ILO, 2022). Young people are also more likely than their older counterparts to engage in this type of work on a full-time basis (Pinedo Caro, O'Higgins & Berg, 2021). While platform work has the potential to promote youth employment, it still requires policy action to ensure that such work arrangements align with a decent work agenda (ILO, 2022).
The International Labour Organisation identified online platform work (e.g., graphic design, IT development) mediated by online labour platforms (e.g., Upwork, Toptal) as particularly appealing to youth because it offers higher hourly wages, ingrains a learning-by-doing approach, yields large returns to experience, and provides flexibility (ILO, 2022). Moreover, platform work, whether online or on-location (e.g., ride-hailing, food delivery), acts as an enabler of young migrant workers' participation in the labour market. Recognising this potential, some platform companies have partnered with local non-governmental organisations to support migrants' integration into the labour market, providing them with administrative support and language courses.
At the same time, platform work poses a number of challenges. While migrants and other young people may overcome some structural challenges through platform work, they still face uncertainty regarding work availability, limited access to social protection, and weaker occupational safety and health standards. In addition, workers' skills, portfolios, and working histories are not easily transferable, hindering career mobility. Online platform work offers no returns to education and widens gender inequality (e.g., young women's pay rates are around 20 per cent lower than those of young men) (ILO, 2022).
Last but not least, before the COVID-19 pandemic many young people were engaged in sectors such as trade, transport, and accommodation, which were particularly affected by the pandemic, resulting in a loss of employment (ILO, 2022). Other sectors (e.g., manufacturing, communications, care, professional and financial services) also recorded a decline in youth employment. Analyses indicate that sectoral activity, and thus labour demand, will recover in the post-pandemic period, yet this may take a long time.
This points to the fact that young people may need to reconsider their career choices and explore new employment opportunities. Moreover, coops can play an important part in assisting youth with labour-market integration, as witnessed following the 2008 economic crisis (see Box 4 for more information).
Box 4. How can coops help with youth labour-market integration?
In the aftermath of the economic crisis in 2008, cooperatives played a significant role in overcoming uncertainties and supporting workers in flexible working arrangements. They did so by assisting workers in their relationship with clients, providing them with a contract, legal and administrative support, and upskilling and reskilling programs. In some cases (e.g., Doc Servizi in Italy), cooperatives even engaged in collective bargaining.
Cera Coop, for instance, provides training and workshop sessions designed for young people. In its experience, young adults who are inclined toward the coop idea are likely to organise themselves in a worker cooperative. Yet there are also examples of bringing together, through a coop model, those interested in entrepreneurship and running their own businesses.
Following the successful model of Smart Coop, which now has a membership network in eight EU Member States, a cooperative enterprise from Belgium (called Having a Harbour, in English) has gathered different youth clubs and organisations to steer young people towards coop entrepreneurship. Young entrepreneurs have a 'safe harbour' in this coop to start their business, using its offices, receiving administrative support, and engaging in its training sessions. To benefit from these opportunities, young people no older than 30 only have to become a coop member (i.e., a shareholder).
Youth and labour market resilience
Job and occupation prospects are already changing because of digital technologies, the development of which has been accelerated by the COVID-19 pandemic. Early analyses indicate that less than 20 per cent of jobs will completely disappear (World Bank, 2016) or that at least one-third of existing occupations will be automated by 2030 (McKinsey Institute, 2017). The latest estimates say that by 2025, 85 million jobs may be lost, and 97 million new ones gained because of automation (WEF, 2020).
Equipping young people with the right skill set at the onset of the (twin) transition is crucial to ensuring a resilient future workforce. Educational institutions (e.g., vocational schools and universities) have traditionally prepared youth for the labour market. Yet these institutions are confronted with lengthy procedures for introducing a new course or degree programme (due to accreditation and approval processes, etc.), making them lag behind the pace of technological advancement. This gap between the slow adaptation of universities and fast market changes has led to the emergence of non-formal (non-degree) programmes, extending the responsibility for education to entire ecosystems (e.g., companies, NGOs, cooperatives, unions).
Furthermore, recruitment practices nowadays do not always inquire about a candidate's educational background but instead rely on innovative new ways to test skills. For example, recruiters use gamification to assess problem-solving capabilities, creative thinking, and time management, among other things. In light of this, it is understandable that students have been opting for more generic degrees.
To some extent, apprenticeships can play a significant role in filling the gap and mitigating the pressure on young workers. In the UK, 18-year-olds (and, on occasion, those as young as 16, although this is not the norm) can choose to continue their education through university or start an apprenticeship to acquire more commercial experience. In France, on the other hand, apprenticeships are more about social mobility, with the aim of supporting socio-economically disadvantaged students.
Reshaping Work Dialogue participants emphasised that a bottom-up approach to learning may particularly benefit young people. For instance, Zurich Insurance is piloting a platform that helps all employees in the organisation conduct their own skills diagnostic relative to their career ambitions. The platform facilitates individual skills passports, prompting individuals to take control of their skills development and encouraging them to take advantage of a whole range of learning opportunities and digital solutions available across the organisation.
A safe and healthy working environment constitutes a fundamental right and a key factor in boosting productivity, creativity, and people's relationships. The deterioration of these qualities in the short and long term undermines one's self-esteem and emotional well-being. Recent developments, including digital technology and remote work, to name a few, are changing how work is organised.
As a result, people work outside the office at irregular hours, nurturing an 'always-on' culture, which exacerbates existing risks to workers' well-being (e.g., heavy workload, long working hours, lack of control and autonomy at work) or creates new ones (e.g., intensification of work, 'datafication' of workers) (EU-OSHA, 2022).
One particular issue pertaining to the well-being of workers across age groups is burnout, which the World Health Organisation classifies as a syndrome resulting from chronic workplace stress that has not been managed effectively (e.g., high work-related demands accompanied by a loss of job control and autonomy). A central challenge is that recovering from burnout takes time away from work, which may cause a loss of earnings, especially in diverse forms of work. Once a person is ready to return to the workplace, reintegration is yet another challenge: while colleagues may be understanding, the entire business culture is based on going the extra mile to earn promotions, benefits, and so on.
The stigma surrounding burnout implies a weak personality, unable to cope with stress and work pressure. Despite this dominant narrative, burnout can happen to anyone and requires more deliberative space both in public and in company policies. In this vein, the Reshaping Work Dialogue participants acknowledged the necessity of empowering workers and providing support packages.
Companies can organise training sessions and workshops to raise awareness, as well as implement preventive measures, such as consultations and coaching sessions. The companies can also encourage co-created programmes between management and workers oriented towards improving well-being.
In this time of digital transformation, promoting digital well-being is as important as promoting emotional, physical, financial, and social well-being. Zurich Insurance launched a 'Digital Downtime' pilot in Spain that included disconnecting the VPN at 17:00 every evening and over weekends. The company was interested to see whether deliberately disconnecting people could have a positive impact. The first results point in that direction, suggesting digital downtime as a potential roadmap for others to follow.
The EU policy framework on youth is grounded in the EU Youth Strategy 2019-2027 (to be distinguished from the Council of Europe Youth Sector Strategy 2030). Unlike the previous strategy, which emerged during the 2008 economic crisis and emphasised youth as an important economic resource, the current one casts young people as bearers of future democracy (Hofmann-van de Poll & Williamson, 2022).
To meet this strategic goal, specific youth-oriented mechanisms are being implemented, such as Erasmus+ Youth (2021-2027); the European Solidarity Corps (2021-2027); Aim, Learn, Master, Achieve (ALMA), designed for NEETs; and Youth Employment Support. Moreover, the EU institutions declared 2022 the European Year of Youth, offering young people opportunities to learn, acquire new work-related experience, and engage in society. (In 2022, almost 12,000 events and activities placing youth at the centre took place across 72 countries; 26 policy dialogues hosted by various European Commissioners provided young people with a space to discuss the Commission's actions and plans; and youth made up one third of citizens' panels during the Conference on the Future of Europe (Chatzimichail, 2023).) The EU recognised that youth had already paid a higher price during the previous crisis and decided to honour and support their efforts to cope with the ongoing one.
One of the most well-known youth-oriented mechanisms is the European Youth Guarantee (YG). The YG is a labour market policy that ensures Member States' commitment to assisting young people in finding an adequate job within four months of becoming unemployed or leaving formal education (Escudero & Mourelo, 2017). The initial YG covered those under the age of 25, while the reinforced YG was extended to include those under the age of 30.
Complementing these tailor-made policy responses (including the social agenda grounded in the European Pillar of Social Rights and its Action Plan), the EU is preparing to establish a social taxonomy that will classify economic activities based on their contributions to the EU's social goals, including the quality of working conditions, youth employment, etc. As such, the social taxonomy will provide common ground for various stakeholders (e.g., businesses, policy makers, investors), guiding them towards social sustainability. Likewise, it is expected to strengthen cooperation between the public and private sectors in addressing topics like youth employment.
Drawing upon its economic and social agenda and lessons from the recent health crisis, the EU has increased efforts to promote mental health and well-being. A programme called 'Healthier Together – EU non-communicable diseases initiative' represents one of the first EU-wide initiatives to prioritise mental health (Holmgaard Mersh, 2022). It will not remain the only one: in her annual address, the President of the European Commission announced that the EU's mental health strategy and action plan would be presented in 2023. These will be an important building block of the recently launched European Health Union, a framework containing different instruments and tools to mitigate risks to public health.
Looking specifically at workplace well-being, the EU adopted the 'European Strategic Framework on Health and Safety at Work 2021-2027: Occupational safety and health in a changing world of work' (European Commission, 2021). The document outlines several priorities for the Commission, including modernising the OSH legislative framework related to digital transformation (e.g., a review of the Workplaces Directive and the Display Screen Equipment Directive by 2023) and running awareness campaigns intended to mitigate psychosocial and ergonomic risks. It also envisions a role for the Commission in relation to Member States and social partners (e.g., updating existing legal frameworks and agreements to include new OSH risks). Last but not least, the Framework foresees close cooperation with the ILO and WHO in supporting the integration of the right to safe and healthy working conditions into the ILO framework of fundamental principles and rights at work.
Policy recommendations
Youth employment
While we focus on various policies that can support youth, it is important to see these recommendations in light of the broader discussion presented in this report, specifically in relation to good-quality jobs. Upskilling and other support programmes are meaningless if we do not increase the availability of good-quality jobs in the first place.
- Include young people in discussions on good-quality jobs for the post-pandemic recovery period.
- Create programmes that will support youth not engaged in education, employment, or training.
- Create public-private partnership programmes for upskilling and reskilling.
- Foster the uptake of apprenticeships to support the transition from education to work and match emerging talent with companiesâ skills needs.
- Promote and support 'participative entrepreneurship', a broad concept covering strategies for creating collective ownership along three axes: (i) a voice in operational decisions, (ii) strategic co-ownership, and (iii) financial co-ownership, which ranges from profit-sharing to shareholding.
- Promote and support coop entrepreneurship as a model of solidarity, cooperation, and mutualisation of risks stemming from a wider social, economic, political, and ecological landscape. Such a model of empowering workers through cooperatives, especially in diverse forms of work, could be developed to resemble the Smart Coop network or Cera Coop strategies in coop entrepreneurship.
Workplace well-being
- Enable access to mental health support programmes for all workers, regardless of their status (e.g., employee or self-employed).
- Promote the development of support programmes, such as coaching and other preventative actions, that assist workers in dealing with stressors. This can be achieved through public-private partnerships, disclosure of work-sustainability measures, best-practice exchange, and/or subsidies.
- Stimulate conversations through public-private partnerships about emotional well-being and mental health, simultaneously empowering workers to conduct these sensitive discussions.
- Develop awareness campaigns aimed at destigmatising mental health issues through public-private partnerships.
Implications for companies
- Create transparent well-being policies and programmes to open discussion about the importance of emotional well-being and mental health, which may also attract more young workers.
- Explore suitable measures to address new risks stemming from the digitalisation of workplaces.
- Explore and develop measures to validate workers' on-the-job experience and make it transferable.
- Explore and develop measures to improve working conditions, thereby enhancing worker well-being and, in turn, productivity.