A draft EU regulation on artificial intelligence risks excluding social partners and falling short of data-protection requirements.
Discussions on a European regulation on artificial intelligence have intensified since the European Commission published its proposal in April. At the time, we wrote about potential threats to labor and employment rights. The text, of course, is not final, and the final version may differ significantly; there is indeed room for improvement.
Amid calls from employers for a “laser-precise” definition, which would be easy to circumvent and unable to keep pace with technical developments, AI is described in the draft with reference to some of its functions: prediction, optimization, personalization and resource allocation. Owing to the digital acceleration induced by the pandemic, almost everyone is now aware of the promises and dangers of AI-based tools adopted to perform decision-making tasks in a wide variety of fields, including particularly sensitive contexts such as working environments.
The draft regulation mentions both “AI systems intended to be used for the recruitment or selection of natural persons, in particular for the publication of job offers, the preselection or screening of applications, the evaluation of candidates during interviews or tests” and AI “intended to be used for making decisions on promotion and termination of work-related contractual relationships, for task allocation and for monitoring and evaluating the performance and behavior of persons in such relationships”. This broadly encompasses the managerial functions entrusted to data-driven management models, often grouped under the popular label of “algorithmic bosses”.
The commission recognizes that these “high-risk” AI systems “pose significant risks to the health and safety or fundamental rights of persons”. In the draft, however, a laissez-faire approach prevails, and such systems simply need to comply with a “set of mandatory horizontal requirements for trustworthy AI”. These include “appropriate data governance and management practices”, package-insert-style documentation proving compliance with the applicable rules, transparency of procedures, human oversight and “an appropriate level of accuracy, robustness and cybersecurity”. In practice, however, enforcement relies mainly on self-certification through ex ante “conformity assessment procedures” carried out by the providers themselves or, in a few cases, conducted by standardization bodies.
Ceiling rather than floor
We had expressed the fear that the regulation would become a ceiling for labor protection rather than a floor. Its liberalizing legal basis is Article 114 of the Treaty on the Functioning of the European Union, concerning harmonization in the internal market. This could be used to override existing national rules requiring the involvement of social partners before the introduction of any technological tool capable of monitoring workers’ performance. This is especially true when AI applications are integrated into ordinary tools already in use in the workplace to protect company assets, assess work performance, track productivity or report deviant behavior.
Several Member States have devised a dedicated model for surveillance technology in employment, in which worker representatives or public bodies must be involved and, in some cases, can exercise veto powers. While most ordinary surveillance systems must pass through a co-determination phase before being introduced into workplaces, the model envisioned by the AI regulation risks displacing all of these procedural safeguards. Indeed, they could be read as exceeding, and disproportionate to, the guarantees the regulation provides, and thus as hampering the freedom to provide the AI-related services the instrument aims to promote. More protective national laws therefore risk being watered down if they are deemed incompatible with the objectives of regulatory harmonization.
The regulation should explicitly provide that it is without prejudice to any current or future labor and employment protections governing the introduction and use of AI-enabled tools in European workplaces. This would prevent it from being used to lower or repeal labor standards, undermining privacy, freedom of expression, human dignity and equality.
More precise rules
Such a provision would be in line with the Commission’s assertion that its proposal “is without prejudice to and complements the General Data Protection Regulation”. A key provision of the GDPR, Article 88, allows Member States, “by law or by collective agreements”, to “provide for more specific rules to ensure the protection of the rights and freedoms in respect of the processing of employees’ personal data in the employment context”.
The GDPR refers “in particular” to “the purposes of the recruitment, the performance of the contract of employment, including discharge of obligations laid down by law or by collective agreements, management, planning and organisation of work, equality and diversity in the workplace, health and safety at work”, as well as “protection of employer’s or customer’s property and for the purposes of the exercise and enjoyment, on an individual or collective basis, of rights and benefits related to employment, and for the purpose of the termination of the employment relationship”.
Above all, according to Article 88, those rules “shall include suitable and specific measures to safeguard the data subject’s human dignity, legitimate interests and fundamental rights, with particular regard to the transparency of processing, the transfer of personal data within a group of undertakings, or a group of enterprises engaged in a joint economic activity and monitoring systems at the work place”.
This aims to allow responsive solutions to the emergence of new instruments and practices that can significantly affect workers. The reference to “law or collective agreements” also provides an opportunity to bring workers’ representatives to the table.
National data protection authorities have read it in conjunction with Article 5 of the GDPR, which establishes the principles of “lawfulness, fairness and transparency”, among other guiding principles. In particular, compliance with national provisions on employee monitoring and privacy has been treated as a prerequisite for lawful processing, thus reinforcing the integration between the GDPR and national rules.
The purposes for which Article 88 encourages Member States to adopt specific labor and employment protections correspond to, or even go beyond, the potential uses of the tools envisaged by the AI regulation. Interpreting the draft as precluding national measures that provide specific labor and employment guarantees would therefore be incompatible with Article 88 of the GDPR, which explicitly authorizes such measures. And if that were the case, the AI regulation would not operate “without prejudice” to the GDPR, as the commission claims.
To promote legal certainty, for the benefit of providers and users alike, an explicit provision ensuring that no national labor and employment regulation can be set aside under the AI regulation would be appropriate. Moreover, AI-based management tools pose serious risks to workers’ rights far beyond the realm of privacy. Consistency must therefore be ensured with the Charter of Fundamental Rights of the EU and with secondary EU law on consumer protection, non-discrimination and equality, and health and safety.
As citizens, unions, data protection authorities and litigants gradually master the strategic use of the GDPR to tame algorithmic bosses, it is crucial to explore the full potential of the rights it confers. That role would be seriously compromised if the proposed AI regulation were to enter into force in its current form as an instrument of secondary EU law. Instead, an AI act must be seen as one piece of a complex, multidimensional puzzle, whose dual purpose is to maintain data flows while ensuring full respect for the rights of European citizens and workers.