AI systems & GDPR rules – How do they fit together? – Part II

In this constantly evolving tech landscape, artificial intelligence (“AI”) is also transforming the employment scene, redefining roles and interactions within the workplace. As algorithms and human expertise join forces, AI boosts efficiency and precision, while human intuition still thrives in uncertain situations. The result? A dynamic, hybrid workforce where cutting-edge technology and human insight work hand in hand, driving productivity and shaping the future of work.

This is the second part of our series on the AI & GDPR interplay, written by bpv GRIGORESCU STEFANICA lawyers Diana Ciubotaru (Associate) and Silvana Curteanu (Associate). Here, we delve into the most significant topics concerning the AI & GDPR combo within employment relationships and point out the requirements incumbent on the employer as a data controller and deployer[1] when using AI tools.

Don’t forget to also check the first part of this article, where we assessed the data protection background and its algorithmic readiness by overviewing the main relevant GDPR provisions related to the development and deployment of AI systems and how the AI Act impacts the GDPR’s rules.

GDPR, AI and WoW (the World of Work)

(i) The first steps and the compliance concerns

AI systems in recruitment and human resources (“HR”) processes offer significant benefits, such as accelerating recruitment and hiring and improving candidate communication. Despite these advantages, the human-oriented field of employment brings a certain degree of reluctance to rely fully on AI systems for recruitment processes from start to finish. Recent developments in AI include tools like virtual assistants, which can source resumes, contact candidates, and conduct interviews using machine learning (ML), and platforms such as VaaS (Voice as a Service). A survey[2] shows that 62% of HR professionals anticipate that certain recruitment stages (e.g., candidate application and selection for the relevant position) will be fully automated by AI.

While AI tools can streamline tasks and provide data-driven insights, they may also raise compliance concerns, particularly under the European Regulation on Artificial Intelligence (“AI Act”). Under this recent and highly debated regulation, AI systems used in employment decisions (e.g., AI platforms making decisions on task allocation, promotion and termination of employment relationships, or AI tools used for monitoring or evaluating employees and their performance) are classified as high-risk AI systems. In this context, the compliance concern remains: how can an employer benefit from all the AI-based tools and facilities while still being GDPR compliant?

So, let’s shed some light on certain practical steps to be followed by employers when using AI tools or when acting as AI deployers.

(ii) Practical steps for employers

Regardless of the purposes for which AI tools are used, there are no exceptions from the GDPR requirements for technology-enthusiast employers. Here’s a breakdown of the key actions that may contribute to data privacy compliance:

1. Identify and document a lawful basis for processing

Firstly, the employer must identify the appropriate legal basis (as provided by Art. 6 of the GDPR). Generally, for processing employees’ data, the most common lawful bases may include:

▸ Execution of a contract: may be applicable when AI tools are used to manage certain aspects of the employment relationship (e.g., AI-driven payroll systems that automate salary calculations, deductions, and benefits management, or AI tools that assess employee performance metrics to verify that employees meet the requirements and obligations outlined in their employment contracts, such as achieving certain productivity targets).
▸ Compliance with a legal obligation of the employer: for example, complying with occupational health and safety regulations to ensure a safe working environment. AI-based tools can help evaluate compliance and alert employers and management when a safety breach occurs (e.g., identifying employees not wearing mandatory protective gear).
▸ Legitimate interest: it is often used, but it must always involve performing the “balancing test” to verify that the employer’s interests are not overridden by the interests, rights, and freedoms of the employees.
▸ Consent: due to the power imbalance between employers and employees, relying on this legal basis is tricky in employment contexts, but it may apply to employees’ voluntary participation in optional programs within the company (e.g., wellness or mental health programs that use AI tools to provide personalized support or recommendations, such as fitness apps or stress management tools, or AI-based tools that analyze employee behavior to provide personalized feedback, coaching, or career development plans).

In such cases, the employer must ensure that the employees have the possibility to withdraw their consent for such processing without facing negative consequences.

As for documenting the chosen lawful basis, employers should clearly record the lawful basis for each processing activity in a record of processing activities, so that they hold evidence in this regard (a minimal sketch of such an entry follows below).
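By way of illustration only, here is a minimal sketch of how such an entry might be structured; the field names and the payroll example are hypothetical, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class ProcessingRecord:
    """One entry in the employer's record of processing activities."""
    activity: str                 # what is done with the data
    purpose: str                  # why the data is processed
    lawful_basis: str             # Art. 6 GDPR basis relied upon
    data_categories: list[str] = field(default_factory=list)
    retention_period: str = ""

# Hypothetical example: documenting an AI-driven payroll tool
payroll_entry = ProcessingRecord(
    activity="Automated salary calculation via an AI payroll tool",
    purpose="Managing salary payments under the employment contract",
    lawful_basis="Art. 6(1)(b) GDPR - execution of a contract",
    data_categories=["name", "salary", "working hours"],
    retention_period="Duration of employment + statutory retention period",
)
```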

2. Conduct a Data Protection Impact Assessment (DPIA)

▸ Targeting a purpose: after establishing the concrete purpose of the processing, the employer can use a DPIA to assess the potential risks associated with processing personal data through AI-based tools in day-to-day activities.
▸ When to conduct it: the DPIA must be conducted before implementing or using the AI-based solutions, especially in scenarios in which the processing involves systematic monitoring, large-scale data, or sensitive data (e.g., biometrics or health data).
▸ The key elements to be included in the DPIA are:

– the description of the processing activities subject to analysis;
– the assessment of the necessity and proportionality of carrying out the activities;
– the evaluation of risks to the individuals’ rights and freedoms;
– the measures implemented by the employer to mitigate risks.

The mirror image of the DPIA under the AI rules is the Fundamental Rights Impact Assessment (FRIA) regulated by Art. 27 of the AI Act. A FRIA must be conducted before deploying a high-risk AI system by deployers that are bodies governed by public law or private entities providing public services (e.g., private hospitals or clinics providing public health services on the basis of public-private partnerships, or bus, tram or metro operators operating under a concession contract with public authorities). Similarly to the DPIA-related requirements, Art. 27 of the AI Act provides the mandatory elements that a FRIA must include.

3. Ensuring transparency by providing clear and comprehensive information to employees

In practice, data controllers effectively implement the transparency principle by providing data subjects with privacy notices that include the information stipulated by Art. 13 and 14 of the GDPR. When using AI solutions, any employer should ensure that these privacy notices also cover the very interaction with an AI system, while allowing the data subjects (i.e., the employees) to understand, as the case may be, how the AI systems make decisions about them, how their data are used to test and/or train a certain system, and the eventual outcome of such AI-powered processing. Broadly, it is essential that the privacy notices address, among others, the following:

– what data is collected;
– the fact that employees will interact with an AI system (mandatory in the case of high-risk AI systems);
– how data is processed using AI-based tools;
– what is the purpose of the processing activities;
– the rights of employees regarding AI-based processing.

The employer must ensure that the elements above are explained in a simple and comprehensive manner, especially considering any automated decision-making and profiling that may impact the employment relationship.

4. Ensure the data minimization and purpose limitation principles

Regardless of whether they deploy or simply use AI-based tools, employers must effectively respect the principles of data minimization and purpose limitation provided by the GDPR. In this regard, employers must:

▸ Limit data collection: only collect the data that is necessary for the specific purposes for which the AI-based tool is used or deployed within the company.
▸ Explicitly communicate the specific purposes: clearly define and communicate to data subjects (i.e., any person within the company) the purposes for which data will be used, and ensure AI systems are not repurposing data beyond the initial intentions (a minimal data minimization sketch follows this list).
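As a rough illustration of how data minimization can be enforced in practice, the sketch below filters a candidate record down to an allow-list of fields before the record ever reaches an AI screening tool; all field names are hypothetical:

```python
# Fields the AI screening tool actually needs for its stated purpose
ALLOWED_FIELDS = {"candidate_id", "skills", "years_of_experience"}

def minimize(record: dict) -> dict:
    """Keep only the fields needed for the stated purpose."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

raw = {
    "candidate_id": "c-102",
    "skills": ["python", "sql"],
    "years_of_experience": 4,
    "marital_status": "married",  # not needed: must not reach the tool
    "home_address": "...",        # not needed: must not reach the tool
}
print(minimize(raw))  # only the three allow-listed fields survive
```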

5. Implement robust security measures

In line with the GDPR’s requirements, given that the use of AI-based tools within employment relationships involves the processing of employees’ personal data, employers must also implement efficient security measures. These may include:

▸ Technical safeguards: frequently used technical and organizational measures include data encryption, access controls, and secure storage solutions (a minimal encryption sketch follows this list).
▸ Conducting regular security assessments: for example, regularly auditing AI systems (deployed or used) to ensure they are secure and identify any potential vulnerabilities.
▸ Implement a security incident response plan: employers should draft an internal policy or a protocol for responding to data breaches, including how to notify affected employees and relevant authorities in such scenarios.
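Here is that sketch, encrypting an employee record at rest with the open-source Python cryptography package; the record contents are hypothetical, and a real deployment would add key management, access controls, and audit logging:

```python
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()  # in practice, keep the key in a secrets manager
fernet = Fernet(key)

record = b'{"employee_id": "e-41", "health_note": "..."}'
token = fernet.encrypt(record)    # ciphertext that is safe to store
original = fernet.decrypt(token)  # decryption behind access controls
assert original == record
```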

6. Paying increased attention to automated individual decision-making and profiling

In this case, every employer using AI-based tools in its relationships with employees must address these specific issues, which constantly raise concerns for data subjects (i.e., the employees). Thus, it is important to:

▸ Ensure human oversight: implement appropriate measures so that decisions impacting employees are not adopted solely on the basis of the results generated by the AI tool and that human review is provided (a minimal gating sketch follows this list).
▸ Properly inform the employees: if AI tools are used for automated individual decision-making (e.g., hiring decisions that automatically evaluate candidates, or automatically allocating tasks based on individual behavior, personal traits or characteristics), employees have the right to be informed of how the decisions are made.
▸ Make sure the employees know their rights: employees must be informed about their right to object to automated individual decision-making[3] or profiling[4] and how to challenge such decisions (Art. 21 and Art. 22 of the GDPR are relevant in this case). You can find out more about automated individual decision-making and profiling in relation to this matter in Part III of our article.
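Below is a minimal sketch of such gating, assuming a hypothetical AI tool that outputs a promotion score; the point is that the AI output remains a recommendation until a human reviewer finalizes the decision:

```python
def gate_decision(ai_score: float, threshold: float = 0.75) -> dict:
    """Wrap an AI recommendation so it is never final without human review."""
    recommendation = "promote" if ai_score >= threshold else "do not promote"
    return {
        "ai_recommendation": recommendation,
        "ai_score": ai_score,
        "final_decision": None,         # set only by the human reviewer
        "requires_human_review": True,  # always, for significant decisions
    }

case = gate_decision(0.81)
# A manager reviews the underlying inputs and may depart from the AI output:
case["final_decision"] = "promote"
```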

7. Check the AI contractors carefully and conduct due diligence

These preventive measures may include the following:

▸ Assess third-party providers/contractors: employers should ensure that AI vendors comply with data protection regulations and have implemented appropriate security measures in their AI solutions.
▸ Concluding data processing agreements (DPAs): employers should sign contracts with vendors that include data protection clauses specifying the roles and responsibilities of each party under the specific conditions laid down by Art. 28 of the GDPR.
▸ Conducting regular audits: employers should monitor third-party compliance, especially with regard to cloud-based AI solutions.

8. Ensure the data accuracy and fairness principles

The GDPR principle of data accuracy must be observed when it comes to personal data used as input for AI systems, especially considering the potentially harmful outcomes of training AI with inaccurate data by singling out people “in a discriminatory or otherwise incorrect or unjust manner”[5]. Therefore, whether employers act as deployers of AI systems or simply as users of AI-based solutions, they should:

▸ Conduct regular data quality checks: namely, verify that the data used or introduced in the AI models is accurate, up-to-date, and relevant for the purposes pursued.
▸ Conduct audits to identify potential biases: employers should evaluate AI systems for potential biases that may occur in decision-making processes (e.g., in the context of hiring, AI-based recruitment tools may generate biases against women); a minimal bias-audit sketch follows this list.
▸ Propose corrective measures, if necessary: implementing mechanisms to correct any biases or inaccuracies identified during the audits.
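One simple heuristic such an audit might apply is the “four-fifths rule”, which flags potential adverse impact when one group’s selection rate falls below 80% of the highest group’s rate; the sketch below uses hypothetical audit data:

```python
def four_fifths_check(outcomes: dict) -> bool:
    """outcomes maps group -> (selected, total); True means no flag raised."""
    rates = {g: sel / total for g, (sel, total) in outcomes.items()}
    best = max(rates.values())
    return all(rate >= 0.8 * best for rate in rates.values())

# Hypothetical data: (candidates selected, candidates assessed) per group
data = {"group_a": (40, 100), "group_b": (22, 100)}
print(four_fifths_check(data))  # False: 0.22 < 0.8 * 0.40, investigate the tool
```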

9. Organize trainings for employees and managers

To keep up with technological developments while acting in full compliance with applicable legal provisions, the investment in human resources and know-how within a company is paramount. Thus, among others, employers should train their employees on:

▸ AI systems: employers should ensure that staff, especially those involved in managing AI tools, understand their responsibilities regarding data privacy.
▸ Data protection awareness: employers should train staff on the effective implementation and application of the GDPR principles (or local data protection laws), such as data minimization, purpose limitation, and lawful processing.

10. Constantly review and update data protection policies

Some of the most common practices that may help employers comply with data protection principles and rules when using (or even deploying) AI-based solutions or tools can include:

▸ Regular policy reviews: periodically update the internal data protection policies to account for changes in AI technology or regulatory requirements.
▸ Proper documentation: this can consist of keeping records of policy changes and ensuring they are accessible to employees.
▸ Conducting internal audits: regular internal audits that verify compliance with data protection policies and practices can be a business-saving solution.

11. Prioritizing and always respecting employees’ rights

When it comes to data subjects’ rights (i.e., the employees), employers, as data controllers, must:

▸ Enable access, rectification, and erasure of data: employers must ensure that employees can access their data, request corrections, or ask for data to be deleted.
▸ Data portability: if relevant and upon request, employers must provide employees with their data in a structured, commonly used and machine-readable format (a minimal export sketch follows this list).
▸ Respond to all justified requests: employers must establish and implement a process for responding promptly to data access or deletion requests from employees or any type of request submitted by employees under the GDPR’s provisions.
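As a minimal sketch of the portability requirement, JSON is one commonly used, machine-readable format that satisfies it; the record below is hypothetical:

```python
import json

def export_employee_data(employee: dict) -> str:
    """Return the employee's data in a structured, machine-readable format."""
    return json.dumps(employee, indent=2, ensure_ascii=False)

# Hypothetical record compiled from HR systems and AI-tool outputs
employee = {
    "name": "Jane Doe",
    "performance_scores": [{"period": "2024-Q1", "ai_score": 0.81}],
    "training_history": ["GDPR awareness", "AI tools onboarding"],
}
print(export_employee_data(employee))
```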

(iii) The conclusion?

As can be observed, using AI-based platforms or tools is undoubtedly a powerful asset, but only if used responsibly. By implementing transparent policies, prioritizing data minimization, and embracing a privacy-by-design approach, employers can turn the use of AI into a robust ally in their compliance journey, while also boosting the efficiency of various tasks and interactions carried out by employees.

Stay tuned for the third and last part of our article!  

Remember to subscribe to our newsletter to stay updated on the latest legal developments.

[1] “Deployer” means a natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity.

[2] https://www.tidio.com/blog/ai-recruitment/

[3] Decisions made about individuals solely by automated means, without any human involvement. This typically involves the use of algorithms or artificial intelligence (AI) to process personal data and make decisions based on that data.

[4] The automated processing of personal data to assess or predict various characteristics of an individual. The goal is often to categorize people based on specific traits or behaviors, allowing organizations to make decisions or target individuals in specific ways (e.g., regarding the work performance, economic situation, health, behavior, interests, location).

[5] Recital (59) of the AI Act.

 

AI systems & GDPR rules – How do they fit together? – Part I

Starting from August 1, 2024, we are beginning a new chapter in terms of technology governance, marked by the entry into force of the European Regulation on artificial intelligence (the “AI Act” or the “Regulation”)[1]. The AI Act marks a pivotal moment in how we conceptualize, develop, and deploy artificial intelligence (“AI”), aiming to achieve a delicate balance: promoting technological advancement while safeguarding fundamental human rights.

In this article, written by our colleagues Bianca Ciubotaru, Associate, and Silvana Curteanu, Associate, we aim to explore the intersection of AI innovation and big data use with personal data protection compliance, particularly under the General Data Protection Regulation (GDPR)[2]. Our analysis is structured in three parts:

1. In this first part, we shall assess the data protection background and its algorithmic readiness by overviewing the main relevant GDPR provisions related to the development and deployment of AI, and how the AI Act impacts the GDPR’s rules.

2. In the second part, we shall delve deep into the most significant topics concerning the AI & GDPR combo within the employment relationships and the requirements incumbent on the employer as a data controller and deployer when using AI tools.

3. In the third and final part, we shall discuss the importance of human involvement in AI processing activities, and the classification of certain AI tools used in the employment area as “high-risk” systems, before wrapping up and laying down our conclusions.

AI Act – From ink on paper to real-world impact

The journey of the AI Act from ink on paper to real-world impact unfolds through a series of meticulous steps, and the effective application of its provisions takes place in four stages, as follows:

– 2 February 2025: the prohibitions on AI practices posing unacceptable risks and the AI literacy obligations become applicable;

– 2 August 2025: the rules on general-purpose AI models, the governance framework, and the provisions on penalties become applicable;

– 2 August 2026: the Regulation becomes generally applicable, including most obligations for high-risk AI systems;

– 2 August 2027: the obligations for high-risk AI systems embedded in products covered by the EU harmonisation legislation listed in Annex I become applicable.

Who falls under the purview of the AI Act?

The AI Act applies to a broad range of entities involved in developing, deploying, and using AI systems within the European Union (“EU”). Specifically, it applies to:

AI Providers: organisations and individuals placing on the market AI systems or general-purpose AI models in the EU, irrespective of whether those providers are established or located within the EU or in a third country;

AI Deployers: entities that use AI systems in their professional activities, including businesses, public authorities, and other organisations, particularly when using high-risk AI applications, that have their place of establishment or are located within the EU;

AI Providers and Deployers: entities that have their place of establishment or are located in a third country, where the output produced by the AI system is used in the EU;

Importers and Distributors placing AI systems on the EU market;

Product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark.

Exemptions

The AI Act does not apply to:

systems that do not qualify as AI (e.g., technologies that do not involve machine learning, natural language processing, or similar AI techniques);

military applications used exclusively for military purposes;

research and development activities that imply the use of AI systems solely for research and development purposes, particularly in a non-commercial context;

personal use: AI systems used by individuals for personal, non-professional purposes (e.g., personal assistants or home automation systems that do not impact others);

certain public services that do not involve high-risk AI applications, depending on the specific context and use case;

small enterprises: while the AI Act applies to all organisations, there may be certain provisions or requirements that are relaxed or not enforced for small enterprises, particularly in terms of compliance burdens;

free and open-source software (unless their use would classify them as a prohibited or high-risk AI system, or their use would subject them to transparency obligations).

As AI tools become increasingly integrated into our daily activities, ensuring compliance with the GDPR is paramount. The interplay between the GDPR rules and the AI Act provisions is essential, given that the AI Act also provides a framework for a new category of so-called general-purpose AI models[3] – the typical examples of such a category are the generative AI models, which are characterized by their ability to respond easily to a wide range of distinct tasks and, some of them, by their self-training capability.

Yet, the looming question remains: Will the majority of human tasks be automated by AI? Can AI enhance or replace human reasoning and empathy? Or is a harmonious coexistence possible?

A GDPR perspective – general outlines

As can be easily observed, the GDPR is AI-neutral and does not specifically regulate the collection and processing of personal data by AI systems. The AI Act complements the GDPR by establishing conditions for developing and deploying trusted AI systems, introducing some novelties related to personal data and certain processing activities.

Regardless of the type of AI-related activity they carry out, AI providers and deployers will be required to map their obligations very carefully, particularly considering the interactions between the GDPR and the AI Act and their combined applicability. For example, both regulations address the concept of “automated decision-making”, but from different angles, which are undeniably intertwined.

Similarly, while AI models are trained with large amounts of data, the GDPR addresses large-scale automated processing of personal data. Specifically, profiling and automated decision-making are among the key privacy issues in the use of AI to provide predictions or recommendations about individuals (this subject will be further detailed in Part III of our article).

But how do we know which regulation applies in specific cases?

In a nutshell, the AI Act applies exclusively to AI systems and models, and the GDPR applies to any processing of personal data. Thus, there can potentially be four scenarios:

– an AI system processes personal data: both the AI Act and the GDPR apply;

– an AI system does not process any personal data: only the AI Act applies;

– personal data is processed without the involvement of an AI system: only the GDPR applies;

– neither an AI system nor personal data is involved: neither regulation applies.

The impact of the AI Act on data protection

Although the objective of both frameworks is a common one – protecting the fundamental rights of natural persons, including ensuring that AI systems process personal data lawfully – there are inevitably certain “tensions” between the provisions of the GDPR and the AI Act.

At first glance, it might seem challenging to align the principles of purpose limitation, data minimisation and the restrictions on automated decision-making processes provided by the GDPR, on the one hand, with the processing of vast quantities of personal data for insufficiently precise purposes in the context of the AI Act’s scope, on the other hand.

Nevertheless, data protection principles can be interpreted and applied consistently with the benefits of AI techniques and the use of big data, for example[4]:

the principle of purpose limitation allows the re-use of personal data when compatible with the original purpose;

the principle of data minimisation might be applied by reducing the identifiability of data rather than their quantity, for example through pseudonymisation (a minimal sketch follows this list);

the GDPR’s prohibition of automated decision-making comes together, generally, with specific nuances and exceptions, thus not hindering the AI’s further development or deployment in this respect.
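By way of illustration, the sketch below pseudonymises a direct identifier with a keyed hash before the data enters an AI training pipeline, so the quantity of data stays the same while identifiability drops; the key and field names are hypothetical:

```python
import hashlib
import hmac

SECRET_KEY = b"keep-me-in-a-separate-key-store"  # hypothetical key

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a stable keyed hash (pseudonym)."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"employee_id": "maria.pop@example.com", "tenure_years": 6}
record["employee_id"] = pseudonymise(record["employee_id"])
# The training pipeline now sees only the pseudonym, never the e-mail address.
```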

Gaining a comprehensive approach to both frameworks is crucial in fostering a culture of compliance, transparency and trust in various sectors of activity (including among employees and employment relationships). Balancing privacy and technological innovation is an attainable goal, and there is a significant complementarity between the GDPR and the AI Act.

Briefly, the most relevant elements of the GDPR that can be interconnected with the use of AI systems or other related automated technologies (including in the context of employees’ data processing activities – this subject will be further detailed in Part II of our article) are:

(i) the right to be informed & the right of access

Articles 13, 14, and 15 of the GDPR require data controllers to provide detailed information about their processing activities, particularly when using automated decision-making or profiling. This includes explaining the logic behind such processes and their potential consequences for individuals. Similarly, the AI Act emphasizes transparency concerning both the development and use of AI systems. The transparency measures in the GDPR are complemented by additional requirements in the AI Act, reinforcing the need for clear information about AI-related data processing, such as:

▸ AI providers must inform individuals when they interact with an AI system if this interaction is not immediately apparent;

▸ AI deployers must inform individuals about the operation of AI systems used for emotion recognition or biometric categorization;

▸ AI providers must provide and publicly share a detailed summary of the training data used for general-purpose AI models;

▸ AI deployers who are also employers must notify workers’ representatives and affected employees about the use of high-risk AI systems in the workplace.

(ii) the right to object

As Article 21 of the GDPR outlines, individuals may object to the processing of their personal data, specifically when it comes to profiling.

Thus, the GDPR already provides users (of an AI system/tool) with the ability to object to the use of their personal information, for example, in the case of training AI systems. The right to object ensures that individuals can prevent their data from being used in AI model development, especially when it involves profiling or automated decision-making.

Therefore, the existing GDPR framework sufficiently protects user privacy in the context of AI usage, eliminating the need for the AI Act to create a separate, additional mechanism for users to oppose the use of their personal data.

(iii) automated decision-making

Article 22 and Recital 71 of the GDPR grant individuals the right not to be subject to decisions made solely by automated processing (including profiling), particularly when these decisions have legal or similarly significant impacts.

The effective exercise of this right, including the eventual challenge of such an automated decision, requires and entails human intervention. Building upon this, Article 14 of the AI Act provides for an additional layer of human oversight in the design and development of high-risk AI systems, to prevent risks to health, safety, or fundamental rights. Consequently, in terms of human intervention in automated decision-making, low or minimal-risk AI systems are subject only to the GDPR requirements.

(iv) risk assessment and documentation requirements

Article 35 of the GDPR requires a Data Protection Impact Assessment (DPIA) for high-risk data processing, such as large-scale profiling with AI (e.g., profiling of candidates/employees using an AI system). Similarly, the AI Act mandates a conformity assessment for AI providers and a Fundamental Rights Impact Assessment (FRIA) for AI deployers.

FRIA requires evaluating AI system risks to individuals and determining mitigation strategies. While DPIA focuses on data processing impacts, FRIA assesses broader consequences of AI usage on individual rights. Article 27 para. (4) of the AI Act emphasizes their complementary nature, not as bureaucratic overlap but as distinct yet interconnected assessments addressing different aspects of AI implementation and its effects on fundamental rights.

The AI Act brings some novelties impacting personal data

Undoubtedly, the AI Act and the GDPR complement each other. However, given that the two legal frameworks do not regulate the same objects, they do not require the same approach. Thus, in certain respects, the AI Act offers a more flexible regime.

For example, the AI Act provides:

that law enforcement authorities may use real-time remote biometric identification systems[5] in public spaces under exceptional circumstances, such as locating trafficking victims, preventing imminent threats, or identifying criminal suspects (i.e., Article 5 of the AI Act). These cases can be seen as exceptions to Article 9 of the GDPR, which generally prohibits the collection and processing of biometric data.

that the providers of high-risk AI systems may process special categories of personal data if this is strictly necessary for detecting and correcting biases (i.e., Article 10 of the AI Act). This processing must comply with the strict conditions set out in Article 9 of the GDPR.

for the re-use of sensitive data, such as genetic, biometric, and health data, within AI regulatory sandboxes[6] to support the development of AI systems of significant public interest (e.g., in the health system). These sandboxes will be supervised by a dedicated authority (Article 59 of the AI Act).

What about the sanctions?

Even if the two regulations complement each other, the sanctions for non-compliance with their provisions differ.

The main applicable sanctions are as follows:

– under the GDPR: administrative fines of up to EUR 20 million or, in the case of an undertaking, up to 4% of the total worldwide annual turnover of the preceding financial year, whichever is higher (EUR 10 million or 2% for less severe infringements) (Art. 83 of the GDPR);

– under the AI Act: fines of up to EUR 35 million or 7% of the total worldwide annual turnover for engaging in prohibited AI practices, up to EUR 15 million or 3% for non-compliance with other obligations, and up to EUR 7.5 million or 1% for supplying incorrect, incomplete or misleading information to the competent authorities (Art. 99 of the AI Act).

Stay tuned for Part II of our article!

[1] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act), available at https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689.

[2] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), available at https://eur-lex.europa.eu/eli/reg/2016/679/oj.

[3] Art. 3 item (63) of the AI Act: “general-purpose AI model” means an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market.

[4] “The impact of the General Data Protection Regulation (GDPR) on artificial intelligence”, available at EPRS_STU(2020)641530_EN.pdf (europa.eu).

[5] Art. 3 item (42) of the AI Act: “real-time remote biometric identification system” means a remote biometric identification system, whereby the capturing of biometric data, the comparison and the identification all occur without a significant delay, comprising not only instant identification, but also limited short delays in order to avoid circumvention.

[6] As per Article 57 of the AI Act, Member States shall ensure that their competent authorities establish at least one AI regulatory sandbox at national level, which shall be operational by 2 August 2026. (…) AI regulatory sandboxes shall provide for a controlled environment that fosters innovation and facilitates the development, training, testing and validation of innovative AI systems for a limited time before their being placed on the market or put into service pursuant to a specific sandbox plan agreed between the providers or prospective providers and the competent authority. Such sandboxes may include testing in real world conditions supervised therein.