bpv Huegel advises founder and shareholders on the sale of all shares in EVK DI Kerschhaggl GmbH to Headwall Photonics, Inc.

17 January 2025. The bpv Huegel team led by Elke Napokoj advised the founder and the shareholders of EVK DI Kerschhaggl GmbH (“EVK”) on the sale of all shares in the company to Headwall Photonics, Inc. The team provided comprehensive advice, including deal structuring, contract drafting, contract negotiations and all steps up to closing.

EVK is an Austria-based technology company specializing in industrial sensor-based sorting and inspection systems. Among other applications, EVK’s innovative technology is used in food processing, plastics recycling and material sorting.

Headwall Photonics, Inc. (“Headwall”), part of the Headwall Group and a portfolio company of Arsenal Capital Partners, an American private equity fund, is a global leader in high-performance spectral imaging solutions and optical components.

EVK’s innovative hyperspectral and inductive sensor technologies as well as data analysis expertise complement Headwall Group’s existing products and commitment to advancing hyperspectral imaging applications and AI-driven interpretation software in machine vision and remote sensing markets.

The transaction was closed on 31 December 2024.

Advisors to EVK: bpv Huegel – Elke Napokoj (Lead, Corporate/M&A), Victoria Huf (Corporate/M&A), Sonja Dürager (IP/IT), Astrid Ablasser-Neuhuber (Competition Law), Gerhard Fussenegger (Competition Law), Sebastian Reiter (Competition Law), Walter Niedermüller (Labour Law), Raphael Lehner (Corporate/M&A).

EVK M&A Team: Rabel & Partner GmbH Wirtschaftsprüfungs- und Steuerberatungsgesellschaft – Markus Pellet.

Advisors to the Buyer: Schönherr Rechtsanwälte.

Press release

 

bpv Huegel advises RWA eGen on the purchase of the shares in RWA AG held by BayWa AG

BayWa AG is selling its international shareholding in RWA AG to co-shareholder RWA eGen as part of its transformation concept.

08 January 2025. A transaction team of bpv Huegel advised RWA Raiffeisen Ware Austria Handel und Vermögensverwaltung eGen (RWA eGen) on the acquisition of shares in RWA Raiffeisen Ware Austria Aktiengesellschaft (RWA AG) from BayWa Aktiengesellschaft (BayWa AG). The sale of key international holdings such as RWA AG is part of the transformation concept of the listed BayWa AG.

RWA eGen is acquiring the approximately 47.53% stake in RWA AG at a purchase price of EUR 176 million, thereby increasing its current stake of around 49.99% in RWA AG. On 27 December 2024, the share purchase agreement was concluded between BayWa AG, its wholly owned subsidiaries BayWa Austria Holding GmbH and BayWa Pensionsverwaltung GmbH on the one hand, and a holding company of RWA eGen on the other. RWA eGen also holds the majority stake in Raiffeisen Agrar Invest AG, which is the second-largest shareholder in BayWa AG with a stake of around 28.3%. The closing of the share purchase agreement is subject to, inter alia, merger control approvals.

RWA AG operates as a producer, service provider and retailer in the business areas of agriculture, technology, energy, building materials and home & garden. As the umbrella organisation of the Austrian Lagerhaus cooperatives, RWA AG provides them with a comprehensive range of services in the aforementioned areas. In addition, RWA AG holds a wide range of participations and subsidiaries in Austria and selected Eastern European countries.

The transaction team at bpv Huegel, led by partners Christoph Nauer (Corporate/M&A, Capital Markets), Thomas Lettau (Corporate/M&A) and Astrid Ablasser-Neuhuber (Merger Control), included Nico Wolski (Tax), Johannes Mitterecker (Corporate/M&A), Ingo Braun (Finance & Regulatory), Roland Juill (Corporate/M&A, Capital Markets), Barbara Valente, Anna Zirkler, Daniel Maurer, Patrick Nutz-Fallheier (all Corporate/M&A), Stefan Holzweber and Philipp Stengg (both Merger Control).

RWA eGen was advised on German law by FPS Rechtsanwälte, Frankfurt (Daniel Herper). BayWa AG was advised by a team from Jones Day, Munich (Maximilian P. Krause, Alexander Ballmann, Jürgen Beninca).

Press release

 

AI systems & GDPR rules – How do they fit together? – Part II

In this constantly evolving tech landscape, artificial intelligence (“AI”) is also transforming the employment scene, redefining roles and interactions within the workplace. As algorithms and human expertise join forces, AI boosts efficiency and precision, while human intuition still thrives in uncertain situations. The result? A dynamic, hybrid workforce where cutting-edge technology and human insight work hand in hand, driving productivity and shaping the future of work.

This is the second part of our series on the AI & GDPR interplay, written by bpv GRIGORESCU STEFANICA lawyers Diana Ciubotaru (Associate) and Silvana Curteanu (Associate). Here we delve into the most significant topics concerning the AI & GDPR combination within employment relationships and point out the requirements incumbent on the employer as a data controller and deployer[1] when using AI tools.

Don’t forget to also check the first part of this article, where we assessed the data protection background and its algorithmic readiness by overviewing the main relevant GDPR provisions related to the development and deployment of AI systems and how the AI Act impacts the GDPR’s rules.

GDPR, AI and WoW (the World of Work)

(i) The first steps and the compliance concerns

AI systems in recruitment and human resources (“HR”) processes offer significant benefits, such as accelerating recruitment and hiring and improving candidate communication. Despite these advantages, the human-oriented field of employment brings a certain degree of reluctance to rely fully on AI systems for recruitment processes from start to finish. Recent developments in AI include tools like virtual assistants, which can source resumes, contact candidates, and conduct interviews using machine learning (ML) and platforms such as VaaS (Voice as a System). A survey[2] shows that 62% of HR professionals anticipate that certain recruitment stages will be fully automated by AI (e.g., candidate application and selection for the relevant position).

While AI tools can streamline tasks and provide data-driven insights, they may also raise compliance concerns, particularly under the European Regulation on Artificial Intelligence (“AI Act”). Under this recent and highly debated regulation, AI systems used in employment decisions (e.g., AI platforms making decisions on task allocation, promotion and termination of employment relationships, or AI tools used for monitoring or evaluating employees and their performance) are classified as high-risk AI systems. In this context, the compliance question remains: how can an employer benefit from AI-based tools and facilities while remaining GDPR compliant?

So, let’s shed some light on certain practical steps to be followed by employers when using AI tools or when acting as AI deployers.

(ii) Practical steps for employers

Regardless of the purposes for which AI tools are used, there are no exceptions from the GDPR requirements for technology-enthusiast employers. Here’s a breakdown of the key actions that may contribute to data privacy compliance:

1. Identify and document a lawful basis for processing

Firstly, the employer must identify the appropriate legal basis (as provided by Art. 6 of the GDPR). Generally, for processing employees’ data, the most common lawful bases may include:

▸ Execution of a contract: may be applicable when AI tools are used to manage certain aspects of employment relationships (e.g., AI-driven payroll systems that automate salary calculations, deductions, and benefits management, or AI tools that assess employee performance metrics to verify that employees meet the requirements and obligations outlined in their employment contracts, such as achieving certain productivity targets).
▸ Compliance with a legal obligation of the employer: for example, complying with occupational health and safety regulations to ensure a safe working environment. AI-based tools can help evaluate compliance and alert employers and management when a safety breach occurs (e.g., identifying employees not wearing protective gear where it is mandatory).
▸ Legitimate interest: this basis is often used, but it must always involve the performance of the “balancing test” to analyze whether the employer’s interests are overridden by the rights and freedoms of the employees.
▸ Consent: due to the power imbalance between employers and employees, relying on this legal basis is tricky in employment contexts, but it may apply in the case of voluntary participation of employees in optional programs within the company (e.g., wellness or mental health programs that use AI tools to provide personalized support or recommendations, such as fitness apps or stress management tools, or AI-based tools that analyze employee behavior to provide personalized feedback, coaching, or career development plans).

In such cases, the employer must ensure that the employees have the possibility to withdraw their consent for such processing without facing negative consequences.

As for documentation, employers should clearly record the lawful basis for each processing activity in their record of processing activities, so that they hold evidence in this regard.
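As a purely illustrative sketch (the field names and the sample entry are assumptions, not a template prescribed by the GDPR), such a record-of-processing-activities entry for an AI tool could be kept as structured data along the following lines:

```python
from dataclasses import dataclass, field

@dataclass
class ProcessingRecord:
    """One illustrative entry in an employer's record of processing activities."""
    activity: str
    purpose: str
    lawful_basis: str  # one of the Art. 6 GDPR bases
    data_categories: list = field(default_factory=list)
    retention_period: str = ""

# Hypothetical example entry for an AI-driven payroll system.
payroll_entry = ProcessingRecord(
    activity="AI-driven payroll system",
    purpose="Salary calculation, deductions and benefits management",
    lawful_basis="Execution of a contract (Art. 6(1)(b) GDPR)",
    data_categories=["identification data", "salary data", "working time"],
    retention_period="duration of employment plus statutory archiving period",
)
```

Keeping such entries in a structured, queryable form makes it easier to evidence the lawful basis for each activity if a supervisory authority asks.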

2. Conduct a Data Protection Impact Assessment (DPIA)

▸ Targeting a purpose: after establishing the concrete purpose of the processing, the employer can use a DPIA to assess the potential risks associated with processing personal data through AI-based tools in day-to-day activities.
▸ When to conduct it: the DPIA must be carried out before implementing or using the AI-based solutions, especially in scenarios where the processing involves systematic monitoring, large-scale data, or sensitive data (e.g., biometrics or health data).
▸ The key elements to be included in the DPIA are:

– the description of the processing activities subject to analysis;
– the assessment of the necessity and proportionality of carrying out the activities;
– the evaluation of risks to the individuals’ rights and freedoms;
– the measures implemented by the employer to mitigate those risks.

The mirror image of the DPIA under the AI rules is the Fundamental Rights Impact Assessment (FRIA) regulated by Art. 27 of the AI Act. A FRIA needs to be conducted before deploying a high-risk AI system by deployers that are bodies governed by public law or private entities providing public services (e.g., private hospitals or clinics providing public health services on the basis of public-private partnerships, or bus, tram or metro operators operating on the basis of a concession contract with public authorities). Similarly to the DPIA-related requirements, Art. 27 of the AI Act provides the mandatory elements that a FRIA must include.

3. Ensuring transparency by providing clear and comprehensive information to employees

In practice, data controllers implement the transparency principle by providing data subjects with privacy notices that include the information stipulated by Art. 13 and 14 of the GDPR. When using AI solutions, any employer should ensure that these privacy notices also disclose the very interaction with an AI system, while allowing the data subjects (i.e., the employees) to understand, as the case may be, how the AI systems make decisions about them, how their data are used to test and/or train a certain system, and the possible outcome of such AI-powered processing. Broadly, it is essential that the privacy notices address, among others, the following:

– what data is collected;
– the fact that the employees will interact with an AI system (mandatory in the case of high-risk AI systems);
– how data is processed using AI-based tools;
– what is the purpose of the processing activities;
– the rights of employees regarding AI-based processing.

The employer must ensure that the elements above are explained in a simple and comprehensible manner, especially considering any automated decision-making and profiling that may impact the employment relationship.

4. Ensure the data minimization and purpose limitation principles

Whether deploying or simply using AI-based tools, employers must effectively respect the principles of data minimization and purpose limitation provided by the GDPR. In this regard, employers must:

▸ Limit data collection: only collect the data that is necessary for the specific purposes for which the AI-based tool is used or deployed within the company.
▸ Explicitly communicate the specific purposes: clearly define and communicate to data subjects (i.e., any person within the company) the purposes for which data will be used, and ensure AI systems do not repurpose data beyond the initial intentions.

5. Implement robust security measures

In line with the GDPR’s requirements, given that the use of AI-based tools within employment relationships involves the processing of employees’ personal data, employers must also implement effective security measures. These may include:

▸ Technical safeguards: frequently used technical and organizational measures include data encryption, access controls, and secure storage solutions.
▸ Conducting regular security assessments: for example, regularly auditing AI systems (deployed or used) to ensure they are secure and identify any potential vulnerabilities.
▸ Implement a security incident response plan: employers should draft an internal policy or a protocol for responding to data breaches, including how to notify affected employees and relevant authorities in such scenarios.

6. Paying increased attention to automated individual decision-making and profiling

Every employer using AI-based tools in its relationships with employees must address these specific issues, which are a recurring concern for data subjects (i.e., the employees). Thus, it is important to:

▸ Ensure human oversight: implement appropriate measures so that decisions impacting employees are not adopted solely on the basis of the results generated by the AI tool and that human review is provided (see the sketch after this list).
▸ Properly inform the employees: if AI tools are used for automated individual decision-making (e.g., hiring decisions that automatically evaluate candidates, or the automatic allocation of tasks based on individual behavior, personal traits or characteristics), employees have the right to be informed of how the decisions are made.
▸ Make sure the employees know their rights: employees must be informed about their right to object to automated individual decision-making[3] or profiling[4] and how to challenge such decisions (Art. 21 and Art. 22 of the GDPR are relevant in this case). You can find out more about automated individual decision-making and profiling in relation to this matter in Part III of our article.
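To make the human oversight point concrete, here is a minimal, hypothetical sketch of how a deployer might gate AI-generated employment decisions behind human review; the decision types, names and routing logic are illustrative assumptions, not a statutory classification:

```python
from dataclasses import dataclass

# Illustrative set of decision types with legal or similarly significant
# effects on employees (cf. Art. 22 GDPR); not a legal classification.
SIGNIFICANT_DECISIONS = {"hiring", "promotion", "termination", "task_allocation"}

@dataclass
class AIDecision:
    employee_id: str
    decision_type: str      # e.g. "promotion"
    ai_recommendation: str  # the AI tool's output

def finalize(decision: AIDecision, human_reviewer=None) -> str:
    """Return the final outcome, enforcing human review where required."""
    if decision.decision_type in SIGNIFICANT_DECISIONS:
        if human_reviewer is None:
            raise RuntimeError(
                "This decision may not be adopted solely by automated means; "
                "route it to a human reviewer."
            )
        # The reviewer sees the AI recommendation but decides independently.
        return human_reviewer(decision)
    return decision.ai_recommendation  # low-impact output may pass through

# Illustrative use: the reviewer confirms or overrides the AI recommendation.
decision = AIDecision("emp-001", "promotion", "recommend promotion")
print(finalize(decision, human_reviewer=lambda d: d.ai_recommendation))
```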

7. Check the AI contractors carefully and conduct due diligence

These preventive measures may include the following:

▸ Assess third-party providers/contractors: employers should ensure that AI vendors comply with data protection regulations and have implemented appropriate security measures in their AI solutions.
▸ Concluding data processing agreements (DPAs): employers should sign contracts with vendors that include data protection clauses specifying the roles and responsibilities of each party under the specific conditions laid down by Art. 28 of the GDPR.
▸ Conducting regular audits: employers should monitor third-party compliance, especially with regard to cloud-based AI solutions.

8. Ensure data accuracy and fairness principles

The GDPR principle of data accuracy must be observed when personal data is used as input for AI systems, especially considering the potentially harmful outcomes of training AI with inaccurate data by singling out people “in a discriminatory or otherwise incorrect or unjust manner”[5]. Therefore, whether they act as AI system deployers or simply as users of AI-based solutions, employers should:

▸ Conduct regular data quality checks: verify that the data used or introduced in the AI models is accurate, up-to-date, and relevant for the purposes pursued.
▸ Conduct audits to identify potential biases: employers should evaluate AI systems for potential biases that may occur in decision-making processes (e.g., in the context of hiring, AI-based recruitment tools may generate biases against women).
▸ Propose corrective measures, if necessary: implement mechanisms to correct any biases or inaccuracies identified during the audits.

9. Organize training for employees and managers

To keep up with technological developments while acting in full compliance with applicable legal provisions, the investment in human resources and know-how within a company is paramount. Thus, among others, employers should train their employees on:

▸ AI systems: employers should ensure that staff, especially those involved in managing AI tools, understand their responsibilities regarding data privacy.
▸ Data protection awareness: employers should train staff on the effective implementation and application of the GDPR principles (or local data protection laws), such as data minimization, purpose limitation, and lawful processing.

10. Constantly review and update data protection policies

Some of the most common practices that may help employers comply with data protection principles and rules when using (or even deploying) AI-based solutions or tools can include:

▸ Regular policy reviews: periodically update the internal data protection policies to account for changes in AI technology or regulatory requirements.
▸ Proper documentation: this can consist of keeping records of policy changes and ensuring they are accessible to employees.
▸ Conducting internal audits: regular internal audits of compliance with data protection policies and practices can be a business-saving measure.

11. Prioritizing and always respecting employees’ rights

When it comes to data subjects’ rights (i.e., the employees), employers, as data controllers, must:

▸ Enable access, rectification, and erasure of data: employers must ensure that employees can access their data, request corrections, or ask for data to be deleted.
▸ Data portability: if relevant and upon request, employers must provide employees with their data in a structured, commonly used and machine-readable format.
▸ Respond to all justified requests: employers must establish and implement a process for responding promptly to data access or deletion requests from employees or any type of request submitted by employees under the GDPR’s provisions.

(iii) The conclusion?

As can be observed, using AI-based platforms or tools is undoubtedly a powerful asset, but only if they are used responsibly. By implementing transparent policies, prioritizing data minimization, and embracing a privacy-by-design approach, employers can turn the use of AI into a robust ally in their compliance journey, while boosting the efficiency of various tasks and interactions carried out by employees.

Stay tuned for the third and last part of our article!  

Remember to subscribe to our newsletter to stay updated on the latest legal developments.

[1] “Deployer” means a natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity.

[2] https://www.tidio.com/blog/ai-recruitment/

[3] Decisions made about individuals solely by automated means, without any human involvement. This typically involves the use of algorithms or artificial intelligence (AI) to process personal data and make decisions based on that data.

[4] The automated processing of personal data to assess or predict various characteristics of an individual. The goal is often to categorize people based on specific traits or behaviors, allowing organizations to make decisions or target individuals in specific ways (e.g., regarding the work performance, economic situation, health, behavior, interests, location).

[5]  Recital (59) of the AI Act

 

bpv Huegel advised IMMOFINANZ on the squeeze-out and delisting of S IMMO

IMMOFINANZ takes a further step to optimise its group structure. IMMOFINANZ Group holds 100% of the shares in S IMMO following completion of the squeeze-out.

03 December 2024. In October this year, the Shareholders’ Meeting of S IMMO AG resolved upon the squeeze-out of minority shareholders in exchange for cash compensation in accordance with the Austrian Squeeze-out Act. The squeeze-out took effect upon entry in the commercial register on 3 December 2024, whereby the S IMMO shares of the minority shareholders were transferred to IMMOFINANZ AG as the main shareholder. At the same time, S IMMO’s listing on the Vienna Stock Exchange ended.

bpv Huegel advised IMMOFINANZ on the entire squeeze-out process and delisting.

IMMOFINANZ Group is a commercial real estate group whose activities are focused on the office and retail segments of eight core markets in Europe: Austria, Germany, Poland, Czech Republic, Slovakia, Hungary, Romania and the Adriatic region. Its core business includes the management and development of real estate. IMMOFINANZ Group owns real estate assets worth around EUR 8.0 billion, which are spread across approximately 470 properties. The company is listed on the Vienna (leading index ATX) and Warsaw stock exchanges. Further information: https://www.immofinanz.com.

The bpv Huegel team was led by Christoph Nauer and Roland Juill (both Corporate/M&A, Capital Markets) and included Barbara Valente (Corporate/M&A, Capital Markets), Nicolas Wolski (Tax Law), Lucas Hora (Tax Law) and Daniel Maurer (Corporate/M&A, Capital Markets).

IMMOFINANZ engaged PwC Advisory Services GmbH (Viktoria Gass, Matthias Eicher) for the valuation. BDO Austria GmbH Wirtschaftsprüfungs- und Steuerberatungsgesellschaft (Kurt Schweighart and Raffaela Uhl) acted as court-appointed expert auditor. S IMMO was advised by DORDA (Christoph Brogyányi and Andreas Mayr).

Previously, bpv Huegel’s corporate and capital markets team had advised IMMOFINANZ on increasing its stake in S IMMO through the acquisition of approx. 38% of S IMMO shares from CPI Property Group SA for a purchase price of approx. EUR 608.5 million. Through that transaction, together with the squeeze-out, IMMOFINANZ Group now holds all shares in S IMMO.

Press release

Nicolas Wolski (lawyer and tax advisor) will head the tier 1 tax practice at bpv Huegel

Vienna, 04 November 2024. The experienced tax partner Nicolas Wolski (42) will take over as Head of Tax at bpv Huegel as of November 2024.

Nicolas has been a leading expert in tax law at bpv Huegel for six years. He also has many years of experience working for major international law firms, including Freshfields Bruckhaus Deringer, Graf von Westphalen and the US law firm Willkie Farr & Gallagher. Nicolas is dual-qualified as a lawyer and tax advisor in both Austria and Germany.

Nicolas has worked closely with the former head of the practice, Gerald Schachner, for the past few years. Gerald will leave bpv Huegel after 14 years at the end of October 2024 to set up his own law firm.

“We are looking forward to continuing to work with Nicolas in his new role. As Head of Tax, he will lead the further development of the practice group. Our goal is to give it an even stronger international focus. I would also like to thank our partner and friend Gerald for his significant contribution to the successful development of bpv Huegel’s tax practice,” said Christoph Nauer, Co-Managing Partner at bpv Huegel.

Nicolas will continue to be supported in his new role by Kornelia Wittmann, who is also a tax partner. She has been with bpv Huegel for over twelve years and previously worked for Big Four tax advisory firms for many years. She is also dual-qualified as a tax advisor and lawyer in several jurisdictions.

The tax practice of bpv Huegel is a leading practice and holds top positions in national and international rankings such as JUVE, ITR World Tax, Chambers Europe and Legal 500. As recently as September 2024, the tax team was named “Tax Litigation Law Firm of the Year – Austria” and “Transfer Pricing Law Firm of the Year – Austria” by ITR. 40 years ago, bpv Huegel was one of the first Austrian law firms to focus on integrated tax advice.

“I would like to thank my partners for their trust. It is of course an honour to take over the lead of the practice group from Gerald. It’s unfortunate that he is leaving. We as a team, but also I personally, are very grateful to him for his always respectful and friendly support, especially in my early years at bpv Huegel. I am looking forward to my new role,” said Nicolas Wolski, new Head of Tax at bpv Huegel.

Press release

AI systems & GDPR rules – How do they fit together? – Part I

Starting from August 1, 2024, we are beginning a new chapter in technology governance, marked by the entry into force of the European Regulation on artificial intelligence (the “AI Act” or the “Regulation”)[5]. The AI Act marks a pivotal moment in how we conceptualize, develop, and deploy artificial intelligence (“AI”), aiming to achieve a delicate balance: promoting technological advancement while safeguarding fundamental human rights.

In this article, written by our colleagues Bianca Ciubotaru, Associate, and Silvana Curteanu, Associate, we aim to explore the intersection of AI innovation and big data use with personal data protection compliance, particularly under the General Data Protection Regulation (GDPR)[6]. Our analysis is structured in three parts:

1. In this first part, we shall assess the data protection background and its algorithmic readiness by overviewing the main relevant GDPR provisions related to the development and deployment of AI, and how the AI Act impacts the GDPR’s rules.

2. In the second part, we shall delve deep into the most significant topics concerning the AI & GDPR combo within the employment relationships and the requirements incumbent on the employer as a data controller and deployer when using AI tools.

3. In the third and final part, we shall discuss the importance of human involvement in AI processing activities, and the classification of certain AI tools used in the employment area as “high-risk” systems, before wrapping up and laying down our conclusions.

AI Act – From ink on paper to the real world

The journey of the AI Act from ink on paper to real-world impact unfolds through a series of meticulous steps, and the effective application of its provisions will take place in four stages, as follows:

– 2 February 2025: the prohibitions on certain AI practices and the AI literacy obligations apply;

– 2 August 2025: the rules on general-purpose AI models and the governance provisions apply;

– 2 August 2026: the bulk of the AI Act, including the obligations for the high-risk AI systems listed in Annex III, applies;

– 2 August 2027: the obligations for high-risk AI systems that are safety components of regulated products (Article 6(1)) apply.

Who falls under the purview of the AI Act?

The AI Act applies to a broad range of entities involved in developing, deploying, and using AI systems within the European Union (“EU”). Specifically, it applies to:

AI Providers: organisations and individuals placing AI systems or general-purpose AI models on the market in the EU, irrespective of whether those providers are established or located within the EU or in a third country;

AI Deployers: entities with their place of establishment or location within the EU that use AI systems in their professional activities, including businesses, public authorities, and other organisations, particularly when using high-risk AI applications;

AI Providers and Deployers: entities that have their place of establishment or are located in a third country, where the output produced by the AI system is used in the EU;

Importers and Distributors placing AI systems on the EU market;

Product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark.

Exemptions

The AI Act does not apply in case of:

systems that do not qualify as AI (e.g., technologies that do not involve machine learning, natural language processing, or similar AI techniques);

military applications, i.e., AI systems used exclusively for military purposes;

research and development activities that involve the use of AI systems solely for research and development purposes, particularly in a non-commercial context;

personal use: AI systems used by individuals for personal, non-professional purposes (e.g., personal assistants or home automation systems that do not impact others);

certain public services that do not involve high-risk AI applications, depending on the specific context and use case;

small enterprises: while the AI Act applies to all organisations, there may be certain provisions or requirements that are relaxed or not enforced for small enterprises, particularly in terms of compliance burdens;

free and open-source software (unless their use would classify them as a prohibited or high-risk AI system, or their use would subject them to transparency obligations).

As AI tools become increasingly integrated into our daily activities, ensuring compliance with the GDPR is paramount. The interplay between the GDPR rules and the AI Act provisions is essential, given that the AI Act also provides a framework for a new category of so-called general-purpose AI models[4] – the typical examples of this category are generative AI models, which are characterized by their ability to respond easily to a wide range of distinct tasks and, some of them, by their capability to self-train.

Yet, the looming question remains: Will the majority of human tasks be automated by AI? Can AI enhance or replace human reasoning and empathy? Or is a harmonious coexistence possible?

A GDPR perspective – general outlines

As can be easily observed, the GDPR is AI-neutral and does not specifically regulate the collection and processing of personal data by AI systems. The AI Act complements the GDPR by establishing conditions for developing and deploying trusted AI systems and introduces some novelties related to personal data and certain processing activities.

Whatever type of AI-related activity they carry out, AI providers and deployers will be required to map their obligations very carefully, particularly considering the interactions between the GDPR and the AI Act and their combined applicability. For example, both regulations address the concept of “automated decision-making”, but from different angles, which are undeniably intertwined.

Similarly, while AI models are trained with large amounts of data, the GDPR addresses large-scale automated processing of personal data. Specifically, profiling and automated decision-making processes are some of the key privacy issues within the use of AI to provide a prediction or recommendation about individuals (this subject will be further detailed in Part III of our article).

But how do we know which regulation applies in specific cases?

In a nutshell, the AI Act applies exclusively to AI systems and models, while the GDPR applies to any processing of personal data. Thus, there can potentially be four scenarios:

– only the AI Act applies (an AI system or model that involves no processing of personal data);

– only the GDPR applies (personal data processed without any AI system);

– both apply (an AI system that processes personal data, the typical case in employment contexts);

– neither applies (no AI system and no personal data involved).

The impact of the AI Act on data protection

Although the objective of both frameworks is a common one – to protect the fundamental rights of natural persons (including by ensuring that AI systems process personal data lawfully) – there are inevitably certain “tensions” between the provisions of the GDPR and the AI Act.

At first glance, it might seem challenging to align the principles of purpose limitation, data minimisation and the restrictions on automated decision-making processes provided by the GDPR, on the one hand, with the processing of vast quantities of personal data for insufficiently precise purposes in the context of the AI Act’s scope, on the other hand.

Nevertheless, data protection principles can be interpreted and applied consistently with the benefits of AI techniques and the use of big data, for example[1]:

the principle of purpose limitation allows the re-use of personal data when compatible with the original purpose;

the principle of data minimisation might be applied by reducing the identifiability of data rather than the quantity, for example, through pseudonymisation (see the sketch below);

the GDPR’s prohibition of automated decision-making comes together, generally, with specific nuances and exceptions, thus not hindering the AI’s further development or deployment in this respect.
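To illustrate the data minimisation point above, here is a minimal, hypothetical sketch of pseudonymisation: direct identifiers are replaced with keyed hashes before the data reaches an analytics or AI pipeline, so the quantity of records is preserved while their identifiability is reduced. The key handling and field names are assumptions for the example, not a prescribed implementation:

```python
import hmac
import hashlib

# Secret key assumed to be stored separately from the pseudonymised dataset;
# without it, the pseudonyms cannot be linked back to individuals.
PSEUDONYMISATION_KEY = b"store-me-in-a-key-vault"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed, repeatable pseudonym."""
    digest = hmac.new(PSEUDONYMISATION_KEY, identifier.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

record = {"employee_email": "jane.doe@example.com", "performance_score": 87}
pseudonymised = {
    "employee_ref": pseudonymise(record["employee_email"]),  # no direct identifier kept
    "performance_score": record["performance_score"],
}
print(pseudonymised)
```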

Gaining a comprehensive approach to both frameworks is crucial in fostering a culture of compliance, transparency and trust in various sectors of activity (including among employees and employment relationships). Balancing privacy and technological innovation is an attainable goal, and there is a significant complementarity between the GDPR and the AI Act.

Briefly, the most relevant elements of the GDPR that can be interconnected with the use of AI systems or other related automated technologies (including in the context of employees’ data processing activities – this subject will be further detailed in Part II of our article) are:

(i) the right to be informed & the right of access

Articles 13, 14, and 15 of the GDPR require data controllers to provide detailed information about their processing activities, particularly when using automated decision-making or profiling. This includes explaining the logic behind such processes and their potential consequences for individuals. Similarly, the AI Act emphasizes transparency concerning both the development and use of AI systems. The transparency measures in the GDPR are complemented by additional requirements in the AI Act, reinforcing the need for clear information about AI-related data processing, such as:

▸ AI providers must inform individuals when they interact with an AI system if this interaction is not immediately apparent;

▸ AI deployers must inform individuals about the operation of AI systems used for emotion recognition or biometric categorization;

▸ AI providers must provide and publicly share a detailed summary of the training data used for general-purpose AI models;

▸ AI deployers who are also employers must notify workers’ representatives and affected employees about the use of high-risk AI systems in the workplace.

(ii) the right to object

As Article 21 of the GDPR outlines, individuals may object to the processing of their personal data, specifically when it comes to profiling.

Thus, the GDPR already provides users (of an AI system/tool) with the ability to object to the use of their personal information, for example, in the case of training AI systems. The right to object ensures that individuals can prevent their data from being used in AI model development, especially when it involves profiling or automated decision-making.

Therefore, the existing GDPR framework sufficiently protects user privacy in the context of AI usage, eliminating the need for the AI Act to create a separate, additional mechanism for users to oppose using their personal data.

(iii) automated decision-making

Article 22 and Recital 71 of the GDPR grant individuals the right not to be subject to decisions made solely by automated processing (including profiling), particularly when these decisions have legal or similarly significant impacts.

The effective exercise of this right, including the eventual challenge of such an automated decision, requires and entails human intervention. Building upon this, when it comes to high-risk AI systems, Article 14 of the AI Act provides for an additional layer of human oversight in their design and development, to prevent risks to health, safety, or fundamental rights. Consequently, in terms of human intervention in automated decision-making, low or minimal-risk AI systems are subject only to the GDPR requirements.

(iv) risk assessment and documentation requirements

Article 35 of the GDPR requires a Data Protection Impact Assessment (DPIA) for high-risk data processing, such as large-scale profiling with AI (e.g., profiling of candidates/employees using an AI system). Similarly, the AI Act mandates a conformity assessment for AI providers and a Fundamental Rights Impact Assessment (FRIA) for certain AI deployers.

A FRIA requires evaluating the risks an AI system poses to individuals and determining mitigation strategies. While the DPIA focuses on data processing impacts, the FRIA assesses the broader consequences of AI usage on individual rights. Article 27 para. (4) of the AI Act emphasizes their complementary nature: not a bureaucratic overlap, but distinct yet interconnected assessments addressing different aspects of AI implementation and its effects on fundamental rights.

The AI Act brings some novelties impacting personal data

Undoubtedly, the AI Act and the GDPR complement each other. However, given that the two legal frameworks do not regulate the same objects, they do not require the same approach. Thus, in certain respects, the AI Act offers a more flexible regime.

For example, the AI Act provides:

that law enforcement authorities may use real-time remote biometric identification systems[2] in public spaces under exceptional circumstances, such as locating trafficking victims, preventing imminent threats, or identifying criminal suspects (i.e., Article 5 of the AI Act). These cases can be seen as exceptions to Article 9 of the GDPR, which generally prohibits the collection and processing of biometric data.

that providers of high-risk AI systems may process special categories of personal data if it is strictly necessary for detecting and correcting biases (i.e., Article 10 of the AI Act). This processing must comply with the strict conditions set out in Article 9 of the GDPR.

the re-use of sensitive data, such as genetic, biometric, and health data, within AI regulatory sandboxes[3] to support the development of systems with significant public interest (e.g., in the health system). These sandboxes will be supervised by a dedicated authority (Article 59 of the AI Act).

What about the sanctions?

Even if the two regulations complement each other, the sanctions for non-compliance with their provisions differ.

The main applicable sanctions are as follows:

– under the GDPR: administrative fines of up to EUR 10 million or 2% of total worldwide annual turnover for certain infringements, and up to EUR 20 million or 4% for the most serious ones, whichever amount is higher;

– under the AI Act: fines of up to EUR 35 million or 7% of total worldwide annual turnover for engaging in prohibited AI practices, up to EUR 15 million or 3% for non-compliance with other obligations, and up to EUR 7.5 million or 1% for supplying incorrect, incomplete or misleading information to the authorities.

Stay tuned for Part II of our article!

[1] “The impact of the General Data Protection Regulation (GDPR) on artificial intelligence”, available at EPRS_STU(2020)641530_EN.pdf (europa.eu)

[2] Art. 3 item (42) of AI Act: “real-time remote biometric identification system” means a remote biometric identification system, whereby the capturing of biometric data, the comparison and the identification all occur without a significant delay, comprising not only instant identification, but also limited short delays in order to avoid circumvention.

[3] As per Article 57 of the AI Act, Member States shall ensure that their competent authorities establish at least one AI regulatory sandbox at national level, which shall be operational by 2 August 2026. (…) AI regulatory sandboxes shall provide for a controlled environment that fosters innovation and facilitates the development, training, testing and validation of innovative AI systems for a limited time before their being placed on the market or put into service pursuant to a specific sandbox plan agreed between the providers or prospective providers and the competent authority. Such sandboxes may include testing in real world conditions supervised therein.

[4] Art. 3 item (63) of AI Act: “general-purpose AI model” means an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market.

[5] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act), available at https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689.

[6] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), available at https://eur-lex.europa.eu/eli/reg/2016/679/oj.

Slovak transaction avoidance rules


Our Slovak partner, Martin Provazník, contributed the analysis of the Slovak avoidance action regime to the comparative joint chapter “Avoidance Actions and Proposed EU-level Harmonisation” in the INSOL EUROPE Yearbook 2023.

The chapter was created in cooperation with Incoronata Cruciano (Germany), Klaudia Frątczak-Kospin (Poland), Paul Johnson (UK), Stéphanie Oneyser (Switzerland) and Pieter Wouters (Belgium).

You can read it here ➢ https://lnkd.in/eFtbnr-W

New options to resolve the company’s financial situation

Companies whose financial situation is on a negative trend have a new option to address their impending bankruptcy. The Act on Resolution of Imminent Insolvency introduces preventive restructuring, which may, at the debtor’s choice, be either public or non-public.

What is preventive restructuring?

Preventive restructuring involves re-negotiation, i.e. reopening and renegotiating the contractual relations under which the debtor company is obliged to pay its creditors. However, the prerequisite is that the debtor must credibly demonstrate to its creditors that it is at risk of insolvency in the next 12 months, i.e. that it is at risk of having to file for bankruptcy. The aim of the whole formal process is to reach a new agreement between the debtor company and the creditors on how the company’s debt will be repaid. In turn, the debtor company has to demonstrate to its creditors that a possible deferral of repayments, forgiveness of part of the debt, or another proposal to resolve the debtor’s financial situation is better than any other alternative otherwise to be expected (analysis of the creditors’ best interest) and, at the same time, that the debtor’s business is viable (viability analysis).

Financial ratios – when is bankruptcy imminent?

Preventive restructuring can be used if a company is at risk of bankruptcy in the next 12 months, i.e. if the difference between the amount of its outstanding monetary liabilities and its monetary assets (the ‘coverage gap’) is at risk of being more than a tenth of the amount of its outstanding monetary liabilities.

outstanding monetary liabilities – monetary assets > outstanding monetary liabilities / 10
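As a worked illustration of this test (with assumed figures, purely for the arithmetic), the check can be expressed as follows:

```python
def bankruptcy_imminent(monetary_liabilities: float, monetary_assets: float) -> bool:
    """True if the coverage gap exceeds a tenth of outstanding monetary liabilities."""
    coverage_gap = monetary_liabilities - monetary_assets
    return coverage_gap > monetary_liabilities / 10

# Illustrative figures: EUR 1,000,000 of outstanding monetary liabilities
# against EUR 850,000 of monetary assets gives a coverage gap of EUR 150,000,
# which exceeds the EUR 100,000 threshold (a tenth of the liabilities).
print(bankruptcy_imminent(1_000_000, 850_000))  # True
```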

When should you start preventive restructuring?

The debtor’s statutory body is obliged to monitor the company’s financial situation with professional care. If it determines with professional care that the company is at risk of future insolvency, it has the option, but not the obligation, to resolve the company’s impending insolvency through a preventive restructuring. If the statutory body does not have sufficient professional knowledge or experience, it is obliged to seek the assistance of an expert to assess whether the debtor is at risk of insolvency and what measures need to be taken to overcome the impending insolvency.

The debtor’s adviser

The role of the debtor’s adviser is to analyse the situation of impending insolvency and to propose a solution – a restructuring plan. The debtor’s adviser must have appropriate knowledge of economics and the law as well as sufficient technical equipment and staff. In addition, the adviser must enjoy the confidence of the relevant creditors, or else the creditors may not approve the restructuring plan.

How preventive restructuring is carried out

The whole procedure consists of two main parts. First, the debtor prepares for the restructuring: the debtor’s adviser analyses the current financial situation and its expected development and starts communication with the selected creditors. At this stage, the debtor must develop a draft restructuring plan. Subsequently, a proposal for a public or a non-public preventive restructuring is filed.

Temporary protection

The debtor company has the right to apply for temporary protection together with the authorisation of a (public) preventive restructuring. In particular, enforcement of a decision (debt recovery) and enforcement of a pledge cannot be pursued against the debtor; the debtor is also not obliged to file for bankruptcy and is entitled to give priority to the payment of new obligations over old ones. Temporary protection may be granted for a period of three months and may be extended for a further three months. Temporary protection must be agreed to in advance by the statutory creditors.

Difference between public and non-public preventive restructuring

A public preventive restructuring is a new debt repayment agreement with all creditors, under which the debtor can apply for temporary protection. It is a formal process involving not only the court but also a court-appointed trustee in addition to the adviser, and the court will form a creditors’ committee from the list of creditors. By contrast, a non-public preventive restructuring is a new debt repayment agreement with only selected creditors, which must be entities supervised by the National Bank of Slovakia (e.g. banks and leasing companies). While the court will not allow a public preventive restructuring if the debtor company is bankrupt or if, for example, enforcement proceedings or the enforcement of a pledge are underway against the debtor, there are no such requirements for a non-public preventive restructuring. During a public preventive restructuring, formal acts such as the informational meeting of creditors, meetings of the creditors’ committee and the approval meeting take place. A non-public preventive restructuring has no such formal processes, and much depends on the communication between the debtor, the adviser and the creditors concerned. The restructuring plan resulting from either process must be reviewed and subsequently confirmed by the court.

The restructuring plan

A debtor’s restructuring plan contains, in particular, measures aimed at averting the debtor’s insolvency and ensuring the viability of the debtor’s business. These include, in particular, the restructuring of liabilities (deferment of repayment, partial forgiveness, alteration of security), a change in the debtor’s asset or capital structure, or a restructuring of human resources or a change in the debtor’s management and control. The financing of these measures must also be addressed in the restructuring plan.

Although the whole recovery (restructuring) process is formally concluded with the adoption of a restructuring plan, the outcome of the whole procedure will depend on whether the restructuring plan adopted succeeds in averting the imminent insolvency of the company.

Advantages of preventive restructuring

Preventive restructuring, like traditional restructuring, involves temporary protection from creditors. However, unlike formal restructuring, it is a much quicker and more flexible process. Preventive restructuring also provides a platform for intensive and effective communication with creditors, which can be crucial in resolving insolvency.

JUDr. Martin Provazník, partner bpv Braun Partners