
Technologies based on artificial intelligence are advancing at a remarkable pace, while the law struggles to keep up. A telling example is the commitment made more than two years ago by the President of the European Commission, Ursula von der Leyen, to propose legislation for a coordinated European approach to the human and ethical implications of artificial intelligence. As a result, the European Commission presented to the European Parliament and the Council of the EU a proposal for a Regulation laying down harmonized rules on artificial intelligence (the so-called AI Act). On June 14, 2023, the European Parliament adopted the text of the Regulation at first reading, and its final adoption is expected by the end of the year.
In an address, Sam Altman, CEO of OpenAI (the creator of ChatGPT), called for the establishment of a legal framework to regulate the use of artificial intelligence, while also noting that the European Union’s proposal may prove difficult to comply with.
With the Regulation on Artificial Intelligence (AI), the European Union is making the first attempt to lay the foundations of a global legal framework for secure, reliable, and ethical artificial intelligence that protects fundamental human rights and freedoms, as well as people’s health and safety. The challenge lies in striking a balance between technological development and legal mechanisms that do not unduly restrict or block it, while still providing the safeguards necessary to ensure that these technologies are used safely and in accordance with the law.
According to the Regulation, an “artificial intelligence system” (AI system) means a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence the physical or virtual environment. This deliberately abstract definition has changed significantly since the Commission’s initial proposal, so as to cover a wide range of systems that use artificial intelligence in various forms and ways.
1. The Regulation introduces a set of core principles applicable to artificial intelligence systems, to which developers and users of such systems should adhere, namely:
- Human oversight and control, meaning that systems should be developed and used as tools that serve people, can be controlled by them, and respect human dignity and personal autonomy.
- Technical robustness and safety, ensuring that systems are developed and used in a way that minimizes potential harm and remains resilient against unlawful use.
- Privacy and data protection, ensuring that systems are developed and used in compliance with existing rules on privacy and data protection.
- Transparency, ensuring that systems are developed and used in a way that allows traceability and creates clarity about the nature of the system and its capabilities and limitations.
- Diversity, non-discrimination, and fairness, meaning that systems are developed and used in a way that involves diverse actors and promotes equal access, gender equality, and cultural diversity.
- Social and environmental well-being, meaning that AI systems are developed and used in a sustainable and environmentally-friendly manner.
2. The Regulation also imposes transparency obligations on certain AI systems with respect to their interaction with individuals.
AI systems designed to interact with individuals should be designed and developed in such a way that the individual is informed, in a timely, clear, and understandable manner, that they are interacting with an AI system, unless this is obvious from the circumstances and context of use. Where appropriate, the information should also indicate which AI functions are enabled, whether human oversight is in place, and who is responsible for the decision-making process. It should also cover the existing rights and processes that allow individuals or their representatives to object to the application of such systems to them and to seek judicial protection against decisions made by, or resulting from, AI systems.
Explicit provision is made for AI systems that create or manipulate text, audio, or visual content. Such systems should disclose, in an appropriate, clear, and timely manner, that the content was created or manipulated by AI and, where possible, the person who created or manipulated it, provided that:
- The content would falsely appear as authentic or genuine, and
- It includes representations of people saying or doing things they did not say or do in reality (so-called deep fakes).
Further transparency obligations are also provided for, including with respect to high-risk AI systems.
3. Risk Classification. High-Risk AI Systems
To provide flexibility, the Regulation introduces a risk classification system that determines the level of risk posed by the use of an AI system. Four risk categories are defined: unacceptable, high, limited, and minimal.
A system is considered high-risk if the following two conditions are met cumulatively:
- The AI system is intended to be used as a safety component of a product, or is itself a product, falling within the scope of the Union harmonization legislation listed in Annex II (e.g., legislation concerning machinery, children’s toys, personal protective equipment, etc.);
- The product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment related to health and safety risks before being placed on the market or put into service pursuant to the Union harmonization legislation listed in Annex II.
As a rule, this category includes AI systems used in products falling within the scope of Directive 2001/95/EC of the European Parliament and of the Council of 3 December 2001 on general product safety.
Additionally, any AI system falling within one of the critical areas and use cases mentioned in Annex III will be considered high-risk if it poses significant risks to the health, safety, or fundamental rights of individuals. These areas include biometric identification and categorization of individuals, management and operation of critical infrastructure, education and vocational training, employment, law enforcement, and more.
All high-risk AI systems must undergo an assessment before being placed on the market and throughout their lifecycle.
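Purely for illustration, the sketch below models this classification logic in Python. It is a simplification under loudly stated assumptions: the names (AISystemProfile, is_high_risk) and the boolean inputs are hypothetical and do not come from the Regulation, and the actual legal assessment requires a case-by-case analysis rather than a mechanical test.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AISystemProfile:
    """Hypothetical summary of the facts relevant to the high-risk test."""
    safety_component_in_annex_ii_scope: bool  # condition 1: safety component of / product under Annex II
    third_party_conformity_required: bool     # condition 2: third-party conformity assessment required
    annex_iii_area: Optional[str]             # e.g. "employment", or None if no Annex III area applies
    significant_risk: bool                    # significant risk to health, safety, or fundamental rights

def is_high_risk(p: AISystemProfile) -> bool:
    """Return True if either route to the high-risk classification applies."""
    # Route 1 (Annex II): both conditions must be met cumulatively.
    annex_ii_route = p.safety_component_in_annex_ii_scope and p.third_party_conformity_required
    # Route 2 (Annex III): a listed critical area combined with significant risk.
    annex_iii_route = p.annex_iii_area is not None and p.significant_risk
    return annex_ii_route or annex_iii_route

# Example: a CV-screening tool used in recruitment (an Annex III employment use case).
tool = AISystemProfile(
    safety_component_in_annex_ii_scope=False,
    third_party_conformity_required=False,
    annex_iii_area="employment",
    significant_risk=True,
)
print(is_high_risk(tool))  # True
```

The key point the sketch captures is that the two Annex II conditions are cumulative (a logical AND), while the Annex II and Annex III routes are alternatives (a logical OR).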
4. Prohibited Practices in the Field of AI
The Regulation explicitly prohibits the following practices in the field of artificial intelligence, which are classified as unacceptable:
- The placing on the market, putting into service, or use of an AI system that deploys subliminal techniques beyond a person’s consciousness, or purposefully manipulative techniques, with the aim or effect of materially distorting the behavior of an individual or a group of people by substantially impairing their ability to make an informed decision, thereby causing them to take a decision they would not otherwise have taken, in a manner that causes or is likely to cause significant harm to them or to other individuals or groups.
This prohibition does not apply to systems intended for approved therapeutic purposes based on informed consent of the respective individual or their legal representative.
- The placing on the market, putting into service, or use of an AI system that exploits vulnerabilities of a specific individual or group of people, including characteristics related to the known or predicted personality traits of that individual or group, in a manner that materially distorts the behavior of the individual (on their own or as a member of that group) and causes, or is likely to cause, significant harm.
- The placing on the market, putting into service, or use of AI systems for biometric categorization that classify individuals according to sensitive or protected attributes or characteristics, or on the basis of inferences about such attributes or characteristics. This prohibition does not apply to AI systems intended for approved therapeutic purposes based on the specific informed consent of the individuals affected by them or, where applicable, their legal representatives.
- The placing on the market, putting into service, or use of an AI system for evaluating or classifying the trustworthiness of individuals over a certain period of time based on their social behavior or on known or predicted personal or personality characteristics, where the resulting social score leads to either or both of the following:
  - Prejudicial or unfavorable treatment of certain individuals or groups in social contexts unrelated to those in which the data was originally generated or collected;
  - Prejudicial or unfavorable treatment of certain individuals or groups that is unjustified or disproportionate to their social behavior or to the weight of the characteristics attributed to them.
- The use of real-time remote biometric identification systems in publicly accessible spaces.
5. Establishment of a Specialized EU Body
The creation of the European Artificial Intelligence Agency (the AI Agency), composed of representatives from the Member States and the Commission, is proposed. The AI Agency will facilitate the smooth, effective, and harmonized application of the Regulation, contributing to effective cooperation between national supervisory authorities and the Commission and providing advice and expertise to the Commission.
At the national level, Member States will have to designate one or more national competent authorities and, among them, the national supervisory authority for the purposes of overseeing the application and implementation of the Regulation.
6. Sanctions
Member States retain the discretion to lay down rules on sanctions (including administrative fines) applicable to infringements of the Regulation. The maximum amounts of the administrative fines are as follows (a short worked example follows the list):
- Up to 40 million euros or 6% of the total annual worldwide turnover for the previous financial year (whichever amount is higher) for non-compliance with the prohibitions on practices in the field of artificial intelligence.
- Up to 20 million euros or 4% of the total annual worldwide turnover for the previous financial year (whichever amount is higher) for non-compliance of AI systems with the requirements under Articles 10 and 13 of the Regulation, which set out requirements regarding the placing on the market and putting into service of high-risk AI systems.
- Up to 10 million euros or 2% of the total annual worldwide turnover for the previous financial year (whichever amount is higher) for other non-compliances with requirements or obligations under the Regulation.
- Up to 5 million euros or 1% of the total annual worldwide turnover for the previous financial year (whichever amount is higher) for providing incorrect, incomplete, or misleading information in response to requests from the relevant authorities.
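To make the “whichever amount is higher” mechanism concrete, here is a minimal sketch in Python. The cap and percentage for the top tier come from the list above; the function name and the sample turnover figure are purely illustrative assumptions.

```python
def max_administrative_fine(fixed_cap_eur: float, turnover_share: float,
                            worldwide_turnover_eur: float) -> float:
    """Upper limit of the fine: the higher of the fixed cap and the
    given share of total annual worldwide turnover."""
    return max(fixed_cap_eur, turnover_share * worldwide_turnover_eur)

# Top tier (prohibited AI practices): up to EUR 40 million or 6% of turnover.
# For a hypothetical provider with EUR 2 billion in worldwide turnover, the
# turnover-based limit prevails: 6% of EUR 2 billion = EUR 120 million.
print(max_administrative_fine(40_000_000, 0.06, 2_000_000_000))  # 120000000.0
```

For this tier, the 40 million euro cap prevails for companies with worldwide turnover below roughly 667 million euros (40 million / 6%); above that threshold, the turnover-based limit applies.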
*This article discusses the provisions in the form adopted by the European Parliament at first reading.
Author: Attorney Zlatka Kotsalova