
Published on Nov. 13, 2023

Regulatory endeavors and compliance for high-risk AI



On June 14, 2023, the European Parliament adopted its latest proposal for the Artificial Intelligence Act (AI Act or AIA), which aims to establish obligations for suppliers, distributors and users of Artificial Intelligence systems. This proposal follows growing regulatory intervention by the EU in the technology sector, made necessary by the exponential development and use of AI systems.

The AI Act establishes different levels of requirements, distinguishing between prohibited AI practices, high-risk AI systems, and systems that fall into neither category but may nevertheless be subject to transparency obligations.

Regarding prohibited artificial intelligence practices, the AI Act provides for the following situations:

i) Use of subliminal techniques aimed at distorting the behavior of a person or group of people, impairing their ability to make an informed decision and leading them to make a decision they would not otherwise have made, in a way that causes or is likely to cause harm to that person, another person, or a group of people;

ii) Exploitation of vulnerabilities of a person or group of people related to personality traits, economic or social situation, age, or physical or mental disabilities, with the aim of distorting their behavior, in a way that causes or is likely to cause harm to that person, another person, or a group of people;


iii) Systems that categorize people according to sensitive or protected attributes, or according to characteristics based on those attributes (with the exception of AI systems used for approved therapeutic purposes, based on the consent of the individual or their legal guardian);

iv) Social scoring systems that evaluate or classify people based on their social behavior, or on known, inferred or predicted characteristics of the person or their personality, where this results in unfavorable treatment of people or groups that is unrelated to the context in which the data were generated or collected, or that is unjustified or disproportionate to the severity of the behavior;

v) Use of real-time biometric identification systems in publicly accessible spaces (as well as systems that assess the risk of an offense occurring based on a person's profile; systems for creating or expanding facial recognition databases; systems that infer emotions in the areas of law enforcement, border control, and in workplaces or educational institutions; and systems for analyzing footage recorded in publicly accessible spaces).

Regarding classification as high-risk AI systems, the AI Act provides for the following situations:

i) Systems used as safety components of a product, for which a third-party conformity assessment of health and safety risks is required in order to place the product on the market;

ii) AI systems that fall into the categories set out in Annex III:


  • Biometric or biometrics-based systems;
  • Systems used to make inferences based on biometric data, including emotion recognition systems;
  • Security component related to land, rail or air traffic, critical digital infrastructure, or the supply of water, gas, heat or electricity;
  • Access to education or employment;
  • Access to public or private services and social benefits;
  • Access to health and life insurance;
  • Systems that establish priority in the dispatching of emergency services;
  • Social credit systems;
  • Use by public authorities of AI systems such as polygraphs, or for assessing the reliability of evidence and defining profiles, in the course of an investigation, or in criminal statistics;
  • Systems for border control, and in the areas of migration and asylum;
  • Assistance in legal decisions;
  • Systems with the purpose of influencing the outcome of elections or referenda;
  • Systems used for recommendations on very large social media platforms;


In any of these categories, classification as a high-risk system also depends on the existence of a significant risk of harm to people's health, safety or fundamental rights.

The AI Act reinforces the obligations of suppliers of high-risk AI systems, providing, in particular, for the following:


  • Implementation of a risk management system;
  • Assessment of the impact on fundamental rights;
  • System transparency;
  • Human supervision;
  • Implementation of a quality management system;
  • Maintenance of automatically generated records;
  • Registration of the system in the EU database;
  • Affixing the CE marking to the system;


In particular, regarding transparency obligations, the following requirements are stipulated for high-risk systems:

  • Creation of instructions for use: the user must be able to interpret and explain the system's output, and know how it works and what data it processes;
  • Information about the identity of the supplier, the characteristics and limitations of the system, as well as the risks to health, safety and fundamental rights;
  • Information on human supervision, maintenance and assistance measures for the system;


In cases where the system is not considered high-risk, certain transparency obligations may still apply. These obligations differ, however, depending on the purpose of the AI system.

For AI systems designed to interact with people, the AI Act only requires that the user be informed that they are interacting with an AI system. Where relevant, however, information must also be provided regarding the functions that use AI, the human supervision mechanism, the person responsible for the decision-making process, and the existing rights and procedures that allow objection to the application of these systems.

For AI systems that generate or manipulate images, audio or video that appear authentic (deep fakes), and that depict people appearing to carry out actions they have not actually carried out, it must be disclosed to the user that the content was artificially generated or manipulated, as well as, where possible, the name of the person who generated or manipulated it (exceptions apply to cases authorized by law, or to the exercise of freedom of expression or of the arts and sciences).

As for emotion recognition systems and biometric categorization systems, the user must be informed that they are interacting with an AI system, as well as about the system's operating process. Consent must also be obtained for the processing of biometric data.

The most recent AI Act proposal also provides for a regime of specific obligations for suppliers of a foundation model, stipulating the following:


  • Demonstrate mitigation of system risks to health, safety, fundamental rights, environment and democracy;
  • Implement measures to ensure the adequacy of databases and avoid bias;
  • Promote Cybersecurity;
  • Increase energy efficiency;
  • Create instructions for use;
  • Develop a quality management system;
  • Register the foundation model in the EU database;


Knowledge and implementation of the obligations set out in the AI Act, whether by suppliers or distributors, is essential, given the growing importance this regulation will acquire as Artificial Intelligence systems continue to develop.

It is important to highlight that, as has already happened with several EU legal instruments, the AI Act may inspire the creation of AI legislation in other countries, becoming a global compliance standard for AI systems.


BY EDUARDO MAGRANI, SENIOR CONSULTANT IN THE TMT AREA OF CCA LAW FIRM




About the author

Eduardo Magrani


Senior Consultant CCA Law Firm

President of the Instituto Nacional de Proteção de Dados (INPD). PhD and Master in Constitutional Law from the Pontifícia Universidade Católica do Rio de Janeiro (PUC-Rio), with validation by Universidade Nova de Lisboa. Affiliate at the Berkman Klein Center for Internet & Society at Harvard University. Postdoctoral researcher at the Technical University of Munich (TUM), working on data protection and artificial intelligence at the Munich Center for Technology and Society. Partner at Demarest Advogados in the areas of Privacy, Technology and Cybersecurity, and Intellectual Property.
