
Skills for educating in the age of AI and the challenges of the Brazilian context
Ana Paula Almeida, Andreza Garcia Lopes, Maria Clara Martins Rocha e Maria Regina Lins
On June 14, 2023, the European Parliament approved its latest proposal for the Artificial Intelligence Act (AI Act or AIA), which aims to establish obligations for suppliers, distributors and users of Artificial Intelligence systems. This proposal follows growing regulatory intervention by the EU in the technological area, made necessary by the exponential development and use of AI systems.
The AI Act establishes different levels of requirements, distinguishing between prohibited-use AI systems, high-risk AI systems, and systems that do not fall into either of these categories but may nevertheless be subject to transparency obligations.
Regarding prohibited artificial intelligence practices, the AI Act provides for the following situations:
i) Use of subliminal techniques with the aim of distorting the behavior of a person or group of people, impairing their ability to make an informed decision and leading them to make a decision that they would not otherwise have made, in a way that causes or is likely to cause harm to that person, another person, or a group of people;
ii) Exploitation of the vulnerabilities of a person or a group of people, related to personality traits, economic or social situation, age, or physical or mental disability, with the aim of distorting their behavior in a way that causes or is likely to cause harm to that person, another person, or a group of people;
iii) Social scoring systems that evaluate or classify people based on their social behavior, or on known, inferred or predicted characteristics of the person or their personality, where this results in unfavorable treatment of people or groups in contexts unrelated to the data originally generated or collected, or in treatment that is unjustified or disproportionate to the severity of the behavior;
iv) Use of real-time biometric identification systems in publicly accessible spaces (as well as systems that assess the risk of an offense occurring based on a person's profile; systems for creating or expanding facial recognition databases; systems that infer emotions in law enforcement, border control, workplaces or educational institutions; and systems for analyzing footage of publicly accessible spaces).
Regarding classification as high-risk AI systems, the AI Act provides for the following situations:
i) Systems used as safety components of a product, where a third-party conformity assessment of health and safety risks is required in order to place the product on the market;
ii) AI systems that fall into the categories set out in Annex III.
In any of these categories, classification as a high-risk system also depends on the existence of a significant risk of harm to people's health, safety or fundamental rights.
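The tiering described above can be sketched as a decision function. This is a loose illustration with no legal weight: the flag names and the ordering of checks are assumptions made for clarity, not terms defined in the Act.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    TRANSPARENCY = "transparency obligations"
    MINIMAL = "no additional obligations"

def classify(prohibited_practice: bool,
             safety_component_with_third_party_conformity: bool,
             annex_iii_category: bool,
             significant_risk_of_harm: bool,
             interacts_or_generates_content: bool) -> RiskTier:
    """Illustrative (non-legal) sketch of the AI Act's risk tiering."""
    # Prohibited practices (subliminal techniques, exploitation of
    # vulnerabilities, social scoring, real-time biometric ID) come first.
    if prohibited_practice:
        return RiskTier.PROHIBITED
    # High risk: a safety component requiring third-party conformity
    # assessment, or an Annex III category -- but, in either case, only
    # where a significant risk to health, safety or fundamental rights exists.
    if (safety_component_with_third_party_conformity or annex_iii_category) \
            and significant_risk_of_harm:
        return RiskTier.HIGH_RISK
    # Other systems that interact with people or generate content still
    # carry transparency duties.
    if interacts_or_generates_content:
        return RiskTier.TRANSPARENCY
    return RiskTier.MINIMAL

# Example: a chatbot outside Annex III owes transparency duties only.
print(classify(False, False, False, False, True).value)
```

Note how the sketch encodes the point made above: falling into an Annex III category is not, by itself, enough for the high-risk tier; the significant-risk condition must also be met.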
The AI Act reinforces the obligations of suppliers of high-risk AI systems, setting out, in particular, a number of specific duties.
Regarding transparency in particular, specific requirements are stipulated for high-risk systems.
Even where a system is not considered high-risk, certain transparency obligations may still apply; which obligations are required depends on the purpose of the AI system.
For AI systems designed to interact with people, the AI Act requires only that the user be informed that they are interacting with an AI system. Where relevant, however, information must also be provided about the functions that use AI, the human-oversight mechanism, the person responsible for the decision-making process, and the existing rights and procedures that allow users to oppose the application of these systems.
For AI systems that generate or manipulate images, audio or video that appear authentic (deepfakes), depicting people appearing to carry out actions they have not actually carried out, the user must be told that the content was artificially generated or manipulated, along with, where possible, the name of the person who generated or manipulated it (exceptions apply to uses authorized by law, or to the exercise of freedom of expression or of the arts and sciences).
As for emotion recognition systems and biometric categorization systems, the user must be informed that they are interacting with an AI system and how the system operates. Consent must also be obtained for the processing of biometric data.
The most recent AI Act proposal also provides for a regime of specific obligations for suppliers of foundation models.
Knowledge and implementation of the obligations set out in the AI Act, whether by suppliers or distributors, is essential, given the growing importance it will acquire as Artificial Intelligence systems continue to develop.
It is important to highlight that, as has already happened with several other EU legislative instruments, the AI Act could inspire AI legislation in several countries, becoming a global compliance standard for AI systems.
BY EDUARDO MAGRANI, SENIOR CONSULTANT IN THE TMT AREA OF CCA LAW FIRM
President of the Instituto Nacional de Proteção de Dados (INPD). Doctor and Master in Constitutional Law from the Pontifícia Universidade Católica do Rio de Janeiro (PUC-Rio), with validation by the Universidade Nova de Lisboa. Affiliate at the Berkman Klein Center for Internet & Society at Harvard University. Post-doctorate at the Technical University of Munich (TUM), working on data protection and artificial intelligence at the Munich Center for Technology and Society. Partner at Demarest Advogados in the areas of Privacy, Technology and Cybersecurity, and Intellectual Property.
The now-buzzing AI revolution in the financial sector
It is with great enthusiasm that we invite everyone to the first meeting of our AI Tech and Innovation Committee, which will take place on February 13 at 7:00 p.m.
This meeting will mark