
Skills for educating in the age of AI and the challenges of the Brazilian context
Ana Paula Almeida, Andreza Garcia Lopes, Maria Clara Martins Rocha and Maria Regina Lins
The European Union's A.I. Act is an attempt to address the risks the technology poses around jobs, misinformation, bias and national security. Adam Satariano, European technology correspondent for The Times, has been reporting on regulators' efforts to draw limits around A.I. He spoke to DealBook about the challenges of regulating a rapidly developing technology, how different countries have approached the task, and whether it is possible to create effective safeguards for a borderless technology with vast applications.
The European Union has adopted a "risk-based" approach that singles out the uses of A.I. with the greatest potential to harm individuals and society — think of an A.I. system used to make hiring decisions or to operate critical infrastructure such as energy and water. These tools face more oversight and scrutiny. Some critics say the policy falls short because it is overly prescriptive: if a use isn't listed as "high risk," it isn't covered. That approach leaves many potential gaps that policymakers have tried to fill. For example, more powerful A.I. systems made by OpenAI, Google and others will be able to do many different things beyond simply powering a chatbot.
The A.I. Act also highlights wider differences between the US, the European Union and China in terms of digital policy. The US is much more market-oriented and pragmatic. America dominates the digital economy, and policymakers are reluctant to create rules that threaten that leadership, especially for a technology as potentially significant as A.I. President Biden signed an executive order imposing some limits on the use of A.I., especially as it relates to national security and deepfakes. The European Union, a more regulated economy, is being much more prescriptive about rules for A.I., while China, with its state-controlled economy, is imposing its own set of controls, such as algorithm registration and chatbot censorship. The UK, Japan and many other countries are taking a more pragmatic, wait-and-see approach.
The future benefits and risks of A.I. are not fully known, even to the people who create the technology or to policymakers, which makes it difficult to legislate. As a result, a lot of work is going into analyzing the direction of the technology's development and establishing safeguards, whether to protect critical infrastructure, prevent discrimination and bias, or prevent the development of killer robots.
Technology appears to be advancing much faster than regulators can devise and pass rules to control it. This is probably the quickest response I've seen policymakers around the world give to a new technology, but it has not yet resulted in many concrete policies. Geopolitical disputes and economic competition also make international cooperation harder, even though most believe that cooperation is essential for rules to be effective.
A lawyer with a degree from UFRJ and more than 20 years of experience practicing law with telecom, internet, media and entertainment companies, having worked at companies such as Claro, Embratel, IG and UOL.
He holds postgraduate degrees in Information Law from Universidade Cândido Mendes, in Procedural Strategies in Corporate Legal Practice from FGV/SP, and in Political Aspects of the European Union from INSPER/SP.
He was General Manager and is now a Global Board Member of the Mobile Ecosystem Forum (MEF), founder and Policy consultant of MMA LATAM, and Director of I2AI - the International Association of Artificial Intelligence.
The now-buzzing AI revolution in the financial sector
It is with great enthusiasm that we invite everyone to the first meeting of the AI Tech and Innovation Committee, which will take place on February 13 at 7:00 p.m.
This meeting will mark