There are holes in Europe’s AI Act — and researchers can help to fill them
2024; Nature Portfolio; Volume: 625; Issue: 7994; Language: English
10.1038/d41586-024-00029-4
ISSN 1476-4687
Topic(s): Ethics and Social Impacts of AI
Abstract

Where is the evidence that most AI is low-risk?

"…help shape what promises to be one of the world's most comprehensive sets of laws and regulations on AI." Researchers need to seize this opportunity, and quickly. There are holes in the act that need to be filled before it enters into full force, which is expected to happen in around two years' time. Among those who have identified gaps are researchers studying the intersection of technology, law and ethics.

To take one example, the act assumes that most AI carries "low to no risk". This implies that many everyday AI applications (such as online chatbots that answer simple queries, and text-summarizing software) will not need to be submitted for regulation. Applications considered 'high-risk' will be regulated, and include those that use AI to screen candidates for jobs or to carry out educational assessments, and those used by law enforcement. But as Lilian Edwards, a legal scholar at Newcastle University, UK, points out in a report for the Ada Lovelace Institute in London, there are no reviewable criteria to support the act's low- and high-risk classifications (see go.nature.com/4alwbha). Furthermore, where is the evidence that most AI is low-risk?

A second concern is that AI developers will, in many instances, be able to self-assess products deemed high-risk. Under the act, such providers will need to explain the methodologies and techniques used to obtain training data, including where and how those data were acquired and how the data were cleaned, as well as confirming that they comply with copyright laws. The regulator should ideally establish an independent, third-party verification system that can also verify raw data when necessary, even if it checks only a representative sample.

Once established, the AI Office needs to make good on the commission's pledge to work closely with the scientific community, harnessing all available expertise to provide answers to these questions.

The regulation of new technologies is an unenviable, but essential, task. Governments need to support innovation, but they also have a duty to protect citizens from harm and ensure that people's rights are not violated. Lessons learnt from the regulation of existing technologies, from medicines to motor vehicles, include the need for maximum possible transparency, for example in data and models. Moreover, those responsible for protecting people from harm need to be independent of those whose role it is to promote innovation.

Hadrien Pouget, who studies AI ethics at the Carnegie Endowment for International Peace in Washington DC, and his colleague Johann Laux at the University of Oxford, UK, have highlighted the necessity of regulatory independence, as well as transparency from AI providers, in an open letter to the future AI Office (see go.nature.com/3sckfvv). Meanwhile, the AI Advisory Body convened by United Nations secretary-general António Guterres is urging all those working on AI regulation to listen to as diverse a range of voices as possible in the process. The EU, to its credit, has much experience of drawing on natural and social science, along with engineering and technology, business and civil society, in its law-making. It needs to ensure that it draws on all of this expertise.