The main artificial intelligence companies accept the European law, but ask that its application not act as a brake
Large artificial intelligence companies accept the European regulation approved at midnight last Friday, but ask that it not become a brake on their development. As Pilar Manchón, adviser to the Spanish government's advisory committee and head of AI research strategy at Google, put it, “AI is too important not to regulate it.” The main companies behind these developments worked alongside the negotiation of the rule to ensure an ethical evolution of these tools, so the text broadly coincides with their expectations, they say. As Christine Montgomery, vice president and chief privacy and trust officer at IBM, says, the law “provides protective barriers to society while promoting innovation.”
Until now, technology companies have tended to leave the limits of their developments to self-regulation. They all have ethical principles, which Manchón summarizes as follows: “Do good things and make sure they will have a positive impact on the community, on society, on the scientific community. And if it might do something that isn’t what you use it for or what you designed it for, make sure you take all the necessary precautions and mitigate the risks. So: only do good, innovate, be bold, but be responsible.”
However, this formula has proven woefully insufficient in areas such as social networks. According to Global Witness, an NGO with 30 years of experience investigating and monitoring human rights compliance, “these companies prefer to protect their lucrative business model rather than adequately moderate content and protect users.”
To prevent such failures of artificial intelligence, some of the leading companies and institutions welcome the rules and are proposing their own formulas to guarantee compliance with the principles they contain.
In this vein, around fifty companies, including IBM, Meta, AMD, Intel and Dell; universities such as Imperial College London, Cornell, Boston, Yale and Harvard; and entities such as NASA and the NSF have formed the Artificial Intelligence Alliance (AI Alliance) for the development of AI that complies with those standards: open, safe and responsible.
“Increased collaboration and information sharing will help the community innovate faster and more inclusively, identifying specific risks and mitigating them before releasing a product to the world,” the signatories say. To do this, its working groups will establish their own standards and “partner” with initiatives from governments and other organizations. “This is a crucial moment to define the future of AI,” warns Arvind Krishna, president of IBM. “We can help ensure that the transformative benefits of responsible AI are widely available,” adds Lisa Su, CEO and president of AMD.
Thus, the members of the alliance, which for now include neither OpenAI, the developer of ChatGPT, nor Google, which has just presented Gemini (a model Google says has capabilities beyond those of people), advocate collaboration among companies and with governments to follow a common path. As Tom Mihaljevic, president of the Cleveland Clinic, one of the medical institutions most advanced in the use of new technologies, explains: “The capabilities of AI are constantly growing and improving, and it is essential that organizations from diverse fields come together to help move forward while addressing concerns about safety and security.”
Bob Shorten, director of the Dyson School of Design Engineering at Imperial College London, also defends it: “We believe that community participation is essential for AI to be reliable, accountable, transparent and auditable,” principles also upheld by the European standard.
This community includes governments, industries, academic institutions and researchers aligned with ethical development. But, as Manuel R. Torres, professor of political science at Pablo de Olavide University and member of the advisory board of the Elcano Royal Institute, explains: “The problem is the proliferation of this technology, which we must prevent from falling into the wrong hands.”
Torres praises Europe’s role as a “regulatory power,” but warns: “The conflict lies in how this technology is developed in other places that have no scruples or limits regarding the privacy of the citizens whose data feeds it all.”
He cites China as an example: “Not only is it in this technological race; it also has no problem massively using the data its own citizens leave behind to feed and perfect these systems. However scrupulous we want to be about the limits we place on our local developers, at the end of the day, if this does not happen on a global scale, it is dangerous too.”
Wu Zhaohui, China’s vice minister of science and technology, said last November at the UK’s AI Safety Summit that his government was “willing to increase collaboration to help build a framework for international governance.”
But legislation alone is insufficient. Once the European standard is approved, the key will be “permanent monitoring,” adds Cecilia Danesi, a lawyer specializing in AI and digital rights, professor at the Pontifical University of Salamanca and other international universities, broadcaster, and author of The Empire of Algorithms (Galerna, 2023).
For Danesi, also a member of UNESCO’s Women for Ethical AI (W4EAI) group, follow-up is necessary: “These systems pose a high risk; they can significantly affect human rights or safety. They must be evaluated and examined to verify that they do not violate rights or harbor biases, and this must be done continuously, because as the systems keep learning they can acquire biases. We must act preventively, to avoid harm and to build systems that are ethical and respectful of human rights.”
Meanwhile, 150 executives of European companies such as Airbus, Ubisoft, Renault, Heineken, Dassault, TomTom, Peugeot and Carrefour have opposed regulating the sector in Europe. They signed an open letter in June against EU regulation, arguing that the rule would affect “Europe’s competitiveness and technological sovereignty without effectively responding to the challenges we face and will face.”
Cyberactivism
NGOs and experts dedicated to cyberactivism have expressed surprise and disappointment at the newly approved law. Ella Jakubowska, an analyst specializing in biometric identification technologies at the European digital rights NGO EDRi, says: “Despite many promises, the law seems destined to do exactly the opposite of what we wanted. It will pave the way for all 27 EU member states to legalize live public facial recognition. This will set a dangerous precedent around the world, legitimize these deeply intrusive mass surveillance technologies, and mean that exceptions can be made to our human rights.”
Carmela Troncoso, a telecommunications engineer specializing in privacy at the École Polytechnique Fédérale de Lausanne (Switzerland), says: “There are many very promising bans, but also many loopholes and exceptions that leave it unclear whether the bans will truly protect human rights as we expect: for example, allowing law enforcement to use real-time facial recognition to search for suspects. It is also regrettable that Spain has been behind some of the most worrying proposals in this law,” adds Troncoso, creator of the technology behind COVID-19 contact-tracing apps, reports Manuel González Pascual.