Yann LeCun, Chief AI Scientist at Meta: “Human-level artificial intelligence is going to take a long time” | Technology
The extraordinary potential and enormous risks of the generative artificial intelligence (AI) revolution were at the center of discussions at the World Economic Forum’s annual meeting in Davos. Nick Clegg and Yann LeCun, Meta’s President of Global Affairs and Chief AI Scientist, respectively, shared their views on the issue in a meeting with journalists from five international media outlets, including EL PAÍS.
Meta, the parent company of Facebook, is one of the leading companies in this revolution, both because of its notable capabilities in the sector and because of the enormous power granted by its control of a gigantic social platform, whose management has attracted serious criticism and accusations in recent years, among other things for its impact on democracy.
In the conversation, LeCun points out that, “contrary to what some people may hear, there is no system design that achieves human intelligence.” The expert believes that “asking for regulation out of fear of superhuman intelligence is like asking for regulation of transatlantic flights at a speed close to the speed of sound in 1925. It is not for tomorrow; it will take a long time, with systems that we do not yet know,” he says, and that is why he believes it is premature to legislate against the risk of such systems escaping human control. The EU passed the world’s first AI legislation in December, and other countries, such as the US and UK, are also working on specific laws to control the technology.
Clegg, for his part, urges lawmakers around the world dealing with the issue to regulate products, but not research and development. “The only reason you might think it would be helpful to regulate research and development is because you believe in this fantasy that AI systems can take over the world or are inherently dangerous,” says Clegg, former British Deputy Prime Minister and former leader of that country’s Liberal Democrats.
Both men are pleased that, after a period of turmoil following the emergence of ChatGPT, the public debate has moved away from apocalyptic hypotheses and toward more concrete, current challenges such as disinformation, copyright and access to technology.
The state of the technology
“These systems are intelligent in a relatively narrow domain, the one they have been trained on. They’re fluent in language, and that makes us think they’re smart, but they’re not that smart,” says LeCun. “And we can’t simply scale them up with more data and bigger computers and thus achieve human intelligence. It will not arrive. What will happen is that we will have to discover new technologies, new architectures for these systems,” the scientist specifies.
The expert explains that it will be necessary to develop new kinds of AI systems “that would allow these systems, first of all, to understand the physical world, which they cannot do at the moment; to remember, which they also can’t do at the moment; and to reason and plan, which they can’t do either. And when we discover how to build machines that understand the world, remember, plan and reason, we will have a path to human intelligence,” continues LeCun, who was born in France. In numerous debates and speeches in Davos, a paradox was mentioned: Europe has very significant human capital in this sector, but no company leading the field on a global scale.
“It’s not for tomorrow,” insists LeCun. The scientist believes that this path “will take a long time; years, even decades. This will require new scientific advances that we are unaware of. So it’s worth asking why people who aren’t scientists are saying this, since they’re not the ones in the trenches trying to make the systems work.” The expert explains that we currently have systems capable of passing the bar exam, but not systems capable of clearing the table and throwing the scraps in the trash. It’s not because we can’t build the robot; it’s because we can’t make it smart enough. So it’s obvious that we’re missing something important before we can achieve the kind of intelligence we see not only in humans, but also in animals. “I would be happy if at the end of my career (he is 63 years old) we had systems as intelligent as a cat or something similar,” he emphasizes.
The state of regulation
The debate over how to regulate this technology, in its current state and taking into account its possible near-term development, was one of the key themes of the annual forum in Davos. One of the main focuses of attention has been the legislation being introduced in the EU, in many ways a pioneering effort.
Questioned on this subject, Clegg, who was a Member of the European Parliament and is a convinced pro-European, avoids making a definitive statement, but takes a few jabs at the Union. “It’s still a work in progress. It’s a very classic thing in Europe. There is noise, it is said that something has been agreed, but in reality the work is not finished. We will study it closely when it is completed and published; I think the devil will really be in the details,” says Meta’s president of global affairs.
“For example, when it comes to data transparency in these models, everyone agrees,” Clegg continues. “But what level of transparency? Is it the datasets? Is it individual pieces of data? Or take, for example, matters of copyright. Copyright legislation already exists in the EU. Are you going to limit yourself to that? Or will a specific new layer finally be added? When these models are trained, a huge amount of data is devoured. Labeling each piece of data for intellectual property reasons is extremely complex. So I think the devil is in the details. We will study it.”
This is where the criticism comes in. “Personally, as a passionate European, I am sometimes a little frustrated that in Brussels we seem to pride ourselves on being the first to legislate, rather than on whether the legislation is good or not. Remember, this EU law on artificial intelligence was originally proposed by the European Commission three and a half years ago, before the whole generative AI thing (like ChatGPT) broke out. And then they tried to adapt it through a series of amendments and provisions to try to take into account the latest technological developments. It’s a rather clumsy way of legislating, an adaptation, for something as important as generative AI.”
The debate between establishing protections and avoiding hindering development generates strong tensions, both within politics and between politics and the private sector. On this fine line that lawmakers must draw, incalculable value is at stake: the productivity, the jobs and the capabilities that will define the geopolitical balance of power.
Clegg hits that nerve. “I know that France, Germany and Italy in particular have, I think, wisely asked MEPs and the European Commission to be very careful not to include in the legislation anything that would hinder European competitiveness. Because among the ten largest companies in the world, none is European.” Meanwhile, in an open letter published by EL PAÍS, a group of experts called on the EU to pass even stricter legislation “to protect citizens’ rights and innovation.”
Optimism and caution
Beneath this enormous power struggle, the technology keeps advancing; while nowhere near human or superhuman levels, it has already entered our lives with extraordinary force.
“AI amplifies the capacities of human intelligence. There is a future in which all of our interactions with the digital world will be mediated by an AI system,” says LeCun. “This means that at some point these AI systems will be smarter than us in some areas, in fact they already are in some areas, and perhaps smarter than us in all areas at some point. And this means that we will have assistants with us at all times who are smarter than us. Should we feel threatened by this? Or should we feel empowered? I think we should feel empowered.”
Throughout the interview, LeCun offers several notes of cautious optimism. “If you think about the effect this could have on society in the long term, it could have a similar effect to the invention of the printing press. So basically creating a new Renaissance where you can be smarter is a good thing in itself. Of course, there are risks now. And the technology must be deployed responsibly, so that benefits are maximized and risks are mitigated or minimized.”