Jason Banta, vice president of AMD: “Computers with artificial intelligence will be able to anticipate our needs”
In a hotel in San Jose, in the heart of Silicon Valley (California, United States), AMD unveiled, amid bright lights and euphoric applause, the next piece of the artificial intelligence revolution: the MI300, an accelerator that fits in the palm of a hand and which, according to the Californian company, can run artificial intelligence software faster than any product currently on the market. “We may be tired of hearing about artificial intelligence, but it is surely the most transformative technological revolution of the last 50 years, probably even more so than computers and the Internet,” said Lisa Su, CEO of Advanced Micro Devices (AMD), during an event to which EL PAÍS was invited in early December.
The launch is one of the largest in the company’s five-decade history; its AI-enabled processors are already used by the industry’s leading companies and researchers, including Microsoft, OpenAI and Meta, to build computers with embedded AI. Jason Banta (Lubbock, United States, 44 years old), AMD’s vice president and general manager, spoke with EL PAÍS at the end of the event about the possibilities of a technology that, he says, is advancing at a “dizzying and exciting” pace.
Question. How is it possible that something no one was talking about five years ago has become the most important topic of the moment?
Answer. One of the most transformative elements has been large language models. They have evolved quite quickly since 2017 and really took off when many people saw how striking they were and everything they could do. This has led to a lot of additional interest, whether in chatbots or other generative AI applications. It is probably the fastest development of a new technology we have seen in the industry, and it is becoming a transformative tool in the cloud, on PCs and on other devices.
Q. What does it mean to have artificial intelligence directly integrated into computers?
A. Computers become more personal, capable of better understanding how each user wants to interact with them. If until now we were the ones who had to learn how to use them, in the future computers equipped with artificial intelligence will be able to anticipate our requests. This will enable new audio and video experiences and make it easier to generate creative content, a process that can be difficult on a traditional computer. Devices with built-in AI will take care of this and manage everything for the user. They will also make computers more secure, by being able to detect and manage threats.
Q. On a practical level, what can we already do?
A. Today, many of the biggest applications we see come from collaborations with different tools. If you use Zoom or Teams, there are many areas where AI is already used, such as image enhancement, and now that can happen with better battery life. Another area is content and application creation, in audio and video editing: that is a segment that will keep growing. Still, it is too early to see the full potential of this technology applied to personal computers; most of the advances will come in the near future.
Q. How long will it be before everyone has an AI computer?
A. I think we’ll see more in 2024, although for a big inflection we’ll have to wait until 2025. We have to take a big step in that direction first, and then it will grow from there. But we’ll start to see much stronger adoption in the coming months, and within the next couple of years AI-enabled computers will start to take hold.
Q. Is the power of current computers sufficient to adequately develop AI?
A. Much of the development and training of AI models still takes place in the cloud. But over the next few years I expect to see smaller models being trained in real time on computers themselves. That way, as you interact with an application, you can train it in real time to adjust how it interacts with you.
Q. In which areas are the most radical changes expected?
A. On mobile phones and laptops the challenges are similar: adapting AI models so that they work effectively on these devices. We are starting to see progress on that front. Many phones coming out today already have artificial intelligence built in, but they need to be usable without having to charge them six or seven times a day.
Q. What are the current limits of this technology?
A. Our biggest challenge is to scale this technology down and make it work efficiently on laptops. Some of that is starting to emerge. We have just released a software package that allows for something called quantization: it basically reduces the size of these large models so that they fit in a laptop’s memory, run efficiently and do not hurt battery life. This is the challenge we face as an industry, and we are putting the tools in place to get there.
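Banta does not detail how AMD’s package works, but the general idea behind quantization can be sketched in a few lines of Python. This is a generic per-tensor int8 scheme, not AMD’s actual software: each 32-bit weight is replaced by an 8-bit integer plus a shared scale factor, cutting memory use roughly fourfold at a small cost in precision.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric int8 quantization: map float32 weights to [-127, 127]."""
    scale = np.abs(weights).max() / 127.0          # one scale per tensor
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Approximate reconstruction of the original float32 weights."""
    return q.astype(np.float32) * scale

# A float32 weight matrix uses 4 bytes per value; its int8 version uses 1.
w = np.random.randn(4096, 4096).astype(np.float32)
q, scale = quantize_int8(w)
print(w.nbytes / 1e6, "MB ->", q.nbytes / 1e6, "MB")   # ~67 MB -> ~17 MB
print("max error:", np.abs(w - dequantize(q, scale)).max())
```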
Q. Asian countries dominate the semiconductor supply chain. Can we expect Europe and the United States to improve their production?
A. Our goal is to create a geographically diverse semiconductor ecosystem; we believe that is positive for the industry as a whole. We are working with governments and supply chain partners to encourage it. It is something we see evolving over time, but we think it is important not to be overly reliant on any one geography for semiconductor supply.
Q. Is AI developed with environmental impact in mind?
A. We take the environmental perspective into account; it is at the forefront of how we think about developing these technologies. It is also true that a computer with integrated AI can, compared with a conventional laptop, save workload and time, which translates into real efficiency at scale. We are working on ways to optimize energy consumption, and these are very real considerations that we weigh at every stage of our development.
You can follow EL PAÍS Technology on Facebook and X or sign up here to receive our weekly newsletter.