Orit Halpern: “Why does everything have to be ‘smart’ now? Why is this something we want?”

Orit Halpern, 51 years old and born in Philadelphia (United States), worked as an epidemiologist for six years: “I should have continued, it was a growth opportunity, I don’t know what I was thinking,” she now jokes about her change of career years before covid. After turning to the history of science and leaving viruses behind, she returned to Harvard for a doctorate in the humanities. She has now spent nearly 15 years at various universities studying digital culture and the social changes it is causing. Her current chair, her first in Europe, is at the Technical University of Dresden. A few days ago she took part in a conference at the Centre de Cultura Contemporània de Barcelona linked to its exhibition “AI: Artificial Intelligence,” where this interview was conducted. Her latest book is The Smartness Mandate, not yet translated into Spanish.

Q. Is there a smartness mandate?

A. Maybe you have a smartphone. Perhaps you have heard of smart homes or smart power grids. “Smart” is a recent turn in infrastructure, urban planning and especially digital technologies intended to be integrated into daily life. Our question was: why does everything have to be “smart” now? Before, you had an oven or a thermostat and you didn’t need a computer to know when you were home or how to set the temperature. What is it about this term that has not only become something we want, but has become something that more and more governments, cities and businesses think they actually need to implement? Why is it a mandate?

Q. There must be a reason.

A. There are several reasons, but three in particular. First, a change in the economy: in the 1970s, many Western countries transitioned from manufacturing to the information economy. Second, geopolitical change: decolonization, global instability, energy markets, growing issues around race and urban planning. And third, the arrival of new technologies that really changed computing, as ever newer machine learning models and Big Data began to emerge. These things crystallized around the 2000s and inaugurated a discourse, a language, of the smart.

Q. Is this discourse for the benefit of citizens?

A. You can think of smartness as a risk management strategy. One way to cope with changes in the world is to use these Big Data systems, which are supposed to learn constantly. So if you live in a smart home, it learns what you do and everything gets better, like energy savings. And it gets better and better at offering you services, from Amazon or whoever. The same goes for a city. We face many problems: migration, crime, energy, climate change. How will your city deal with all of them? Since the 1970s, many have believed that governments are not doing a good job. So we need data-driven decision making. Perhaps if we use smart systems, we will solve our problems by bypassing the political process.

Q. But isn’t it just technology?

A. No, it’s an ideology.

Orit Halpern. Photo: Gianluca Battista

Q. How can that be demonstrated?

A. After World War II, IBM introduced computers to the public around the word “think.” Apple launched its “Think Different” slogan, so we are used to associating these machines with thinking. That has now been absorbed into the idea of artificial intelligence.

Q. It’s marketing.

A. Yes, there was already that kind of marketing around these things as thinking machines. In 2008, IBM introduced its Smarter Planet campaign. It was a really interesting moment. We were facing the financial crisis, and just as the entire economy was collapsing, IBM came out to announce a plan for a smarter planet. Thus several very large companies of the time, perhaps looking for a new opportunity in the face of the financial crisis, began getting involved in these kinds of urban infrastructure projects.

Q. This coincides with the arrival of the iPhone and smartphone.

A. It’s about the same year. All these companies have this idea of the smart and want to integrate all these systems, but they end up changing the very infrastructure of human life, which is now the cloud. We have a new computing structure. It’s no longer just your home computer on your little desk. Now everyone uses the Internet more and more to upload all their data to these new cloud servers. That’s why a lot of what is smart is about transforming the computing infrastructure itself. This means moving everything to the cloud, whether it’s traffic or mobile information. Everything is integrated, and in come the urban planners and Google with its Sidewalk Labs, which built, for example, Hudson Yards in New York. Smartness emerged as a consulting service for cities, which was particularly visible during the pandemic, when a lot of big companies came forward to say, “We’re going to help track covid.” And from there, they believe they will take care of public health or education.

Q. This, of course, has further implications.

A. Once they’re in the system, they can stay there. It is important for people to realize that this is a completely new form of computing that emerged in the first decade of the century, and it needs big infrastructure. Only Amazon, Microsoft or Google can provide cloud services to everyone. There is a real concentration of data in terms of infrastructure.

Q. This seems to be a problem.

A. This datafication, or smartification of everything, makes us very dependent on these systems. Smartness becomes a self-fulfilling mandate because we all think we need it to improve our daily lives, but also in more serious ways, when it comes to things like climate management and the environmental problems of cities. We need to figure out, say, how to model flooding. So there is good and bad.

Q. The question then is whether we actually need some of this, but not in this way or not all of it.

A. Yes, the question about smartness is not whether it is good or bad. It’s more about what kind of smartness it would be: what kind of digital technologies do we want? What kind of world do we want to live in? It’s not about throwing away your cell phone and going back to the past. In the face of the climate crisis and other geopolitical issues, we need these technologies to survive and thrive, but what kind of systems will we build, who will own them, and who will they be built for? Who will benefit?

Q. Are we asking ourselves those questions?

A. The smart is closely related to artificial intelligence and Big Data. The way we’ve built large language models (like ChatGPT) depends on very large datasets. Many people are worried. The Biden administration is deeply involved in examining the monopoly capitalism of the tech companies. It is now essential to make this problem more visible. The European Union has already approved the General Data Protection Regulation, and it is now debating numerous regulations on artificial intelligence. We always worry about things like “will AI destroy humanity?”, but we will probably not see a Terminator chasing us; we will have everyday problems. People talk about water systems and power grids controlled by Big Data. So it’s about the services we use daily and where our data ends up, whether in the health system or at school.

Q. This is linked to your concept of the “computationally optimistic pessimist.” What is it?

A. Behind all these smart technologies, whether simple or very sophisticated, like smart border systems, lies a rather negative vision: we need to protect ourselves from future waves of immigration from Africa or the Middle East. When we talk about preparing for climate change, we no longer believe we can stop it; we simply prepare to withstand the blow. It is therefore a negative vision of the future: we are pessimistic. We kind of accept that things aren’t going well, and that’s why we need more security, more data, smarter borders, more technology, because we hope that will help us survive. But at the same time, we are computationally optimistic, because we also believe that maybe our technology will somehow save us or prevent the event from happening. So we have mixed feelings.

Q. Are Elon Musk and his Mars plan equally pessimistic and computationally optimistic?

A. He is also somewhat pessimistic, but at the same time optimistic, because he is going to leave the planet. You stay here to suffer the bad weather. It’s that ambivalence. Another example: everyone says that AI is dangerous and needs to be controlled, and then Germany and France lead with “well, not really, because it’s the future of our economy; we need it to grow.” It’s a contradictory relationship: a lot of fear, but also the idea that if we don’t adopt this technology, we won’t be able to compete with the Americans or the Chinese, and our societies won’t prosper. This contradictory feeling is what I call computationally optimistic pessimism.

Q. It doesn’t seem like this will allow us to ask the right questions.

A. We can’t have a serious conversation about the society we want because we are always reacting to trauma. How can we preserve, for example, the German automobile industry? That seems like the most important thing. There’s a lot of “we need to adopt these technologies to compete with Silicon Valley or China” and little “what technology do we really want?” Maybe we have other ways to build them. What kinds of economies do we want to develop in the long term? And how do we envision these technologies advancing sustainability, equity, justice, diversity, and the other goals we might have as a society?
