Ethical machines? How to teach computers morality by translating ethics into numbers

The mathematical formula for teaching machines a code of ethics has three basic elements, and it is not much different from the ethical cocktail that we humans handle. Action, value and norm make up the triad that researchers work with to establish the limits that govern the behavior of artificial intelligence.

For people, values correspond to a kind of commonly accepted social consensus: we know that lying is a morally wrong act. Norms, in turn, help formalize the idea of a value in a legal code. “Norms prohibit, just as smoking is prohibited in enclosed spaces, but they also help you promote good deeds, like donating or being kind,” says Maite López-Sánchez, an AI researcher and professor at the University of Barcelona who works on introducing ethical principles into artificial intelligence systems.

People learn this framework, which serves to delimit our behavior, during the process of socialization. For machines, however, everything must be translated into numbers and mathematical functions, with the end goal of producing a sequence of actions. “Ultimately, machines are deeply integrated into society and end up making decisions that affect us as people. It would be desirable for those decisions to be aligned with what we consider correct, for the machines to be well integrated socially,” says the researcher.

López-Sánchez gets to the point when explaining the need for ethical machines: “I can have a self-driving car, and if I give it the goal of taking me to work, the car will take the most efficient or the fastest route. It’s very clear that I want to get to work, but I don’t want to run anyone over. That wouldn’t be morally right.” And the casuistry goes well beyond such extreme hypotheticals. “There are many aspects to driving correctly. It’s not just about not breaking the rules, but about doing things right, such as giving way to a pedestrian, keeping a safe distance or not being aggressive with the horn,” the researcher adds.

AI ethics also serves to promote equal treatment. “If it’s a decision-making system for granting health insurance, what we want is an unbiased algorithm that treats everyone it evaluates in the same way,” says López-Sánchez.

In recent years, algorithmic biases of all kinds have come to light. A system developed by Amazon to select job candidates preferred men’s résumés over women’s. It did so because it had been trained on a predominantly male sample, and there was no mechanism to correct this gap. Another algorithm, in this case used by the health system in the United States, penalized Black patients relative to white patients of equal clinical severity: whites were assigned higher risk scores and therefore received priority for medical care.

Additionally, autonomous systems raise issues related to intellectual property and the use of private data. One way to avoid such failures is to build self-imposed limits into the design of the algorithm. Ana Cuevas, a professor of logic and philosophy of science at the University of Salamanca, defends this proactive approach: “We must not wait for things to happen in order to analyze the risks they may pose. Before creating an artificial intelligence system, we have to start by thinking about what kind of system we want to create, so as to avoid certain undesirable outcomes.”

Ethics in machine language

Introducing an ethical corpus into machines is relatively new work. The scientific community has approached it mainly from a theoretical point of view; it is far less common to get down into the mud and specify numerical values for moral precepts in engineering terms. López-Sánchez’s research group, WAI, at the University of Barcelona, is exploring this area experimentally.

These researchers connect the concepts of value and action in the design of their systems. “We have mathematical functions that tell us that, for a certain value, a certain action of the machine is considered positive or negative,” explains López-Sánchez. In the self-driving car example, smooth driving on a winding road would be scored as positive under the value of safety. Viewed through the prism of kindness to other drivers, however, the vehicle might decide to increase its speed if it notices that it is holding up the pace of other cars.

In such a case there is a conflict of values, which is resolved by weighting: preferences are established in advance that indicate which values predominate. The whole is a set of interlocking formulas, which must also incorporate the norm variable. “There is another function which states that a norm promotes a value,” the researcher specifies. “And we also have functions that observe how a norm evaluates an action, and how that action is evaluated in terms of the value.” It is a complex system in which feedback is essential.
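To make the idea concrete, here is a minimal sketch of how such value functions and preference weights might fit together. The actions, values, scores and weights are illustrative assumptions for this article, not the WAI group’s actual model.

```python
# Illustrative sketch: score an action against several values, then
# resolve conflicts between values with pre-established preference weights.
# Actions, values, scores and weights are invented for this example.

def evaluate(action: str, value: str) -> float:
    """For a given value, rate an action as positive or negative (here in [-1, 1])."""
    scores = {
        ("drive_slowly", "safety"): 0.9,     # smooth driving is safe...
        ("drive_slowly", "kindness"): -0.4,  # ...but may hold up other drivers
        ("speed_up", "safety"): -0.6,
        ("speed_up", "kindness"): 0.7,
    }
    return scores.get((action, value), 0.0)

# Preferences established in advance: safety predominates over kindness.
WEIGHTS = {"safety": 0.7, "kindness": 0.3}

def aggregate(action: str) -> float:
    """A weighted sum over all values resolves conflicts between them."""
    return sum(w * evaluate(action, v) for v, w in WEIGHTS.items())

best = max(["drive_slowly", "speed_up"], key=aggregate)
print(best)  # -> drive_slowly, because safety carries more weight
```

With these weights the car drives smoothly despite the kindness penalty; shifting enough weight toward kindness would flip the decision, which is exactly the role the pre-established preferences play.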

When López-Sánchez talks about evaluation, she is referring directly to machine learning. One of the ways machines learn is through reinforcement: just as people do, they learn to do well because they are rewarded and to avoid doing wrong because they are punished. This mechanism also works in artificial intelligence.

“Rewards are numbers. We give rewards as positive numbers and punishments as negative numbers,” explains the WAI researcher. “Machines try to score as many points as possible. So the machine will try to behave well if I give it positive numbers when it does things correctly. And if, when it misbehaves, I punish it and take points away, it will try not to do it again. It’s just like when we educate children.”
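As a toy illustration of that reward mechanism, a sketch in the spirit of standard reinforcement learning might look like the following. The actions and reward numbers are invented for the example.

```python
import random

# Toy reward learning: rewards are positive numbers, punishments negative,
# and the agent learns to collect as many points as possible.
# The actions and reward values are invented for this example.

ACTIONS = ["yield_to_pedestrian", "honk_aggressively"]
REWARDS = {"yield_to_pedestrian": 1.0, "honk_aggressively": -1.0}

q = {a: 0.0 for a in ACTIONS}  # the agent's current estimate of each action
alpha, epsilon = 0.1, 0.2      # learning rate and exploration rate

for _ in range(1000):
    # Usually pick the best-known action, but sometimes explore.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=q.get)
    reward = REWARDS[action]                   # praise or punishment, as a number
    q[action] += alpha * (reward - q[action])  # nudge the estimate toward the reward

print(q)  # the rewarded action ends up with the higher estimated value
```

Over many trials the estimate for the rewarded action climbs toward +1 and the punished one sinks toward -1, so the agent “tries to behave” simply by chasing points.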

But many problems remain to be solved, starting with something as basic as deciding which values we want to build into machines. “Ethics are developed in very different ways. In some cases we will need to make utility calculations to minimize risk or harm,” explains Cuevas. “At other times we may need to use stronger ethical codes, such as establishing that a system cannot lie. Each system has to integrate certain values, and for that there must be community and social agreement.”
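The two styles Cuevas mentions can also be combined. A hedged sketch of the contrast, with action names and harm scores made up for illustration, might look like this:

```python
# Contrast between a utility calculation (minimize expected harm) and a
# hard ethical constraint (the system cannot lie). All action names and
# harm scores are assumptions made up for this example.

EXPECTED_HARM = {"tell_truth": 0.3, "tell_white_lie": 0.1, "stay_silent": 0.2}
INVOLVES_LYING = {"tell_truth": False, "tell_white_lie": True, "stay_silent": False}

def choose(actions):
    # Hard constraint first: lying actions are ruled out entirely,
    # however little harm they would cause.
    permitted = [a for a in actions if not INVOLVES_LYING[a]]
    # Utility calculation second: among permitted actions, minimize harm.
    return min(permitted, key=EXPECTED_HARM.get)

print(choose(list(EXPECTED_HARM)))  # -> stay_silent
```

The hard constraint vetoes the least harmful option because it involves lying; the utility calculation then picks the least harmful of what remains.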

In López-Sánchez’s laboratory, the researchers conduct sociological studies to find values shared between people and across different cultures. At the same time, they take international documents, such as the United Nations’ Universal Declaration of Human Rights, as a reference. Even so, some aspects will be harder to agree on at a global level. Cuevas thinks so too: “The limits we place on machines will have their own limits. The European Union, for example, has its way of doing things, and the United States has another,” she points out, referring to the different regulatory approaches on each side of the Atlantic.
