How to stop artificial intelligence from further failing women in medical diagnostics | Technology

While bored in a New Jersey hospital, Diane Camacho told ChatGPT about the symptoms she was experiencing and asked it for a list of possible medical diagnoses. She had difficulty breathing, chest pain, and the feeling that her heart was “stopping and starting.” The OpenAI chatbot told her that anxiety was the most likely diagnosis. Camacho then asked for the prognosis for a man with the same symptoms, and to her surprise the artificial intelligence warned of the possibility of pulmonary embolism, acute coronary syndrome or cardiomyopathy, with no mention of anxiety. Camacho recounted the exchange a few weeks ago on X (formerly Twitter).

Generative AI such as ChatGPT combines large amounts of data with algorithms and makes decisions using machine learning. If the data is incomplete or unrepresentative, the algorithms can be biased: when sampling, they can make systematic errors and favor certain responses over others. Faced with these problems, the European artificial intelligence law approved last December requires that such tools be developed according to ethical, transparent and impartial criteria.
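To make that mechanism concrete, here is a minimal sketch in Python. Everything in it is an assumption invented for illustration: the data is simulated and the “groups” are abstract, so it does not reproduce any system mentioned in this article. It only shows that a classifier trained on a sample in which one group is underrepresented tends to be less accurate for that group.

```python
# Minimal sketch (simulated data): training on an unrepresentative sample
# yields a model that performs worse on the underrepresented group.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Simulate symptom features whose link to the outcome differs by group."""
    X = rng.normal(size=(n, 3))
    y = (X[:, 0] + shift * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

# Unrepresentative training set: 900 records from group A, only 100 from group B.
Xa, ya = make_group(900, shift=0.2)
Xb, yb = make_group(100, shift=1.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on balanced, unseen data from each group.
for name, shift in [("group A", 0.2), ("group B", 1.0)]:
    Xt, yt = make_group(2000, shift)
    print(name, "accuracy:", round(accuracy_score(yt, model.predict(Xt)), 3))
```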

Medical devices, according to the regulation, are considered high risk and must meet strict requirements: use high-quality data, record their activity, have detailed system documentation, provide clear information to the user, include human oversight measures, and offer a high level of robustness, safety and accuracy, as explained by the European Commission.

The start-up headed by Pol Solà de los Santos, president of Vincer.Ai, audits companies so that they can comply with the European requirements. “We do this through a quality management system made up of algorithms, models and artificial intelligence systems. A diagnosis of the language model is made, and the first thing is to see if there is any harm and how we correct it.” In addition, if a company has a biased model, his firm recommends flagging it with a warning. “If we wanted to distribute a medicine that is not suitable for 7-year-old children, it would be unthinkable not to warn about it,” explains Solà de los Santos.

In healthcare, artificial intelligence (AI) tools are becoming common in diagnostic imaging tests and in scheduling them. They help healthcare professionals work faster and more precisely. In radiology they are “assistance systems,” says Josep Munuera, director of radiodiagnostics at Sant Pau Hospital in Barcelona and an expert in digital technologies applied to health. “The algorithms are inside magnetic resonance devices and reduce the time needed to obtain the image,” explains Munuera. Thus, an MRI scan that would have taken 20 minutes can be shortened to just seven, thanks to the introduction of algorithms.

Biases can lead to differences in healthcare based on gender, ethnicity or demographics. One example occurs in chest X-rays, as Luis Herrera, solutions architect at Databricks Spain, explains: “The algorithms used showed differences in accuracy depending on gender, which led to differences in care. Specifically, diagnostic accuracy for women was much lower.” Gender bias, Munuera points out, is a classic: “It has to do with population bias and databases. Algorithms are trained on or queried against databases, and if the historical databases are gender-biased, the response will be biased.” However, he adds: “Gender bias in health exists regardless of artificial intelligence.”
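One way to surface the kind of gap Herrera describes is simply to report a model’s metrics separately by the patient’s recorded sex instead of as a single overall figure. The sketch below is illustrative only: the table, column names and values are invented for the example and do not come from Databricks or any hospital.

```python
# Minimal sketch (hypothetical data): disaggregating a chest X-ray model's
# performance by recorded sex instead of reporting one overall accuracy.
import pandas as pd
from sklearn.metrics import accuracy_score, recall_score

# Assumed evaluation table: one row per study, with ground truth and model output.
results = pd.DataFrame({
    "sex":        ["F", "F", "F", "M", "M", "M", "F", "M"],
    "label":      [1,   0,   1,   1,   0,   1,   0,   1],
    "prediction": [0,   0,   1,   1,   0,   1,   0,   1],
})

for sex, group in results.groupby("sex"):
    acc = accuracy_score(group["label"], group["prediction"])
    sens = recall_score(group["label"], group["prediction"])  # sensitivity
    print(f"sex={sex}  accuracy={acc:.2f}  sensitivity={sens:.2f}")
```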

How to avoid bias

How is a database reworked to avoid bias? Arnau Valls, coordinating engineer in the Innovation department of the Sant Joan de Déu Hospital in Barcelona, explains how this was done in a case of COVID detection in Europe, using an algorithm developed with the Chinese population: “The accuracy of the algorithm dropped by 20% and false positives appeared. We had to create a new database and add images of the European population to the algorithm.”
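The process Valls describes (collecting local images, adding them to the original training data, and re-validating on the local population) can be sketched roughly as follows. Everything here is synthetic and hypothetical: random tensors stand in for X-rays and a tiny linear model stands in for the real detector, so the code only illustrates the retrain-and-revalidate loop, not the hospital’s actual pipeline.

```python
# Minimal sketch (synthetic data): extend the original training set with locally
# collected images, retrain, then re-check accuracy and false positives locally.
import torch
from torch import nn
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader

torch.manual_seed(0)

def fake_xray_set(n):
    """Stand-in for a labelled chest X-ray dataset (1-channel 64x64 images)."""
    return TensorDataset(torch.randn(n, 1, 64, 64), torch.randint(0, 2, (n,)))

original_train = fake_xray_set(800)   # data the algorithm was developed on
local_train = fake_xray_set(200)      # newly added European-population images
local_test = fake_xray_set(100)       # held-out local cases for validation

model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Retrain on the combined dataset instead of the original one alone.
loader = DataLoader(ConcatDataset([original_train, local_train]),
                    batch_size=32, shuffle=True)
for _ in range(2):
    for x, y in loader:
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()

# Re-validate on the local population: accuracy and false-positive rate.
x, y = local_test.tensors
pred = model(x).argmax(dim=1)
accuracy = (pred == y).float().mean().item()
false_pos = ((pred == 1) & (y == 0)).sum().item() / max(int((y == 0).sum()), 1)
print(f"local accuracy={accuracy:.2f}  false-positive rate={false_pos:.2f}")
```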

To confront a biased model as users, we need to be able to contrast the answers the tool gives us, says Herrera: “We need to raise awareness of AI bias and promote critical thinking, as well as demand transparency from companies and validate the sources.”

Experts agree that ChatGPT should not be used for medical purposes. But José Ibeas, director of the nephrology group at the Institute of Research and Innovation at the Parc Taulí University Hospital in Sabadell (Barcelona), suggests that the tool would evolve positively if the chatbot could query medical databases. “We are starting to work on it. The way to achieve this is to train the system on the patient database with OpenAI’s own algorithms and engineers. In this way, data confidentiality is protected,” explains Ibeas.
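What Ibeas describes, a chatbot that consults curated medical databases rather than answering from its general training alone, resembles retrieval-augmented generation. The sketch below is a loose illustration under stated assumptions: the tiny in-memory “knowledge base”, the keyword retrieval and the model name are all invented for the example, and it is not the Parc Taulí project’s actual design.

```python
# Minimal sketch (hypothetical data): ground the chatbot's answer in a curated,
# locally held medical knowledge base before it responds.
from openai import OpenAI

# Stand-in for a curated medical knowledge base kept inside the institution.
MEDICAL_NOTES = {
    "chest pain": "Consider pulmonary embolism, acute coronary syndrome and "
                  "cardiomyopathy regardless of the patient's sex.",
    "dyspnea": "Shortness of breath with chest pain warrants ruling out "
               "cardiopulmonary causes before attributing symptoms to anxiety.",
}

def retrieve(symptoms: str) -> str:
    """Naive keyword retrieval; a real system would use a proper search index."""
    return " ".join(text for key, text in MEDICAL_NOTES.items() if key in symptoms.lower())

def grounded_answer(symptoms: str) -> str:
    context = retrieve(symptoms)
    client = OpenAI()  # requires OPENAI_API_KEY in the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system",
             "content": "Answer using only the supplied clinical notes: " + context},
            {"role": "user", "content": symptoms},
        ],
    )
    return response.choices[0].message.content

print(grounded_answer("Difficulty breathing, chest pain, dyspnea"))
```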

ChatGPT’s technology is useful in the medical field in certain cases, Ibeas acknowledges: “Its capacity to generate structures, anatomical or mathematical, is total. Its training in molecular structures is very good. There, it really doesn’t invent much.” In agreement with the rest of the experts, Ibeas warns that artificial intelligence will never replace a doctor, but he stresses: “The doctor who does not know artificial intelligence will be replaced by one who does.”

You can follow EL PAÍS Technology on Facebook and X, or sign up here to receive our weekly newsletter.
