Large companies seek ways to ensure the ethical and legal development of artificial intelligence
Europe has taken a step forward with the approval of the world’s first comprehensive regulation of artificial intelligence. The law categorizes applications of the technology according to their risk and provides for severe penalties for offenders, which can reach 35 million euros or 7% of global turnover or, at the lower end, 7.5 million euros or 1.5% of turnover. The EU has established a transition period before the rules fully apply in 2026, a window companies will have to use to ensure that their developments comply with the law. Giants such as IBM, Intel and Google, all in favor of regulation, have developed platforms and systems to guarantee that artificial intelligence, whose advance is unstoppable, develops according to ethical, transparent and bias-free criteria. Companies are thus offering formulas for complying with the first law on artificial intelligence, the European AI Act.
The technology consultancy Entelgia highlights three keys that companies must take into account: those that handle personal, medical, recruitment or decision-making data must disclose, in a European register, how the algorithm works and what content it generates; human oversight mechanisms, although not mandatory, are recommended; and large language models (LLMs) will have to incorporate security systems, with developers obliged to be transparent about the copyrighted material they use.
“We must ensure that the technology we develop is created responsibly and ethically from the start. It’s a great opportunity, but it also poses challenges,” warns Christina Montgomery, vice president and chief privacy and trust officer at IBM. Unlike other companies that favor unregulated development (150 European business leaders have positioned themselves against the rule), IBM is committed to “smart regulation that provides protective guardrails for society while promoting innovation.” Intel, another giant of the sector, takes the same view, according to Greg Lavender, the company’s chief technology officer: “Artificial intelligence can and must be accessible to everyone so that it is deployed responsibly.”
Both companies have developed their own platforms to ensure that AI development conforms to the standards that governments and companies are gradually deeming necessary.
IBM’s solution is Watsonx.governance, a platform that covers ethical data processing, risk management and regulatory compliance. “It was developed to help organizations apply AI responsibly, adhere to today’s policies, and prepare for tomorrow’s regulations,” explains Montgomery.
82% of business leaders have adopted or implemented AI or plan to do so in the next year
Survey of European business leaders
Ana Paula Assis, president and general manager of IBM for Europe, the Middle East and Africa, argues the need for these tools on the basis of a survey of 1,600 business leaders in Germany, France, Italy, Spain and Sweden. According to the results, 82% of business leaders have adopted or implemented AI, or plan to do so in the next year, and almost all (95%) are doing so or will do so because it is effective for decision-making in management and commercial strategy. According to Hazem Nabih, Microsoft’s technology director for the Middle East, “the productivity of any business increases between 30% and 50%.”
But this tool of enormous potential faces challenges: the need for an ethical framework and new skills, and the rising cost of ensuring that its development is not only effective but also equitable (free of bias), transparent (explainable and measurable), secure and privacy-preserving.
IBM’s proposition is that the platform can be used by any company, regardless of the IT model in place, whether open source, developed in-house or built by third parties. “Our strategy and architecture are open, hybrid and multi-model, in the sense that we really give our customers the flexibility to implement our solutions in the environments that suit them best,” explains Assis.
The solution proposed by the other giant, Intel Trust Authority, is based on a similar philosophy: “an open, developer-focused ecosystem to ensure that artificial intelligence opportunities are accessible to all.” “These are tools that streamline the development of secure AI applications and facilitate the investment necessary to maintain and scale these solutions in order to bring AI everywhere,” says the company’s chief technology officer.
“If developers are limited in their choice of hardware and software, the range of use cases for AI adoption worldwide will be narrow and likely limited in the social value they are able to deliver,” says Lavender.
Intel’s strategy is not aimed only at big companies. At its Innovation 2023 event, it also launched the AI PC Acceleration Program, an initiative designed to accelerate the pace of artificial intelligence development on personal computers (PCs).
The program aims to connect independent hardware and software vendors with Intel resources, including AI tools, co-engineering, hardware, design resources, technical expertise and commercialization opportunities. “These resources will help accelerate new use cases and connect the wider industry to AI solutions,” the company says. Program partners include Adobe, Audacity, BlackMagic, BufferZone, CyberLink, DeepRender, MAGIX, Rewind AI, Skylum, Topaz, VideoCom, Webex, Wondershare Filmora, XSplit and Zoom.
We have a comprehensive set of controls to ensure that, for businesses using Vertex AI, your data is yours and no one else’s. It is not leaked and it is not shared with anyone, not even Google
Thomas Kurian, head of Google Cloud
Google has developed specific protection systems for Gemini, its latest artificial intelligence model, covering aspects such as the personal data protection that the new rule will require. “We have a comprehensive set of controls to ensure that, for businesses using Vertex AI, your data is yours and no one else’s. It is not leaked and it is not shared with anyone, not even Google. Vertex offers a long list of compliance and audit controls and capabilities,” explained Thomas Kurian, head of Google Cloud, during the presentation of Gemini’s developer tools.
Biases
One of the biggest challenges lies in biases: flaws in algorithms that can spread throughout an artificial intelligence system and underestimate the complexity of human beings. Two papers by researchers at Sony and Meta, presented at the International Conference on Computer Vision (ICCV), propose ways of measuring bias in order to check the diversity of the data used not only to make decisions but also to train machines.
William Thong, an AI ethics researcher at Sony, explains the proposal in MIT Technology Review: “It is used to measure bias in computer systems, for example by comparing the accuracy of AI models for light-skinned and dark-skinned people.”
Sony’s tool expands the scale of skin tones that computers can recognize, examining not only how light or dark skin is but also its hue.
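To picture the kind of measurement Thong describes, the sketch below computes a model’s accuracy for each annotated skin-tone group along two axes, lightness and hue, and reports the gap between the best- and worst-served groups. It is a minimal illustration, not Sony’s actual tool: the group labels and evaluation records are invented.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Return per-group accuracy from (group, is_correct) pairs."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, correct in records:
        totals[group] += 1
        hits[group] += int(correct)
    return {group: hits[group] / totals[group] for group in totals}

# Hypothetical evaluation records: each pairs an annotated skin-tone
# group (lightness, hue) with whether the model's prediction was correct.
records = [
    (("light", "reddish"), True), (("light", "yellowish"), True),
    (("dark", "reddish"), False), (("dark", "yellowish"), True),
    (("dark", "reddish"), True), (("light", "yellowish"), True),
]

per_group = accuracy_by_group(records)
for group, acc in sorted(per_group.items()):
    print(group, round(acc, 2))

# A wide spread between groups is the bias signal researchers look for.
print("gap:", max(per_group.values()) - min(per_group.values()))
```

Reporting accuracy per group, rather than as a single aggregate figure, is what makes such disparities visible.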
To streamline bias assessments, Meta developed its own tool, FACET (Fairness in Computer Vision Evaluation). According to Laura Gustafson, an AI researcher at the company, the system is based on 32,000 images of people, labeled by human annotators according to 13 perceived attributes, such as age (young or old), skin tone, gender, and hair color and texture, among others. Meta has made the data freely available online to help researchers.
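A FACET-style evaluation can be sketched the same way, disaggregating a model’s results across every labeled attribute rather than a single one. The snippet below is hypothetical: the attribute names and records are invented for illustration and do not reproduce Meta’s actual schema or data.

```python
# Invented evaluation records: each pairs the model's correctness with
# human-annotated perceived attributes (FACET labels 13 such attributes).
samples = [
    {"correct": True,  "age": "young", "skin_tone": "light", "hair": "straight"},
    {"correct": False, "age": "older", "skin_tone": "dark",  "hair": "coily"},
    {"correct": True,  "age": "older", "skin_tone": "dark",  "hair": "straight"},
    {"correct": True,  "age": "young", "skin_tone": "dark",  "hair": "coily"},
]

for attr in ("age", "skin_tone", "hair"):
    tallies = {}  # attribute value -> [hits, total]
    for sample in samples:
        counts = tallies.setdefault(sample[attr], [0, 0])
        counts[0] += int(sample["correct"])
        counts[1] += 1
    # Report accuracy for every value of this attribute.
    print(attr, {value: hits / total for value, (hits, total) in tallies.items()})
```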
Widespread and uncontrolled use
The importance of caution is underlined by a recent report from the security company Kaspersky, based on a survey of Spanish managers, which reveals that 96% of respondents in Spain admit that their employees regularly use generative artificial intelligence, while almost half of the organizations (45%) have no measures in place to avoid its risks. According to another study by the same company, 25% of those who use generative AI are unaware that it can store information such as the user’s IP address, browser type and settings, as well as data on the functions they use most.
“Generative AI systems are clearly growing and the longer they operate unchecked, the more difficult it will be to protect certain areas of the enterprise,” warns David Emm, senior security analyst at Kaspersky.
You can follow EL PAÍS Technology on Facebook and X, or sign up here to receive our weekly newsletter.