ChatGPT’s evil ‘brother’ and other threats to generative artificial intelligence | Technology

FraudGPT is ChatGPT’s evil brother. It is promoted on the dark web and can write messages impersonating a bank, create malware or build websites designed for fraud, according to the data analytics platform Netenrich. Other tools, such as WormGPT, also promise to make cybercriminals’ work easier. Generative artificial intelligence can be used for malicious purposes: from generating sophisticated scams to creating non-consensual pornography, disinformation campaigns and even biochemical weapons.

“Although it is a relatively new phenomenon, criminals have been quick to take advantage of the capabilities of generative artificial intelligence to achieve their goals,” says Josep Albors, director of research and outreach at the IT security company ESET in Spain. The expert gives some examples: from increasingly sophisticated phishing (free of spelling errors, and very well targeted and segmented) to the generation of disinformation and deepfakes, that is, videos manipulated with artificial intelligence to modify or replace a person’s face, body or voice.

For Fernando Anaya, country manager of Proofpoint, generative artificial intelligence has proven to be “an evolutionary step rather than a revolutionary one.” “Gone are the days when users were advised to look for obvious grammatical, contextual and syntactic errors to detect malicious emails,” he says. Now attackers have it easier: they simply ask one of these tools to generate an urgent and convincing email about updating account and routing information.

They can also easily create emails in many languages. “An LLM (a large language model, which uses deep learning and is trained on vast amounts of data) could first read all of an organization’s LinkedIn profiles and then write a highly specific email to each employee, all in impeccable English or Dutch, tailored to the specific interests of the recipient,” warns the Dutch National Cyber Security Centre.

Philipp Hacker, professor of law and ethics of the digital society at the European New School of Digital Studies, explains that generative artificial intelligence can be used to create malware that is more effective, harder to detect and capable of targeting specific systems or vulnerabilities: “While deep human expertise will probably still be required to develop advanced viruses, artificial intelligence can help in the early stages of malware creation.”

The implementation of this type of technique “is still far from widespread,” according to Albors. But tools like FraudGPT or WormGPT could pose a “serious problem for the future.” “With their help, criminals with virtually no prior technical knowledge can prepare malicious campaigns of all kinds with a considerable probability of success, which means that users and businesses will have to face even greater threats.”

Generate audio, images and videos

The more convincing a scam is, the more likely someone is to fall victim to it. Some attackers use artificial intelligence to synthesize audio. “Scams like pig butchering could one day move from messages to calls, further increasing the persuasive power of this technique,” believes Anaya. This scam is so called because the attackers “fatten up” their victims, gaining their trust before taking everything they own. Although it is usually related to cryptocurrencies, it can also involve other financial exchanges.

Proofpoint researchers have already seen cybercriminals use this technology to deceive government officials. Their research shows this in the TA499 group, which uses this technique against politicians, businessmen and celebrities. “They make video calls in which they try to look as much as possible like the impersonated individuals, using artificial intelligence and other techniques, so that the victims share information or are made to look ridiculous, then upload the recording to social networks,” explains Anaya.

Generative artificial intelligence is also used to run campaigns with manipulated images and even videos. The voices of television presenters and prominent figures such as Ana Botín, Elon Musk or Alberto Núñez Feijóo have been cloned. As Albors explains: “These deepfakes are mainly used to promote cryptocurrency investments that usually end in the loss of the invested money.”

From pornography to biochemical weapons

Hacker finds the use of generative artificial intelligence to create pornography particularly “alarming.” “This form of abuse is aimed almost exclusively at women and causes serious personal and professional harm,” he emphasizes. A few months ago, dozens of minors in Extremadura reported that fake nude photos of them, created with artificial intelligence, were circulating. Celebrities such as Rosalía and Laura Escanes have suffered similar attacks.

The same technology has been used “to create false images depicting threatening immigrants, with the aim of influencing public opinion and election results, and to create more sophisticated and convincing large-scale disinformation campaigns,” as Hacker points out. After the wildfires that ravaged the island of Maui in August, some posts claimed, without any evidence, that the fires had been caused by a secret “climate weapon” being tested by the United States. These messages were part of a campaign led by China and included images apparently created with artificial intelligence, according to The New York Times.

The potential for misuse of generative artificial intelligence doesn’t stop there. An article published in the journal Nature Machine Intelligence indicates that advanced artificial intelligence models could help create biochemical weapons, something that, for Hacker, represents a global danger. Algorithms can also infiltrate critical infrastructure software, according to the expert: “These hybrid threats blur the lines between traditional attack scenarios, making them difficult to predict and counter with existing laws and regulations.”

The challenge of preventing the risks of FraudGPT and other tools

There are solutions that use machine learning and other techniques to detect and block the most sophisticated attacks. Nevertheless, Anaya focuses on educating and raising awareness among users so that they themselves can recognize phishing emails and other threats. For Hacker, mitigating the risks associated with the malicious use of generative artificial intelligence requires an approach that combines regulatory measures, technological solutions and ethical guidelines.

Among the possible measures, he mentions establishing mandatory independent teams that test these types of tools to identify vulnerabilities and possible abuses, or banning certain open-source models. “Addressing these risks is complex, as there are important trade-offs between the different ethical goals of artificial intelligence and the feasibility of implementing certain measures,” he concludes.
