The risk of entrusting personal secrets to ChatGPT | Technology
If you ask ChatGPT what it does with the personal data provided in a conversation, this is its response: “As a language model developed by OpenAI, I do not have the ability to process, store or use users’ personal information, unless it is provided to me during an individual conversation. However, OpenAI, the company that owns ChatGPT, may use this information in certain cases, in accordance with the company’s privacy policy.”
Those cases involve specific types of data: OpenAI account data, such as the user’s name or payment card information; personal information that the user exchanges with ChatGPT or with the company; information gathered when the user interacts with OpenAI’s accounts on social networks, such as Instagram, Facebook, Medium, Twitter, YouTube and LinkedIn; and data that the user provides to the company through its surveys or events. With this information, the company can improve its products and services, create new developments, carry out research, establish direct communication with users, comply with its legal obligations, and prevent fraud, misuse of the service and criminal activity.
This delicate question does not only concern the new generative AI. Sending an email to a friend via Gmail, or sharing photos or documents in cloud spaces like OneDrive, are everyday actions that allow the providers of these services to share information with third parties. Companies such as OpenAI, Microsoft and Google may disclose information to service providers to meet their business needs, as outlined in their privacy policies.
However, with few exceptions, companies cannot use personal data for other purposes. Ricard Martínez, professor of constitutional law at the University of Valencia, points out that this is strictly prohibited by the General Data Protection Regulation (GDPR): “They expose themselves to high regulatory risk. The company could be sanctioned with a fine equivalent to 4% of global annual turnover. In these cases, the data can only be used for purposes of general interest permitted by the regulation, such as archiving or historical, statistical or scientific research, or if a compatibility judgment is made.”
Generative artificial intelligence, like ChatGPT, is powered by a large volume of data, some of it personal, and from this information it generates original content. In Spain, these tools receive 377 million visits per year, according to one study. They analyze the information collected, respond to user requests and improve their service, even though the tool “does not understand the documents it is fed,” warns Borja Adsuara, a lawyer specializing in digital law.
Recommendation: be very discreet with chatbots
The Spanish Data Protection Agency (AEPD) advises users not to accept a chatbot that requests registration data that is not necessary, that requests consent without defining the purpose for which the data will be processed or without allowing that consent to be withdrawn at any time, or that transfers data to countries that do not offer sufficient guarantees. It also recommends limiting the personal data exposed, not disclosing personal data of third parties if there are doubts that the processing will go beyond the domestic sphere, and bearing in mind that there is no guarantee that the information provided by the chatbot is correct. The consequences, it warns, can include “emotional harm” and “false or misleading information.”
Experts agree on the same advice: do not share personal information with the artificial intelligence tool. Even ChatGPT itself warns: “Please note that if you share personal, sensitive or confidential information during the conversation, you should exercise caution. It is recommended not to provide sensitive information through online platforms, even in conversations with language models like me.”
Delete personal data
If, despite these recommendations, personal data has already been shared with the artificial intelligence, you can try to delete it. OpenAI offers a form on its website to request removal; the bad news is that the company warns that “submitting a request does not guarantee that information about you will be removed from ChatGPT results.” The form must be completed with the real data of the interested party, who must “swear” in writing to the truthfulness of what is declared. In addition, the information provided in the form may be cross-checked with other sources to verify its accuracy. Microsoft also offers a privacy dashboard for accessing and deleting personal data.
As for legal action, Martínez explains that users “can exercise the right of erasure if they consider that their personal data has been processed unlawfully, or is incorrect or inadequate.” They can also unsubscribe or withdraw their consent, which is free and unconditional, and the company is then obliged to delete all their information. This specialist also emphasizes that there is a right to portability: “More and more applications allow the user to download their entire history and take it with them in a compatible format.” The regulation also recommends the anonymization of personal data.
Anonymization, according to the AEPD, consists of converting personal data into data that cannot identify a person. In its guide on artificial intelligence (AI) processing, the agency explains that anonymization is one of the techniques for minimizing the use of data, ensuring that only the data necessary for the given purpose is used.
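To illustrate the idea in practice, here is a minimal sketch in Python of one common technique: replacing a direct identifier (an email address) with a salted hash before the records are used for anything else. The field names and records are hypothetical. Note that, strictly speaking, this is pseudonymization rather than full anonymization: anyone who holds the salt could re-link the records, which is precisely the distinction regulators like the AEPD draw.

```python
import hashlib
import secrets

def pseudonymize(value: str, salt: str) -> str:
    """Replace a direct identifier with a salted hash.

    This is pseudonymization, not full anonymization:
    whoever holds the salt can still re-link the records.
    """
    digest = hashlib.sha256((salt + value).encode("utf-8"))
    return digest.hexdigest()[:16]

# Hypothetical records containing a direct identifier (email).
records = [
    {"email": "ana@example.com", "query": "tax advice"},
    {"email": "luis@example.com", "query": "travel plans"},
]

# The salt is generated once and kept separately, under access control.
salt = secrets.token_hex(16)

# Data minimization: keep only the pseudonym and the field actually needed.
minimized = [
    {"user": pseudonymize(r["email"], salt), "query": r["query"]}
    for r in records
]

for row in minimized:
    print(row)
```

The same email always maps to the same pseudonym under a given salt, so the dataset remains usable for statistics or research while the identifiers themselves never leave the record set.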
New law on artificial intelligence
Companies that manage personal data, once the new European law on artificial intelligence enters into force, will have to bear in mind three key points, as the consulting firm Entelgy explains to this newspaper: they will have to disclose how the algorithm works and what content it generates in a European register; although not mandatory, it is recommended that they put human supervision mechanisms in place; and, finally, large language models (LLMs) will need to introduce security systems, and developers will be obliged to be transparent about the copyrighted material they use.
However, the new rule is not incompatible with the General Data Protection Regulation. Here is how Martínez explains it: “AI that processes personal data, or that generates personal data in the future, will never be able to reach the market if it does not guarantee compliance with the GDPR. This is particularly evident in high-risk systems, which must implement a data governance model, as well as operational and usage records that ensure traceability.”
The next step for artificial intelligence, says Adsuara, is for the personal information collected to be kept in a sort of personal pool: “A place where everyone has their own repository of documents containing personal data, but the information does not leave there; it is not used to feed a universal generative artificial intelligence,” he explains.
You can follow EL PAÍS Technology on Facebook and X, or sign up here to receive our weekly newsletter.