Privacy risks: Ban on ChatGPT in Italy
- 04/04/2023
- Reading time: 3 minutes
The Italian data protection supervisory authority “Garante per la protezione dei dati personali” (“GPDP”) has banned access to US provider OpenAI’s AI chatbot ChatGPT in Italy until further notice. Companies and employers should take this as an occasion to closely review how they handle chatbots such as ChatGPT in order to minimize the risk of fines.
The ban was reportedly imposed due to data protection violations by ChatGPT (the authority’s notice is available here). According to the GPDP, there is, among other things, no legal basis for the large-scale storage and processing of data subjects’ data, and users and data subjects are not sufficiently and transparently informed about how their data is used.
The GPDP became aware of these issues in connection with a data breach reported by ChatGPT on March 20, 2023, which involved ChatGPT users’ conversations and payment information of the service’s subscribers.
OpenAI now has the opportunity to respond to the GPDP’s findings and to present any measures taken to protect personal data and data subjects.
Caution when providing personal data to ChatGPT
The measures taken by the GPDP show once again that numerous (data protection related) legal requirements must be observed when using AI. This applies not only to providers of AI-based chatbots and other companies that deploy AI, but also to the users of such chatbots.
If, when using ChatGPT, users provide information that directly or at least indirectly refers to a person, this constitutes processing of personal data. According to OpenAI’s “Terms and Conditions”, ChatGPT may use such information to further develop the AI chatbot. It therefore cannot be ruled out that personal data provided by users will be displayed to other ChatGPT users. This can be problematic from a data protection perspective, especially where third-party data is involved.
Such a transfer of data may constitute a data protection violation. Data processing and disclosure are particularly critical when special categories of personal data, such as health data, are involved. In such cases, there is likely no legal basis for the transfer.
In this context, data may also be transferred to the US, since OpenAI is a US company. Under the current legal situation, such a transfer regularly violates data protection law unless corresponding safeguards (e.g., EU standard data protection clauses) have been put in place.
In light of the above, companies and employers in particular should consider whether and how to allow their employees to use ChatGPT or other AI applications accessible on the Internet. There is a risk that unauthorized data processing will be attributed to the employer. Employers should therefore instruct their employees not to enter any personal data of work colleagues, customers or other business partners when using ChatGPT and similar services, or block access to the provider internally.
It remains to be seen how German data protection supervisory authorities will position themselves on the use of AI language models such as ChatGPT in the future.