ChatGPT Faces Scrutiny from European Privacy Regulators: Banned in Italy, Investigated in France and Spain

Highlights


  • ChatGPT is facing scrutiny from privacy regulators in Europe.
  • Italy was the first Western country to block ChatGPT on March 31, citing violations of the EU's General Data Protection Regulation (GDPR).
  • The chatbot is accused of collecting and storing the personal data of millions of users without a legal basis in order to train its algorithms, and of exposing minors to inappropriate and potentially harmful responses.
  • Other countries, including France and Spain, are investigating ChatGPT and considering similar actions.
  • OpenAI has 20 days to address Italy's concerns or face a fine of up to €20 million or 4% of its annual global turnover, whichever is higher.


ChatGPT, the advanced chatbot developed by OpenAI and backed by Microsoft, has been facing increasing scrutiny from privacy regulators in Europe over its data collection and processing practices. The chatbot, which can converse with users in natural language and generate text from internet data collected up to 2021, has been banned in Italy and is under investigation in France and Spain, while other countries are considering similar actions.


Italy was the first Western country to block ChatGPT on March 31, after its data protection authority (the Garante) found that the chatbot violated the General Data Protection Regulation (GDPR), the EU's strict privacy law. The Garante said that ChatGPT had no legal basis to collect and store the personal data of millions of users for the purpose of training its algorithms, and that it exposed minors to inappropriate and potentially harmful responses. The Garante gave OpenAI 20 days to address its concerns or face a fine of up to €20 million or 4% of its annual global turnover, whichever is higher.


Following Italy's ban, France's data protection authority (CNIL) announced that it was investigating several complaints about ChatGPT, and that it would coordinate with other EU data protection authorities on the matter. The CNIL said that it was concerned about the chatbot's compliance with GDPR, especially regarding the transparency, consent and security of personal data processing. The CNIL also said that it was examining the potential risks of ChatGPT for human dignity, freedom of expression and information.


Spain's data protection agency (AEPD) also said that it was looking into ChatGPT and had asked the European Data Protection Board (EDPB), the EU's privacy watchdog, to discuss the issue at its next plenary meeting on April 13. The AEPD said that it wanted to ensure harmonized action at the European level on global processing operations that may have a significant impact on the rights of individuals. It added that it had not received any complaints about ChatGPT, but that it was not ruling out a formal investigation of its own.


Data protection authorities in other EU countries, such as Ireland, Germany and Sweden, said that they were following the developments in Italy, France and Spain but had not yet taken any action against ChatGPT. They added that they were ready to intervene if necessary to protect the privacy and rights of their citizens.


OpenAI's Response and EU Regulation

OpenAI, the AI research company backed by Microsoft and other investors, said that it complied with GDPR and respected the decisions of the regulators. It said that it was working with them to address their concerns and to demonstrate the benefits of ChatGPT for society. It also said that it was committed to the safety and ethics of its chatbot, and that it had implemented several measures to prevent misuse and abuse, such as filtering out harmful or offensive content.


ChatGPT is one of the most advanced examples of artificial intelligence (AI) systems that generate natural language from large amounts of data. It is built on OpenAI's GPT family of large language models; GPT-3, the best-known model in that family, has 175 billion parameters and can produce coherent text on almost any topic. However, such systems also pose challenges for privacy, security and human rights, as they may collect sensitive personal data, generate misleading or harmful information, or influence people's opinions and behaviors.
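
For readers curious what "generating text from a prompt" looks like in practice, the short sketch below shows one way a developer might query a GPT-style model through OpenAI's public chat-completions HTTP API (Python, using the requests library). It is purely illustrative: the model name, prompt and token limit are placeholder choices, and the API key must come from the developer's own OpenAI account.

    import os
    import requests

    # Illustrative sketch: a single request to OpenAI's chat completions endpoint.
    # The prompt, model name and max_tokens are placeholders, not taken from the article.
    API_URL = "https://api.openai.com/v1/chat/completions"
    API_KEY = os.environ["OPENAI_API_KEY"]  # never hard-code credentials

    payload = {
        "model": "gpt-3.5-turbo",  # a ChatGPT-class model
        "messages": [
            {"role": "user", "content": "Summarise the GDPR in two sentences."}
        ],
        "max_tokens": 100,
    }

    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json=payload,
        timeout=30,
    )
    response.raise_for_status()
    print(response.json()["choices"][0]["message"]["content"])

Every prompt sent this way is processed on OpenAI's servers, which is precisely the data flow that European regulators are now examining.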


The EU has been at the forefront of regulating AI systems and has proposed a set of rules, the Artificial Intelligence Act, to ensure their ethical and trustworthy use. The rules would classify AI systems into four categories based on their risk level: unacceptable, high-risk, limited-risk and minimal-risk. Systems posing an unacceptable risk, such as those that manipulate human behavior or exploit vulnerabilities, would be banned outright.


High-risk AI systems would be subject to strict requirements, such as human oversight, transparency and accuracy. Limited-risk AI systems, such as chatbots or voice assistants, would have to make clear to users that they are interacting with a machine. Minimal-risk AI systems would be largely unregulated, as they are considered harmless or beneficial to society.
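
As a rough mental model of the tiering described above, the obligations can be summarised in a small lookup table; the wording below is this article's paraphrase, not the text of the draft regulation.

    # Paraphrased summary of the four risk tiers in the EU's proposed AI rules.
    RISK_TIERS = {
        "unacceptable": "banned outright (e.g. systems that manipulate behavior)",
        "high-risk": "strict requirements: human oversight, transparency, accuracy",
        "limited-risk": "must tell users they are interacting with an AI (e.g. chatbots)",
        "minimal-risk": "largely unregulated",
    }

    # A chatbot such as ChatGPT would, at minimum, fall under the disclosure duty:
    print(RISK_TIERS["limited-risk"])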


The EU's proposed rules are expected to be adopted by 2024, after a period of consultation and negotiation with stakeholders. In the meantime, EU regulators will continue to apply existing laws on data protection, consumer protection and competition to AI systems such as ChatGPT.
