UK Privacy Watchdog Probes Gen AI Privacy Concerns
ICO Calls for Evidence to Focus on Legal Basis for Scraped Training Data
The British data regulator is set to analyze the privacy implications of processing scraped data used to train generative artificial intelligence algorithms.
The Information Commissioner's Office on Monday announced that it is soliciting comments from AI developers, legal experts and other industry stakeholders on how privacy rights might be affected by developments in generative AI.
Since most generative AI systems are trained on data scraped from the public internet, which may include large swaths of personally identifiable information such as names and contact details, the primary concern is that AI developers could be processing that data in violation of existing privacy laws.
The ICO's consultation seeks to determine whether the data processing practices currently followed by AI developers violate the privacy requirements stipulated in the U.K. General Data Protection Regulation and the Data Protection Act 2018.
The consultation will focus on whether AI developers meet the "lawfulness" requirement under the U.K. GDPR, which sets out six lawful bases a company can rely on to show that its data processing is compliant. These include obtaining consent from users and demonstrating that the processing is necessary for the business's legitimate interests, among others.
"Training generative AI models on web scraped data can be feasible if generative AI developers take their legal obligations seriously and key to this is the effective consideration of the legitimate interest," the ICO said.
The consultation will close on March 1. Based on the responses received, the agency intends to release guidance on AI in the coming months.
The United Kingdom does not have a comprehensive artificial intelligence regulation, although the British government has told its data, competition, healthcare, media and financial regulators to monitor AI within their jurisdictions, giving the ICO the authority to investigate the data protection and privacy aspects of AI.
In October 2023, the ICO rebuked instant messaging app Snapchat for failing to properly assess the privacy risks posed to users by My AI, the platform's generative AI-powered chatbot. In 2022, the agency imposed a fine of 7.5 million pounds on facial recognition firm Clearview AI for unlawfully collecting facial images of U.K. citizens to build the company's database (see: UK Privacy Watchdog Pursues Clearview AI Fine After Reversal).