
AI and GDPR: How is the AI Regulation Handled?


The General Data Protection Regulation (GDPR) was introduced six years ago to help standardize European frameworks for data privacy and protection. With the surge of interest in AI, the regulation is now seen as a frontline defense against the uncertainties of new techniques, business models, and data processing pipelines.

Data privacy issues have become more complex with the rise of generative AI applications. Companies like OpenAI are hesitant to share information about how their training data was collected and how they address privacy concerns related to the use of these AI models.

Italy's data protection authority, for instance, initially blocked OpenAI's ChatGPT, citing data privacy concerns. About four weeks later, the regulator allowed the chatbot and its underlying large language models (LLMs) to resume operating, only to notify OpenAI of alleged privacy violations at the start of 2024. Data privacy concerns are not limited to major AI providers: businesses are beginning to integrate new LLMs with their internal processes and data.

Privacy experts are also concerned that the GDPR has not anticipated some potential issues arising from new AI models.

“The fact that AI can be used to automate decision-making and profiling underscores the need for regulators to implement measures ensuring these activities are conducted fairly and ethically,” stated Martin Davies, head of audit alliance at compliance automation platform provider Drata.

The GDPR, for instance, includes provisions for algorithmic transparency in certain defined decision-making processes. However, AI systems and models can become black boxes, making it challenging for regulators and business leaders responsible for data protection to understand how personal information is utilized within them.

The GDPR’s AI Boundaries

The GDPR has played a pivotal role in advancing privacy protection in Europe and has inspired regulators worldwide. However, when it comes to artificial intelligence, the regulation has several shortcomings.

Ironically, one of the GDPR's greatest strengths is also a significant weakness where AI is concerned: the “right to be forgotten” framework, which emphasizes individual control over personal data. According to Davi Ottenheimer, Vice President of Trust and Digital Ethics at data infrastructure software provider Inrupt, this presents a problem for AI under the GDPR.

“Imagine a robot that can only be turned off but not reprogrammed, and you see the issue with AI and GDPR,” Ottenheimer said, suggesting that “the right to be understood” would better serve the GDPR’s framework.

“It would enforce transparency engineering in AI systems so that individuals can comprehend and challenge the decisions made,” he explained.

GDPR applies to AI whenever personal data is processed during the training or deployment of a model, stated Sophie Stalla-Bourdillon, Senior Privacy Counsel and Legal Engineer at data security platform provider Immuta.

Yet the regulation does not always apply when a model is trained on non-personal data, she noted, adding that the GDPR has also not been the most effective mechanism for flagging early warnings.

“The GDPR-based approach becomes less effective in guiding practices when organizations are determined to join the AI race, regardless of the consequences,” explained Stalla-Bourdillon. 

Companies need clearer, earlier, and more specific regulatory signals to know when to slow down. The European AI Act attempts to fill this gap by establishing a three-tiered distinction between prohibited AI practices, high-risk AI systems, and other AI systems, as well as concepts such as general-purpose AI systems and models.

The GDPR lacks specific guidelines for AI

The GDPR does not explicitly mention AI or the many new ways AI can be used to process personal information, which can lead to confusion among data and technology management teams.

“While the GDPR can be interpreted and generally applied to AI, AI practitioners are likely looking for additional guidance,” said Tom Moore, Managing Director at the consulting firm Protiviti.

More specific guidance could help companies leverage AI while protecting personal data, comply with the GDPR, and avoid the substantial penalties codified in the law. Moore noted that the GDPR faces several unique challenges with respect to AI, including the following:

  • Transparency. The GDPR provisions on automated decision-making and profiling grant certain rights to individuals, but they may not be enough to ensure transparency in all AI use cases.
  • Bias and Discrimination. The GDPR prohibits the processing of personal data deemed sensitive, such as race, ethnic origin, or religious beliefs, except under specific conditions, but does not directly address the issue of algorithmic biases that may be present in training data.
  • Accountability. The GDPR’s provisions on the responsibilities of data controllers and processors may not fully capture the complexity of AI supply chains or the potential for harm, nor make clear who is responsible when harm occurs and multiple parties, such as developers, deployers, and users, are involved.
  • Ethical Considerations. The GDPR does not directly address broader societal concerns and ethical questions beyond data protection, such as the impact of AI on employment, the potential for manipulation, and the need for human oversight.
  • Sector-Specific Requirements. The GDPR provides a general framework for data protection that may not necessarily be sufficient to cover the risks and challenges specific to a sector.

The EU’s AI law adopts a risk-based approach to address these gaps by imposing requirements based on the risk levels associated with specific AI applications. It also includes provisions on transparency, human oversight, and accountability.

Governments want their economies to reap the benefits of AI, but society is only beginning to become aware of the associated risks. Developing better regulations that balance rewards and risks may require input from multiple sources.

“The European Data Protection Board [EDPB], the European Data Protection Supervisor [EDPS], national data protection authorities, academics, civil society organizations, as well as commercial enterprises and many others all want their voices to be heard in any legislative process,” Moore said.

Creating adaptable and scalable regulations that keep pace with technological advancements, Moore noted, can be challenging and time-consuming. Previous European technology legislation, including the GDPR, the Digital Markets Act, and the Digital Services Act, took years and, in some cases, decades for the EU to develop and enact.

Stalla-Bourdillon mentioned that the lack of consensus among legislators and intense lobbying by AI providers can also slow down the regulatory process. “Every piece of legislation is a political compromise,” she said, “and politics takes time.”

The AI Act has moved much more rapidly, but Moore believes that a faster pace may come at the cost of detail and specificity in its rollout. “Until the authorities provide details on the implementation of the law,” he said, “industry practitioners will want to work with their advisors to help assess the implications of the law.”

How will GDPR be affected by AI?

The GDPR established national data protection authorities and European-level bodies, such as the EDPB and the EDPS. These bodies are likely to issue guidance to help citizens and businesses understand AI and the various laws that govern it.

They could also enforce compliance by AI practitioners, alone or in concert with other regulatory bodies, and could themselves use AI to manage corporate data protection activities, respond to citizen requests, and conduct investigations.

Moreover, Moore stated that GDPR provisions could influence the development and deployment of AI in several ways, including:

Enhanced control over data protection procedures. 

Artificial intelligence systems consume and analyze vast amounts of data to train and operate effectively.

As a result, organizations that develop or deploy AI must pay even greater attention to the principles and practices of data protection outlined in the GDPR, such as data minimization, purpose limitation, and storage limitation.
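As a rough illustration of minimization and purpose limitation in a training pipeline, the sketch below keeps only an assumed allow-list of fields and pseudonymizes the direct identifier before anything reaches a model. The field names, allow-list, and salt are hypothetical, not a prescribed GDPR mechanism:

```python
import hashlib

# Hypothetical allow-list: only the fields the declared purpose actually
# requires are retained (purpose limitation + data minimization).
ALLOWED_FIELDS = {"age_band", "country", "purchase_category"}

def minimize_record(record: dict) -> dict:
    """Drop every field not needed for the declared purpose."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

def pseudonymize_id(user_id: str, salt: str) -> str:
    """Replace a direct identifier with a salted hash so records can still
    be linked for training without exposing the raw identifier."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:16]

raw = {
    "user_id": "u-1042",
    "email": "someone@example.com",
    "age_band": "25-34",
    "country": "DE",
    "purchase_category": "books",
}

training_row = minimize_record(raw)
training_row["pid"] = pseudonymize_id(raw["user_id"], salt="rotate-me")
print(training_row)  # no email, no raw user_id
```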

Stricter transparency requirements. 

The GDPR requires organizations to provide individuals with clear and concise information about how their personal data is processed.

For AI systems, this could include explaining the logic behind automated decision-making and providing meaningful information about the consequences of such processing.

Given the complexity of AI systems, meeting these transparency requirements can be challenging.
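What “meaningful information about the logic involved” looks like in practice is still debated. For a simple linear scoring model, one plausible approach is to report each input’s contribution to the outcome alongside the decision itself. The weights, features, and threshold below are invented purely for illustration:

```python
# Hypothetical weights for a simple linear scoring model; a real system
# would load these from the trained model, not hard-code them.
WEIGHTS = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}
BIAS = 0.1
THRESHOLD = 0.0

def decide_and_explain(applicant: dict) -> dict:
    """Return the automated decision plus a per-feature breakdown that a
    data subject (or human reviewer) can inspect and challenge."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = BIAS + sum(contributions.values())
    return {
        "approved": score >= THRESHOLD,
        "score": round(score, 3),
        "contributions": {f: round(c, 3) for f, c in contributions.items()},
    }

print(decide_and_explain(
    {"income": 0.8, "debt_ratio": 0.9, "years_employed": 3.0}
))
```

Deep neural networks do not decompose this cleanly, which is precisely why transparency obligations are harder to meet for complex AI systems.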

Focus on data quality and accuracy. 

AI systems are only as good as the data they are trained on. The GDPR’s principles of data accuracy and quality become even more critical in the context of AI, as biased or inaccurate data can lead to discriminatory or unfair outcomes.

Companies must ensure that the data used to train AI models is accurate, relevant, and representative.
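As an illustration, a pipeline might run cheap automated checks for missing values, label imbalance, and unrepresentative samples before training begins. The rows, field names, and thresholds below are hypothetical; a minimal sketch:

```python
from collections import Counter

def audit_training_data(rows, label_key="label", group_key="country"):
    """Cheap pre-training checks for accuracy and representativeness."""
    issues = []
    missing = sum(1 for r in rows if any(v is None for v in r.values()))
    if missing:
        issues.append(f"{missing} row(s) contain missing values")
    labels = Counter(r[label_key] for r in rows)
    # Heavy class imbalance is a common source of unfair outcomes.
    if labels and max(labels.values()) / sum(labels.values()) > 0.9:
        issues.append(f"severe label imbalance: {dict(labels)}")
    groups = Counter(r[group_key] for r in rows)
    if len(groups) == 1:
        issues.append(f"all rows share one {group_key}: not representative")
    return issues

# Hypothetical rows; a real pipeline would stream these from storage.
rows = [
    {"age": 34, "country": "FR", "label": "approve"},
    {"age": None, "country": "FR", "label": "approve"},
    {"age": 29, "country": "FR", "label": "approve"},
]
print(audit_training_data(rows))
```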

Increased need for human oversight. 

The GDPR protects individuals from being subject to solely automated decisions and profiling. This implies a need for human monitoring and intervention in certain AI use cases and compels organizations to rethink their AI governance structures.
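A common governance pattern consistent with that protection is a routing gate that refuses to finalize significant or low-confidence decisions without a human reviewer. The thresholds and decision categories below are illustrative assumptions, not values the GDPR prescribes:

```python
from dataclasses import dataclass

# Illustrative policy values: what counts as "significant" or "low
# confidence" is an organizational choice, not a number set by the GDPR.
CONFIDENCE_FLOOR = 0.85
SIGNIFICANT_DECISIONS = {"loan_denial", "account_termination"}

@dataclass
class Decision:
    kind: str
    outcome: str
    confidence: float

def route(decision: Decision) -> str:
    """Escalate legally significant or uncertain automated decisions."""
    if decision.kind in SIGNIFICANT_DECISIONS:
        return "human_review"  # never finalized solely by machine
    if decision.confidence < CONFIDENCE_FLOOR:
        return "human_review"  # model is unsure; a person decides
    return "auto_finalize"

print(route(Decision("loan_denial", "deny", 0.97)))      # human_review
print(route(Decision("marketing_offer", "send", 0.92)))  # auto_finalize
```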

More intricate data protection impact assessments (DPIAs). 

The GDPR requires organizations to conduct DPIAs when processing is likely to result in a high risk to individuals’ rights and freedoms.

Given the potential risks associated with AI, such as privacy intrusion, biases, and discrimination, DPIAs may become more frequent and complex for AI projects.

Emphasis on privacy by design and by default. 

GDPR’s principles of privacy by design and by default require companies to embed data protection measures into their systems and processes from the outset.

For AI systems, this favors techniques that allow data analysis and model training while preserving individual privacy, such as federated learning, differential privacy, and secure multi-party computation.
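Differential privacy, for instance, can be illustrated with the classic Laplace mechanism: noise calibrated to a query’s sensitivity is added to an aggregate result so that no single person’s data materially changes the output. A minimal sketch, using a hypothetical dataset and an arbitrary privacy budget (epsilon):

```python
import random

def laplace_noise(scale: float) -> float:
    """The difference of two i.i.d. exponentials is Laplace-distributed."""
    lam = 1.0 / scale
    return random.expovariate(lam) - random.expovariate(lam)

def private_count(records, predicate, epsilon=0.5):
    """Epsilon-differentially private count. A count query has sensitivity 1
    (one person's presence changes it by at most 1), so Laplace noise with
    scale 1/epsilon suffices."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(scale=1.0 / epsilon)

# Hypothetical records and an arbitrary illustrative privacy budget.
users = [{"age": a} for a in (23, 31, 45, 52, 38, 27)]
print(private_count(users, lambda u: u["age"] > 30, epsilon=0.5))
```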

Emerging challenges in accountability and responsibility. 

AI systems may involve complex supply chains and multiple parties, making it challenging to pinpoint responsibilities when issues arise.

The GDPR’s provisions on the responsibilities of data controllers and processors may need to be adapted to better reflect how AI systems are developed and deployed.

Regulating AI for the Future

“The future of AI and its regulation in Europe,” Moore posited, “is likely to have a significant impact on the industry globally, much like the GDPR did upon its introduction.” The AI Act, for instance, could become a global benchmark for AI governance, influencing regulations in other jurisdictions and shaping industry practices worldwide.

However, the AI Act includes numerous exceptions that may undermine its overall objective, argued Stalla-Bourdillon. It also delegates standard-setting to various bodies, so much will depend on their work and on the oversight of auditors. Standards and audits, she cautioned, tend to focus primarily on process rather than substance when it comes to protecting privacy.

The rapid adoption of AI will require establishing trust rather than settling for faster models, warned Ottenheimer from Inrupt. “AI development accelerates when it is made substantially safer, just as a fast car needs quality brakes and suspension,” he explained.

“This fosters public trust and enhances competitiveness.” By focusing on safe AI in the AI Act, he added, “Europe now serves as a global model for ethical practices, shaping the future of the industry with tangible societal benefits and setting important benchmarks for individual freedom and progress.”
