Impact of Artificial Intelligence on Data Protection

It’s a fact! Artificial intelligence (AI) is rapidly transforming many sectors, and personal data protection is no exception. This technological advance presents both significant opportunities and challenges, since the use of AI involves handling large volumes of information, including sensitive personal data. This situation raises important questions about the impact of artificial intelligence on data protection: specifically, how it affects the fairness of processing and the privacy and security of that data.

Certainly, machine learning, an essential branch of AI, fundamentally relies on large datasets to train models and make accurate predictions. Frequently, this data includes personal information, which raises concerns about how such data is handled and protected. AI’s ability to analyze and learn from massive amounts of data not only maximizes its potential but can also help detect and mitigate biases and errors in information processing. However, this same process can expose data subjects to various risks if not managed appropriately.

One of the biggest challenges is identifying scenarios where AI can jeopardize individuals’ privacy. There are two critical aspects in this context: automated decisions and learning based on previous experiences. Indeed, AI can make decisions without direct human intervention, and these decisions can significantly affect individuals if based on incorrect or outdated personal data. Furthermore, the AI learning process uses historical data that may contain inherent biases, thus perpetuating inequality and discrimination if not properly supervised.

In a recent post, we identified the influence of AI as one of the current challenges in intellectual property law. But the implications for personal data processing go further.

The General Data Protection Regulation (GDPR) of the European Union establishes a robust legal framework for the protection of personal data, which also applies to AI technologies. In particular, Article 5 of the GDPR sets forth fundamental principles that must be followed when processing personal data. Regarding the impact of artificial intelligence on data protection, these principles ensure that the use of AI complies with the privacy and data protection standards the regulation requires:

  • Lawfulness, fairness, and transparency. Personal data must be processed lawfully, fairly, and transparently, ensuring that individuals understand how their data is used.
  • Purpose limitation. Data must be collected for specific, explicit, and legitimate purposes, and must not be further processed in ways incompatible with those purposes.
  • Data minimization. Only data necessary for the stated purposes should be collected, avoiding the accumulation of unnecessary information (the code sketch after this list illustrates this principle, together with storage limitation).
  • Accuracy. Specifically, it is essential to keep data accurate and up-to-date, correcting any inaccuracies in a timely manner.
  • Storage limitation. Data must be stored only for the time necessary to fulfill the purposes of processing. 
  • Integrity and confidentiality. In this regard, technical and organizational measures must be applied to ensure the security of personal data, protecting it against unauthorized access, destruction, or accidental damage.
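
To make these principles more tangible, here is a minimal sketch in Python of how data minimization and storage limitation might be enforced in code. The field names and the retention period are hypothetical assumptions for illustration, not values taken from the regulation:

```python
from datetime import datetime, timedelta

# Hypothetical example: the only fields needed for the declared purpose
# (say, sending an order confirmation). Everything else is dropped.
REQUIRED_FIELDS = {"customer_id", "email", "order_date"}
RETENTION = timedelta(days=365)  # assumed retention period, not a legal value

def minimize(record: dict) -> dict:
    """Keep only the fields required for the declared purpose (minimization)."""
    return {k: v for k, v in record.items() if k in REQUIRED_FIELDS}

def within_retention(record: dict, now: datetime) -> bool:
    """Discard records older than the declared storage period (storage limitation)."""
    return now - record["order_date"] <= RETENTION

raw = [
    {"customer_id": 1, "email": "a@example.com", "birth_date": "1990-01-01",
     "religion": "unknown", "order_date": datetime(2024, 3, 1)},
]
now = datetime(2024, 6, 1)
processed = [minimize(r) for r in raw if within_retention(r, now)]
print(processed)  # birth_date and religion never enter the processing pipeline
```

The design point is that unnecessary fields never enter the pipeline at all, rather than being filtered out somewhere downstream.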

Furthermore, the GDPR also introduces the concept of proactive accountability, which means that data controllers must not only comply with the aforementioned principles but also be able to demonstrate such compliance, especially when they use AI to process personal data. This implies keeping detailed records of how data is processed, conducting data protection impact assessments, and adopting preventive measures to protect personal data.

Precisely, one of the biggest challenges in applying artificial intelligence to personal data is algorithmic bias, or “Machine Bias,” and how it relates to the principle of lawfulness. That principle requires data controllers to adopt measures to prevent processing that may affect the fundamental rights of individuals. Here a critical question arises: what happens when an AI discriminates against a user in an automated decision? In theory, algorithms are neutral and objective mathematical formulas. In practice, however, they can replicate human prejudices, such as discrimination based on gender or race.

In particular, algorithmic bias manifests when a computer system reflects the values and prejudices of the people involved in its creation and in the collection of data used to train it. AI is effective at identifying patterns and streamlining processes with large volumes of information (Big Data). The problem is that if it is fed biased data, it will inevitably reflect those inclinations. Biases can be classified into three types:

  • Statistical bias. One of the subtypes of this category is selection bias, where the data sample is not representative of the total population or is not sufficiently random. For example, investigating the effectiveness of an educational method based only on the students with the best grades (see the sketch after this list).
  • Cultural bias. This derives from society and language. It reflects stereotypes and cultural prejudices learned over time. Regional stereotypes are a clear example of this type of bias.
  • Cognitive bias. This bias is associated with our personal beliefs, which can “filter” into the way AI systems are used and the data they process. For example, we tend to accept news that matches our opinions, even when it is false.
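
To make the selection-bias example concrete, the following self-contained sketch (all numbers invented for illustration) estimates average exam performance from only the top-scoring students and compares it with a properly random sample:

```python
import random

random.seed(42)

# Hypothetical population: exam scores of 1,000 students (0-100 scale)
population = [random.gauss(65, 15) for _ in range(1000)]

# Biased sample: only the 100 best-scoring students
biased_sample = sorted(population, reverse=True)[:100]

# Random sample of the same size, for comparison
random_sample = random.sample(population, 100)

def mean(xs):
    return sum(xs) / len(xs)

print(f"True population mean: {mean(population):.1f}")
print(f"Biased sample mean:   {mean(biased_sample):.1f}")  # far too optimistic
print(f"Random sample mean:   {mean(random_sample):.1f}")  # close to the truth
```

The biased estimate lands far above the true mean, which is exactly what happens when an AI model is trained on an unrepresentative slice of the population.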

In reality, the impartiality of an AI program can be compromised by machine learning and deep learning methods. These algorithms are trained with large volumes of labeled data. For illustration, a classification algorithm can learn to identify cats in photos if it is provided with millions of cat images. Similarly, a speech recognition algorithm can transcribe spoken language with great accuracy if it is fed with enough voice samples and corresponding transcriptions.

As algorithms receive more labeled data, their performance improves. Even so, this also means that they can develop blind spots based on the specific data they were trained with. These blind spots can result in inadvertent prejudices and discriminations when the algorithm encounters new situations or different data.
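
This blind-spot effect can be reproduced in a few lines. In the sketch below (entirely synthetic data; scikit-learn is assumed to be available), a classifier trained almost exclusively on one group performs markedly worse on an underrepresented group whose pattern differs:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic two-feature data; each group follows a different pattern."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(0, 0.5, n) > 2 * shift).astype(int)
    return X, y

# Training set dominated by group A; group B is barely represented
Xa, ya = make_group(2000, shift=0.0)
Xb, yb = make_group(50, shift=2.5)
X_train = np.vstack([Xa, Xb])
y_train = np.concatenate([ya, yb])

clf = LogisticRegression().fit(X_train, y_train)

# Evaluate on fresh samples from each group
for name, shift in [("group A", 0.0), ("group B", 2.5)]:
    X_test, y_test = make_group(1000, shift)
    print(f"Accuracy on {name}: {clf.score(X_test, y_test):.2f}")
```

The exact numbers are not the point; the mechanism is: the model never saw enough of group B to learn its pattern, which is precisely the kind of inadvertent discrimination described above.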

For all the above, profiling and automated decisions present significant risks to individual rights and freedoms. Often these processes are opaque: people may not be aware that a profile is being created about them, or they may not fully understand the implications of that profiling. Profiling can perpetuate stereotypes and social segregation, pigeonholing people into specific categories and limiting their options, such as the books, music, or news that are suggested to them.

In some cases, decisions based on profiles can be inaccurate, leading to erroneous predictions about a person’s behavior or preferences. This can result in the denial of services and goods, and in unjustified discrimination. 

To minimize these risks, it is critical to implement adequate safeguards that protect the rights and freedoms of individuals. This includes transparency in profiling and automated decision-making, ensuring that individuals understand how their data is used and what the implications are. Likewise, it is essential to continuously monitor algorithms to prevent and correct biases, ensuring fair and equitable data processing.
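
One simple way to operationalize that monitoring, offered purely as an illustrative sketch, is to track approval rates per group and raise an alert when the gap between groups exceeds a chosen threshold. The metric below is a basic demographic-parity difference; the 0.2 threshold is an assumption, not a legal standard:

```python
def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approvals = {}, {}
    for group, approved in decisions:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Demographic-parity difference: max minus min approval rate."""
    return max(rates.values()) - min(rates.values())

# Hypothetical audit log of automated decisions
decisions = ([("A", True)] * 80 + [("A", False)] * 20
             + [("B", True)] * 50 + [("B", False)] * 50)
rates = selection_rates(decisions)
print(rates)                 # {'A': 0.8, 'B': 0.5}
if parity_gap(rates) > 0.2:  # hypothetical alert threshold
    print("Bias alert: review the model and its training data")
```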

In parallel, other measures must be implemented to reduce the negative impact of artificial intelligence on personal data protection. Among them, data controllers must ensure that the AI system is fed with relevant and accurate data, so that it learns to identify and weight the right information. It is also essential that AI does not process sensitive data, such as racial or ethnic origin, political opinions, religion, beliefs, or sexual orientation. In this way, it is feasible to avoid arbitrary processing that may result in discrimination.
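
In practice, that exclusion can be enforced at the point where records are assembled for processing. Below is a minimal sketch using a hypothetical denylist of field names modeled on the GDPR’s special categories of data (Article 9):

```python
# Hypothetical denylist modeled on the GDPR's special categories (Art. 9)
SPECIAL_CATEGORIES = {
    "racial_or_ethnic_origin", "political_opinions", "religion",
    "beliefs", "trade_union_membership", "health", "sexual_orientation",
}

def strip_special_categories(record: dict) -> dict:
    """Remove special-category fields before the record reaches any AI system."""
    blocked = SPECIAL_CATEGORIES & record.keys()
    if blocked:
        # Logging what was dropped supports the accountability principle
        print(f"Dropped special-category fields: {sorted(blocked)}")
    return {k: v for k, v in record.items() if k not in SPECIAL_CATEGORIES}

applicant = {"income": 42000, "religion": "x", "sexual_orientation": "y"}
print(strip_special_categories(applicant))  # only {'income': 42000} remains
```

Note that removing explicit fields does not remove proxies (a postcode, for instance, can correlate with ethnic origin), which is one more reason the continuous monitoring described above remains necessary.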

Article 22 of the General Data Protection Regulation (GDPR) states that European citizens have the right not to be subject to decisions based solely on automated processing, including profiling, where those decisions produce legal effects concerning them or similarly significantly affect them. Consequently, if a credit application, for example, is automatically denied by an AI system, the data subject has the right to object to this processing. Proper processing involves informing the user that the decision was automated, allowing them to express their point of view, and enabling them to challenge the decision and request human intervention in its review.
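
As a purely illustrative sketch (the workflow and all names are hypothetical, not a mechanism prescribed by the GDPR), an Article 22-aware pipeline might disclose that a decision was automated and let the data subject trigger a human review:

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    outcome: str                  # e.g., "credit application denied"
    automated: bool = True        # must be disclosed to the data subject
    human_reviewed: bool = False
    subject_comments: list = field(default_factory=list)

def notify_subject(decision: Decision) -> str:
    """Inform the person that the decision was automated and of their rights."""
    return (f"Decision: {decision.outcome}. This decision was made by automated "
            f"means. You may express your point of view, contest the decision, "
            f"and request human intervention.")

def request_human_review(decision: Decision, comment: str) -> Decision:
    """Route a contested decision to a human reviewer."""
    decision.subject_comments.append(comment)
    decision.human_reviewed = True  # a person now re-examines the algorithm's output
    return decision

d = Decision(outcome="credit application denied")
print(notify_subject(d))
d = request_human_review(d, "My income data is outdated")
print(d.human_reviewed)  # True: human intervention has been triggered
```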

In March 2024, the European Parliament approved the Artificial Intelligence Act, a significant advance in the regulation of AI worldwide. Faced with the rapid progress and integration of AI across multiple sectors, the European Union (EU) recognized the need for a legal framework that fosters innovation and technological development while protecting the safety, privacy, and fundamental rights of citizens.

In this regard, the aforementioned regulation adopts a risk-based approach, classifying AI systems according to their potential impact on society and individuals. High-risk systems, such as those used in surveillance or that influence judicial decisions, will be subject to strict requirements before their implementation. The key points of the Act are:

  • Transparency. AI systems must be designed so that their operations are understandable to users, thus ensuring greater transparency.
  • Accountability. Legislation will establish clear rules on liability in case an AI system causes harm or damage.
  • Security and privacy. In this regard, the law requires AI systems to be secure and to respect the privacy of the data they process.
  • Supervision. Supervision mechanisms will be implemented to ensure continuous compliance with the law.

For this reason, companies and developers who create or use AI systems must comply with a series of legal obligations. These include conducting impact assessments and implementing corrective measures in case risks are identified. The law also establishes significant sanctions for those who do not comply with the regulations, underlining the seriousness with which the EU addresses this issue.

Although our specialty at ISERN is the registration and protection of patents and trademarks, we can also advise you on the legality of your brand’s actions in processing your clients’ personal data.

Therefore, if you use AI tools to process personal data, you can consult us.
