Meaningfully explaining logic: Challenges in explaining automated decision making

Viewpoints
June 15, 2022
5 minutes

The reliance on technologies such as artificial intelligence to sift through large volumes of data and automate routine tasks has meant that decisions are increasingly made on the basis of automated decision making (ADM).

However, delegating tasks does not mean disavowing responsibility. In fact, the use of ADM carries its own unique set of restrictions and obligations, particularly when the processing falls within the remit of Art. 22 GDPR. In addition, the ongoing impetus to increase transparency in relation to processing based on AI and similar technologies may be translating into heftier transparency obligations for processing based on ADM, as recent cases demonstrate.

Background

The right not to be subject to a significant decision made solely on the basis of ADM is not a new right that came with the advent of the GDPR. At the pan-European level, a substantially similar right was present in the previous Data Protection Directive. At that time, jurisprudence on this topic was scant. However, perhaps by dint of technological advancement in recent years or the push to regulate AI, several cases pertaining to this right have since emerged. With the EU’s AI Act on the horizon, this topic is gaining in importance and coincides with other transparency guidance issued by regulators, such as the UK ICO’s AI explainability guidance.

Transparency obligations are set at a higher threshold for ADM

The GDPR imposes general transparency obligations. In general, these involve the proactive supply of information to data subjects (Art. 13 and 14) and the reactive supply of information in response to a data subject access request (DSAR) (Art. 15). When such processing falls within the scope of Art. 22, organisations are required not only to disclose the fact that they are processing personal data through the use of profiling or ADM, but also to provide "meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject". These ADM obligations apply in the context of both proactive and reactive information obligations, as the same wording is replicated in Art. 13(2)(f), Art. 14(2)(g) and Art. 15(1)(h).

What this means in practice has recently been highlighted in a case involving credit scoring. In March 2022, the Austrian courts referred several questions to the CJEU, including:

  • In the case of profiling, does meaningful information about the "logic involved" include, in particular, (1) the disclosure of the data subject’s processed data, (2) the disclosure of the parts of the algorithm on which the profiling is based that are necessary to provide transparency, and (3) the information relevant to establishing the connection between the processed information and the rating arrived at?
  • When responding to a DSAR, what content requirements must be satisfied for the information provided to be regarded as sufficiently "meaningful"? When profiling is involved, what are the minimum information requirements? Do data subjects also have a right to request the pseudonymised personal data of other data subjects in order to ensure the accuracy of the ADM?
  • What is the impact and resulting procedure when such information falls within a trade secret?

Some EU supervisory authorities have also occasionally found that additional transparency obligations may apply even where ADM does not meet the Art. 22 threshold. These can range from an obligation to proactively supply information on how the data subject's profiles were created and the decisions made on the basis of such profiles, to reactive/DSAR obligations to supply information regarding the parameters/input variables, their weighting in the profiling, how the parameters/input variables were determined (e.g. through statistical analysis), an explanation of why the relevant data subject was assigned a specific result, and a list of possible profile categories.
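To make the scope of such disclosures more concrete, the sketch below illustrates, for a purely hypothetical weighted scoring model (all variable names, weights and thresholds are invented for illustration), how the input variables, their weighting and a per-subject breakdown linking the processed data to the result might be recorded:

```python
# Illustrative sketch only: a hypothetical linear scoring model, used to show
# the kind of "meaningful information about the logic involved" a controller
# might assemble. All variable names, weights and thresholds are invented.

# Hypothetical input variables and their weighting in the profiling model.
WEIGHTS = {
    "payment_history": 0.5,     # share of on-time payments (0.0-1.0)
    "income_stability": 0.3,    # tenure-based stability measure (0.0-1.0)
    "credit_utilisation": 0.2,  # 1.0 minus utilisation ratio (0.0-1.0)
}

SCORE_THRESHOLD = 0.6  # scores at or above this threshold are approved


def score_and_explain(inputs: dict) -> dict:
    """Compute a score and a per-variable breakdown of its contribution,
    i.e. the link between the processed data and the result arrived at."""
    contributions = {name: WEIGHTS[name] * inputs[name] for name in WEIGHTS}
    score = sum(contributions.values())
    return {
        "inputs": inputs,                # the data subject's processed data
        "weights": WEIGHTS,              # the weighting of each input variable
        "contributions": contributions,  # how each variable affected the score
        "score": round(score, 3),
        "decision": "approved" if score >= SCORE_THRESHOLD else "declined",
    }


# Example: an explanation record for one fictional data subject.
print(score_and_explain({
    "payment_history": 0.9,
    "income_stability": 0.4,
    "credit_utilisation": 0.5,
}))
```

Even a simple record of this kind touches on the elements referred to above: the input variables used, their relative weighting, and why a particular data subject was assigned a specific result.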

The EDPB (the European Data Protection Board, the EU body that issues guidance on the consistent application of the GDPR) currently recommends, as a matter of good practice, the disclosure of the following (a minimal sketch of how these points might be captured follows the list):

  • the categories of data that have been or will be used in the profiling or ADM
  • why these categories were chosen
  • how any profile used in the ADM process is built
  • why this profile is relevant to the ADM process
  • how it is used for a decision concerning the data subject
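
As a rough illustration only, these good-practice points could be captured in a structured notice record along the following lines (the field names and content are our own invention, not prescribed by the EDPB):

```python
# Illustrative only: one possible way to structure an ADM transparency notice
# around the EDPB's good-practice points. Field names and values are invented.
adm_notice = {
    # the categories of data used in the profiling or ADM
    "data_categories": ["payment history", "income data"],
    # why these categories were chosen
    "why_these_categories": "Historically predictive of repayment behaviour.",
    # how any profile used in the ADM process is built
    "how_profile_is_built": "A weighted score computed from the categories above.",
    # why this profile is relevant to the ADM process
    "why_profile_is_relevant": "The score estimates the risk the decision addresses.",
    # how it is used for a decision concerning the data subject
    "how_profile_is_used": "Applications scoring below a set threshold are declined.",
}
```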

Additional challenges of ADM

1) A qualified prohibition rather than a right

Despite being phrased as a right, Art. 22 is, in fact, a qualified prohibition on certain types of processing rather than a right that data subjects may exercise over their personal data. As clarified by the EDPB, this "right" establishes a "general prohibition for decision-making based solely on automated processing" that applies at all times, regardless of whether the data subject actively invokes it of their own accord.

Misunderstanding this distinction potentially means automatically falling foul of the GDPR. An organisation therefore cannot proceed with ADM processing unless one of the exceptions under Art. 22(2) applies. Where explicit consent is relied on, the transparency obligations mentioned above become all the more important, as consent can be invalidated if insufficient information on the ADM processing was provided at the time consent was given.

2) Potential for multiple cascading infringements of data protection principles

The use of ADM may have an immediate impact on multiple data protection principles. As noted above, the failure to provide adequate information about ADM processing will breach the GDPR's transparency principle. In addition, the choice and use of certain parameters/input variables (e.g. background, nationality, socio-economic status) may breach the GDPR's fairness principle, and if such parameters have a material impact on the decision, a further breach of the GDPR's accuracy principle may occur. As the recent fine on Clearview AI demonstrates, the use of public domain information may also breach the fairness principle, particularly where such information is used in a way that runs contrary to the expectations of data subjects. Finally, considering whether such information is necessary for the purposes of the processing is also required for compliance with the data minimisation principle.

Takeaways

As often recommended, organisations should carefully consider their deployment of ADM from start to finish and position themselves to demonstrate compliance. To this end, organisations should conduct a DPIA, especially as the use of automated processing frequently features in European and UK regulatory guidance as an example of high-risk processing requiring a DPIA.

Publicly available guidance, such as the ICO's explainability guidance (mentioned above) and auditing framework, should also be consulted as reference points. As the Austrian case mentioned above is still at the referral stage and the EU supervisory authorities that take an expansive approach to the ADM obligations referred to above remain in the minority, there is, for now, no immediate impact for organisations that employ ADM. However, the outcome of the CJEU referral should be closely monitored, as should whether other EU supervisory authorities begin to take a broader interpretation of ADM transparency obligations. It also remains to be seen whether the UK’s ICO will diverge on this interpretation. We are closely watching this space for updates.