The use of AI systems can bring many benefits to businesses and to sectors of society such as healthcare and retail. However, such systems can also create data protection risks, including:

  • the potential for bias and discrimination arising in the data used by such systems
  • possible security issues
  • collection of excessive and unnecessary personal data
  • use of personal data for purposes incompatible with those for which the data were originally collected.

On 20 July 2021, the UK Information Commissioner’s Office (ICO) announced the launch of a beta version of its AI and Data Protection Risk Toolkit. The toolkit is intended to help organisations that use artificial intelligence (AI) to process personal data understand the associated data protection risks and how to comply with applicable data protection laws.

The alpha version of the toolkit was launched earlier this year. The next phase of development will include testing the toolkit on live AI systems that process personal data, to assess how helpful it is for organisations. The ICO plans to publish the final version in December 2021.

Pending publication of the final version, the ICO will continue to engage with interested parties and is seeking comments on how to improve the beta version of the toolkit. The ICO is also keen to hear from organisations interested in helping to test the new version of the toolkit on live AI applications.

The toolkit addresses best practices for ensuring that AI systems adhere to data protection requirements. It is likely to be of interest both to those responsible for compliance (e.g. data protection officers, senior management and general counsel) and to technology specialists (such as software developers, engineers, machine learning developers and data scientists).

The toolkit addresses four phases of the AI lifecycle, namely: (i) business requirements and design; (ii) data acquisition and preparation; (iii) training and testing; and (iv) deployment and monitoring.  The toolkit also includes information on procurement.

The toolkit is designed to be flexible and is divided into risk statements and risk domain areas. Each risk domain covers a significant area of data protection law, such as: accountability and governance; lawfulness and purpose limitation; fairness (statistical accuracy, bias and discrimination); transparency; security; data minimisation; individual rights; and meaningful human review. Practical steps are provided for each risk area to help organisations address the risks and demonstrate that they have complied with the law.

The toolkit has been developed alongside, and mirrors, the auditing framework being developed by the ICO’s internal assurance and investigation teams. It provides a clear approach to auditing AI applications and ensuring that they comply with applicable data protection laws. The guide to using the toolkit notes that organisations using it can be confident that they are complying with applicable data protection law in their use of AI.

While the use of AI systems in many different sectors can be beneficial, those benefits clearly need to be weighed against the data protection risks such systems can create. It will be interesting to see what the final version of the toolkit looks like when it is published later this year. The toolkit should give organisations that use AI systems practical assistance in mitigating the data protection risks those systems create and in ensuring that their use complies with applicable data protection laws.