UK introduces light-touch AI regulation

Viewpoints
March 30, 2023
3 minutes

2023 is unquestionably the Year of AI.  General-purpose tools like ChatGPT have made the public keenly aware of and interested in AI, even though the novelty of those tools will fade as AI technology becomes part of everyday life.

You would expect that shift to be driven by the products and services that most affect people (such as the use of AI in recruitment, healthcare and finance), but because applications that seem closer to science fiction may be just around the corner, it's not always easy to predict where the next innovation will come from.  And that is incredibly exciting.

The regulation of AI is also exciting, although I accept that I may be alone in thinking that.  On Wednesday, the UK Government released a long-awaited AI white paper setting out how it plans to regulate these technologies.  The press release is here and the white paper is linked below.

Even a cursory read of the documents makes clear that the Government views AI through the same pro-business lens that has been a feature of the UK's post-Brexit policy making.  References to innovation, job creation, and the need to avoid placing undue burdens on businesses or imposing onerous legislative requirements hammer home the point that this will be a light-touch regulatory regime.

Before we think about what that means for businesses, here are the key takeaways from the documents:

  • The Government does not plan to introduce legislation to govern AI, nor will it create a new regulator to oversee the technology.  Instead, a non-statutory regulatory framework will be applied by existing regulators — the Financial Conduct Authority, the Information Commissioner's Office, and so on.
  • The framework is based on five key principles that regulators and organisations must consider in their oversight and use of AI.
    • Safety, Security and Robustness — AI must function in a secure, safe and robust way where risks are carefully managed.
    • Transparency and Explainability — Organisations that develop and deploy AI must communicate when and how it is used and explain a system's decision-making process at a level of detail appropriate to the relevant risks.
    • Fairness — AI should be used in a way which complies with existing laws (e.g., the Equality Act 2010 and the UK GDPR) and must not discriminate against individuals or create unfair commercial outcomes.
    • Accountability and Governance — Organisations must have appropriate oversight of the way that AI is being used and clear accountability for the outcomes.
    • Contestability and Redress — People need to have clear routes to dispute harmful outcomes or decisions generated by AI.
  • Over the next 12 months, regulators will issue guidance to organisations in their respective sectors setting out how to implement the above principles.

The Government's decision to rely on a regulatory framework rather than introduce new AI laws means that the UK risks swimming against the tide of global sentiment: Europe, China and the U.S. are all starting to put in place laws that assess AI technologies based on the risks they pose to individuals rather than their business benefits.  Clearly, governments are free to regulate as they think best, and there is something to be said for a more flexible regulatory approach that may in some cases be better suited to adapting to fast-moving technologies such as AI.

The challenge for British businesses that operate anywhere outside the UK, especially in Europe (and likely the U.S.), is that their products and services will be subject to much stricter laws in those countries.

The UK's more permissive rules therefore become something of a false paradise.  Indeed, many of those businesses will find it operationally easier to design AI applications that meet the highest regulatory bar to which they are subject, and that's certainly no bad thing from a legal, ethical and reputational perspective.

But it does mean that multinational businesses shouldn't get too excited about the UK's laissez-faire approach.  We're not even at the end of the beginning of AI regulation: the EU's AI Act is still rumbling through the Strasbourg long grass, and most countries are still grappling with how best to oversee these technologies.  That said, the UK looks set to be something of an outlier in regulating AI, for better and for worse.