Financial services: banking on AI?

Viewpoints
May 22, 2023
3 minutes

Last week I had the pleasure of speaking on a panel at the NICE Actimize conference in London, where we explored the applications, benefits and challenges of using AI to tackle financial crime.

The Gartner hype cycle (see image below) might suggest that AI is close to reaching the “Peak of Inflated Expectations”, given that we are already well into the mass media hype stage.  Indeed, you will have seen any number of articles along the lines of “Will AI be the future of [insert trivial use case here]?”  But I would argue that we are still much closer to the bottom of the curve — not even the end of the beginning.  

Among the industries for which the development of AI will prove to be an event comparable in importance to the industrial revolution (healthcare, logistics, cybersecurity, memes), the financial services sector is uniquely placed. Although it has used machine learning, automated decision-making and other AI techniques for a number of years, the industry remains ripe for further AI-enabled development.

These counterpoints are discussed in a recent Financial Times article written by the Nasdaq chairwoman and chief executive, Adena Friedman.  She focuses on macro issues for the finance industry, such as using AI to maintain orderly markets and allowing responsible data sharing to combat financial crime.  It’s a good read for a helicopter view of the topic (link below).  But for our purposes I’d like to drill down a little further.

Regulators, companies and consumers are currently focused on generative AI tools such as ChatGPT (perhaps you’ve heard of it?), and it’s certainly the case that chatbots are being used by bad actors in a range of ways — from replacing the decades-old Nigerian Prince scam with more sophisticated phishing emails, to creating tools that converse convincingly with bank representatives and customers.

But financial institutions are also deploying AI in ways that go beyond the low-hanging-fruit use cases — and the pace of that deployment is increasing by the day.  To take three of the examples that we see most commonly:

  • Real-time detection.  Spotting anomalies in a customer’s behavioural patterns, or transactions that exhibit high-risk characteristics, as they happen (or as soon as possible after the fact) is the holy grail for many financial institutions, and understandably so.  In addition to combatting identity fraud, this information feeds into and can help to streamline financial institutions’ obligations to file SARs (suspicious activity reports).  SARs are distinct from their closely related acronym, DSARs (data subject access requests), but most folks whose job involves filing the former — or responding to the latter, for that matter — will likely appreciate all the help that they can get.
  • Predictive analytics.  These technologies are being used in respect of customers, employees and third parties, to combat the bad (fraud) and to enable the good (document analysis, lending optimisation) alike.  However, given the — sometimes negative — effects that they can have on individuals, organisations should be alive to the Art. 22 EU / UK GDPR rules around automated decision-making and profiling, including the potential for individuals to challenge decisions made without human involvement.
  • Increased personalisation.  The days of the local bank manager are, for most of us, gone forever.  Challenger banks and fintech providers have sought to fill that void — usually pretty well — by offering customers digital channels and services that feel increasingly personalised, including through the use of chatbots, tailored investing recommendations and useful push notifications.  Again, Art. 22 rules on profiling will likely come into play here, not to mention restrictions on sending electronic marketing.
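To make the first of those use cases concrete, here is a deliberately minimal sketch of the real-time detection idea — a simple z-score check on transaction amounts against a customer’s history.  The function name and the three-standard-deviation threshold are my own illustrative choices; production systems use far richer behavioural models and many more signals than amount alone.

```python
from statistics import mean, stdev

def flag_anomaly(history, new_amount, threshold=3.0):
    """Flag a transaction whose amount sits more than `threshold`
    standard deviations away from the customer's historical spend."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        # No variation in history: anything different is unusual.
        return new_amount != mu
    z = abs(new_amount - mu) / sigma
    return z > threshold

# A customer who usually spends 25-55 suddenly spends 5,000.
history = [25.0, 40.0, 32.0, 55.0, 28.0, 47.0]
print(flag_anomaly(history, 5000.0))  # True — worth a closer look
print(flag_anomaly(history, 45.0))    # False — within the normal range
```

A flagged transaction would not be blocked automatically on this basis alone; it would typically be routed to a human analyst or a downstream model, which is also where the SAR-filing workflow mentioned above picks up.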

Each of these use cases is also subject to the legal and regulatory AI guardrails that regulators and lawmakers are currently in the process of defining, as well as existing rules around data protection, employment and consumer protection. Unsurprisingly, one of the audience questions that came up regularly at the NICE conference was whether AI regulation is, and will be, able to keep up with the speed of technological development. 

It should also come as no surprise that the answer was somewhere between “unlikely” and “no”.  Does the flexibility of the UK’s light-touch approach arguably have some merit here?  Critics would say that the path being taken by the UK does not amount to regulation, and so doesn’t count.  In any event, it would be hard to imagine that financial institutions in the UK, Europe and beyond will not continue to lean into their use of AI, meaning that the interplay — and tension — between innovation and regulation will continue for some time to come.