Tech experts urge caution in the development of advanced AI systems

Viewpoints
April 3, 2023
2 minutes

There is no doubt that the development and deployment of advanced artificial intelligence (AI) systems have accelerated rapidly in recent months. Despite the many potential economic and social benefits of such systems, some experts are sounding a note of caution.

On 29 March 2023, the Future of Life Institute (a non-profit campaign organisation) published an open letter, which quickly garnered more than 1,100 signatories from across the technology sector and academic community. The letter calls for a six-month pause in the development of powerful advanced AI systems and urges governments to intervene, if necessary, to ensure that this happens.

The letter highlighted the recent race between AI labs to produce increasingly sophisticated AI technologies, expressing concern about how well such technologies are understood, and whether they can be kept in check effectively, even by those who designed them.

Many organisations are launching AI systems onto the market, while others are incorporating AI components into their existing products and tools, giving the general public much greater access to such technologies.

Some commentators are concerned about the risks that such rapid development and widespread use of advanced AI systems may pose to society and individuals, including concerns around safety, privacy and data protection, decision making and human rights.

The letter urges the implementation of shared safety protocols, scrutinised by independent experts, for the development of such systems, noting that "AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal".

Some jurisdictions, notably the European Union, appear to share this more cautious outlook and are leaning towards more stringent legislation governing the development of advanced AI systems. However, as demonstrated by the UK Government's white paper, "A pro-innovation approach to AI regulation", also published on 29 March, the UK seems to be adopting a somewhat different strategy when it comes to regulating AI.

To encourage innovation, growth and job creation, the UK is not proposing to introduce extensive legislation to regulate AI at this stage, but will instead adopt a more flexible approach based on five principles: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress.

Additionally, the UK is not planning to appoint a new regulator specifically to oversee AI. Instead, that responsibility will be delegated to various existing regulators, who are expected to develop bespoke approaches to the way AI is used in their specialist areas. It is hoped that this approach will allow the UK to respond quickly to fast-paced developments in AI technology, realising the benefits that AI can bring while protecting the public.

It will be interesting to see whether a more cautious attitude towards AI regulation ultimately becomes the preferred approach globally, or whether the UK's more agile model proves more workable in practice. Either way, it is difficult to see how the seemingly unstoppable juggernaut of advanced AI will be slowed for long.