EU proposals may limit the use of artificial intelligence

Viewpoints
April 20, 2021
2 minutes

The European Commission (EC) may be set to propose extensive new legislation, potentially later this week, which, among other things, would ban the use of facial recognition technology for surveillance purposes and the use of algorithms that influence human behaviour, according to recently leaked draft documents. The proposals would also introduce new rules governing high-risk artificial intelligence (AI).

Although the use of AI systems is regarded as beneficial in many areas of society, the use of AI in some contexts can be controversial. For example, the use of algorithms in employment-related decision-making, allegedly based solely on automated processing of personal data (including profiling), has recently been challenged under the GDPR in the Dutch courts, although that decision is likely to be contested.

Proposed categories of banned AI systems include those used for general indiscriminate surveillance; those used for social scoring; those that influence human behaviour, opinions or decisions in ways that result in individuals acting to their own disadvantage; and those that manipulate information or predictions to exploit people's weaknesses and vulnerabilities.

Military uses of AI, as well as systems used by authorities to maintain public security, would not be subject to the rules in certain circumstances, although the rules would cover algorithms used by police forces.

Concern has been expressed that certain aspects of the proposed new legislation are currently somewhat vague and difficult to interpret.

In echoes of the financial penalties that can be imposed under the EU General Data Protection Regulation (GDPR), organisations that fail to comply with the new rules could be fined up to €20 million or up to 4% of their turnover. EU member states would also need to increase supervision of high-risk AI systems in various ways and ensure that such systems are subject to human oversight. Examples of AI systems likely to be regarded as high-risk include systems used to predict crime or assist with judicial decisions, algorithms used in recruitment and systems that measure creditworthiness, among others.

The EU faces the challenging task of striking a balance between encouraging technological development in the area of AI and protecting the data protection rights and freedoms of individuals affected by AI systems. It will be interesting to review the official draft of the proposed new legislation once it is published, and to assess the extent to which these competing interests have been successfully reconciled.