
Navigating AI regulation: A guide for UK and EU financial services firms

The AI regulatory landscape in the UK and EU is rapidly evolving, with the EU’s stringent AI Act categorising systems by risk while the UK adopts a pro-innovation approach.

How can firms operating across both regions navigate these divergent frameworks to ensure compliance and avoid costly sanctions?

  • Kay Chand
  • June 20, 2024
  • 4 minutes

It is early days for the AI regulatory landscape across the UK and EU.

What we know already is that the EU has adopted the AI Act, which regulates the AI technology itself from a risk-based perspective. It categorises AI systems as minimal risk, limited risk, high risk, or unacceptable risk and therefore prohibited. The majority of the obligations fall on the providers (developers) of high-risk AI systems, although procurers of such systems also have obligations imposed on them.

The prohibited systems include those that:

  1. Deploy subliminal, manipulative, or deceptive techniques to distort behaviour and impair informed decision-making, causing significant harm.
  2. Exploit vulnerabilities related to age, disability, or socio-economic circumstances to distort behaviour, causing significant harm.
  3. Use biometric categorisation systems inferring sensitive attributes (race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation).
  4. Implement social scoring systems, i.e., evaluating or classifying individuals or groups based on social behaviour or personal traits, causing detrimental or unfavourable treatment of those people.
  5. Infer emotions in workplaces or educational institutions.

The UK, on the other hand, is taking a pro-innovation approach based on the use cases of the AI technology. It is also not, at the current time, putting regulation on a statutory footing, instead leaving the existing regulators to police and enforce it. However, underpinning the UK’s approach to AI are five pillars:

  1. Safety, security, and robustness.
  2. Appropriate transparency and explainability.
  3. Fairness.
  4. Accountability and governance.
  5. Contestability and redress.

The Financial Conduct Authority acknowledges that whilst existing regulations address the majority of these pillars, more work is to be done, especially on appropriate transparency and explainability.

Coordination among UK regulators

Accordingly, the FCA has not ruled out further regulation coming down the track. For the moment, it is exploring in further detail the full capabilities of AI and large language models so that any future regulation is tailored to address and mitigate identified risks in an appropriate and proportionate manner, reflecting a pro-innovation approach.

Other regulators are also likely to implement further rules and guidance that will apply equally to firms regulated by the FCA. For example, the ICO has published its guidance on AI and data protection.

The Bank of England, PRA, and FCA are currently assessing their approach to Critical Third Parties, which, amongst other service providers, are intended to include large providers of technology and technology services. The proposed requirements would manage the potential risks to the stability of, or confidence in, the UK financial system that may arise from a failure in, or disruption to, the services that Critical Third Parties provide. It is not a regime specific to AI, but it is broad enough to encompass some of the pillars.

The Digital Regulation Cooperation Forum (“DRCF”) has also been established, bringing together four UK regulators (the FCA, the Competition and Markets Authority, the Information Commissioner’s Office, and the Office of Communications) to deliver a coherent approach to digital regulation.

Whilst I mentioned above that the Government has no current intention to legislate for AI technology, the Artificial Intelligence (Regulation) Bill, a private member’s bill, is currently going through the parliamentary process. The current thinking is that the bill will not succeed, but it is an indicator of more change to come.

Considerations for cross-jurisdictional firms

The divergent approaches across the UK and the EU mean that firms operating on a cross-jurisdictional basis will need to consider their AI deployment strategy carefully. That may mean deploying different systems in the UK and the EU, or carrying out detailed analysis early on to ensure proposed AI systems do not fall foul of any governing legislation or regulation. Either way, it will add further time and cost to procurement plans.

The legislative and regulatory landscape is therefore still evolving, and firms should keep a close eye on it to ensure they do not inadvertently find themselves in breach, leading not only to potential sanctions but also to wasted time, cost, and effort on AI procurements that do not comply with the regulatory regime.


Kay Chand heads up Browne Jacobson’s digital and sourcing practice across the firm’s regional offices. Kay has nearly 20 years’ experience helping clients deliver their digital and business processing procurement and sourcing strategies, having advised on some of the most novel, large-scale, and complex transactions.