Regulators on both sides of the Channel look set to maintain a light-touch, risk-based approach to AI technologies in order to foster nascent businesses, senior consultants at EY and Linklaters said on Wednesday.
Speaking at a City and Financial conference, panelists agreed that the current regulatory debate takes into account the evolving nature of AI technologies and the need for a high degree of flexibility.
“AI is an ongoing area of development. To a large extent, we are still very early in the understanding of how AI is being used,” said Dr. Ansgar Koene, global ethics, AI and regulatory leader at EY.
“There’s a need [for] a more flexible approach to AI.”
“We need to have a regulatory framework that is sufficiently flexible so that [it] can provide general principles for those who are earlier in their maturity cycle, and more detailed guidance and case studies for those who are further down the line,” added Julian Cunningham-Day, global co-head of fintech at Linklaters.
“Adopting a light-touch approach that allows for evolution and flexibility may well be the way to ensure that we attempt to create a regulatory environment that keeps pace with technological change.”
In April, the European Commission laid out proposals to categorise AI systems based on their level of risk – with those identified as carrying “limited” or “minimal” risk subject to light-touch requirements.
The proposals, which are currently being discussed by the EU Parliament and the Council, focus on use rather than on the technology itself, noted Koene.
“The legislation doesn’t really impose a lot of restrictions. It’s a broad definition of AI together with a focus on intended use,” he said.
“As long as your intended use case [rather than the technology] doesn’t pose a direct threat to safety, security or fundamental rights, then the legislation places few restrictions on you.”
From their side, UK officials have recently unveiled plans to set out a “pro-innovation” framework next year that will favour public investment and “AI standards hubs” over direct regulation.
Cunningham-Day went further, arguing that the risk-based approach currently favoured is also an acknowledgment that complete compliance is unachievable, given the patchwork of regulations and laws that already touch on AI.
Much of the regulation around AI so far – such as the EU’s General Data Protection Regulation, Digital Services Act and Digital Markets Act – has been focused on data (the key input for AI).
“100% achievement of every guidance and complete removal of risk is impossible,” he said, pointing out that “firms that are regulated under both data protection and financial services regulations [will] need to adopt a risk-based approach that focuses on key risks.”
At the same time, the multiplying range of AI uses in financial services has recently prompted the International Organization of Securities Commissions (IOSCO) to call for stronger processes to safeguard consumers and the overall financial system. It has also spurred industry-wide calls to bring tech companies under an equal level of monitoring and scrutiny.
Given the many applications to which AI can be put, Dr. Frank De Jonghe, EY’s EMEIA quantitative & analytics services leader for financial services, noted that the EU draft regulation already anticipates misuse of AI, and that potential misuse needs to be factored into risk assessments.
“[Firms] should assume that the tools they develop may get in the hands of somebody who’s less familiar with it,” he argued, “not necessarily intentionally, but because of the normal course of business, [they] use it outside the intended range.”
This is also an issue in the UK, said Cunningham-Day, where senior-level staff are under an onus to understand technology risks.
“AI presents a significant challenge to that role – it requires significant technological education for those in senior manager positions.”