IOSCO calls on asset managers and intermediaries for stronger AI and ML processes

  • Anjali Kochhar
  • September 9, 2021
  • 7 minutes

Amid the increased use of Artificial Intelligence (AI) and Machine Learning (ML) by asset managers and market intermediaries, the International Organization of Securities Commissions (IOSCO) warned that the technologies could pose or amplify a number of risks potentially undermining consumer protection and the overall functioning of key processes.

The global financial-standards setter released a report on Tuesday showing that, with the rise of electronic trading platforms and the ever-increasing availability of data, investment companies and market intermediaries have come to rely more heavily on AI and ML in their day-to-day operations – to optimise portfolio management as well as advisory and support services, client identification and monitoring.

With this increased use, the expanding range of related risks has prompted regulators to sharpen their focus on the role and control of AI and ML in the broader financial markets.

One of the most heated debates in the sector at present, for example, concerns the outsourcing of services to technology providers. IOSCO noted that firms use external providers of AI and ML to varying extents: while some larger firms rely on partial collaboration, smaller firms tend to outsource entire functions and products to external providers, the body said.

Increasing reliance on a small number of ever-larger firms in this space is heightening outsourcing and concentration risks, the organisation warned.

Risks on the rise: tech market concentration, poor transparency and biased algorithms

“Use of external providers for AI and ML solutions, including data pools and cloud computing may raise concerns about data privacy, cybersecurity and operational risk in addition to concerns about concentrations of certain providers, particularly where firms may not have the appropriate level of expertise to conduct effective due diligence assessments ahead of utilising their services,” the report said.

Overall, IOSCO said there is a clear need for increased transparency in firms’ use of AI and ML to improve public understanding and confidence in the use of the technology.

“It is important that firms appropriately disclose information about their service offerings, to help clients understand the nature and risks of products and service offerings, so that they can make informed decisions.

“Applying unexplainable ML algorithms to refine a trading strategy could expose the firm to unacceptable levels of legal and regulatory risk. Firms offering automated investment services, including robo advice and automated portfolio construction should appropriately disclose the nature of the use of ML and the automation in their service offering,” the report said.
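To make the explainability point concrete: one model-agnostic way a firm might document what drives an algorithm's decisions is permutation importance, which measures how much performance degrades when each input is shuffled. The sketch below is a minimal illustration using scikit-learn; the features, data and model are hypothetical and not drawn from the report.

```python
# Minimal sketch: a model-agnostic explainability check using permutation
# importance (scikit-learn). All data and feature names are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))  # e.g. momentum, volatility, spread (invented)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffling a feature breaks its link to the target, so a large accuracy
# drop flags the features the model actually relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, mean, std in zip(["momentum", "volatility", "spread"],
                           result.importances_mean, result.importances_std):
    print(f"{name}: {mean:.3f} +/- {std:.3f}")
```

Output of this kind is the sort of decision-level evidence that can back up the disclosures the report calls for, though it is only one of several explainability techniques a firm might adopt.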

However, the organisation also conceded that an excessively granular level of transparency could “create confusion or opportunities for individuals to exploit or manipulate the models”.

On this point, it noted that the topic of transparency drew conflicting views from the asset managers and market intermediaries it surveyed. While some firms agreed that more transparency should be provided on investment decisions, most respondents supported only a generic level of transparency, and some were explicitly against disclosing the specific approaches implemented. Some also suggested that disclosures to institutional and retail clients might differ.

Another key issue related to AI and ML governance: IOSCO’s survey found that firms rely on internal oversight and governance arrangements that may lack adequate dedicated resources.

For example, while some businesses rely on senior leadership for oversight and governance, for others it remained a departmental or business-line consideration.

“Many firms did not employ specific compliance personnel with the appropriate programming background to appropriately challenge and oversee the development of ML algorithms.

“With much of the technology still at an experimental stage, the techniques and toolkits at the disposal of compliance and oversight (risk and internal audit) functions currently seem limited.

“In some cases, this is compounded by poor record keeping, resulting in limited compliance visibility as to which specific business functions are reliant on AI and ML at any given point in time,” the report said.

Similarly, IOSCO’s survey showed that in most cases there is no established framework specifically for developing AI and ML. Instead, many firms apply the same development and testing frameworks they use for traditional algorithms, alongside standard system-development management processes, which may pose several risks.

“Robust development and testing controls are necessary to distinguish signals and statistically significant data from the noise. Unlike traditional algorithms, as more data is processed by ML algorithms, they may behave unpredictably as they are exposed to new data patterns,” it said.
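The failure mode described here usually shows up as data drift: live inputs move away from the distribution the model was trained on, and behaviour degrades silently. As an illustration only, here is a minimal per-feature drift check using a two-sample Kolmogorov-Smirnov test; the threshold and synthetic data are assumptions, not anything specified by IOSCO.

```python
# Minimal sketch: per-feature drift check comparing training data to a
# recent live window with a two-sample KS test. Threshold is illustrative.
import numpy as np
from scipy.stats import ks_2samp

DRIFT_P_VALUE = 0.01  # hypothetical alerting threshold

def check_drift(train_col: np.ndarray, live_col: np.ndarray) -> bool:
    """Return True if the live distribution differs significantly."""
    stat, p_value = ks_2samp(train_col, live_col)
    return p_value < DRIFT_P_VALUE

rng = np.random.default_rng(1)
training = rng.normal(0.0, 1.0, size=10_000)  # distribution seen in training
live = rng.normal(0.4, 1.0, size=500)         # shifted live window

if check_drift(training, live):
    print("Drift detected: retrain or escalate to the oversight function.")
```

A check like this running on every feature, every day, is one cheap way to turn the report's "statistically significant data versus noise" concern into an operational control.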

With a growing slice of the asset management sector leveraging the technology, regulatory requirements and initiatives in the space are on the rise too.

“Many jurisdictions have overarching requirements for firms’ overall systems and controls; however, few jurisdictions have regulatory requirements that specifically apply to AI and ML based algorithms.

“These overarching requirements include rigorous testing of the algorithm in a safe environment away from the market before its controlled deployment, as well as continuous performance monitoring throughout the lifecycle of the algorithm,” the report said.
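One common way to meet the "safe environment" expectation is shadow (or paper) deployment: the candidate algorithm receives live inputs and logs the decisions it would have taken, while the incumbent system retains control. The sketch below is a hypothetical illustration; the decision functions, events and promotion gate are invented.

```python
# Minimal sketch: shadow-mode evaluation. The candidate model's decisions
# are logged and compared to the incumbent's, but never sent to market.
# All names and the agreement gate are hypothetical.
from typing import Callable, List

def shadow_run(incumbent: Callable[[dict], str],
               candidate: Callable[[dict], str],
               market_events: List[dict]) -> float:
    """Return the fraction of events where the two systems agree."""
    agreements = 0
    for event in market_events:
        live_decision = incumbent(event)    # this decision is executed
        shadow_decision = candidate(event)  # this decision is only logged
        agreements += (live_decision == shadow_decision)
    return agreements / len(market_events)

events = [{"price": p} for p in (100, 101, 99, 103, 97)]
agreement = shadow_run(lambda e: "buy" if e["price"] < 100 else "hold",
                       lambda e: "buy" if e["price"] < 101 else "hold",
                       events)
print(f"agreement rate: {agreement:.0%}")  # gate promotion on a minimum rate
```

Running the candidate in shadow for a sustained period also produces the audit trail that the continuous-monitoring requirement implies.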

In this context, IOSCO urged firms to consider whether relying solely on existing processes remains appropriate for meeting regulatory requirements, or whether those processes should be revised further.

“Moreover, not all ML techniques will be compatible with existing legal or regulatory requirements.

“Regulatory regimes that require algorithms to be fully understood and explainable throughout their lifecycle invariably limits the use of algorithms that evolve through the course of deployment in response to environmental changes,” it said.

Another risk that could grow substantially with the broader adoption of AI and ML arises when models absorb social biases and produce pre-conditioned outputs.

IOSCO noted that learned bias in a dataset can affect the decisions made by algorithms, jeopardising their longer-term performance:

“Such a dataset, where a bias may have been introduced by either the questioner or by the respondents, will influence the conclusions reached by the algorithm.

“Any output based on such a bias will likely degrade the performance of the algorithm more quickly over time, and could result in consumer detriment,” it warned.

IOSCO thus urged market participants to monitor their models and AI- and ML-driven decisions closely to avert biased outcomes.
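As an illustration of what such monitoring can look like, the sketch below runs a demographic-parity style check, comparing positive-outcome rates across client segments and alerting when the gap exceeds a tolerance. The segments, data and threshold are hypothetical; IOSCO's report does not prescribe a specific metric.

```python
# Minimal sketch: demographic-parity style check on model outputs.
# Segments, data and the disparity threshold are hypothetical.
import numpy as np

MAX_DISPARITY = 0.10  # illustrative tolerance for outcome-rate gaps

def approval_rate_gap(approved: np.ndarray, segment: np.ndarray) -> float:
    """Largest gap in positive-outcome rates between any two segments."""
    rates = [approved[segment == s].mean() for s in np.unique(segment)]
    return max(rates) - min(rates)

rng = np.random.default_rng(2)
segment = rng.choice(["A", "B"], size=1000)
approved = np.where(segment == "A",
                    rng.random(1000) < 0.60,  # segment A approved ~60% of the time
                    rng.random(1000) < 0.45)  # segment B approved ~45% of the time

gap = approval_rate_gap(approved.astype(float), segment)
if gap > MAX_DISPARITY:
    print(f"Outcome-rate gap {gap:.2f} exceeds tolerance; review the model.")
```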

The global body also urged asset managers to be particularly cautious when developing AI and ML models using large pools of alternative (i.e., non-traditional) datasets, such as satellite data or Twitter feeds.

EU markets supervisors urge firms to bolster cybersecurity safeguards

Weighing in on the global regulatory debate over the broader risks emerging from accelerating digital finance trends, the three European Supervisory Authorities (ESAs) on Wednesday warned of a recent surge in cybersecurity risks and incidents.

Citing the infamous cases of GameStop, Archegos and Greensill, the three markets watchdogs said “the materialisation of event-driven risks […], as well as rising prices and volumes traded on crypto-assets, raise questions about increased risk-taking behaviour and possible market exuberance.”

More specifically, “the financial sector has been hit by cyber-attacks more often than other sectors, while across the digital economy, cyber-criminals are developing new techniques to exploit vulnerabilities,” they said in their second joint risk report of 2021.

“Financial institutions will have to rapidly adapt their technical infrastructure in response to the pandemic, [as] the crisis has acted as a catalyst for digital transformation more generally.”

The joint committee also pointed out that “the costs of cyber incidents coupled with a tightening in data protection regulation across the world could boost cyber insurance demand.

“The increased demand is expected to originate from the sectors more exposed to cyber security risk, such as healthcare and the financial services, but also from individuals and families,” it warned.