Black box problem stunting ML adoption in default risk analysis

By Rebekah Tunstead | 14 August 2019

Difficulties in explaining machine learning (ML) models are causing concern as banks look to the technology for default risk analysis, according to market participants.

“Many different types of ‘black-box’ models have been developed out there even by banks claiming that they can accurately predict mortgage defaults. This is only partially true,” said Panos Skliamis, chief executive officer at SPIN Analytics, in an email.

“[These models] usually target a relatively short-term horizon and their validation windows of testing remain actually in an environment too similar to that of the development samples. However, mortgage loans are almost always long-term and their lives extend to multiple economic cycles, while the entire world changes over time and several features of ML models [are] severely influenced by these changes of the environment,” he said.

The black box problem refers to the inability of an end user to understand the processes occurring between input and output in a machine learning model.
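To make that definition concrete, the sketch below is a purely illustrative Python example, not drawn from any bank's model or from the Bank of England paper: a gradient-boosted classifier trained on invented loan features produces a default probability, but the fitted object itself offers no human-readable rationale for the score.

```python
# Minimal, illustrative sketch of the "black box" problem.
# The feature names and data are invented for demonstration only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Synthetic loan-level features: loan-to-value, debt-to-income, interest rate.
X = rng.uniform([0.3, 0.1, 0.01], [1.0, 0.6, 0.08], size=(5000, 3))
# Synthetic default flag, loosely driven by LTV and DTI plus noise.
y = ((0.6 * X[:, 0] + 0.4 * X[:, 1] + rng.normal(0, 0.1, 5000)) > 0.65).astype(int)

model = GradientBoostingClassifier(n_estimators=300, max_depth=3).fit(X, y)

# The model outputs a default probability for a new mortgage...
new_loan = np.array([[0.95, 0.45, 0.05]])
print("Predicted default probability:", model.predict_proba(new_loan)[0, 1])

# ...but the fitted object is hundreds of decision trees combined additively;
# there is no single human-readable rule explaining why this loan scored as it did.
print("Number of trees in the ensemble:", model.n_estimators_)
```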

On August 9 the Bank of England published a staff working paper on the application of ML in default risk analysis, aiming to address the black box problem through an example of using the technology to predict mortgage defaults.

The paper states that even where a regulator can “usefully consider an influence-based explainability approach implemented by the bank,” it is “still difficult to estimate how a complex model would behave out of sample, for instance in stress-test scenarios where inputs are deliberately stretched.”
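As a rough illustration of both points, the sketch below uses permutation importance as a stand-in influence measure (it does not reproduce the paper's actual method) and then scores a deliberately stretched, out-of-range loan; the data and feature names are again invented.

```python
# Hedged sketch: an influence-style explanation via permutation importance,
# plus a deliberately stretched input to show the out-of-sample difficulty.
# Features and data are invented for illustration only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
feature_names = ["loan_to_value", "debt_to_income", "interest_rate"]

X = rng.uniform([0.3, 0.1, 0.01], [1.0, 0.6, 0.08], size=(5000, 3))
y = ((0.6 * X[:, 0] + 0.4 * X[:, 1] + rng.normal(0, 0.1, 5000)) > 0.65).astype(int)

model = GradientBoostingClassifier(n_estimators=200, max_depth=3).fit(X, y)

# Influence-based view: how much does shuffling each input degrade accuracy?
result = permutation_importance(model, X, y, n_repeats=10, random_state=1)
for name, score in zip(feature_names, result.importances_mean):
    print(f"{name}: mean importance {score:.3f}")

# Stress scenario: inputs stretched well beyond anything seen in training
# (e.g. LTV of 1.5, DTI of 0.9). A tree ensemble extrapolates flat from the
# edge of its training data, so the stressed output may not reflect real risk.
stressed_loan = np.array([[1.5, 0.9, 0.12]])
print("Stressed default probability:", model.predict_proba(stressed_loan)[0, 1])
```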

A lack of understanding of the limits of ML is a fundamental part of the problem, according to Skliamis.

“In our opinion, the biggest ‘black box’ problem is that the people that insist [on using] them in credit risk modelling do not understand their limitations,” said Skliamis.

The uncertainties outlined by the paper are not well understood even by the people who develop and use the models, he added.

“It is rather disappointing that many banks, when they develop internal unregulated models, blindly use ‘black-box’ techniques to obtain the highest possible accuracy metrics, without any concern of the implied problems and possible impacts on their real risks,” said Skliamis.

On May 31 the UK’s Financial Conduct Authority (FCA) published an article stating that as the industry strives for better understanding of ML models there would be “a trade-off between the ability to meet demands for an explanation and the ability to supply more accurate predictions at reasonable time and cost.” The article argued that “the focus should be ‘sufficient interpretability’”.

But pressure from regulators to make ML models more explainable is mounting, according to Haonan Wu, head of data science at Synechron.

“Most of these [ML] initiatives stay internal R&D initiatives. I haven’t seen any real examples of putting this model into production,” said Wu. “Definitely the pressure from the regulator is a very important factor there. Whatever model that you build, there is almost no tolerance for the black box from a regulator’s perspective. You need to be able to explain everything in a very detailed and structured way.”
