17-12-2019 | POINT OF VIEW
Why the future of machine learning hinges on a change in human mindset

Machine learning (ML) has gained tremendous popularity in recent decades. As a subset of artificial intelligence (AI), ML is the development of models that require limited human intervention, as they are able to ‘learn’ from data by relying on patterns and inference. ML algorithms are used in a variety of applications, from data security and marketing personalisation to healthcare and face or speech recognition. The ability of ML to use large volumes of unstructured data and capture complex dependencies allows for superior forecasting performance, and has driven successful, high-profile projects such as Google Ads and Apple’s Siri.

The financial industry is no stranger to this trend. A recent survey by the PRA/FCA of 300 UK financial institutions confirmed a rise in ML adoption, with over two thirds of respondents reporting they already use ML in some form. Amongst the greatest perceived benefits of ML are better personalisation for customers, improved compliance, increased operational efficiency, new analytical insight and improved services.

Despite this, the current use of ML in financial services remains limited to specific fields, such as anti-money laundering, fraud detection, and some customer-facing applications. The benefits associated with wider ML use, such as sounder risk management through ML-supported credit underwriting, audit improvements through continuous monitoring, and expanded stress-testing scenarios, remain largely untapped.

There are many barriers to the use of ML in financial services, including IT infrastructure requirements, a lack of data science talent, the high operational risk of migrating to new models, long development-to-production time frames, technological debt accrued over time, and organisational structure and culture.

However, one of the key constraints can be scepticism towards ML. The PRA/FCA study called out the difficulty of model interpretation as one of the most material obstacles to wider implementation of ML. ML algorithms are referred to by many as a ‘black box’, a perception partly driven by the inherent complexity of the methodology, but one that may also reflect a different mindset in financial services compared with other industries where ML use is more advanced.

Some of this is justified: ML models can consist of a multitude of interacting components, which makes it harder to verify that they always interact as intended and creates governance and validation challenges throughout the life of the model. However, part of the challenge is that ML does not have its roots in classical statistics, with its well-defined rules for interpretability. This sense of inertia can make ML seem ‘novel’ and harder to interpret, feeding the view that the potential increase in model performance is not worth the risk.

Nevertheless, ML has come of age in recent years, and so have ML validation and testing tools. A recent paper by the PRA on the topic of ML explainability assesses Quantitative Input Influence as a method to tackle this problem. This method is just one of many developed in the last five years, including Partial Dependence Plots, Permutation Importance and Local Interpretable Model-Agnostic Explanations (LIME). One of the most popular methods for ML model interpretation and explanation is the SHAP (SHapley Additive exPlanations) methodology, proposed by Lundberg and Lee (2017). It explains the output of any ML model by measuring each feature’s contribution to the predictions, and graphically demonstrates the dependencies that the model was able to capture (a sketch follows below).
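To make this concrete, below is a minimal sketch of how SHAP can be applied in Python using the open-source shap library. The dataset, feature names and model are hypothetical placeholders chosen for illustration; any fitted tree-based model could be explained in the same way.

```python
# A minimal sketch of SHAP-based explanation for a credit-scoring model.
# The data and feature names below are hypothetical placeholders.
import pandas as pd
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Build an illustrative dataset standing in for credit application data.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X = pd.DataFrame(
    X, columns=["income", "debt_ratio", "age", "credit_history", "utilisation"]
)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global view: which features drive the model's output overall,
# and in which direction each feature pushes the prediction.
shap.summary_plot(shap_values, X)
```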

These methods allow for both an explanation of the dependencies that the model captures and a view of which variables contribute to the model output (and to what extent). The latter is extremely important for credit underwriting, where the ability to explain the assessment of a customer’s credit application is a regulatory requirement for banks.
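Continuing the hypothetical sketch above, the same Shapley values can be read at the level of a single application, which is the granularity a credit decision explanation typically requires.

```python
# Per-customer explanation (continuing the hypothetical example above):
# the Shapley values for a single row show how much each feature pushed
# this applicant's score above or below the model's average output.
applicant = X.iloc[[0]]  # one hypothetical credit application
contributions = explainer.shap_values(applicant)[0]

for feature, value, contribution in zip(
    X.columns, applicant.iloc[0], contributions
):
    print(f"{feature}={value:.2f}: contribution {contribution:+.3f}")

# force_plot shows the same per-prediction breakdown graphically.
shap.force_plot(
    explainer.expected_value, contributions, applicant, matplotlib=True
)
```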

If ML models can be understood, they are likely to be viewed more favourably and approved more easily by stakeholders and regulators. When models need to go through lengthy compliance checks (such as in the credit risk area), being able to explain the model helps in assessing its compliance with regulatory articles and norms.

Ultimately, developments in ML models over the last few years have worked to overcome the opaque characteristics of the technology in its earlier form. In parallel, control frameworks have matured to provide the level of assurance required by regulators and internal governance.

Financial institutions should plan and mobilise resources for an early adoption phase of ML in order to yield its broader benefits. The focus should be on identifying areas where institutions could benefit from the implementation of ML, and on outlining a two-to-three-year roadmap for iterative and gradual adoption. Large-scale implementation of ML requires the support of efficient IT infrastructure, and as such this should be a key pillar in any plan.

Ultimately, harnessing the benefits of ML now hinges on a mindset shift – a willingness to take the necessary steps towards digital transformation.

