14-06-2018 | POINT OF VIEW
Banks must overcome risks to maximise benefits of AI

This article, by Scott Vincent and Kuangyi Wei, first appeared in The Telegraph on 29 May 2018.

The image of Artificial Intelligence has progressed rapidly in mainstream culture over the past decade. What was once the stuff of dystopian sci-fi films has been transformed into an innovative technology that is helping to reshape our everyday lives. In the banking sector, AI is already giving rise to a new type of customer experience: faster decision-making and genuinely 24-hour customer service.

But take a look beyond the chatbots helping customers with their queries, and AI is also being utilised by banks to pull in masses of data to perform regulatory checks on capital levels, helping firms and financial regulators to better manage prudential risks.

AI is also assisting in the fight against financial crime by improving ‘Know Your Customer’ (KYC) and anti-money laundering (AML) checks. Predictive analytics and machine learning have opened new possibilities in the detection of fraudulent activity which not only protects individual customers against financial losses, but also maintains the integrity of the whole banking system.
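
To make the idea concrete, the sketch below shows one common pattern behind such systems: unsupervised anomaly detection, which flags transactions that deviate from learned normal behaviour. It is a minimal illustration only; the synthetic features, thresholds and choice of an IsolationForest model are assumptions for the example, not a description of any bank's production fraud engine.

    # Illustrative fraud-scoring sketch: learn "normal" transaction
    # behaviour, then flag outliers. All data below is synthetic.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Hypothetical features per transaction:
    # [amount_gbp, hour_of_day, transactions_in_last_24h]
    normal = np.column_stack([
        rng.lognormal(3.5, 0.8, 5000),   # typical purchase amounts
        rng.integers(7, 23, 5000),       # mostly daytime activity
        rng.poisson(3, 5000),            # modest daily frequency
    ])

    # Unsupervised anomaly detection: no labelled fraud cases needed.
    model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

    # A large 3am transfer amid a burst of activity looks anomalous.
    suspicious = np.array([[25000.0, 3, 40]])
    print(model.predict(suspicious))     # -1 means flagged for review

In practice such a score would only route a transaction to a human analyst rather than block it outright, which is one reason the human-oversight point made later in this article matters.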

This technology promises to transform an industry which has struggled to generate profits throughout the decade since the financial crisis. According to the IMF, Return on Equity is set to remain in single digits for many of the world’s largest banks. Banks need a further efficiency boost to lift financial returns beyond current expectations, and that is precisely the prospect AI offers.

As with all beneficial changes, there are risks. As we have already seen on multiple occasions, automated algorithmic high-frequency trading within equity and foreign exchange markets may, if not properly supervised, cause major volatility and losses in market value. The potential for havoc was evident in the 2010 Flash Crash and in a 2013 incident where trading machines reacted to the then-nascent “fake news” of explosions at the White House.

A decade on from the 2008 financial crisis, banks are far better capitalised and financially stable. But they also face a new landscape of non-financial risks that they must learn to navigate. Banks must be able to demonstrate their ability to safely handle the masses of data that need to be pumped through computer systems in order to maximise efficiencies. Without new techniques to improve risk management, cybercrime and breaches of data privacy present new dangers with major consequences: the maximum fine for a data breach under the new GDPR regime, which goes live this month, is the higher of €20m or 4% of group revenues.
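
For a sense of scale, the cap is simply the greater of the two figures, so for any large bank the percentage test dominates. The short sketch below works through the arithmetic; the €10bn revenue figure is purely hypothetical.

    # Back-of-envelope GDPR fine cap: the higher of a fixed EUR 20m
    # or 4% of group revenues. The revenue figure used is hypothetical.
    FIXED_CAP_EUR = 20_000_000
    REVENUE_SHARE = 0.04

    def max_gdpr_fine(group_revenue_eur: float) -> float:
        """Return the maximum fine under the GDPR's upper tier."""
        return max(FIXED_CAP_EUR, REVENUE_SHARE * group_revenue_eur)

    # A bank with EUR 10bn in group revenues faces a cap of EUR 400m.
    print(f"{max_gdpr_fine(10e9):,.0f}")  # 400,000,000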

New systemic risks have also been highlighted by the Financial Stability Board (FSB), the global financial standard-setter, which spoke out last year on fears over a handful of AI providers dominating the market as banks race to utilise the technology. Not only would the lack of market competition prove unhealthy, but such market concentration means that large tech providers – which fall outside the scope of financial regulators – would have the size and power of a global systemically important bank (G-SIB) but without any of the supervisory oversight.

This demands a two-fold approach to establish both firm-level and sector-wide financial resilience. The onus is on risk managers and compliance teams alongside internal audit committees to ensure that a bank’s AI systems, capabilities and operations are functioning effectively. Ultimately, it is humans who are responsible for ensuring AI serves the interests of all.

Overseeing this will be a new regulatory framework and risk taxonomy, setting out clearly the standards that artificial intelligence must adhere to. This will be implemented incrementally as regulators develop their wider fintech supervisory approach. For example, the Financial Conduct Authority (FCA) made clear in its Business Plan for the coming year that it will review firms’ use of data within algorithmic trading and artificial intelligence to assess the potential harm to financial stability and customers alike. Cybersecurity will also fall under its lens, as will the use of third-party technology providers with a monopoly over services.

A strong regulatory framework for artificial intelligence within financial services will help the industry flourish and provide a benchmark for professionalism as this technology develops. The FCA has already created a consortium of industry players on cybersecurity, expanding its expertise through a working dialogue; the same model could be applied directly to the use of AI. Creating this partnership between the public and private spheres may help to assuage public doubt that the banking sector can harness the power of AI for positive ends.

There is still work to be done to convince a suspicious general public, as well as sceptical politicians, that the banking industry can use this technology in ways that will do good rather than harm. After all, the real risk AI poses to financial stability comes not from robo-advice or the proliferation of chatbots. Instead, it comes from how predictive analytics is managed, so that fears of humans losing control of financial markets are never allowed to materialise.




For more information contact:

Kuangyi Wei
Head of Research and Market Engagement
Email: kwei@pfg.uk.com
Phone: +44 (0) 207 100 7575
