Much has been done to address the root causes of the global financial crisis and to ensure a more resilient financial system. Yet in its latest report examining financial stability concerns associated with Artificial Intelligence (AI), the Financial Stability Board (FSB) recognises that new risks associated with the digitalisation of financial services are emerging and merit far closer scrutiny.
Artificial intelligence – the use of algorithms to identify and predict patterns – has been implemented in many facets of the financial system. This ranges from assessing consumer credit and automating client interactions in retail banking to optimising capital allocation and stress testing in the wholesale market; from helping funds to identify new hedging strategies to supporting regulators to detect frauds and malpractices. This widespread use of AI warrants further attention from regulators and financial firms alike – to both harness its potential and, importantly, to identify, measure and mitigate new risks associated with the technology.
In its latest report, the FSB highlights three areas where AI could threaten financial stability.
The ‘black boxing’ of AI algorithms – the technology’s lack of transparency and ‘auditability’ – poses macro-level risks. Because many AI and machine learning models have been trained during a period of low market volatility, widespread use of such opaque models may result in unintended consequences when market conditions change.
Market concentration among AI providers in the technology sector could pose systemic risks to the financial industry. The network effects and scalability of new technologies mean that AI solutions are increasingly offered by a few large technology firms. This could worsen the third-party dependencies already prevalent in the financial sector, and trigger systemic risks if a large technology provider were to face a major disruption or insolvency.
Applications of AI and machine learning could also result in new and unexpected forms of interconnectedness between financial markets and institutions, for instance, based on the use of previously unrelated data sources in designing trading and hedging strategies.
Guarding and growing the financial market with AI requires stronger assessment of AI and machine learning in view of their risks, including adherence to relevant protocols on data privacy, conduct risk, and cybersecurity. Adequate testing and ‘training’ of tools with unbiased data and feedback mechanisms will be key, as will building up in-house skills to understand and supervise AI and machine learning models.
For more information contact:
Phone: +44 (0) 207 100 7575