Why finance is turning off AI

By James Sherlow, Systems Engineering Director, EMEA, for Cequence Security

Generative AI is having a troubled time in the financial sector. While two thirds of finance leaders think it will have an immediate impact in helping to summarise and explain variances in forecasts and budgets, reservations remain over just how it can be implemented, according to a recent Gartner survey. One of the concerns raised is auditability, alongside reliability, accuracy, cost, data privacy and security. This is because the decision-making associated with financial processes must be made by a human in order to be tracked and audited in compliance with industry regulations. If an AI is making that decision, it raises questions over who should be held responsible in the event of an error.

It’s a huge problem for AI adoption and is actually seeing some firms elect to switch off the AI functionality in their systems. But failing to adopt AI could be equally harmful to the future of the sector. The technology is expected to boost operating profits in banking by between 9 and 15% and deliver productivity gains worth 3-5% of annual revenues, according to McKinsey estimates. The same report warns that regulation could act as a real inhibitor in processes involving personally identifiable information (PII), such as credit scoring, where deployment is seen as too high risk.

In a heavily regulated sector, the expectation is that the legislation will catch up, but this could be a long time coming. While the EU has leapt into the fray and published the AI Act earlier this year, the UK has so far avoided doing so, although the King’s Speech back in July did make reference to putting in place appropriate legislation to place requirements on those developing the most powerful AI models.

Realistically, AI will need to be incorporated into existing frameworks, a process which the GenAI report, Banking on AI: Financial Services Harnesses Generative AI for Security and Service, cautions could take between three and five years. Just how this should be approached has been the focus of the UK Financial Authorities, comprising the Bank of England, PRA, and FCA, which put out a consultation and published the results (FS2/23 – AI and Machine Learning) last year. This showed a strong demand for harmonisation with the likes of the AI Act as well as NIST’s AI Risk Management Framework.

It’s not just productivity that’s at risk here, however, with threat actors likely to use AI to create sophisticated attacks. The FS2/23 report warned that we can expect an uptick in money laundering and fraud, with AI used to create deep fakes and phishing attacks, and this is already happening. UK engineering firm Arup saw a finance employee based in Hong Kong tricked into transferring $25m on a deep fake video conferencing call in which all of the attendees, including the CFO, were clones. Similarly, a deep fake vishing attack against a LastPass employee was carried out earlier this year using voice cloning technology to impersonate the company CEO.

Combating such attacks will require the financial sector to fight fire with fire. It’s for this reason that Mastercard has been so proactive in exploring the technology to combat fraud. It estimates AI could drive down bank fraud by improving detection rates by 20-30% or more, and could also reduce false positives, where legitimate transactions are flagged for investigation.

It’s a problem that’s set to worsen if cyber attacks become self-learning. In such a scenario, an AI attack which triggered defences would be able to pivot and sidestep them. In the case of an assault on an Application Programming Interface (API), which is used to access sensitive data for numerous financial processes from ecommerce transactions to Open Banking, defences will typically look to block an attack. Going forward, that is likely to change, with deception playing a greater role: allowing the attack to seemingly progress while diverting it, frustrating the attacker and making it too costly to sustain the assault.
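
To make the idea concrete, the sketch below shows in Python how a deception-based API defence might behave, routing low-risk traffic to real data, blocking mid-risk traffic, and feeding decoy responses to high-risk sessions. The function names, thresholds and payloads are illustrative assumptions only, not a description of any particular vendor's product.

```python
# A minimal sketch of a deception-style API defence.
# All names, thresholds and payloads are hypothetical, for illustration only.
from dataclasses import dataclass
import random


@dataclass
class ApiRequest:
    client_id: str
    path: str
    risk_score: float  # 0.0 (benign) .. 1.0 (almost certainly malicious)


def fetch_real_data(req: ApiRequest) -> dict:
    # Stand-in for the genuine backend response.
    return {"account": "****1234", "balance": 1024.50}


def decoy_payload(req: ApiRequest) -> dict:
    # Fabricated records that look plausible but lead nowhere.
    return {"account": "****9913", "balance": round(random.uniform(10, 9000), 2)}


def handle(req: ApiRequest) -> dict:
    """Route a request to a real, blocked, or deceptive response."""
    if req.risk_score < 0.5:
        # Low risk: serve the genuine API response.
        return {"status": 200, "body": fetch_real_data(req)}
    if req.risk_score < 0.8:
        # Medium risk: block outright, as most defences do today.
        return {"status": 403, "body": "Forbidden"}
    # High risk: deceive rather than block. The attacker sees an apparent
    # success but receives decoy data and is slowed down, raising their cost.
    return {
        "status": 200,
        "body": decoy_payload(req),
        "delay_ms": random.randint(500, 3000),  # tarpit the session
    }


if __name__ == "__main__":
    print(handle(ApiRequest("bot-77", "/accounts", risk_score=0.92)))
```

The design point is simply that a 403 tells an automated attacker to rotate credentials or infrastructure and try again, whereas a believable decoy keeps it burning resources against data of no value.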

Right now, the financial sector is feeling its way to determine how it can utilise AI. There is a regulatory vacuum, but a great deal of progress has been made in developing frameworks to assist with deployment. In addition to the NIST framework, there’s ISO 42001 and the oversight framework set out in DORA (the Digital Operational Resilience Act) for third-party providers. These can be used to assess the AI functionality the business has, its use cases and risks, and to put in place processes to mitigate those risks, ensuring the business tackles the issue head-on rather than simply switching AI off.
