
How to manage AI hallucinations in financial services?


Karthik Narayan, Product Management Director, Solutions at Reltio

The necessity for financial services LLMs to be grounded in dependable, transparent, and trustworthy data models is more pressing than ever. With the EU AI Act recently coming into effect, companies have entered a transitional phase where they must align their operations with the new regulatory standards. This act establishes a robust framework for overseeing AI systems across various categories, according to their intended use and potential risk of harm. As a result, financial institutions are now facing increased scrutiny to ensure their LLMs are developed using ethical, high-quality, and reliable data.

The wide applicability of LLMs and other types of AI to the sector is illustrated by a 2024 survey by The Alan Turing Institute. More than half of the firms surveyed said they were applying LLMs in operations to enhance the performance of information-focused tasks, and 29 percent said they use these models to support critical thinking.

However, as LLMs have become more popular, there has been a rise in cases of AI hallucinations, where an AI generates an answer that is incorrect. Examples include inaccurate credit scoring for customers, misinformation in customer service, or errors in fraud detection, any of which could cause serious harm to both the financial institution and the customers it serves. These AI hallucinations stem from how LLMs operate.

It is important to note that LLMs are models trained to predict the next word based on patterns in their training data; they are not intelligent in themselves. They can generate output that looks factual, but in reality they have no judgement with which to fact-check it, so end users cannot confirm whether the model is providing trusted information. AI hallucinations are a product of this gap, and they lead to mistrust of the model and its outputs.

The fact is, AI hallucinations cannot be totally eliminated, but their incidence can be curtailed. So, what steps should financial services institutions follow?

Tackling AI hallucinations with industry-specific knowledge

Before financial services institutions even start using LLMs, they should review the data used to train the model to ensure that its sources are ethical and transparent. Critically, AI should only draw on data that has been consented for use in model development; using unconsented data is a significant risk that financial services organisations need to mitigate if they are to leverage AI capabilities properly. In addition, the data should enable tailored, context-specific responses. One way to achieve this is through retrieval augmented generation (RAG), which involves having the model recall information from a curated database of company-specific information so the LLM can give more informed and personalised responses.
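To make this concrete, below is a minimal RAG sketch in Python. The in-memory document store, the keyword-overlap retrieval and the call_llm() stub are illustrative assumptions; a production system would use an embedding-based vector index and the institution's own model endpoint.

```python
# Minimal RAG sketch: retrieve company-specific context, then prompt the LLM
# with it. The documents, the keyword-overlap retrieval and call_llm() are
# illustrative assumptions, not a production design.
from dataclasses import dataclass


@dataclass
class Document:
    doc_id: str
    text: str


# Stand-in for the curated database of company-specific information.
KNOWLEDGE_BASE = [
    Document("kyc-001", "Customers must be re-verified every 24 months under our KYC policy."),
    Document("credit-007", "Credit limits above 50,000 EUR require a second manual review."),
]


def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around the institution's chosen LLM endpoint."""
    raise NotImplementedError("Wire this to your model provider.")


def retrieve(query: str, top_k: int = 2) -> list[Document]:
    """Rank documents by naive keyword overlap with the query.
    A production system would use embeddings and a vector index instead."""
    query_terms = set(query.lower().split())
    scored = [(len(query_terms & set(doc.text.lower().split())), doc) for doc in KNOWLEDGE_BASE]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]


def answer_with_rag(question: str) -> str:
    """Compose a prompt that grounds the LLM in retrieved context only."""
    context = "\n".join(f"[{d.doc_id}] {d.text}" for d in retrieve(question))
    prompt = (
        "Answer using ONLY the context below. If the context does not "
        "contain the answer, say so instead of guessing.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```

The key design point is the instruction to answer only from the retrieved context and to decline when the context is insufficient, which is what curbs the model's tendency to guess.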


Having said that, RAG alone is not enough to reduce AI hallucinations. The data it draws on must be accurate, high quality, complete and relevant. To that end, financial services institutions should invest in a strong data unification and management system that can make real-time updates.

In addition, firms should be using graph augmentation. This involves using a highly structured knowledge graph of business-wide entities and relationships within each organisation, enabling bank-specific terminology and facts to be included in the outputs. Like RAG, the effectiveness of graph augmentation depends on the quality of the data fed into the model. Unlike RAG, however, graph augmentation adds a further layer of assurance and control over the quality of the AI-driven responses.
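A simplified illustration of graph augmentation is sketched below. The entities, relation names and two-hop traversal are assumptions for demonstration; real deployments would query a governed, enterprise-wide knowledge graph.

```python
# Graph augmentation sketch: pull verified facts about an entity from a small
# in-memory knowledge graph and inject them into the prompt. Entities,
# relation names and the two-hop walk are illustrative assumptions.
KNOWLEDGE_GRAPH = {
    "customer:4711": {
        "HAS_ACCOUNT": ["account:DE89"],
        "RELATED_PARTY_OF": ["customer:0815"],
        "RISK_RATING": ["medium"],
    },
    "account:DE89": {
        "PRODUCT_TYPE": ["business current account"],
    },
}


def graph_facts(entity: str, depth: int = 2) -> list[str]:
    """Walk outgoing edges up to `depth` hops and return them as plain-text facts."""
    facts, frontier = [], [entity]
    for _ in range(depth):
        next_frontier = []
        for node in frontier:
            for relation, targets in KNOWLEDGE_GRAPH.get(node, {}).items():
                for target in targets:
                    facts.append(f"{node} -{relation}-> {target}")
                    next_frontier.append(target)
        frontier = next_frontier
    return facts


def build_grounded_prompt(question: str, entity: str) -> str:
    """Restrict the LLM to bank-specific facts drawn from the knowledge graph."""
    facts = "\n".join(graph_facts(entity))
    return (
        "Use only the verified facts below when answering.\n\n"
        f"Facts:\n{facts}\n\nQuestion: {question}"
    )
```

Because every fact passed to the model traces back to a named entity and relationship, answers can be checked against the graph, which is the extra layer of control RAG alone does not provide.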

The importance of unified and trusted data

Leveraging modern cloud-native data management and unification tools is essential when addressing the challenges that training LLMs brings. This is done through master data management (MDM), data products and entity resolution, which are all key parts of creating a financial services-critical core data set. It is essential to use canonical data models to unify data from many sources and feed it to LLMs in a real-time and accurate way, so that the LLMs produce reliable outputs. Data unification and management systems that offer out-of-the-box integrations with data governance frameworks simplify this process, making it easier and more efficient to synchronise metadata.
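As a rough illustration of entity resolution into a canonical record, the sketch below merges duplicate customer entries from different source systems into one golden record. The match key (normalised name plus date of birth) and the survivorship rule are deliberate simplifications of what an MDM platform actually does.

```python
# Entity resolution sketch: merge duplicate customer records from several
# source systems into one canonical "golden record". The match key and the
# simple survivorship rule are deliberate simplifications of real MDM logic.
from collections import defaultdict

source_records = [
    {"source": "crm", "name": "Anna Meyer", "dob": "1980-02-14", "email": "a.meyer@example.com"},
    {"source": "loans", "name": "MEYER, ANNA", "dob": "1980-02-14", "phone": "+49 30 1234567"},
    {"source": "cards", "name": "Ben Okoro", "dob": "1992-07-01", "email": "b.okoro@example.com"},
]


def match_key(record: dict) -> tuple:
    """Blocking/match key: normalised name tokens plus date of birth."""
    tokens = sorted(record["name"].replace(",", " ").lower().split())
    return (" ".join(tokens), record["dob"])


def resolve(records: list[dict]) -> list[dict]:
    """Group matching records and merge each group into a canonical record."""
    groups = defaultdict(list)
    for rec in records:
        groups[match_key(rec)].append(rec)

    golden_records = []
    for (name, dob), recs in groups.items():
        merged = {"name": name.title(), "dob": dob, "sources": [r["source"] for r in recs]}
        for rec in recs:  # simple survivorship: first populated value wins
            for field in ("email", "phone"):
                if field not in merged and rec.get(field):
                    merged[field] = rec[field]
        golden_records.append(merged)
    return golden_records


print(resolve(source_records))  # two golden records: Anna Meyer (merged) and Ben Okoro
```

It is this unified, deduplicated core record, rather than the raw source systems, that should be surfaced to the LLM.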

As financial operations continue to grow and data collection increases, these LLMs must scale and have access to the most up-to-date information to continue producing reliable results. Additionally, an API-first (application programming interface) approach is essential for real-time data availability and automation of these models. Using security-compliant APIs creates seamless integration between the LLM and the data unification platform, which ensures efficient data access and processing and produces trusted data outputs.
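The sketch below illustrates that API-first pattern: the latest golden record is fetched over a secured endpoint immediately before prompting, so the model is never grounded in stale data. The URL structure, fields and bearer-token handling are assumptions rather than any specific vendor's API.

```python
# API-first sketch: fetch the current golden record over a secured endpoint
# just before prompting, so the model never answers from stale data. The URL
# structure, fields and token handling are assumptions, not a vendor API.
import requests


def fetch_golden_record(customer_id: str, base_url: str, token: str) -> dict:
    """Retrieve the latest unified customer profile from the data platform."""
    response = requests.get(
        f"{base_url}/customers/{customer_id}",
        headers={"Authorization": f"Bearer {token}"},
        timeout=5,
    )
    response.raise_for_status()
    return response.json()


def prompt_with_fresh_data(question: str, customer_id: str, base_url: str, token: str) -> str:
    """Ground the prompt in the freshly retrieved profile only."""
    profile = fetch_golden_record(customer_id, base_url, token)
    return (
        "Answer using only the customer profile below.\n\n"
        f"Profile: {profile}\n\nQuestion: {question}"
    )
```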

The future of LLMs in financial services

There is a great need for LLMs to be based on reliable, trusted and transparent data models, especially in light of new regulatory frameworks. Financial services institutions are under pressure to ensure that the LLMs they use have been built on a foundation of ethical, reliable and high-quality data.

With robust data management and unification strategies in place, firms can create trusted models. It is important for financial services institutions to work purposefully towards LLMs that limit hallucinations and comply with regulations, so they can reap the benefits these models bring to banking operations and customers.
