Why the financial industry must act to protect customers from voice fraud

By Dr Nikolay Gaubitch, Director of Research at Pindrop  

 

Much of the financial world has become dominated by digital solutions in recent years, with online services and mobile apps offering easy access to everything from transferring funds to applying for credit cards. Nevertheless, many financial activities still require the human touch, particularly when it comes to more complex interactions, or providing for those who cannot readily use online options. Voice is one of the most effective routes here, enabling customers to access services and receive guidance without visiting one of the declining number of physical branches.

However, this same accessibility makes telephony a key focus for fraudsters targeting financial institutions and their customers. Voice often offers an easier attack path than heavily fortified online infrastructure, and fraudsters seek to exploit it to steal data and access user accounts to commit fraud and theft.

Financial organisations are seeking to implement more effective defences to protect their voice channels and bring them in line with the same protection afforded to digital platforms. Solutions using artificial intelligence (AI) and machine learning (ML) are some of the most promising options for protecting customers from fraudsters targeting their accounts.

 

Why is voice regarded as a security weak point?

There are a number of factors that make telephony a favourite target for fraudsters. Because these attacks revolve around impersonating a legitimate customer to access their account, the process requires far less technical expertise than attempting to break through online security measures.

The remote and anonymous nature of the voice channel makes it easier to assume the identity of a legitimate customer. Call centre personnel, meanwhile, are primarily, and quite rightly, concerned with providing a high level of customer service, and may not have the time or resources to investigate every call for signs of an imposter. Instead, verification is usually handled by Knowledge Based Authentication (KBA), with the caller answering pre-set questions that might include personal details, a keyword or a number.

This approach is fundamentally flawed as it only confirms that the caller knows the answer to the questions asked. Adept fraudsters are able to cheat their way through the system with knowledge gained from previous calls or data breaches.
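The flaw is easy to make concrete. A minimal sketch, with entirely hypothetical question and answer data, shows that a KBA check only verifies knowledge of the answers, not the identity of the person giving them:

```python
# Toy illustration of Knowledge Based Authentication (KBA).
# Questions and answers are hypothetical; the point is that the
# check verifies knowledge, not the caller's identity.

STORED_ANSWERS = {
    "mother_maiden_name": "smith",
    "first_pet": "rex",
}

def kba_check(responses: dict) -> bool:
    """Pass the caller if every stored question is answered correctly."""
    return all(
        responses.get(question, "").strip().lower() == answer
        for question, answer in STORED_ANSWERS.items()
    )

# The legitimate customer passes...
assert kba_check({"mother_maiden_name": "Smith", "first_pet": "Rex"})
# ...but so does a fraudster who harvested the same answers
# from a previous call or a data breach.
assert kba_check({"mother_maiden_name": "smith", "first_pet": "rex"})
```

The system has no way to tell the two callers apart, which is exactly the gap fraudsters exploit.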

 

How do fraudsters access accounts through voice?

A critical weakness of telephony is the fact that fraudsters can easily take a multi-stage approach to gradually harvest the data they need to pass authentication requirements.

The voice channel allows criminals to make multiple calls fishing for more information or validating existing data. Data can also be gained from other companies where the customer has reused the same information.

Once they’ve done their homework, fraudsters will attempt to bluff their way through the rest of the call and deceive the call centre agent into revealing missing details or bypassing checks. Savvy criminals may be armed with effective social engineering skills and will often target services such as PIN replacement, where the sense of urgency can help them to exploit human sympathy.

The end goal is usually to gain control of a victim’s financial account, which can then be exploited for any number of malicious activities. Criminals may use the account to make fraudulent purchases, transfer the bank balance, or conduct further financial fraud and blackmail.

Whatever they do, the result will be a traumatic and costly experience for the victim, as well as economically and reputationally damaging for the financial firm that failed to protect the account.

 

Supporting the human element of voice with AI technology

Financial organisations need to preserve the human element that makes the voice channel so engaging, while also hardening their services against exploitation by fraudsters.

Many firms are now turning to anti-fraud technology driven by AI and ML to achieve this. ML-powered analytics can crunch through large volumes of data to identify potential imposters in real time. Factors such as the caller’s voice and behaviour, as well as metadata relating to the call, all contain hidden clues that point to a fraudulent call. While a human call centre agent is unlikely to pick up on these signs, anti-fraud technology can detect them and make a decision in moments.
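The general idea can be sketched as a weighted combination of per-call risk signals. The feature names, weights, and threshold below are illustrative assumptions, not any vendor's actual model; a production system would use trained ML models over far richer signals:

```python
# Deliberately simplified sketch of real-time call risk scoring.
# Signal names and weights are hypothetical and for illustration only.

RISK_WEIGHTS = {
    "voice_mismatch": 0.5,       # caller's voice differs from the enrolled profile
    "device_unknown": 0.2,       # device not previously seen for this account
    "call_origin_anomaly": 0.2,  # unusual carrier, route, or geography metadata
    "behaviour_anomaly": 0.1,    # e.g. hesitation patterns or repeated probing
}

def risk_score(signals: dict) -> float:
    """Combine per-signal scores (each 0.0-1.0) into one weighted risk score."""
    return sum(RISK_WEIGHTS[name] * signals.get(name, 0.0)
               for name in RISK_WEIGHTS)

def flag_call(signals: dict, threshold: float = 0.6) -> bool:
    """Flag the call for review if the combined score crosses the threshold."""
    return risk_score(signals) >= threshold

# A call matching the enrolled voice and device scores low...
assert not flag_call({"voice_mismatch": 0.1, "device_unknown": 0.0})
# ...while a mismatched voice from an unknown device is flagged.
assert flag_call({"voice_mismatch": 0.9, "device_unknown": 1.0,
                  "call_origin_anomaly": 0.5})
```

Because the scoring runs on signals already present in the call, it can happen without asking the caller to do anything extra.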

This process happens seamlessly in the background in real time, which means genuine customers can continue their call without interruption. The same technology can be applied to authentication as well as fraud detection. The solution can create unique multi-factor credentials for each customer based on their particular voice, device, and behaviour. This can be used to implement a more efficient authentication process that means customers can get the help they need sooner.
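A hedged sketch of how such multi-factor credentials might be used: each enrolled factor contributes a match confidence, and a strong combined match lets the caller skip knowledge-based questions entirely. The factor names, averaging scheme, and threshold are assumptions for illustration, not a description of any specific product:

```python
# Hypothetical sketch of passive multi-factor caller authentication.
# Factor names and the 0.8 threshold are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class FactorMatch:
    voice: float      # similarity of the caller's voice to the enrolled voiceprint
    device: float     # confidence the call comes from an enrolled device
    behaviour: float  # similarity of interaction patterns to the enrolled profile

def authenticate(match: FactorMatch, threshold: float = 0.8) -> str:
    """Average factor confidences; authenticate silently when the match is strong."""
    combined = (match.voice + match.device + match.behaviour) / 3
    if combined >= threshold:
        return "authenticated"  # no security questions needed
    return "step-up"            # fall back to additional verification

# A caller matching all enrolled factors is authenticated without questions...
assert authenticate(FactorMatch(voice=0.95, device=0.9, behaviour=0.85)) == "authenticated"
# ...while a weak match triggers step-up verification instead.
assert authenticate(FactorMatch(voice=0.4, device=0.9, behaviour=0.5)) == "step-up"
```

Unlike KBA, none of these factors can be passed simply by knowing the right answers, which is what allows the authentication step to be both faster and harder to cheat.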

 
