Knowing Your Customer: You need to, but regulation makes KYC extra-crispy...

Machines join the march against identity fraud

There’s a conundrum around know your customer (KYC), the process of verifying the identity of a company’s clients. A decade ago, KYC was a mild inconvenience that could be tackled using some familiar procedures.

Today, as the volume of business conducted online has exploded and the value of personal data has grown, the issue of KYC has become a quagmire of confusing regulatory demands and technologies, with the potential to cause lasting harm to businesses and their customers.

In 2013, the UK’s now defunct National Fraud Authority put the annual cost of fraud to the UK at £52bn. By 2017, figures from fraud prevention service Cifas showed identity fraud at a record 174,000 cases – up 125 per cent in a decade.

Identity fraud has spread from financial services to mobile phone contracts and retail, with 80 to 90 per cent of cases taking place online where verification is often weak.

Data breaches are giving criminals the basic material they need to steal or borrow identities. Once details such as names and dates of birth are stolen they are compromised for good, and can be recirculated among criminals indefinitely.

Regulation is adding to the challenge. The last 12 months have seen a clutch of new rules, including the EU’s Fourth Anti-Money Laundering Directive, which arrived in 2017, the Revised Payment Services Directive (PSD2), and the EU’s imminent new data privacy regulation, the General Data Protection Regulation (GDPR).

These require strong customer authentication and identification, throwing up an unavoidable problem: how to implement the rules in a way that guards against fraud but does not build a barrier for customers.

AI spots the difference

Confronting the challenge, artificial intelligence (AI) is entering the arena as a means of detecting whether the person logging in is who they claim to be.

AI has grown rapidly of late to give us algorithmic machine learning and deep learning, a type of machine learning in which multi-layered neural networks learn from data with little human guidance.

If machine learning can be used to automate decision-making for a given data set, the hope is that deep learning can mimic how humans make decisions about the real world, particularly their ability to learn from errors.

The roots of AI go back decades but a surprising amount of deep learning orthodoxy goes back only half a decade at most. That has not prevented it from being set loose on real customers, not only to detect transaction fraud but also to verify identification.

Many companies now say they use AI in some capacity – there are perhaps 30 open source algorithms to get organisations started. Yet this doesn’t appear to have stemmed fraud, even in industries such as financial services where AI should be a perfect fit.

Does this mean AI is being over-sold? Not necessarily, but it does pose some difficult questions about implementation.

“It’s not always being done well in the commercial world. Just because you have access to the algorithms doesn’t mean you know what to do with them,” says Mark Whitehorn, professor of analytics at the University of Dundee.

Many of today’s machine learning systems can be set to work on decision support and fraud detection, but humans are still needed somewhere in the chain to cope with exceptions, anomalies or customer interaction. Having a human in the feedback loop can also help train algorithms in a way that can make them smarter.
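
In practice, that feedback loop can be as simple as routing the model’s least confident decisions to an analyst and folding the verified answers back into the training set. Below is a minimal Python sketch of such a loop using scikit-learn; the features, thresholds and labels are all invented for illustration, not a description of any real system.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Hypothetical transaction features and fraud labels for initial training
    X_train = rng.normal(size=(500, 4))
    y_train = (X_train[:, 0] + X_train[:, 1] > 1.5).astype(int)

    model = LogisticRegression().fit(X_train, y_train)

    # New, unlabelled events arriving from production
    X_new = rng.normal(size=(20, 4))
    proba = model.predict_proba(X_new)[:, 1]

    # Decisions the model is unsure about go to a human analyst
    uncertain = (proba > 0.3) & (proba < 0.7)
    print(f"{uncertain.sum()} of {len(X_new)} events escalated for review")

    # Stand-in for the analyst's verdicts on the escalated cases
    human_labels = (X_new[uncertain, 0] + X_new[uncertain, 1] > 1.5).astype(int)

    # Fold the human-verified labels back in and retrain: the feedback loop
    X_train = np.vstack([X_train, X_new[uncertain]])
    y_train = np.concatenate([y_train, human_labels])
    model = LogisticRegression().fit(X_train, y_train)

Each pass through the loop gives the algorithm labelled examples of exactly the cases it finds hardest, which is where the “smarter” comes from.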

“You turn somebody down for a loan and they ask why, and the answer is because the machine learning algorithm said so. What can they change? The bottom line is we don’t know,” Whitehorn says.

The GDPR puts into law the principle that individuals – called data subjects in the regulation – have the right not to be subject to a decision based solely on automated processing.

Today’s machine learning and deep learning are really about risk control, which improves as larger data sets are fed in over time. One example is neural networks used in areas such as image analysis, where employing people would be too expensive or not accurate enough.

The challenge with documents is that they can be easily forged to a high standard, with a vast number of variations according to the type of document and where and when it was issued. This is difficult for humans to keep up with but it’s an ideal job for deep learning technology, which can spot even tiny anomalies.
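
For a rough sense of the kind of model involved, here is a toy Python sketch using PyTorch: a small convolutional network that maps a document image to genuine/forged scores. The architecture, sizes and dummy inputs are assumptions for illustration only – production document checkers are far larger and trained on big labelled corpora of real and forged documents.

    import torch
    import torch.nn as nn

    class DocumentChecker(nn.Module):
        """Tiny stand-in for a document forgery classifier."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1),   # filters for local texture and edge detail
                nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1),  # combine into larger-scale features
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),                      # pool to one value per filter
            )
            self.classifier = nn.Linear(32, 2)                # two classes: genuine, forged

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    model = DocumentChecker()
    dummy_scans = torch.randn(4, 3, 128, 128)   # four random 128x128 RGB "scans"
    print(model(dummy_scans).softmax(dim=1))    # untrained, so the scores are meaningless

The appeal of the convolutional layers is that they pick up fine-grained local detail – the kind of tiny anomaly a forged hologram or typeface introduces – without anyone having to specify in advance what to look for.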

“AI has come on leaps and bounds and can do very complex learning such as recognising pictures,” says Whitehorn.

Another challenge is spotting suspicious patterns, for example noticing when a customer has made purchases in three countries within a short time frame. Machine learning simply measures the extent to which an event departs from what is defined as normal for that type of customer.
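
A minimal Python sketch of that idea, using scikit-learn’s IsolationForest on invented per-customer features, might look like the following – the feature choices and numbers are placeholders, but the principle is the one just described: score how far an event sits from the learned “normal”.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(1)

    # Hypothetical per-transaction features for one customer profile:
    # [amount in GBP, hours since last purchase, countries used in past 24h]
    normal = np.column_stack([
        rng.normal(40, 15, 300),   # typical basket size
        rng.normal(48, 12, 300),   # typical gap between purchases
        np.ones(300),              # almost always a single country
    ])

    detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

    # A large purchase made soon after the last one, across three countries
    suspicious = np.array([[350.0, 2.0, 3.0]])
    print(detector.predict(suspicious))   # -1: departs from "normal" for this profile
    print(detector.predict(normal[:3]))   # 1: consistent with the profile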

Humans still needed

It may seem counterintuitive, but despite the drive to automate detection, humans often remain the best able to spot fraud.

PwC’s Global Economic Crime Survey 2018 found that 22 per cent of fraud was picked up by humans at some point in the transaction verification process. This was ahead of whistleblowing and internal tip-offs on 13 per cent, internal audits on 8 per cent, and accidental discovery, also on 8 per cent.

PwC concluded: “The percentage of frauds detected by technology has decreased since our last survey [in 2016], especially in the key areas of suspicious transaction monitoring and data analytics.”

This points to the limits of machine learning and the growing volume of work needed to keep the algorithms ticking over. Model builders rarely anticipate changes in how fraud works: they take five years’ worth of data and produce a model from it, without accounting for how the patterns will shift.

The biggest limitation on the evolution of machine learning and deep learning is a shortage of people who know how to make the technology work. This might be an argument for wrapping it into a service that allows organisations to implement AI without having to start from scratch.

As more industries look to machine and deep learning, the skills shortage will only become more acute. A company offering machine learning, deep learning and possibly human verification looks good on paper but what counts beyond the brochure?

“It’s not just about pushing the data through an algorithm and it works. How do you distinguish between one company and another? On track record and who they have working for them,” says Whitehorn.
