Every time you talk to a financial crime manager in Australia, they seem to be hiring.

Scams are a hot topic in Australia, just like everywhere else in the world. When a person gets scammed for tens of thousands of dollars, it elicits a sympathetic response from just about everyone. These are tough stories to hear, they make for good headlines, and so banks are compelled to keep hiring people to fight rising scams and protect their customers.

Are more people enough? Prevention, detection, and response strategies need to evolve to address scams. Whilst banks have made good headway by leveraging existing fraud strategies and processes in tackling scams, it is becoming clear to all that scams need a distinctive approach.

Banks have been working on developing fit-for-purpose frameworks, strategies, and processes for scams. It’s fair to say, however, that it has been challenging, and progress is not where everyone would like it to be.

That’s about to change.

Below, Featurespace Subject Matter Expert Sasha Slevec considers how augmented analytics — and anomaly detection, in particular — has given banks the tools they need to supercharge scam detection.

The key terms to understand

Regulators and customer advocacy bodies don’t like to draw a distinction between scams and fraud, but for the purposes of this article it makes sense to do so:

  • Fraud refers to a transaction the customer did not authorize – a criminal moves the money without the customer’s knowledge.
  • A scam is a payment the customer does authorize, but only because a criminal has deceived them into making it.

Historically, this distinction was easier to understand. A generation ago, if a customer wrote a paper cheque to a scammer, it was understood that the victim’s bank would not be held liable – the customer authorized the bank to pay the cheque by way of their signature. Transactions and interactions with scammers have now moved into the digital space and it’s become harder to establish when a bank should compensate scam victims.

It’s also important to draw a distinction between first-party fraud and third-party fraud:

  • First-party fraud is when the bank’s customer – the person who opened the account and interacts with the bank – is the person committing the fraud.
  • Third-party fraud is when someone misrepresents themselves as the bank’s customer when interacting with the bank. That third party is the person committing the fraud.

These distinctions help fraud teams understand when and how to intervene in a case of suspected fraud.

But where do scams fit into these definitions, when it’s the customer interacting with the bank (first party) because the scammer (third party) has deceived them? In short, they don’t fit, which is one of the reasons scam cases are proving difficult for banks to manage.

The best opportunity to halt a scam in its tracks

In general, there are three moments that matter when a bank’s intervention can stop a scam:

  • Before the scam occurs. Banks spend considerable resources to educate customers on how to recognize scams because prevention is always better than the cure.
  • As the scam is unfolding. At this point, detecting that a customer is in a scam is the priority.
  • Before money leaves the account. Once the bank knows a customer is a victim of a scam, taking decisive action is the only viable response.

That means detection and education are the keys to preventing both fraud and scams.

Before the scam occurs. Education is effective. As a fraud prevention measure, banks have long educated their customers not to click on links in unsolicited emails. The next step is to make customers just as aware that, when it comes to scams, they should ignore social media accounts promoting suspicious investment opportunities.

We know ‘Don’t click the link’ campaigns are effective. There are enough surveys, enough anecdotal evidence, and enough stories online to show that people have learned not to click the link, and that is down to the education campaigns. Of course, some still do click.

The same notion applies to scam education. There is enough evidence to indicate that the majority of people can spot a scam.  Of course, even the most astute can sometimes fall victim.

What we don’t know is just how effective that education is. Banks have some knowledge of how many times customers click on phishing links or fall victim to a scam, but there’s no data to show how often customers ignored those links or walked away from a scam. Very few customers phone in to say, “I almost clicked on that suspicious link” or “I almost fell for that scam”. But we know education does have a positive effect, so we must continue.

As the scam is unfolding. Once the customer is in a scam, research, case studies, and anecdotal evidence all support the conclusion that education is no longer effective. The evidence shows that once a customer is lured into a scam, no amount of exposure to education cuts through to alter the customer’s behavior. At this point, the only effective action is for the bank to become aware that their customer is in a scam, and that means effective scam detection.

At the moment, banks are able to detect some proportion of scams using tried and tested fraud detection techniques. Unfortunately, a lot of scams still go undetected by traditional fraud monitoring systems. Compounding the issue, customers caught up in a scam rarely alert their bank to any interaction with the scammer.

That gap in capability has made it difficult to write effective detection rules, or to train older machine learning algorithms to spot novel scams, because both rely solely on known fraud scenarios. Traditional methods and older systems only know as much as we know; they depend heavily on an understanding of known frauds and scams. As the saying goes, ‘You have to have a fraud to find a fraud.’ Rules-based systems and historic model approaches rely predominantly on already knowing the types of fraud and scams in play.
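To make that limitation concrete, here is a minimal, hypothetical sketch of a rules-based check. The rule names, thresholds, and transaction fields are illustrative assumptions, not any bank’s actual configuration; the point is simply that a rule engine can only flag patterns someone has already written down.

```python
# Minimal illustration of a rules-based check: it can only flag
# scenarios that have already been encoded as rules.
# All rule names, thresholds, and transaction fields are hypothetical.

KNOWN_SCENARIO_RULES = [
    # Each rule encodes a fraud or scam pattern we have seen before.
    ("large_payment_to_new_payee",
     lambda tx: tx["amount"] > 10_000 and tx["payee_is_new"]),
    ("rapid_card_testing",
     lambda tx: tx["amount"] < 1 and tx["declines_last_hour"] >= 3),
]

def score_transaction(tx: dict) -> list[str]:
    """Return the names of any known-scenario rules the transaction trips."""
    return [name for name, rule in KNOWN_SCENARIO_RULES if rule(tx)]

# A scam payment of $4,500 to a payee the customer added weeks ago
# trips no rule, because nobody has written a rule for that pattern yet.
tx = {"amount": 4_500, "payee_is_new": False, "declines_last_hour": 0}
print(score_transaction(tx))  # -> []
```

The scam in the example sails through not because the engine is broken, but because the pattern was never enumerated in advance.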

Modern machine learning algorithms, including Featurespace’s Adaptive Behavioral Analytics and Automated Deep Behavioral Networks, are able to learn and make connections that we humans cannot spot. This is the breakthrough, once a pipe dream, that financial crime teams in banks have been waiting for.

Before money leaves the account. Once a scam is detected, the evidence shows that the only effective response is action. There are too many examples, too many case studies, of customers caught in scams who do not respond to attempts to inform and educate them about the high risk that they are in a scam.

There appears to be only one effective response at this point in the scam life cycle: decisive action. For example, this might include a bank declining to process a transaction it believes to be part of a scam. This is, however, difficult in practice, as there is still an untested area between protecting customers from themselves and preventing them from accessing funds that are legally theirs.

Before action can be taken, the bank needs to know about the scam in the first place. That leaves detection as the hinge point for everything that follows.

How anomaly detection unlocks new capabilities

Most banks today have rules-based fraud-prevention systems, and many combine these with models based on known scenarios.

These dual technical capabilities are typically quite effective at preventing third-party fraud. That’s not to say uncovering anomalies doesn’t elevate third-party fraud detection, because it does; it’s just that the effectiveness ceiling banks can reach with traditional methods is already quite high. With first-party fraud and scams, technology is the fundamental challenge, and this is where augmented analytics can make a huge difference.

Featurespace’s ARIC™ Risk Hub uses proprietary machine learning inventions, Adaptive Behavioral Analytics and Automated Deep Behavioral Networks, that are capable of uncovering anomalies in real time. Understanding behavior and making predictions from the millions of computations this requires is beyond traditional technical capabilities.

Using customer account behavioral data, the models learn how a banking customer manages their finances. The models are capable of making behavioral connections and can then uncover new risk signals in real time.
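As a rough illustration of the idea only (and emphatically not Featurespace’s proprietary models), the sketch below builds a simple per-customer behavioral profile from past payments and scores a new payment by how far it deviates from that customer’s own norm. The field names, features, and scoring formula are assumptions chosen for the example.

```python
# A generic per-customer behavioral anomaly score: how unusual is this
# payment *for this customer*? Illustrative sketch only; not
# Featurespace's Adaptive Behavioral Analytics.
from dataclasses import dataclass, field
from statistics import mean, pstdev

@dataclass
class CustomerProfile:
    amounts: list[float] = field(default_factory=list)  # past payment amounts
    payees: set[str] = field(default_factory=set)        # payees seen before
    hours: list[int] = field(default_factory=list)       # hours of day payments were made

    def update(self, amount: float, payee: str, hour: int) -> None:
        self.amounts.append(amount)
        self.payees.add(payee)
        self.hours.append(hour)

    def anomaly_score(self, amount: float, payee: str, hour: int) -> float:
        """Higher score = further from this customer's usual behavior."""
        if len(self.amounts) < 5:
            return 0.0  # not enough history to judge
        mu, sigma = mean(self.amounts), pstdev(self.amounts) or 1.0
        amount_z = abs(amount - mu) / sigma                   # unusual size?
        new_payee = 1.0 if payee not in self.payees else 0.0  # unusual recipient?
        odd_hour = 1.0 if hour not in self.hours else 0.0     # unusual time of day?
        return amount_z + 2.0 * new_payee + odd_hour

# A customer who normally pays around $80 to familiar payees suddenly sends
# $9,000 to a brand-new payee at 2am: the score jumps even though no
# known-fraud rule exists for this exact pattern.
profile = CustomerProfile()
for amt, payee, hr in [(75, "grocer", 18), (82, "grocer", 19), (90, "utility", 10),
                       (70, "grocer", 18), (85, "utility", 11)]:
    profile.update(amt, payee, hr)
print(round(profile.anomaly_score(9_000, "unknown-crypto-exchange", 2), 1))
```

The design point is that the baseline is the customer’s own behavior, not a library of known fraud scenarios, which is why this style of detection can surface scams nobody has described yet.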

Next steps: Bringing those capabilities to Anti-Money Laundering (AML)

For AML teams and converged FinCrime-fighting teams, anomaly detection brings a similar benefit.

From the bank’s perspective, money laundering, first-party fraud and scams have a common characteristic – that is, the customer is transacting on the account.

If a bank can apply anomaly detection and behavioral data to detect first-party fraud as well as scams, then it can do the same to detect attempts at laundering money.
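The same idea carries over with different behavioral features. As a hypothetical sketch (the feature choice and weighting are assumptions for the example, not an AML rulebook), a profile of how money normally moves through an account can surface laundering-style anomalies such as funds passing straight through:

```python
# Illustrative only: scoring one AML-flavored behavior, "pass-through"
# activity, against a customer's own baseline.
from statistics import mean

def pass_through_ratio(credits: float, debits_within_24h: float) -> float:
    """Share of incoming funds moved back out within a day."""
    return 0.0 if credits == 0 else min(debits_within_24h / credits, 1.0)

def aml_anomaly_score(history: list[float], today_ratio: float) -> float:
    """How far today's pass-through behavior sits above this customer's norm."""
    baseline = mean(history) if history else 0.0
    return max(today_ratio - baseline, 0.0)

# A customer who usually keeps incoming funds suddenly forwards 95% of a
# deposit within hours: the score spikes without any named typology rule.
history = [0.05, 0.0, 0.1, 0.02]  # usual daily pass-through ratios
print(round(aml_anomaly_score(history, pass_through_ratio(20_000, 19_000)), 2))  # ~0.91
```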

For Australian AML professionals who spend all day sorting through false positives, a tool like ARIC Risk Hub significantly eases that burden.

In March 2022, Interim Head of Financial Crime Annegret Funke spoke with Mark Gregory, a director in PwC’s forensic practice, about how banks in Europe and the UK can navigate AML compliance with machine learning. Their webinar touches on similar points to those above.

I recommend AML professionals in Australia check it out. You can find that webinar here.

Learn more

To learn more about ARIC™ Risk Hub, book a demo and see how we can help protect your organization.