Over the last five years, we as an industry have talked a lot about the increasing prevalence of newer fraud typologies: social engineering, scams and Authorised Push Payment (APP) fraud. But has this been at the expense of combatting what is still a massive problem: fraudsters compromising details and/or accounts and attempting transactions that the customer has no knowledge of, never mind authorised; in other words, an unauthorised transaction?
Since the introduction of the Contingent Reimbursement Model (CRM) Code in the UK in May 2019, awareness and discussion of APP fraud have increased further. This isn’t surprising: according to UK Finance data, its member banks went from 34,000 cases of APP fraud in the first half of 2018 to over 100,000 cases in the first half of 2021, and this brought with it an extra cost of refunding £120m to customers making these claims.
The chatter has been so loud about scams that you would be forgiven for thinking that traditional unauthorised transaction fraud has gone away. But are fraudsters spending all their time trying to trick victims? APP fraud only accounted for 44% of UK fraud losses in 2021. If we look at fraud case volumes, APP accounts for just 7%, as card fraud still dominates, making up 91% of cases. In fact, the UK is the ‘card fraud capital of Europe’ according to recent analysis of the European Central Bank’s Statistical Data Warehouse.
We also shouldn’t forget about unauthorised remote banking fraud, where fraudsters compromise accounts and take funds with little or no customer involvement in the payment. This continues to rise, particularly in the internet channel, where volumes increased 30% in 2021 compared to 2020, a steeper rise than the 27% increase in APP cases over the same period.
So, in summary, card fraud continues to drive the biggest costs in fraud management, due to an incredibly high volume of fraudulent cases and losses that are comparable to APP. Unauthorised remote banking fraud also cannot be ignored, with losses that continue to increase and volumes that have grown 178% in three years.
Why are volumes of fraud so high, and increasing?
Bots & malware
Just as fintech has continued to grow and develop during the pandemic, so has fraudster tech. In 2021 there was a shift in the attacks reported, and this continues to grow, with bot traffic increasing by a further 25% compared to the fourth quarter of 2021. These bot attacks often took the form of basic brute force attacks utilising compromised credentials and card details from data breaches, phishing or Magecart (digital skimming) attacks. The introduction of the Payment Services Directive 2 (PSD2) and Strong Customer Authentication (SCA) is now expected to reverse this trend and push the tech-savvy fraudster away from brute force and towards the stealthier approach of malware.
New malware strains such as Teabot and Sharkbot were also seen targeting banks in 2021, and the introduction of SCA will see new strains continue to appear in attempts to circumvent the control. Teabot has continued its growth in 2022: Cleafy Labs states that its use has grown 500% in a year and that it has been used in more than 400 separate targeted attacks across the globe, including the United States, where Robokiller reported a more than five-fold increase between April 2021 and January 2022.
The defence against Teabot and Sharkbot doesn’t initially seem simple: they are difficult to detect, they have made their way into the Google Play Store and been downloaded by unsuspecting customers, and they have the ability to trigger further downloads. We can advise people to be wary of new apps with limited reviews, but the malware writers are getting good at hiding the malicious elements and downloading them at a later point, which may make these threats even harder to defend against in the future.
Once downloaded onto a customer’s phone, these banking trojans can log in to banking apps and make payments to mule accounts without any interaction from the victim. This initially sounds hard to detect, with payments appearing to come from existing users and trusted devices, but with the right level of behavioural data these payments can be caught. For now, the malware doesn’t behave like the customer: it might carry out its activity on different days or at different times of day, and it probably navigates around the app or browser differently to the victim, or indeed to any real human customer. Behavioural data fed into a good fraud detection engine should enable this fraud to be prevented with minimal false positives, as the engine can detect anomalies compared to both the population and the individual. Malware will continue to improve and look more like the customer, so we need to keep pushing the boundaries of profiling and detection to stay a step ahead.
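To make that concrete, here is a minimal sketch in Python (not a representation of any specific vendor’s engine) of how simple behavioural features such as time of day and navigation speed can be compared against a customer’s own history to flag a malware-driven payment. The feature names, history format and thresholds are illustrative assumptions.

```python
# Illustrative sketch: score a payment session against the customer's own
# behavioural history. Feature names and values are assumptions for the example.
from dataclasses import dataclass
from statistics import mean, pstdev

@dataclass
class Session:
    hour_of_day: int           # when the session started
    seconds_to_payment: float  # how quickly the payment screen was reached
    screens_visited: int       # breadth of navigation before paying

def anomaly_score(history: list[Session], current: Session) -> float:
    """Rough z-score-style distance from this customer's history; higher = less like them."""
    def z(values, x):
        mu, sigma = mean(values), pstdev(values) or 1.0  # guard against zero spread
        return abs(x - mu) / sigma

    return (
        z([s.hour_of_day for s in history], current.hour_of_day)
        + z([s.seconds_to_payment for s in history], current.seconds_to_payment)
        + z([s.screens_visited for s in history], current.screens_visited)
    )

# Genuine customer: pays in the evening after browsing a few screens.
history = [Session(20, 95.0, 6), Session(21, 110.0, 7), Session(19, 80.0, 5)]

# Malware-driven session: 3 a.m., straight to the payment screen.
suspect = Session(3, 4.0, 1)

print(round(anomaly_score(history, suspect), 1))  # large score -> refer for review
```

In a real deployment the history would cover many more sessions and far richer features, and the score would feed a trained model rather than a hand-set threshold, but the principle of judging a payment against the individual’s own behaviour is the same.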
Phishing, smishing and vishing
Basic phishing, smishing and vishing also continue to drive a high volume of fraud attacks, particularly in a pandemic world where fraudsters have looked to exploit the increase in online communication. We’ve all heard a lot about delivery scams, with the BBC reporting them as “the most common con-trick”, but the focus is often on the complex APP-related elements rather than the more common attempts to obtain card details, account details and/or credentials in order to make fraudulent purchases or take over and clear out bank accounts. If they can’t get you one way, they will try others, like threatening your ability to binge-watch streaming services.
If You Get This Text Message From Netflix, Delete It, FBI Warns (bestlifeonline.com)
This Netflix attack was essentially phishing and again, we can look to customers to be aware and protect themselves, but this won’t work for everyone as some people click before thinking. Can we rely on our operating systems to protect us? Google, Apple and Microsoft are doing more to detect malicious emails and calls, but will they catch up or is it on others to provide specific services to protect their customers? Many banks and Internet Service Providers offer an extra level of defence with security software that includes application hardening and smishing and phishing protection. Should this become an industry standard?
Learn more about bot attacks in our A-Z of Fraud Innovations.
What should financial institutions be doing to regain the upper hand in the battle with fraudsters? Can technology save us?
For institutions that have sufficient funding, improvements can be made across all fronts, but where funding is more limited, the focus may need to be on efficiency in order to develop a virtuous cycle of improvement. For example, with incredibly high volumes of fraud, if those claims can be dealt with in a remote channel, then resource costs can be reinvested. Many victims appreciate a system where they can go into their banking app to question transactions, get responses, get refunded and get advice, all without having to struggle through telephone security or wait in a queue. For the victims who want to speak to a human, maybe the app can give them call wait times or offer video call appointments. Maybe all of this is rolled up into an efficient chatbot?
For many years, institutions have followed the best practice of having layers of fraud defence. This has been a simple approach of prioritising a problem or a strategic solution and then delivering that layer. For example, 10 to 15 years ago, malware and malware detection layers made a strong case for investment; focus then moved to identifying customers or their devices, with voice, phone, device and behavioural solutions. With the resurgence of phishing and smishing, and the introduction of PSD2, the best way to tackle fraud is to link the layers together and coordinate them through a single system that is designed to deal with large amounts of data and can be optimised through machine learning.
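As a rough illustration of what linking the layers can mean in practice, the sketch below (in Python, with invented signal names and hand-set weights rather than a trained model) blends per-layer scores into a single risk decision, instead of letting each layer block or pass a payment in isolation.

```python
# Illustrative only: the signal names and weights are assumptions, and in a
# real system the weighting would come from a trained model, not hand tuning.
LAYER_WEIGHTS = {
    "malware_indicator": 0.40,      # output of the malware detection layer
    "new_device": 0.20,             # device identification layer
    "behaviour_anomaly": 0.25,      # behavioural profiling layer
    "new_payee_high_value": 0.15,   # payment risk layer
}

def combined_risk(signals: dict[str, float]) -> float:
    """Blend per-layer scores (each 0..1) into one 0..1 risk score."""
    return sum(LAYER_WEIGHTS[name] * signals.get(name, 0.0) for name in LAYER_WEIGHTS)

def decision(score: float) -> str:
    if score >= 0.7:
        return "block_and_contact"
    if score >= 0.4:
        return "step_up_authentication"
    return "allow"

signals = {"malware_indicator": 0.9, "new_device": 1.0, "behaviour_anomaly": 0.6}
score = combined_risk(signals)
print(round(score, 2), decision(score))  # 0.71 block_and_contact
```

The design point is that no single layer has to be perfect: a moderate behavioural anomaly plus a malware indicator that neither layer would act on alone can still add up to a confident decision.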
Fraud detection models continue to improve, with deep learning using Recurrent Neural Networks providing the latest uplift in performance. Utilising this new level of detection in a system that sits at the centre of the layers of defence could start to tip the balance away from the fraudsters. A conservative improvement of just 10% could be worth £80m or 300,000 fraud cases to the industry. What if your models or rules are advanced enough to pick up the fraud type being attempted, and even to suggest the most effective action to take against it? Now we are back to the virtuous cycle of reducing fraud losses and reinvesting in customer contact. If a phone number has recently been changed or a SIM swap is detected, then you don’t want to rely on an SMS; perhaps you can use the app, identity checks or biometrics to reduce the fraud risk while the customer self-serves and completes their payment. Again, could a chatbot fed by machine learning make this super effective and efficient?
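For readers who want to picture the recurrent approach mentioned above, here is a minimal sketch (in PyTorch, with illustrative feature counts and layer sizes, not a production architecture): a GRU reads a customer’s recent transaction events in order and outputs a fraud probability for the sequence.

```python
# Illustrative sketch of a recurrent fraud model; sizes and features are assumptions.
import torch
import torch.nn as nn

class SequenceFraudModel(nn.Module):
    def __init__(self, n_features: int = 8, hidden: int = 32):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, events: torch.Tensor) -> torch.Tensor:
        # events: (batch, sequence_length, n_features), e.g. amount, channel,
        # time since last transaction, new-payee flag, device-change flag...
        _, last_hidden = self.gru(events)                  # (1, batch, hidden)
        return torch.sigmoid(self.head(last_hidden[-1]))   # fraud probability per sequence

model = SequenceFraudModel()
batch = torch.randn(4, 10, 8)   # 4 customers, their last 10 events, 8 features each
print(model(batch).shape)       # torch.Size([4, 1])
```

The value of the recurrent structure is that the model scores each payment in the context of the events that preceded it, which is exactly where bot- and malware-driven activity tends to stand out from a genuine customer’s history.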
I will discuss efficiencies and the cost of fraud in a future article, but for now, let’s not miss an opportunity to improve detection and customer experience by making sure that we have a holistic view of fraud and make improvements wherever possible.
To discuss fraud trends and how Featurespace and ARIC™ Risk Hub can support those goals, get in touch or book a demo today.