At the end of September, the Financial Conduct Authority (FCA) and Payment Systems Regulator (PSR) held their Authorised Push Payment (APP) Fraud TechSprint at the FCA offices in London. Featurespacers joined the one hundred participants in a hackathon: we were all keen to discuss our experiences and ideas that would showcase potential solutions for tackling APP fraud in the UK. 

Collaborating to tackle authorised fraud in the UK 

Collaboration was the major theme of the event, with the hackathon structure including roles for all the experts needed to define, design, develop, and launch a solution to market. The one hundred participants covered a diverse range of viewpoints, with representatives from large banks, smaller financial institutions, fintechs, machine learning start-ups, consultancy firms, fraud technology suppliers, regulators, and industry groups. Each participant had stated their skills and experience when applying for the TechSprint, which meant we could be allocated into teams with a diverse knowledge base and skillset, each with a defined role in the team. There were visionaries and experts who were expected to lead and help shape the solution, as well as more technical roles for developers, and roles for subject matter experts like myself. Each team's progress was kept on track by the equivalent of a scrum master, and a marketing expert helped develop the messaging for the solution's value proposition. After the initial introduction and briefing, we separated into twelve allocated teams to discuss the challenge of creating innovative solutions and proofs of concept to combat APP fraud.  

Real time requirements for APP fraud prevention  

The importance of real-time capabilities was central to the TechSprint, appearing as a core requirement in the brief for participants. The teams were given slightly different flavours of the challenge, with three use cases prioritised by the FCA: 

  1. Use Case 1: Real-time APP fraud prevention using new and existing technologies: what are the barriers and limitations to current (real-time) APP fraud prevention technologies and processes, and how might regulated firms be encouraged to improve and adopt them? 
  2. Use Case 2: Enhanced data sharing: how can financial services firms and multiple sectors (including social networks) share data and relevant analytics, securely and in real time, to spot and prevent APP fraud? 
  3. Use Case 3: Spotting fraud at source: how can fraud signals be communicated to those in the payment chain, including PSPs, so that they can take affirmative action and protect consumers? 

See Featurespace’s contributions to the ideas presented in the Demo Showcase 

PSR regulatory update: receiving banks and mule accounts  

Kate Fitzgerald, Head of Policy at the PSR, provided some context for participants on the (now published) PSR consultation on reimbursement of Authorised Push Payment scams. She stated that there is a desire within the PSR to do more within the rules of the payment systems to leverage readily available data for combatting APP fraud. The regulation is designed with innovation and competition in mind, given the significant innovation we have seen from scammers.  

Until now, legislative hurdles have meant that victims bear the liability for APP losses. These regulatory constraints have in part given rise to the level of sophistication and sheer scale of social engineering we see directed at consumers. Although the voluntary Contingent Reimbursement Model (CRM) Code spurred a massive step forward for the industry, still only around 50% of APP losses are reimbursed to victims, and, according to the PSR, too much liability sits with the sending bank in a fraudulent transaction. Newer PSR guidance begins to shift some liability to receiving banks, with a focus on reducing the impact of mule accounts. 

The resounding message from the PSR was that the focus should be on stopping fraud and scams from happening in order to reduce the impact of liability and losses. In fact, Fitzgerald went on to say that she hoped new regulations would encourage increased investment in fraud prevention from banks and financial institutions. This was a lesson learnt from the broader New Payments Architecture (NPA) work, which highlighted the need to embed fraud and data innovations at the payment system level. 

Download Scams: The Complete Guide 

The impact of CRM code changes for PSPs 

Many banks may feel that they are already refunding victims, and the record losses borne by banks bear that out, but the figures as a whole suggest that refund rates will have to increase. It does not seem likely that more than 50% of claims are first-party fraud, or claims where the customer was grossly negligent. Proving first-party fraud or evidencing gross negligence is particularly challenging, and the reality is that the organised criminal networks conducting scams are doing so at a growing rate.  

Although many Payment Service Providers (PSPs) are doing as much as they can to stop fraud, regulation that increases the rate of refunds could improve the business case from both a return on investment (ROI) perspective for FIs and a regulatory point of view. Even banks doing comparatively better at protecting their customers may fear the potential liability changes, with losses split equally between sending and receiving banks. Together, these changes will have large implications for all FIs, but they could be even larger for PSPs whose current refund rates are under the industry average (at 41%) and who are suspected to host a larger volume of mule accounts than traditional banks, in some cases exceeding the volume of genuine victims. Assuming this and future consultations do not significantly change the approach, PSPs will need processes to protect and refund victims, prevent mule cash-out, and pay victims' PSPs back the allocated loss, all whilst considering any indemnity funds. 

Naming and shaming in the fight against fraud

The PSR states in the consultation that: 

“There is substantial variation between PSPs’ current APP scam rates. Based on data for a number of large PSPs, the number of APP scams per million payments sent was seven times higher in the worst-performing than in the best-performing PSP. Some PSPs received more than ten times the rate of APP scam payments than others.” 

Statements like this give insight into where we are heading, with specific figures published to both encourage and force FIs into action. We may get to the point where it is reported that specific banks’ customers are more likely to be scammed due to poor fraud controls, or that specific FIs provide accounts to more scammers than their competitors. The potential reputational advantage or damage could be a third element that feeds investment plans to tackle APP in coming years. 

Finding the solutions to scams 

One of the wonderful things about fraud professionals is that although there was interest at the TechSprint in discussing the potential changes coming, everyone focused on finding the solution rather than wasting time challenging the question. Industry rivalries were put aside to find a better way to protect people from fraud. All ideas were shared and debated openly and fairly, with experts on hand to suggest where technology or legislation might be a barrier to a solution. 

Machine learning for APP fraud prevention

Many of the ideas centred on machine learning, which should surprise no one: the majority of the UK industry has already moved away from rules-only systems and is now looking to maximise performance with tree-based models or even deep learning. Improving detection performance is potentially the easiest and most obvious way to stop more fraud, and has already been shown to work for some larger banks that have invested in new machine learning capabilities.  
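As a purely illustrative sketch (every feature name, threshold, and weight below is invented, not any bank's or Featurespace's model), the step up from static rules to a tree-based score can be pictured as an ensemble of simple decision stumps combined into a single calibrated risk score:

```python
import math

def stump(feature, threshold, weight):
    # Each stump contributes `weight` to the raw score when the
    # behavioural feature exceeds its threshold.
    return lambda tx: weight if tx.get(feature, 0.0) > threshold else 0.0

# Hypothetical behavioural signals that a rules-only system would
# struggle to weigh against one another.
STUMPS = [
    stump("amount_vs_30d_avg_ratio", 5.0, 1.5),  # payment far above customer norm
    stump("is_new_payee", 0.5, 1.0),             # first ever payment to this payee
    stump("session_duration_zscore", 2.0, 0.8),  # abnormally long, coached session
]

def score(tx):
    raw = sum(s(tx) for s in STUMPS) - 2.0       # bias keeps typical payments low-risk
    return 1.0 / (1.0 + math.exp(-raw))          # logistic squash to a 0..1 score
```

A real deployment would learn thousands of such splits from labelled data (gradient boosting or similar) rather than hand-picking three, but the shape of the decision, many weak behavioural signals combined into one score, is the same.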

The simple extension to this is to score funds received into an account, which aligns with the PSR proposal of shifting liability and responsibility for APP fraud prevention to both the sending and receiving FI. This extension will need changes or new data connections to be set up, but once real-time scoring requests are flowing, profiling and modelling them should be effective at picking out suspicious receivers, whether they are accounts specifically set up as mules or genuine victims of scams. There was also an assumption that other entities in the payment chain, such as telcos, social networks, and Big Tech, would be able to use machine learning to identify potential victims and mules. This seems feasible and has been proven to some extent with telcos, but more data sharing is needed to progress it further, which is partly why data sharing was another area of focus. 
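One way to picture receiver-side profiling: keep a rolling window of inbound payments per account and flag high fan-in from many distinct senders, a classic mule cash-out pattern. The window length and threshold below are assumptions for illustration, not values from any real system:

```python
from collections import defaultdict, deque

WINDOW_SECS = 3600          # look-back window (assumed: one hour)
FAN_IN_THRESHOLD = 5        # distinct senders that triggers review (hypothetical)

class ReceiverProfiler:
    """Profiles inbound payments per receiving account in real time."""

    def __init__(self):
        self.inbound = defaultdict(deque)   # account -> deque of (ts, sender)

    def on_payment(self, ts, sender, receiver):
        q = self.inbound[receiver]
        q.append((ts, sender))
        # Evict events that have fallen outside the rolling window.
        while q and q[0][0] < ts - WINDOW_SECS:
            q.popleft()
        distinct_senders = len({s for _, s in q})
        # Many unrelated senders funnelling into one account in a short
        # period is a strong mule signal worth holding for review.
        return distinct_senders >= FAN_IN_THRESHOLD
```

A production system would profile many more behaviours than fan-in and would score rather than hard-threshold, but this shows why the receiving FI needs the scoring request in real time: the pattern only exists in the stream of inbound payments.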

Data sharing to stop scams 

There is a clear belief in the industry that there is enough data available across the payment chain to detect scams; the data just needs to be shared with the relevant entity so it can be used in their machine learning approach and acted on in real time. Many teams rightly pointed out that the starting point for data sharing is fraud data, so that everyone can at least react quickly to prevent victims and mules appearing across multiple fraudulent transactions. This data can then be used for analysis and machine learning to find out which data is useful, and to share it, or even share a score or indicator.  

There are concerns with both of these approaches. Privacy and data protection laws mean that data that is not relevant or useful for fraud detection should not be shared, and even where it is useful, restrictions vary across sectors. Sharing scores or indicators, on the other hand, requires reliance on and trust in the provider: the scores or indicators must be consistent and reliable, and cannot stop or change without impacting the consumers of that data. One question that does not yet have a clear answer is whether the machine learning and data sharing are better performed in a centralised or decentralised environment. Centralised or consortium learning may seem more efficient, but it raises more challenges about how data is stored and used. 
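A minimal sketch of the fraud-data-sharing starting point described above, assuming consortium members agree to exchange pseudonymised identifiers of confirmed mule accounts (the hashing scheme and salt here are invented for illustration):

```python
import hashlib

def hashed_id(account_number, sort_code, salt="shared-consortium-salt"):
    # Pseudonymise the account identifier before it leaves the FI.
    # The salt/keying arrangement would have to be agreed across members;
    # this particular scheme is an assumption, not a real standard.
    raw = f"{sort_code}:{account_number}:{salt}".encode()
    return hashlib.sha256(raw).hexdigest()

class SharedFraudIndicators:
    """Minimal sketch of a shared store of confirmed-fraud account indicators."""

    def __init__(self):
        self._flagged = set()

    def report(self, account_number, sort_code):
        # A member FI reports a confirmed mule account to the consortium.
        self._flagged.add(hashed_id(account_number, sort_code))

    def check(self, account_number, sort_code):
        # Real-time lookup before releasing a payment to the receiving account.
        return hashed_id(account_number, sort_code) in self._flagged
```

Note that a salted hash of a low-entropy identifier like a sort code and account number is only a weak pseudonym; a real scheme would need keyed hashing or stronger privacy techniques, which is exactly where the sector-by-sector restrictions mentioned above start to bite.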

Insights and innovation approaches 

There were ideas that were pursued by single teams, as they pushed further from the standard approaches. These ideas included: 

Combatting crypto crime 

Investment and cryptocurrency providers could use open banking processes to confirm that new customers are funding the investment from their own accounts, have control over the accounts involved, and understand the process. 

The question remains: could this be expanded to more account opening processes and how would we stop or phase out the old processes for funding new accounts? 

Intervening in social engineering 

Using machine learning to detect scam types and challenge customers with warnings and advice earlier in the payment journey, reducing a scammer's ability to socially engineer a victim.  

We would still need to prove whether machine learning can identify the most effective challenges for each scam type and potential victim.  

Checks on inbound payments 

Receiving account holders could be challenged to input information about the source of funds, providing additional information to check and behaviours to profile, thus improving the chance of picking out anomalies and fraud trends.  

Could more interaction and data gathering when sending the payment enable us to gather stronger signals of scam behaviour?

Outcomes of the FCA TechSprint on APP fraud prevention 

In less than three days, teams had been formed, skills assessed, discussions held and ideas developed, then presented and reviewed. The event and collaboration reinforced ideas I have developed during my years in the industry, and helped me gauge industry support for potential solutions. I was pleased to find that not only were those ideas feasible, but there is appetite to build them and deploy them to protect customers. I would definitely recommend attending an FCA TechSprint to others in the industry who like to share ideas and solve problems, and I will be keeping an eye out for the next fraud-related one. 

With this TechSprint confirming that we are world-leading in our ideas to prevent scams, and that machine learning is central to preventing scams in the future, my Featurespace colleagues and I are excited to continue collaborating with the industry to deliver improvements and, hopefully, some of the ideas developed at the TechSprint. If you would like to hear more about our achievements in improving scam detection for banks, PSPs, and other FIs, or would like to work with us on developing a new idea to improve scam detection, please get in contact and we can work together to reduce scam fraud and lead the rest of the world in scam detection. 

Outsmart scammers with ARIC Risk Hub. Download the fact sheet to discover why ARIC Risk Hub is your ideal partner in the fight against rising scams.