Technology has a vital role to play as financial institutions fight back against a seemingly ever-increasing volume of fraudulent activity. While there are solutions that help to identify fraud, many offer little visibility into the underlying attributes behind their determinations. They often assign an application or an account a risk score, but what does that score actually mean? To truly understand and keep pace with evolving forms of fraud, we need to go beyond the black box and bring transparency to the risk-scoring process.

This was the topic for our latest webinar where our Founder and CEO, Greg Woolf, and Lead Data Scientist, Nilabh Ohol, were joined by DCU Fraud Investigator, Kelley Donnelly, for a wide-ranging discussion that touched upon:

  • Why the industry is calling out for transparent risk scoring
  • How transparent risk scoring works
  • Real-world examples of how transparent risk scoring can help to identify fraud

Why it’s vital to go beyond the black box and provide transparent risk scores

Woolf kicked off proceedings by thanking the unsung heroes of the industry – the fraud analysts who work tirelessly to fight back against the onslaught of attacks by bad actors. He went on to explain that it’s these analysts who are calling out for a greater understanding of what contributes to a risk score.

“Let's talk about the need for transparency,” said Woolf. “From what we've learned, the community of fraud analysts is really keen to understand how fraud is being perpetrated. There's an overwhelming volume of attacks. We know that fraudsters are so sophisticated and come up with these really innovative and smart kinds of hacks and scams. And what we've heard is that there's such a deep desire to share with the teams and institutions, and even across other institutions, an understanding of what is driving these attacks.”

This sentiment certainly chimed with Donnelly, who spends hours every day looking for the needle in the haystack – fraudulent accounts. The problem has only been exacerbated in recent years as the volume of fraudulent activity has skyrocketed. The key, she says, is using technology to identify bad actors, but also collaborating with other institutions to share information on fraud.

“Over the past couple of years, the volume [of fraudulent activity] has just gone out of control,” explained Donnelly. “We have people applying [for accounts] that are good people, and then you have the bad actors. Trying to find those bad actors is like a needle in a haystack. But, with the tools that we have, finding those needles is a little bit easier. And the tools allow us to show examples to our staff and other credit unions or banks, and build those relationships, meaning we're able to stop it not just at my financial institution, but other financial institutions as well.”

One of the major challenges in Donnelly’s role is that not only is the volume of fraudulent activity continuing to grow, but it’s also evolving. As soon as one form of fraud is identified and curtailed, fraudsters move on to a new scam. 

“As soon as we figure out something that they're doing, they're going to change their scheme one hundred percent, they're going to do a 180,” said Donnelly. “We saw an uptick in synthetic identity fraud a couple of years ago, and even last year, and once we were able to stop that synthetic fraud they then changed gears on us and went more towards identity theft. So, we always have to be on top of everything, and always looking at our different reports and the tools that we have to be able to stop this.”

How transparent risk scoring helps financial institutions to detect emerging forms of fraud

The need to identify emerging forms of fraud is one of the major catalysts for going beyond the black box that simply provides a risk score with no real insight into how the score has been determined. But it’s not enough to rely solely on technology. As Ohol explained, it’s about utilizing expert human knowledge to help train machines to better detect fraud.

“We're always playing catch up to new forms of fraud,” said Ohol. “And I think that's why we realized that a black box score will only get you so far. What we really want to do is understand the evolution of fraud. And, with the help of subject matter experts, in this case, fraud investigators, incorporate vast amounts of knowledge as a feedback loop to then generate variations of these evolving patterns and identities, and then present this back to the front-end investigators to help them identify newer types of activities.”
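As a rough illustration of that feedback loop, the sketch below retrains a simple classifier once investigators have confirmed or cleared a batch of flagged identities. The classifier, features, and labels here are illustrative assumptions made for this post, not FiVerity's actual pipeline.

```python
# Minimal sketch of an expert feedback loop: investigator verdicts on flagged
# identities are appended to the training set before the model is refit.
# All features and labels below are toy values for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

def retrain_with_feedback(model, X_train, y_train, flagged_cases, analyst_verdicts):
    """Append analyst-confirmed labels to the training data and refit the model."""
    X_new = np.vstack([X_train, flagged_cases])
    y_new = np.concatenate([y_train, analyst_verdicts])  # 1 = confirmed fraud, 0 = cleared
    model.fit(X_new, y_new)
    return model

# Toy historical data: two attributes per identity (e.g. address churn, credit age)
X_train = np.array([[1.0, 8.0], [5.0, 0.5], [0.0, 12.0]])
y_train = np.array([0, 1, 0])
model = LogisticRegression().fit(X_train, y_train)

# Investigators review two newly flagged identities and return their verdicts
flagged = np.array([[4.0, 0.3], [3.0, 9.0]])
verdicts = np.array([1, 0])  # first is synthetic fraud, second is legitimate
model = retrain_with_feedback(model, X_train, y_train, flagged, verdicts)
```

The point is the loop rather than the particular model: each investigator verdict becomes a labeled example that the next version of the model learns from.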

While it’s a panel of subject matter experts who ultimately help to refine and improve the machine learning models that detect fraud, Ohol and his team of data scientists had to ask themselves a number of questions when defining the FiVerity approach to building a transparent risk score.

“How do you provide a risk score to an identity?” said Ohol. “But not only that, how do you provide indicators and attributes that are contributing towards a score? If an identity is 90% likely to be fraud, what are the attributes that suggest that it's more likely to be fraud? And on top of that, what are the attributes that bring the score down? Transparent scoring helps eliminate a lot of biases. And that could be a subconscious bias or a bias because we have been looking at the data over and over again with similar signals. It [transparent scoring] contributes to a more holistic and more informed decision-making process.”
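To make the idea concrete, here is a minimal sketch of what a transparent risk score can look like: a simple logistic model that reports each attribute's signed contribution alongside the overall probability. The attribute names, weights, and example identity are hypothetical, chosen only to show the mechanics.

```python
# Minimal sketch of a transparent risk score: every attribute's contribution
# to the final probability is surfaced rather than hidden in a black box.
# Weights and attribute names are hypothetical, not FiVerity's actual model.
import math

WEIGHTS = {
    "ssn_dob_mismatch": 2.1,              # raises risk
    "address_count_last_12_months": 0.4,  # raises risk
    "years_of_credit_history": -0.6,      # lowers risk
    "verified_employer_match": -1.3,      # lowers risk
}
BIAS = -2.0

def score_identity(attributes: dict) -> tuple:
    """Return the risk probability plus each attribute's signed contribution."""
    contributions = {name: WEIGHTS[name] * value for name, value in attributes.items()}
    logit = BIAS + sum(contributions.values())
    probability = 1.0 / (1.0 + math.exp(-logit))
    return probability, contributions

if __name__ == "__main__":
    identity = {
        "ssn_dob_mismatch": 1.0,
        "address_count_last_12_months": 4.0,
        "years_of_credit_history": 0.5,
        "verified_employer_match": 0.0,
    }
    prob, contribs = score_identity(identity)
    print(f"Risk score: {prob:.0%}")
    for name, value in sorted(contribs.items(), key=lambda kv: -abs(kv[1])):
        direction = "raises" if value > 0 else "lowers" if value < 0 else "neutral"
        print(f"  {name}: {value:+.2f} ({direction} the score)")
```

With a breakdown like this, an investigator can see that, for example, address churn is what pushed a score up and weigh that signal in context – exactly the kind of judgment Donnelly describes below – rather than acting on an opaque number.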

Having discussed the theory behind transparent risk scoring, the panel turned to Donnelly for a real-world example of how it is used on a day-to-day basis. The key for her is having access to information that supports an informed decision – one that, if left solely to a black box, could negatively impact innocent people.

“Our financial institution has customers who are brand new to this country and are trying to establish credit,” explained Donnelly. “And that may bring their risk score up, because they might move around a lot at first trying to settle themselves, so they have a bunch of different addresses being reported on their credit report. We have to be able to identify between someone who's just coming into this country trying to establish themselves and bad actors. So even though we have that risk score, we must also look beyond that and not only teach the computer that not everybody is bad, but also teach ourselves as well.”

How FiVerity helps financial institutions to detect fraud quickly and enable greater collaboration

Solutions such as FiVerity harness the power of artificial intelligence to help detect fraud, but such tools haven’t always been available to financial institutions. As Donnelly explained, fraud analysts previously had to wait for an account to “go bad” before being alerted to potentially fraudulent activity. The difference now is clearly night and day.

“Once we get the alert from FiVerity, it really only takes me about ten minutes to determine if it's a synthetic [identity]. Before using the program, we would have to wait for a loan to go bad. So that can be 30 or 60 days, and then we have to wait for someone to alert us. And, depending on the volume, it can take months. So we've already lost that money, it's already out the door. Whereas with your program, we're able to stop it even before it gets opened. Or if it had already been opened, we're able to stop it, especially with a credit card, before any money is even spent. So, it's a huge time saver. And really, it saves DCU hundreds of thousands of dollars.”

While utilizing FiVerity helps DCU to identify and address fraudulent activity, Donnelly is also keen to share insights with other financial institutions. It's a collaborative ethos that is also at the heart of FiVerity’s approach to fraud detection.

“With the Patriot Act, we're able to give other institutions a heads up,” said Donnelly. “I've actually called another institution to say, ‘I see a credit card that hasn't been used yet, you might want to take a look at this individual.’ And, sure enough, it was a synthetic [identity] over there as well. And so, I'm saving my institution and another institution money. If we could have a database where we can spread this information, that'd be fantastic.”

For further analysis of transparent risk scoring, the full webinar Beyond the Blackbox: Elevating Fraud Detection with Transparent Risk Scoring is now available to watch on-demand.
