Many of our customers use rules in their fraud workflows to identify good and bad actors. These rules, which are combinations of conditional statements that operate as data filters, can be very effective, but formulating “the right rule” can be difficult. That’s where the Ekata Field Data Science (FDS) team comes in: we help customers formulate “the right rule” quickly using machine learning.
The FDS team leverages decision tree-based machine learning models to build rule candidates. Decision trees are a type of algorithm that aims to unmix the target classes in a dataset through a series of conditional statements that split the data into different populations.
In the example figure below, the tree’s decisioning path seeks to isolate good (green) from bad (red) transactions. These conditional split points, also known as decision nodes, make decision trees highly interpretable models. And because you can extract the decisioning path logic from the models, decision trees make for a great way to build rules.
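To make the idea concrete, here is a minimal sketch of fitting a decision tree and printing its decision paths with scikit-learn. The feature names are hypothetical risk signals invented for illustration, and the data is synthetic; this is not the FDS team’s actual pipeline.

```python
# Sketch: train a shallow decision tree and print its decision paths.
# Feature names and data are illustrative assumptions, not Ekata's.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for transaction data: label 1 = bad, 0 = good.
X, y = make_classification(n_samples=1000, n_features=4, random_state=0)
feature_names = ["ip_risk", "name_match", "email_first_seen_days", "phone_valid"]

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# export_text renders every root-to-leaf decisioning path; each path is
# a chain of conditional statements, i.e. a rule candidate.
print(export_text(tree, feature_names=feature_names))
```

Each printed root-to-leaf path reads directly as an if/then rule, which is exactly the interpretability property the text describes.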
Rule Modeling Framework
Training models for rule building differs slightly from the normal data science modeling framework, as the overall model’s performance is not our primary focus. In this case, we care most about collecting all the conditional statements the model has used in all decisioning paths of the tree and evaluating each path as a rule candidate. Because generating a single high-performing model is not the end goal, we can create many models to build more diverse rule candidates.
We can build and extract logic from various models using different tree-based algorithms and algorithm settings, known as hyperparameters. The Random Forest is one such machine learning algorithm that has proven to be useful for rule building. This algorithm builds and aggregates the results of many different individual trees using different subsets of the training data. While a Random Forest model is not very interpretable due to aggregation, it perfectly suits the rule-building process since we don’t care about the final aggregated result. We can similarly extract the decisioning logic from every tree in the Forest to build rules. These rules tend to be more diverse because each tree in a Random Forest is built with a different subsample of the whole training dataset and exposed to different features at each decision node.
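A rough sketch of that harvesting step, assuming scikit-learn: fit a Random Forest, then walk the internal `tree_` structure of every estimator to collect each root-to-leaf path as a rule candidate. The feature names are again invented for illustration.

```python
# Sketch: extract rule candidates from every tree in a Random Forest.
# Feature names and data are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=4, random_state=0)
feature_names = ["ip_risk", "name_match", "email_first_seen_days", "phone_valid"]

forest = RandomForestClassifier(n_estimators=10, max_depth=3,
                                random_state=0).fit(X, y)

def leaf_rules(estimator, names):
    """Walk a fitted tree; return each root-to-leaf path as a list of
    conditional statements (one rule candidate per leaf)."""
    t = estimator.tree_
    rules = []
    def walk(node, path):
        if t.children_left[node] == -1:          # leaf: path is complete
            rules.append(list(path))
            return
        cond = f"{names[t.feature[node]]} <= {t.threshold[node]:.3f}"
        walk(t.children_left[node], path + [cond])
        walk(t.children_right[node], path + [cond.replace("<=", ">")])
    walk(0, [])
    return rules

# Every tree contributes its own candidates, built from a different
# bootstrap sample and feature subset -- hence the extra diversity.
candidates = [r for est in forest.estimators_
                for r in leaf_rules(est, feature_names)]
print(len(candidates), "rule candidates harvested")
```

Because the final forest-level vote is discarded, the lack of interpretability of the aggregate model never comes into play.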
Once we have extracted the decisioning logic from our models, we must evaluate each rule candidate. This evaluation puts the rule back into the context of the customer’s workflow:
- How many transactions are caught by the rule?
- How many of these caught transactions are good?
- How many of these caught transactions are bad?
- What total chargeback amount would have been prevented with the rule in place?
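The questions above can be answered with simple filters over a transactions table. Below is a toy sketch using pandas; the column names (`ip_risk`, `is_fraud`, `chargeback_usd`) and the example rule threshold are assumptions made up for illustration.

```python
# Sketch: evaluate one rule candidate against a toy transactions table.
# Column names and the rule itself are illustrative assumptions.
import pandas as pd

transactions = pd.DataFrame({
    "ip_risk":        [0.9, 0.8, 0.2, 0.95, 0.1, 0.7],
    "is_fraud":       [1,   0,   0,   1,    0,   1],
    "chargeback_usd": [120, 0,   0,   300,  0,   80],
})

# Rule candidate: flag transactions where ip_risk > 0.75.
caught = transactions[transactions["ip_risk"] > 0.75]

n_caught = len(caught)                            # transactions caught
n_bad = int(caught["is_fraud"].sum())             # caught and bad
n_good = n_caught - n_bad                         # caught but good
prevented = caught.loc[caught["is_fraud"] == 1, "chargeback_usd"].sum()

print(f"caught={n_caught}, good={n_good}, bad={n_bad}, prevented=${prevented}")
# caught=3, good=1, bad=2, prevented=$420
```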
During the evaluation, we look at common classification model performance metrics (including precision, recall and F1 score) and applicable business metrics to see how those translate into an overall ROI for the customer with Ekata products in place.
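Treating a rule’s fire/no-fire decision as a binary prediction lets us reuse the standard classification metrics named above. A minimal sketch with scikit-learn, on made-up labels:

```python
# Sketch: score a rule as a binary classifier (toy labels, not real data).
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [1, 0, 0, 1, 0, 1]   # actual fraud labels
y_pred = [1, 1, 0, 1, 0, 0]   # 1 = rule fired on the transaction

print("precision:", precision_score(y_true, y_pred))  # 2 of 3 hits are fraud
print("recall:",    recall_score(y_true, y_pred))     # 2 of 3 frauds caught
print("F1:",        f1_score(y_true, y_pred))
```

Precision maps to “how many caught transactions are bad,” and recall to “how much of the fraud the rule catches,” so these metrics slot directly into the business questions above.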
With the rule candidates and performance results collected, we examine the candidates and select the rule(s) that best meet the customer’s goals and make sense in the context of the fraudulent behavior they seek to differentiate.
The below example shows an Ekata-derived rule the FDS team may recommend to a customer to meet their goal. In this case, the rule is a conditional statement combining three features to isolate good actors.
Rule recommendations are just one component of the analysis the FDS team will present to a prospective customer as a starting point for further customer-driven analysis. Typically, our customers find the best results when they add Ekata products to supplement their internal data and use Ekata’s recommendations as a strong baseline.