Much of our lives these days may take place digitally, but when it comes to verifying identity, the physical address is still key to preventing fraudulent transactions without insulting legitimate customers.
Most companies already use addresses as an additional signal to improve the performance of their risk infrastructure, such as monitoring frequency of use with velocity checks or verifying name-to-address matches. Yet fraudsters can manipulate address strings (by changing St. to Street, for example) to bypass velocity and verification checks within risk models.
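To see why trivial string variants matter, consider a hypothetical velocity check keyed on the raw address string. The address and threshold below are made up for illustration, but the failure mode is the one described above: three orders ship to one address, yet none trips the limit because each spelling is counted separately.

```python
from collections import Counter

# Hypothetical velocity check keyed on the raw address string.
VELOCITY_LIMIT = 2  # max orders allowed per address in the time window

# Three orders to the same physical address, with trivial variants.
orders = [
    "742 Evergreen Terrace Apt 3",
    "742 Evergreen Ter. Apt 3",
    "742 Evergreen Ter Apt. 3",
]

counts = Counter(orders)
flagged = [addr for addr, n in counts.items() if n > VELOCITY_LIMIT]
print(flagged)  # [] -- each variant counted once, so nothing is flagged
```

Standardizing the strings before counting is what closes this gap, which is why normalization quality directly affects model performance.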
That’s why it’s imperative that companies have a strong, fast, and reliable way to standardize address strings at a global level. But what are your options?
Building an internal solution
Many companies work to solve this problem by either:
- Developing a simple rules/logic engine to standardize address strings
- Licensing a file to build internal validation and normalization logic around address strings
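The first approach is often little more than a lookup table plus tokenization. The sketch below is a hypothetical, deliberately minimal version of such a rules engine; the suffix table is incomplete by construction, which is precisely the maintenance burden discussed next.

```python
import re

# Hypothetical, partial abbreviation table (US-centric). A production
# table would need entries for every locale and constant updates.
SUFFIXES = {
    "st": "street", "ave": "avenue", "blvd": "boulevard",
    "rd": "road", "ter": "terrace", "apt": "apartment",
}

def normalize(address: str) -> str:
    """Lowercase, strip punctuation, and expand known abbreviations."""
    tokens = re.sub(r"[^\w\s]", " ", address.lower()).split()
    return " ".join(SUFFIXES.get(t, t) for t in tokens)

# Variants of the same address now collapse to one canonical string.
print(normalize("742 Evergreen Ter. Apt 3"))
# 742 evergreen terrace apartment 3
```

A rules engine like this catches the easy cases, but every unrecognized abbreviation, language, or country format requires another table update.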
The challenge with both of these approaches is that neither of them is a set-it-and-forget-it solution. After all, address databases need to be updated constantly. At most companies, engineering resources are scarce, which means dedicating story points to tweaking address models or business development time to locating new files and working with multiple vendors—both of which are non-core business activities—falls by the wayside. Because of this, neither of these solutions evolves at the speed required in today’s business environments.
Rather than spending the resources to build a solution internally, some companies try to adapt off-the-shelf mapping solutions into address verification tools.
A common example of this is working with an API like Google Maps to run checks on physical addresses. But because Google Maps isn’t intended for enterprise-level address validation, it comes with its own set of problems. Response times are inconsistent, and the API often drops valuable metadata about addresses, such as unit numbers. It’s also common for new releases to change how the API behaves within a risk model, forcing companies to scramble to repair their models each time.
Global address verification with Ekata
Off-the-shelf solutions aren’t designed to handle enterprise-level risk management, and few organizations have the engineering and development resources to build and maintain a global address database. That’s where Ekata comes in.
After seeing the challenges our customers were experiencing in this area, we set out to create our own solution. To solve the problem of manipulated address strings, Ekata’s global address validation solution validates, normalizes, and appends essential data to gather important signals about addresses. Our API provides coverage around the world, dramatically simplifying the integration and offering response times comparable with complex on-premises solutions—along with the flexibility to scale with the throughput requirements of our customers.
Our data science and engineering team ingests billions of addresses on a monthly basis. As part of incorporating this data into our Identity Graph™, we’ve established a DurableID for every premise-level address in the world, which avoids problems with duplicate files and helps pinpoint fraudulent address string manipulation early.
Because Ekata works with many companies in the risk space, we continuously evolve our solutions to keep pace with the global expansion of our business and our customers. Learn more about our Global Address Validation API to help you normalize and verify addresses in over 170 countries.