Banish Bots at the Source to Improve Trust and Safety during Account Opening

Stop and Prevent Spam Bots From Disrupting Your Site



One of the hallmarks of a successful product, marketplace, or community is a large user base. That base is important for the growth and survival of your platform, but there's a problem: along with building the group of users that powers your business, you also attract unwanted actors who want to market their services to that audience through bots.

To legitimate users, the presence of large numbers of bots on a platform is off-putting. If they receive too many unwanted solicitations from bots or other fake accounts, they'll eventually stop using your platform or service.
To be successful, your company needs to nip that cycle in the bud, blocking bots before they can chase away your hard-earned legitimate users.

How Bots Compromise Trust and Safety

Trust and safety are key tenets of marketplace and platform-based companies. Trust is about platform integrity: is my information secure, and can I trust my interactions with other users? Safety is about ensuring people's well-being in real-world interactions (especially for ride-sharing and home-sharing companies, for example). The presence of bots on a platform is a risk to both.

For example, dating services often attract bots that advertise adult content to legitimate users. Fake accounts on Facebook initiate mass friend requests in order to aggressively sell Oakley sunglasses and other goods. Twitter was plagued with innumerable fake accounts designed to shift the perception of political issues, and suspended over 70 million fake accounts in May and June of 2018.

In all of these cases, legitimate users find the interactions deeply off-putting. So how can companies sift through their user base to find the bots?

Until recently, platform companies might have turned to Smyte, a software solution designed to monitor and stop online abuse, spam, and harassment. However, in June 2018 Smyte was acquired by Twitter, whose fake account problem was severe enough to warrant acquiring some of the greatest minds in trust and safety.

Now that Smyte is off the market (its services to outside customers were shut down after the acquisition), marketplaces have to solve the problem themselves.

Blocking Bots at Account Opening

Most companies require a new user to prove they're a real person with a one-time passcode (OTP) sent to their mobile phone number or email address. The key difference between promo abuse and bot-driven account opening is how OTP requirements get bypassed: humans committing promo abuse can get around them manually, while bot accounts rely on automated scripts. To bypass OTP requirements at scale, fraudsters need phone carriers or email providers that make script-based account creation simple.
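The OTP check described above can be sketched in a few lines. This is a minimal illustration, not a production or vendor-specific implementation: the in-memory store and the 6-digit format are assumptions, and real systems would add expiry, rate limiting, and out-of-band delivery via an SMS or email provider.

```python
import hmac
import secrets

# identifier (phone number or email) -> pending code; illustrative only.
_pending: dict[str, str] = {}

def issue_otp(identifier: str) -> str:
    """Generate a random 6-digit code and remember it for this identifier."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    _pending[identifier] = code
    # In practice the code is delivered via SMS or email, never returned
    # directly to the client; returning it here keeps the sketch testable.
    return code

def verify_otp(identifier: str, submitted: str) -> bool:
    """Check the submitted code once; constant-time compare avoids timing leaks."""
    expected = _pending.pop(identifier, None)
    return expected is not None and hmac.compare_digest(expected, submitted)
```

Because `verify_otp` pops the stored code, each passcode is single-use, which is what makes scripted replay ineffective without control of the phone number or inbox itself.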

The answer to bot-driven account opening lies in identifying those carriers and email providers, and verifying that the phone numbers and email addresses were not created en masse with scripts.

To protect your marketplace from large-scale attacks, use lightweight identity data to prevent signups with the types of phone numbers and email addresses designed to power fake accounts. Applying identity data at the top of the signup funnel helps companies check email and phone first-seen dates, detect likely burner accounts, verify linkages with the user's name, and flag email and phone providers that facilitate script-based signups.

Accounts flagged by that data can then be vetted manually or given greater friction to verify them, while legitimate signups proceed with as little friction as possible.

Click here to read more about how Ekata’s Account Opening API can identify more legitimate users.
