Social media platform stops account farming and spam

A well-recognized social media platform (“Social A”) needed help fighting a surge in successful fake-account registrations. The account creation looked like this: actors would use a series of VPNs and residential proxies to register counterfeit profiles in different regions. These registrations would pass bot protections such as CAPTCHAs, SMS checks, and phone verification steps. Next, the new account would be “aged” for weeks or months. In this stage, the account would perform minimal, low-risk platform activity, such as following popular accounts and occasional benign engagement, that “built trust” for the account within the system.

Once a sufficient pool of accounts had been stockpiled, the actor would move to monetize the work: in this case, selling bundles of accounts to other actors on dark-web forums. Those buyers would then use the accounts for phishing, spam, view farming, trend manipulation, denial-of-service attacks, and other platform-degrading activities. An entire dark-web economy had sprung up around this abuse.

Tooling for mitigation

In order to successfully combat this platform abuse, Social A needed to instrument several capabilities at once. The mitigations could not harm the experience for legitimate users; for social networks, users and engagement are money, and unnecessary friction is unacceptable.

Spur was brought in to provide IP enrichment at account registration time. Because Spur continuously tracks the IP footprints of nearly 900 VPN and residential proxy networks, Social A could use its data to tag account-creation activity with important context. The Anonymous + Residential dataset allowed Social A to enrich registrations with information such as whether the IP was a datacenter IP, whether an anonymity service was in use, behavioral characteristics (headless browsing or geo-spoofing, for example), and high-level signals such as user-count estimates.
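
As a rough illustration, the enrichment step might look like the sketch below. It assumes Spur’s v2 Context API endpoint and token header; the response field names (infrastructure, tunnels, client.behaviors, client.count) are assumptions based on Spur’s public documentation and may differ from what Social A actually consumed:

```python
import requests

SPUR_CONTEXT_URL = "https://api.spur.us/v2/context/{ip}"  # assumed Spur Context API (v2) endpoint

def enrich_registration(ip: str, api_token: str) -> dict:
    """Fetch Spur context for a registration IP and reduce it to the
    signals described above. Field names are assumptions based on Spur's
    public Context API schema and may not match exactly."""
    resp = requests.get(
        SPUR_CONTEXT_URL.format(ip=ip),
        headers={"Token": api_token},  # assumed API token header
        timeout=5,
    )
    resp.raise_for_status()
    ctx = resp.json()

    tunnels = ctx.get("tunnels") or []
    client = ctx.get("client") or {}
    return {
        # Datacenter vs. residential infrastructure
        "is_datacenter": ctx.get("infrastructure") == "DATACENTER",
        # Which anonymity services (if any) this IP belongs to
        "anonymity_operators": [t.get("operator") for t in tunnels if t.get("anonymous")],
        # Behavioral characteristics, e.g. headless browsing or geo-spoofing
        "behaviors": client.get("behaviors", []),
        # High-level signal: estimated user count behind this IP
        "user_count_estimate": client.get("count"),
    }
```

These tags would then be stored alongside the new account for later scoring, rather than used to block at registration time.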

Internally, Social A implemented gateways for session legitimacy checks. These could be applied to key user actions that were frequently abused, such as direct messages and public replies. The key escalation ramp in this case was manual identity verification, which none of these farmed accounts could pass.
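
A gateway of this kind could be structured like the following sketch. Everything here is hypothetical: the action names, the threshold, and the Session fields stand in for Social A’s internal session state, with registered_via_abuse_network coming from the Spur tagging done at registration:

```python
from dataclasses import dataclass
from enum import Enum, auto

class GateResult(Enum):
    ALLOW = auto()
    RATE_LIMIT = auto()
    REQUIRE_IDENTITY_VERIFICATION = auto()

# Gated actions and thresholds are illustrative, not Social A's actual values.
GATED_ACTIONS = {"direct_message", "public_reply"}

@dataclass
class Session:
    account_id: str
    registered_via_abuse_network: bool  # Spur tag recorded at registration
    actions_today: int

def gate(session: Session, action: str) -> GateResult:
    """Hypothetical per-action legitimacy gateway."""
    if action not in GATED_ACTIONS:
        return GateResult.ALLOW
    if session.registered_via_abuse_network:
        # Escalation ramp: farmed accounts cannot pass a manual identity check.
        return GateResult.REQUIRE_IDENTITY_VERIFICATION
    if session.actions_today > 50:  # illustrative per-day cap
        return GateResult.RATE_LIMIT
    return GateResult.ALLOW
```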

From detection to prevention

The last step was to tie everything together. Now that Social A had account creations tagged with Spur data, reasonable rate limits could be enforced on user actions, with the Spur context tags serving as the final weight in legitimacy scoring. For example: if an account performed a high number of likes in a day (say 200), a check would be done to see whether the account was also registered through a known abuse-ridden anonymity network. Only then would the account be disabled or routed to the manual verification process.
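
In code terms, that two-step decision reduces to something like the sketch below. The 200-likes threshold comes from the example above; the known_abuse_network flag is a hypothetical field derived from Spur’s service attribution at registration:

```python
DAILY_LIKE_LIMIT = 200  # the example threshold from the text

def should_escalate(likes_today: int, registration_tags: dict) -> bool:
    """Two-step check: rate limit first, Spur context as the final weight."""
    if likes_today <= DAILY_LIKE_LIMIT:
        return False  # under the limit; no further check needed
    # Over the limit: consult the Spur tags recorded at registration.
    # 'known_abuse_network' is a hypothetical flag derived from service
    # attribution (an operator with a history of platform abuse).
    return registration_tags.get("known_abuse_network", False)
```

Only accounts where this check returns true are disabled or routed to manual verification, which keeps the friction away from legitimate users who merely happen to be very active.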

This two-step approach dramatically reduced successful account-harvesting operations on Social A’s platform. The dark-web economy selling this access moved on and rebranded around other services.

Real advantages of using Spur

There are a few important takeaways from this approach:

First, by using Spur to attribute activity to the specific VPN or proxy service in use, Social A retained the ability to allow genuine usage and access from these services. Service attribution means distinctions can be made between things like managed IT, private VPNs, device VPNs, or simply accountable providers who do not tolerate malicious activity on their networks.
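
Concretely, attribution-aware policy might look like a small operator-to-treatment map. The operator names and policy tiers below are invented purely for illustration:

```python
# Hypothetical policy tiers keyed by Spur's service attribution.
# Operator names here are invented; real policies would be driven by
# each provider's observed abuse history.
POLICY_BY_OPERATOR = {
    "CORPORATE_VPN_EXAMPLE": "allow",         # managed IT / private VPN
    "DEVICE_VPN_EXAMPLE": "allow",            # privacy VPN bundled with a device
    "ACCOUNTABLE_PROVIDER_EXAMPLE": "allow",  # provider that polices its own network
    "ABUSE_RIDDEN_PROXY_EXAMPLE": "escalate",
}

def policy_for(operator: str | None) -> str:
    # No anonymity service detected: treat as ordinary traffic.
    if operator is None:
        return "allow"
    # Unrecognized services get a cautious default rather than an
    # outright block, preserving legitimate VPN users.
    return POLICY_BY_OPERATOR.get(operator, "review")
```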

Second, it’s essential to understand that not all fake accounts are bot-generated. These accounts were likely being generated through low-cost click farms: call-center-like facilities that employ real people, real devices, and physical automation to scale these types of abuse operations. This often renders traditional bot-detection tools irrelevant. A more holistic approach leverages data like Spur’s to factor in network signals, which click farms must fake in order to operate at scale.