kaara


As we continue down tech and AI's warpath, a certain line keeps emerging: in an effort to stop abuse, capital will have to suffice as proof of trust.

In fact, nearly every optional-subscription web service uses "paying customer" as a signal of high trust.

And in one way, they're totally right!

Fighting abuse is asymmetric warfare // a "reverse siege"


When it comes to dealing with abuse and spam, raw contests of power rarely get you anywhere. Abuse is persistent, mobile, dedicated, and dreadfully fast-moving. Just fighting battles head-on and removing bad actors from a service will quickly lead to you being outmaneuvered by more agile groups. If your large-scale service handles abuse by having a queue of moderators ready to whack every mole that pops up, you don't stand a chance.

Most (but not all) websites know this pretty well by now. We've had a lot of time to figure this stuff out, and entire industries are being built around the work. So how do we defeat abuse at a bigger scale? Well, there are plenty of different answers and certainly no one right way, but one of the prevailing strategies is that of Resource Deprivation.

Picture your service as a medieval castle on a hill. You are large, pretty slow to move, and oh so fortified. In a strange fusion of zombies piling themselves against the wall and highly organized factions of enemies, you are beset on all sides.

Raw artillery fire will do little; the zombie horde is unending. Precision strikes on key targets are nigh impossible; your enemies are elusive, spread out, and hidden (and you are not actually in a war with bombs here, the metaphor only goes so far). From here it becomes clear that perhaps the only real hope you have of breaking this siege is to strike at the enemy's resources.


Playing Counter-Strike with friends is a lot of fun (and a lot of sad when you do it alone, but hey, I won't judge). It's not too uncommon to run into a smurf¹ or a cheater. In competitive environments, those behaviors can be particularly destructive and are easily categorized as abuse.

Banning cheaters when they cheat does a pretty good job of getting rid of that one account, but the zombie horde at your walls won't stop, and cheat developers are untouched. From there you can implement anti-cheat, but this is an arms race that is hard to win. As I said earlier, this is an asymmetric war. Your enemy will move faster than you.

You can use more advanced technology to ban a user's Hardware ID, a unique identifier that theoretically lets you ban an entire computer instead of just an account. However, like every other technical measure, this too can be busted.

Valve's latest solution as CS:GO moved to a free-to-play model, then, is a pretty simple one: pay up and give us your phone number.

The idea is pretty simple. Cheaters will never stop, but cheaters don't have unlimited money. Slowly but surely, the cost of doing business will simply get too high for abusive behavior.


Now this is all well and good, but it doesn't stop abuse on its own. All your other methods of fighting are still necessary, but as you cull the living dead, fewer will rise up to refill their ranks.

Once you notice this behavior, it's suddenly easy to see it everywhere. Twitter, Discord, Facebook, DoorDash, etc. LOVE to get your phone number. In fact, some of you may have seen prompts like this before, saying pretty plainly "you look suspicious, give us your phone number" - Discord and Twitter, for example.

Getting more phone numbers is annoying, takes resources, and costs money - even with every step automated, acquiring them still costs something. An average user whose second Twitter account gets locked and demands a phone number may be shit out of luck already, ending the behavior about as fast as it began. For serial abuse, it's just the cost of doing business once again. But those costs may start to really add up, eat into your profits, and, all in all, may drive you away.

Captchas today are used in much the same way. Captchas will drive away the bottom of the barrel, but it's been LONG known that captchas are easily solvable. If a machine can't do it, a human will for cents on cents. Rates for these solves run a couple of dollars per THOUSAND captchas. But if we KNOW they aren't very useful, why keep using them? Again, outside of the bottom-of-the-barrel work, costing abusers a couple extra bucks will stack up their operating costs. Add in phone number verification, email verification, and whatever other resources you can think to cost them, and you very well might drive bad actors away. Maybe you'll make their profits go negative. Maybe their budget isn't as big as your patience. Or maybe they just get real fucking frustrated. In any case, it has a better shot of getting you somewhere than just shooting zombies mindlessly.
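To make that "stack up their operating costs" math concrete, here's a rough back-of-the-napkin sketch in Python. Every number in it is a made-up placeholder (the only figure carried over from above is captcha solves running a couple bucks per thousand); the point is just that each extra verification layer multiplies what a throwaway account costs, so a fixed abuse budget buys fewer and fewer of them.

```python
# Back-of-the-napkin math: how friction layers stack up per disposable account.
# Every number here is an illustrative guess, not a real price sheet.

FRICTION_COSTS = {
    "captcha_solve": 0.003,   # solver farms charge a couple bucks per thousand
    "email_account": 0.02,    # throwaway inbox from a bulk reseller
    "phone_number": 0.50,     # one-time SMS verification number
    "paid_tier": 15.00,       # a pay-up gate like the Counter-Strike one above
}


def cost_per_account(layers):
    """Total resource cost to push one fresh account past every gate."""
    return sum(FRICTION_COSTS[layer] for layer in layers)


def accounts_affordable(budget, layers):
    """How many throwaway accounts a fixed abuse budget buys."""
    return int(budget // cost_per_account(layers))


budget = 1_000.00  # hypothetical abuse-operation budget
stacks = [
    ["captcha_solve"],
    ["captcha_solve", "email_account"],
    ["captcha_solve", "email_account", "phone_number"],
    ["captcha_solve", "email_account", "phone_number", "paid_tier"],
]
for layers in stacks:
    print(f"{' + '.join(layers):55} -> {accounts_affordable(budget, layers):>7,} accounts")
```

With these made-up prices, the same budget goes from hundreds of thousands of accounts down to a few dozen once a paid tier is in the stack - which is the whole resource-deprivation bet.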

The logical conclusion of all of this becomes readily apparent. If all these methods we implement seek to drain bad actors of resources, why not just directly take their money? In the Counter-Strike example, Valve did just that. But more and more companies seem to be jumping on the bandwagon. Twitter continues to be the prime example, but we'll likely see this method employed more widely, especially in the AI space.


The Obvious Part

This is the part of the post where I point out the obvious. If we are headed to a future where capital investment is the best way to determine trust, what happens when people get priced out? We've seen parallels to this already in the gaming world. Mobile games that are "pay-to-win" - requiring monetary investment to progress - and the KMMOs of old come to mind.

I've tricked you, because I don't have some shocking conclusion to give you. No answer to give.

Paying for trust continues to grow, and I would not be surprised if we see free-to-use services changing their tune in greater numbers as time goes on. More and more of the open internet is on a path to being pay-to-play in the name of safety, and the underlying systems of our society are not conducive to a very free and open world.

Put short, I think this fuckin blows lmao.


  1. a high-skill player playing on a low-ranked account - Imagine Magnus Carlsen playing chess against 800 Elo scrubs



in reply to @kaara's post:

Thoughts I've had on this but haven't thought through / don't have the experience to think them through, curious what your takes are:

  • The inversion of raising costs is lowering rewards, but is there a way to lower rewards that doesn't also lower rewards for legitimate users?
  • Is there a way to substitute time for money in terms of paying for trust? I feel like not really but I am drawn in general to the idea of choosing a valuable resource besides money that could still starve adversaries.
  • Twilio and others are kinda banking their business models in these areas on the assumption that enough data can be collected across multiple platforms to reliably identify bad actors, and that identification is a sellable product that forms a feedback loop of more signal -> better ID -> more customers -> more signal. This kinda feels like another losing battle to me, though?
  • It's tough to say, especially as it can depend on the platform. Abstracting this a bit, actions like rate-limiting and numerical caps on certain behaviors act as a kind of diminished return for bots that regular users are unlikely to run into (there's a rough sketch of what I mean after these replies). The problem also depends on what exactly the goal of each bad actor is - data exfiltration, for example, will be more disrupted by things like rate limits than targeted harassment campaigns will. Keeping this abstraction, most "anti-spam filters" seek to address this side of the coin. But as abuse evolves, new attitudes like a higher barrier to entry (or at least, a higher barrier to trust) become more prominent.

  • Absolutely, but it's a really hard balance to strike. For automated abuse systems, time is basically unlimited. "Your account can't talk for 30 days" is a death sentence for a user and basically free for a bot or a user making accounts to sell. A better implementation of this would be something like VRChat's trust system, where playing and being active in the game slowly earns you trusted status. But in reality, that's a really complicated system that's difficult to interact with, just to defeat lazy sockpuppet harassment. Time is really expensive for real people.

  • I don't know if it's a losing battle, but it feels like a battle that's really hard to win right now. Similar software offerings that seek to create internal or private trust networks just aren't there yet (and are severely hindered by privacy legislation). The question you have to ask when dealing with these closed-source trust network models is what exactly they are going to be modeling for. It's really, dramatically easy to create biases here.
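Since rate limits came up above as the kind of diminished return bots hit but regular users don't, here's a minimal token-bucket sketch of the idea. The numbers (a burst of 5 actions, one token earned back every 30 seconds) are arbitrary placeholders, and a real system would track buckets per account or per IP.

```python
import time


class TokenBucket:
    """Minimal token-bucket limiter: short bursts pass, but sustained
    high-volume behavior (i.e. a bot) runs dry almost immediately."""

    def __init__(self, capacity=5, seconds_per_token=30.0):
        self.capacity = capacity
        self.seconds_per_token = seconds_per_token
        self.tokens = float(capacity)
        self.last_refill = time.monotonic()

    def allow(self):
        """Spend one token if available; otherwise the action is throttled."""
        now = time.monotonic()
        # Earn back tokens for the time elapsed since the last check.
        self.tokens = min(
            self.capacity,
            self.tokens + (now - self.last_refill) / self.seconds_per_token,
        )
        self.last_refill = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False


# A person sending a handful of messages never notices the cap;
# a bot firing off 20 in a row gets cut off after the first burst.
bucket = TokenBucket()
allowed = sum(bucket.allow() for _ in range(20))
print(f"{allowed} of 20 rapid-fire actions allowed")
```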

I think another factor in recent paywalls is the recession. When money was cheap, tech companies just wanted to get piles of users to impress investors, under the promise that somehow they could turn a zillion eyes into a zillion dollars. Now that all the speculative investment cashflow has gone out with the tide, companies are having to shore up through more aggressive upfront monetization.

That said, I definitely agree with what you said regarding the paywall being effective at reducing abuse by pricing it out, for better or for worse.

As more platforms shift to payment-based trust systems, I wonder if that will have an unintended side effect of making platforms a bit more vulnerable to bad actors who actually do have a lot of resources to invest.

In other words, if platforms become more expensive to bot, spam, and abuse, and therefore more 'pristine' because fewer people can afford to do it, will that make it easier for state-level actors or powerful misinformation firms to spread their message (at a high cost) with less background noise from other sources of spam?

What happens in this case, in my experience, is you get that tier of bad actors that traffic in stolen credit cards. Asking for money does cut out a lot of it, but if the reward is high enough, just using someone else's money becomes the solution. (This is what happened in all the MMOs I worked on that had a secondary market - botters/cheaters/account sellers would open accounts with fraudulent credit cards.)