me:
@leafo i know you can't be clearer on the precise reasons for shunting accounts onto "direct to you"
but is it hypothetically possible that, for comparable reasons, the same might be done to an account that had only ever sold SFW games?
not to ask about a particular situation, but whether there could conceivably be any such situation
leafo:
Good question, I can think of at least one developer off the top of my head in our past where we may have considered using this restriction if we had it implemented at the time. It's the same justification: mitigating risk. We assume a liability & cost with running accounts in our Payouts system, and sometimes we have to make the difficult decisions for the greater stability of the platform.
Unfortunately I don't think it's appropriate to share who this is and why, as we do not make private account matters public.
this was everyone's first assumption, but itch never actually said anything of the sort, so... i asked
feel like this bolsters my theory, which i've since refined into "customer fraud". it matches almost everything:
- Reddit relies on mods as the source of truth and lets them either take down posts or post a mod comment at the top of the thread with a clarification.
- Twitter has crowd-sourced community notes that rank notes by their helpfulness based on whether a broad range of "viewpoints" all find them helpful.
- I haven't been on Facebook in a very long time, but unless something has significantly changed, you mostly have reactions giving this signal without diving into the comments.
- Discord kinda has reactions and mods, but I'd argue the more impactful factor is that a chat model, instead of a feed model, limits how viral a single post can be. Both the hot take and any corrections have to be revisited as a conversation topic for more people to hear either of them.
- Broadcast media (podcasts, TV, news) don't have any mechanism besides the integrity of the source, except that sometimes there's regulation from the government.
My first take after writing these down: the most interesting path is reframing the problem away from "how do we correct bad hot takes when they go viral" toward something else, like "what is the incentive for people to care whether the hot take they read was correct or not".
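Since the community notes mechanism came up: the published idea behind its ranking is a matrix-factorization model where a note only scores as helpful if agreement remains after accounting for rater "viewpoint". Here's a toy sketch of that bridging idea with invented data (the model shape follows the published approach, but every number, name, and hyperparameter here is mine, not the production algorithm):

```python
import numpy as np

# Toy sketch of "bridging-based" ranking: fit
#   rating ≈ mu + b_user + b_note + f_user * f_note
# and use the note intercept b_note as the helpfulness score. Agreement that
# the viewpoint factors can explain away does NOT count toward the score.

rng = np.random.default_rng(0)

# 4 raters: users 0-1 share one viewpoint, users 2-3 the opposite one.
# Ratings: 1 = "helpful", 0 = "not helpful".
# Note 0 is a bridging note (both camps rate it helpful);
# note 1 is partisan (only the first camp rates it helpful).
R = np.array([
    [1.0, 1.0],
    [1.0, 1.0],
    [1.0, 0.0],
    [1.0, 0.0],
])
n_users, n_notes = R.shape

mu = R.mean()
b_u = np.zeros(n_users)              # per-user intercept
b_n = np.zeros(n_notes)              # per-note intercept = helpfulness score
f_u = rng.normal(0, 0.1, n_users)    # 1-D latent "viewpoint" per user
f_n = rng.normal(0, 0.1, n_notes)    # 1-D latent "slant" per note

lr, lam = 0.05, 0.03                 # learning rate, L2 penalty
for _ in range(2000):                # plain gradient descent on squared error
    pred = mu + b_u[:, None] + b_n[None, :] + np.outer(f_u, f_n)
    err = R - pred
    b_u += lr * (err.sum(axis=1) - lam * b_u)
    b_n += lr * (err.sum(axis=0) - lam * b_n)
    f_u += lr * (err @ f_n - lam * f_u)
    f_n += lr * (err.T @ f_u - lam * f_n)

# The bridging note earns a higher intercept than the partisan one, even
# though both collected the same "helpful" votes from the first camp: the
# partisan note's support gets absorbed by the viewpoint term instead.
print(b_n)
```

The design point this illustrates is why raw vote counts aren't the signal: the model deliberately discounts agreement that correlates with a single viewpoint cluster.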