If you're reading this on Cohost, this is me inviting you to please comment to be like "here's where you have factual inaccuracy / are probably conceptually wrong!" because this is my working theory and I'm sure I've messed it up somewhere. Putting a "read more" because it's kinda long.
BEFORE:
Humans talking to each other has always been:
- extremely hard to monetize 1
- a giant money sink 2
- a thing everyone intuitively understands will be valuable to hold as property 3
NOW:
- AI/ML/LLM (pick your term) training makes those conversations worth money, because training is the cheapest way to speculate on AI 4
- but public-facing stuff will be autoscraped unless you wall it off
- basically any customer backlash is gonna be mitigated because you have a monopoly and also
- everyone else in the sphere is in the same position
SO:
- Basically every site 5 where people talk is at a point where the "optimal" move in terms of capitalism is to find a way to let robots read it for a fee
- A whole ecosystem of people-friendly robot stuff (mobile apps, cute bots, etc) existed b/c it was free
- That now dies, and it would be a PR nightmare to cite the above as your reasoning, soooo
- Various bullshit reasons are deployed
- ads aren't a great value prop for advertisers/sites tbh
- moderation! is! miserable! it's hard and has to be done by humans, it does not scale
- the last 10-15 years of tech have been increasingly speculation-driven b/c of the way the tech investment ecosystem works
- the ways you can improve an LLM relative to the rest of the field are roughly: more super-expensive hardware, more comparatively cheap training data, or the extremely expensive (because it's paying devs, almost certainly in the global north) attempt to make a better algorithm. Also check out @corolla94's notes in the comments on this, they're relevant!
- with varying degrees of pressure - reddit feels this in a way that Cohost doesn't, I think, b/c of the nature of funding