arborelia
@arborelia

So this happened on the AI-generated, Seinfeld-derived, Twitch-streamed, uncurated anti-humor sitcom "Nothing, Forever" aka "WatchMeForever":

AI-generated "comedian" Larry Feinberg takes the polygonal mic and starts one of his rambles. He implies that he is out of ideas, then tries some bland hate speech against trans people with a dash of homophobia on top. It is too poorly stated to hurt.

"But no one is laughing, so I’m going to stop. Thanks for coming out tonight. See you next time."

"Where’d everybody go?"

And at some point the channel gets banned for breaking the Twitch rules, which is correct, because the Twitch rules don't have an exception for an AI algorithm that doesn't know what it's saying, and there shouldn't be one.

Honestly I'm not surprised this happened, but I am surprised that it happened in the form of such on-the-nose meta-commentary about the world of standup comedy.



in reply to @arborelia's post:

The funniest thing for me is that this happened because the creators behind Nothing, Forever wanted to temporarily switch to a different model to fix some bugs they were having with the one they'd been using.

The model they used apparently had weaker content moderation, in part because its training data was random things on the internet. So a lot of transphobic rhetoric, among other things, made it into the model.

Turns out AI really can be as dumb as the people who manage it

but that's every large language model. They all require vast quantities of training text, so they're all trained on random things on the Internet, many of which suck.

My understanding of their explanation is that they had a second model in place, designed to mitigate hate speech coming from a particular version of GPT-3, and the mitigation didn't work when they switched to a different version.
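That generate-then-moderate setup is a common pattern: one model writes the line, a second pass classifies it before it airs. Here's a minimal sketch of the pattern using the pre-1.0 OpenAI Python library of that era; the function name, prompt handling, the specific model name, and the drop-the-line fallback are my assumptions for illustration, not anything from the Nothing, Forever codebase.

```python
import openai

openai.api_key = "sk-..."  # assumption: your own API key

def next_line(prompt: str, model: str = "text-davinci-003") -> str | None:
    """Generate one line of dialogue; return None if it fails moderation."""
    completion = openai.Completion.create(
        model=model,
        prompt=prompt,
        max_tokens=60,
        temperature=0.9,
    )
    text = completion.choices[0].text.strip()

    # Second stage: a separate moderation model checks the generated text.
    # If this call is only wired up for one generator, or its assumptions
    # fit one generator's output, swapping the model quietly disables it.
    verdict = openai.Moderation.create(input=text)
    if verdict.results[0].flagged:
        return None  # drop the line rather than air it
    return text
```

The failure mode they described lives in that seam: everything else keeps streaming whether or not the moderation step is actually doing its job, so a broken filter looks exactly like a working one until the output gets bad.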

The mitigation would have failed at some point anyway, because a computer doesn't know what's hate speech any more than it knows what's comedy.