arborelia
@arborelia

So this happened on the AI-generated, Seinfeld-derived, Twitch-streamed, uncurated anti-humor sitcom "Nothing, Forever" aka "WatchMeForever":

AI-generated "comedian" Larry Feinberg takes the polygonal mic and starts one of his rambles. He implies that he is out of ideas, then tries some bland hate speech against trans people with a dash of homophobia on top. It is too poorly stated to hurt.

But no one is laughing, so I’m going to stop. Thanks for coming out tonight. See you next time.

Where’d everybody go?

and at some point the channel gets banned for breaking the Twitch rules, which is correct, because the Twitch rules do not have an exception for an AI algorithm that doesn't know what it's saying, and there shouldn't be one.

Honestly I'm not surprised this happened, but I am surprised that it happened in the form of such on-the-nose meta-commentary about the world of standup comedy.


silasoftrees
@silasoftrees

what a weird thing to happen. a shame, i enjoyed watching it while lying stock-still in bed and staring at the ceiling with a vr headset on, wondering if this counted as a Crime of the Future



in reply to @arborelia's post:

The funniest thing for me is that this happened because the creators behind Nothing, Forever temporarily switched to a different model to work around some bugs they were having with the one they'd been using.

The model they switched to apparently had poor content moderation, in part because its sources were random things on the internet. So a lot of transphobic rhetoric, among other things, made it into the model.

Turns out AI really can be as dumb as the people who manage it

but that's every large language model. They all require vast quantities of training text, so they're all trained on random things on the Internet, many of which suck.

My understanding of their explanation is that they had a second model in place, designed to mitigate hate speech coming from a particular version of GPT-3, and the mitigation didn't work when they used a different version of GPT-3.
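
For what it's worth, what they're describing is a pretty standard generate-then-filter pipeline. Here's a minimal sketch of the shape of it in Python; everything in it (safe_next_line, the callables, the toy demo) is a hypothetical stand-in for illustration, not their actual code:

```python
# Sketch of a two-model setup: one model writes the dialogue, a second
# moderation pass decides whether a line is safe to air. All names here
# are hypothetical stand-ins, not the Nothing, Forever code or any real API.

from typing import Callable

def safe_next_line(
    generate_line: Callable[[str], str],      # stand-in for the dialogue model
    moderation_flags: Callable[[str], bool],  # stand-in for the mitigation model
    prompt: str,
) -> str:
    """Generate a line of dialogue, suppressing it if moderation flags it."""
    line = generate_line(prompt)
    if moderation_flags(line):
        # If this check is tuned to one particular generator, or gets
        # skipped when the generator is swapped out, flagged lines go
        # straight to air.
        return "[line suppressed]"
    return line

if __name__ == "__main__":
    # Toy demo: a canned generator and a keyword check standing in for
    # the real models.
    gen = lambda prompt: "What's the deal with airline food?"
    flag = lambda line: "slur" in line.lower()
    print(safe_next_line(gen, flag, "do a standup bit"))
```

The fragile part is the wiring: the moderation step only protects you if every path from generator to stream actually goes through it, and if it still catches what the new generator produces. Swap the generator and nothing fails loudly; the filter just quietly stops doing its job.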

The mitigation would have failed at some point anyway, because a computer doesn't know what's hate speech any more than it knows what's comedy.