


gamedeveloper
@gamedeveloper

"Toxic people are usually the ones about to burnout."

Tinybuild boss Alex Nichiporchik says the company is using AI to identify toxic workers, which according to the CEO also includes those suffering from burnout. In excerpts from the talk published by Why Now Gaming, Nichiporchik explains how the Hello Neighbor publisher feeds text from Slack messages, automatic transcriptions from Google Meet and Zoom, and task managers used by the company, into ChatGPT to conduct an "I, Me Analysis" that allows Tinybuild to gauge the "probability of the person going to a burnout."

Nichiporchik said the technique, which he claims to have invented, is "bizarrely Black Mirror-y" and involves using ChatGPT to monitor the number of times workers have said "I" or "Me" in meetings and other correspondence. Why? Because he claims there is a "direct correlation" between how many times somebody uses those words, compared to the amount of words they use overall, and the likelihood of them experiencing burnout at some point in the future.
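For what it's worth, the described "analysis" reduces to a simple token ratio that needs no AI at all. A minimal sketch (the function name and pronoun list are my own, and the claimed correlation with burnout is only Nichiporchik's assertion, not an established fact):

```rust
// Sketch of the claimed "I, Me Analysis": the share of first-person
// singular pronouns among all words in a text. The burnout link is
// purely the CEO's claim; this just computes the ratio he describes.
fn i_me_ratio(text: &str) -> f64 {
    let mut total = 0usize;
    let mut first_person = 0usize;
    for word in text.split_whitespace() {
        // strip surrounding punctuation, keeping letters and apostrophes
        let cleaned: String = word
            .chars()
            .filter(|c| c.is_alphanumeric() || *c == '\'')
            .collect();
        if cleaned.is_empty() {
            continue;
        }
        total += 1;
        let lower = cleaned.to_lowercase();
        if lower == "i" || lower == "me" || lower == "i'm" || lower == "i've" {
            first_person += 1;
        }
    }
    if total == 0 {
        0.0
    } else {
        first_person as f64 / total as f64
    }
}

fn main() {
    let msg = "I think I need help; this sprint has me exhausted.";
    println!("{:.3}", i_me_ratio(msg)); // prints 0.300
}
```

Whatever ChatGPT adds here, the counting itself is this trivial.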

Update: In a separate response sent directly to Why Now Gaming, Nichiporchik said the HR portion of his presentation was purely a "hypothetical" and that Tinybuild doesn't use AI tools to monitor staff.

"The HR part of my presentation was a hypothetical, hence the Black Mirror reference. I could’ve made it more clear for when viewing out of context," reads the statement. "We do not monitor employees or use AI to identify problematic ones. The presentation explored how AI tools can be used, and some get into creepy territory. I wanted to explore how they can be used for good."

Read more on this story, and hear Nichiporchik's response to our reporting, over at Game Developer.


bruno
@bruno

So as far as I understand it, Nichiporchik says the entire presentation was a hypothetical scenario and that it's not about how he's already using AI to spy on his employees – it's about how he wants to, in the future, use AI to spy on his employees. But also, he claims, he wants to do it "for good" to identify burnout (much to unpack there).

I'd be curious to see a take from someone who was at the presentation because it sounds insane. Someone was live tweeting it but I can't find the thread now.


johnnemann
@johnnemann

That's a pretty easy problem for regular old software.

My feeling is he made this all up to sound cool to other CEOs who get hard when they hear 'AI' BUT he's also a sociopath that no one should ever work for



in reply to @gamedeveloper's post:

Absolutely fuckin' foul. Between this and Chucklefish (both for its mistreatment of development staff and some personal reasons it'd be improper to just throw in here), I'm growing incredibly wary of indie publishers.


in reply to @johnnemann's post:

"As your boss, I could hypothetically be spying on you with AI in an effort to recreate my own version of Black Mirror. I'm not, but I could be. But that would be evil and sociopathic. But it's totally possible, and here's how I would do it. But I won't."

You just need a word frequency counter. It's an afternoon project at worst. I wrote one years ago in about 150 lines of Rust that can chew through gigabytes of text in minutes. All without paying OpenAI one thin dime.
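The core of such a counter really is tiny. A minimal sketch (my own, not the 150-line tool mentioned above; it tallies lowercased words, splitting on anything that isn't a letter, digit, or apostrophe):

```rust
use std::collections::HashMap;

// Minimal word-frequency counter: normalizes case and tallies each word.
// Splitting on non-alphanumeric characters (apostrophes kept) handles
// punctuation like "would;" without a tokenizer, let alone an LLM.
fn word_frequencies(text: &str) -> HashMap<String, usize> {
    let mut counts: HashMap<String, usize> = HashMap::new();
    for word in text.split(|c: char| !c.is_alphanumeric() && c != '\'') {
        if word.is_empty() {
            continue; // consecutive separators yield empty slices
        }
        *counts.entry(word.to_lowercase()).or_insert(0) += 1;
    }
    counts
}

fn main() {
    let counts = word_frequencies("I said I would; I did.");
    println!("{}", counts.get("i").copied().unwrap_or(0)); // prints 3
}
```

Streaming a file line by line instead of holding it in memory is the only change needed to scale this to gigabytes.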

Ironically enough the one confirmed commercial user I know about was an AI startup...