25, white-Latinx, plural trans therian photographer and musician. Anarcha-feminist. Occasionally NSFW

discord: hypatiacoyote


NireBryce
@NireBryce

reefraf64 on twitter says:

In one day, Sam Altman got fired from OpenAI, YouTube started threatening unlabeled AI content with a perma ban, and Discord discontinued their AI without explanation. Is something happening behind closed doors that we don't know about yet?

h/t @thephd


ann-arcana
@ann-arcana

No big secrets really, just simple economic reality: AI doesn't make any money.

OpenAI is a giant money fire, burning hundreds of millions of dollars to run a wildly unprofitable service that can't do any of the things it claims to do.

It turns out running giant GPU farms with the energy footprint of some small nations to spit out bad autocomplete is absurd, and the economic reality is catching up to them fast. User numbers are declining, the big launch of the new "faster" GPT-4 Turbo tanked their servers so hard they had to suspend new subscriptions, and in any case they lose money on every single user and every single query.

We're in an economic crunch, and now it's coming for AI, just like anyone with brains knew it would.



in reply to @NireBryce's post:

in reply to @ann-arcana's post:

it's almost like this could have been avoided if OpenAI didn't keep their models closed-source on grounds of """""""Ethics""""""" and instead made them free to access and download for end users rather than paygating the service, maybe even put them up with torrents and the like. like it would absolutely raise the barrier to entry and you're still running huge models, but they wouldn't have to pay for the massive server costs that come with mass AI services. the saddest thing is that when this all collapses, the proprietary models and software and all that shit they're using will just be flushed down the toilet... which, hey, might be a good thing, i mean most of it is built on content theft anyway
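
(just to picture what "download the weights and run them locally" would even look like: here's a rough sketch of local inference using the Hugging Face transformers library. the model name is a placeholder, not a real repo id, and you'd still need beefy hardware to run anything big.)

```python
# Rough sketch of running a downloaded open-weight model on your own machine.
# The model name is a placeholder; point from_pretrained at whatever model
# (or local directory of weights) you actually have.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "some-open-model"  # placeholder, not a real repo id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "The economics of hosted AI services"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

the point being that the compute happens on the end user's machine, so there's no central GPU farm bill.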

People talk about LLMs and other current ML models as if they're "learning" anything, but... they're not. They're incapable of it. All they do is take a bunch of inputs and try to compute the statistically most likely next element in the sequence. It's literally like the autocomplete on your phone (in fact I think most of the current mobile keyboards are using some kind of ML model now), just on a mass scale. It doesn't understand anything it's spitting out, or even remember the last words it gave you, so it can't follow context or anything. It's just brute-force guessing its way into imitating human speech. Monkeys on typewriters as a service.
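
To make the "autocomplete at scale" point concrete, here is a toy sketch of picking the statistically most likely next word from a bigram frequency table. Purely illustrative: real LLMs use learned neural network weights over tokens, not raw word counts, but the loop is the same shape.

```python
from collections import Counter, defaultdict

# Toy illustration of "predict the statistically most likely next element."
# A bigram frequency table, not how any real model is implemented.
corpus = "the cat sat on the mat the cat ate the food".split()

# Count how often each word follows each other word.
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, like greedy autocomplete."""
    followers = next_counts.get(word)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# Generate a short continuation by repeatedly taking the most likely next word.
word = "the"
sequence = [word]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    sequence.append(word)

print(" ".join(sequence))  # prints "the cat sat on the cat"
```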

that's a good perspective, i agree! i have noticed that LLMs struggle with context, and with this additional context i'm starting to feel like it's a complete dead end. they're just too big and unwieldy to be useful, and even when they are used the applications are fairly limited

i'm gonna be the boring open source buzzkill here, i don't think stealing content en masse at internet scale and then giving it away freely is much better than stealing and selling it

like yeah okay they're not directly profiting i guess but the profit motive is a footnote on the laundry list of theft and plagiarism