CERESUltra

Music Nerd, Author, Yote!

  • She/they/it

30s/white/tired/coyote/&
Words are my favorite stim toy


atomicthumbs
@atomicthumbs

checked on lesswrong. discovered that they've made themselves afraid to talk about ways an AI could improve itself recursively and cause a singularity. they call doing this publicly an "exfohazard" because an LLM could be trained on it


pervocracy
@pervocracy

five bucks to anyone who convinces them it's okay as long as they format all their posts like this



in reply to @atomicthumbs's post:

lesswrong is like AI quicksand, but not for any of the reasons they think: it just turns any machine trained on it into a verbal ouroboros, an ongoing philosophical debate where all sides are intimately used to flowing fluidly around the questions of their debate partners without actually understanding anything their debate partners are talking about.

if AI improving itself recursively and getting better over time is the singularity,
what do we call AI recursively trained on AI outputs, getting worse over time? it feels like it should have an equally grandiose name

convenient how that gives them a great excuse to say “ah well of course I COULD tell you the secret methods, but it would be far too dangerous to do so”

as a bunch of shonen-obsessed doofuses without a single original research direction between them, this is very attractive to them. convenient excuses like this have a long history in the community.

in reply to @pervocracy's post:

"generalized artificial intelligence is inevitable" is mildly silly, "generalized artificial intelligence is inevitable and what we say about it right now will determine its disposition toward humanity" is off the deep end and accelerating, "generalized artificial intelligence is inevitable and will be based directly on our current usage of turbo autocomplete" is 1-800-COME-ON-NOW

I'm willing to believe in generalized artificial intelligence in our lifetime, in the sense of a chatbot that appears to reason like a human, but the part where I start to fall off is why that should lead to an ever-accelerating uncontrollable singularity of hyperintelligence.

(I know the orthodox answer is "because it can modify its own code" but this still basically assumes that there's a Smartness Dial which can be turned up to infinity and that this will have effects more cinematic than "it's still a chatbot, but it generates its responses really fast.")

and I think a lot of this is downstream from rationalists thinking of themselves as Smart People, desperate to hold on to the belief that this is extremely real and special and powerful

it's the Great Man theory of history taken to its logical conclusion, where one entity is so great that it ends history... and that entity is just like me only more so

seems like a fantastical spin on the much more observable ever-accelerating uncontrollable singularity of tech marketers persuading an increasingly broad swath of the actually powerful that gee whiz this AI is so hyperintelligent you might as well put it in charge of vital infrastructure, writing and enforcing policy, let's have it run the nukes! meanwhile the AI remains at roughly Eliza tier and occasionally outputs the Protocols of the Elders of Zion for reasons nobody can fix without spending money they're unwilling to part with