corhocysen

RAL 6011 'Reseda green'

i want to become mediocre at everything.
i speak 🇬🇧🇳🇱 well, i'm currently learning 🇯🇵, and i can maybe read some other stuff


mastodon (defunct)
hsnl.social/@corhocysen
twitter (defunct)
twitter.com/corhocysen

atomicthumbs
@atomicthumbs

checked on lesswrong. discovered that they've made themselves afraid to talk about ways an AI could improve itself recursively and cause a singularity. they call doing this publicly an "exfohazard" because an LLM could be trained on it


atomicthumbs
@atomicthumbs

they're also still afraid that a computer is going to make them go to hell, but that goes without saying




in reply to @atomicthumbs's post:

lesswrong is like AI quicksand but not for any of the reasons they think, it just turns any machine trained on it into a verbal ouroboros, an ongoing philosophical debate where all sides are intimately used to flowing fluidly around the questions of their debate partners without actually knowing anything of what their debate partners are talking about.

if AI recursively improving itself and getting better over time is the singularity,
what do we call AI recursively trained on AI outputs, getting worse over time? it feels like it should have an equally grandiose name

convenient how that gives them a great excuse to say “ah well of course I COULD tell you the secret methods, but it would be far too dangerous to do so”

as a bunch of shonen-obsessed doofuses without a single original research direction between them, this is very attractive to them. convenient excuses for doing this have a long history in the community.

in reply to @atomicthumbs's post:

ai hell is real; it’s just really easy to opt out of.

the structure of ai hell is: you have to argue with lesswrongers all day about the imminent ai apocalypse. the ai apocalypse never comes.

ai hell is surprisingly popular among lesswrongers