checked on lesswrong. discovered that they've made themselves afraid to talk about ways an AI could recursively improve itself and cause a singularity. they call discussing this publicly an "exfohazard" because an LLM could be trained on it
five bucks to anyone who convinces them it's okay as long as they format all their posts like this
