checked on lesswrong. discovered that they've made themselves afraid to talk publicly about ways an AI could recursively improve itself and cause a singularity. they call posting about it an "exfohazard", since the text could end up in an LLM's training data
they're also still afraid that a computer is going to make them go to hell, but that goes without saying
