The problem, at least to me, is that they imagine a sufficiently intelligent AI can basically dream up new knowledge within itself, unconstrained. It's not going to know how to make a deadly nanobot or whatever, because no one knows which nanotech ideas work and which don't; there's a huge space of possibilities to search, and the only way to find out is experiments (or simulations of experiments)! Plus, a lot of existing knowledge is probably flawed or incomplete in ways that aren't detectable without replication attempts, which frustrates any effort to even make those simulations realistic. Unlimited intelligence doesn't solve bad or missing foundational measurements.
