At the root of it, SQL injections are basically programs not distinguishing instructions from data - if your cool app passes user input as-is into your SQL database, someone might decide to change their name to have it interpreted as a query to delete every database table, which the database then dutifully proceeds to do - that's what the query said to do, after all.
With instruction-tuned language models you have the same problem in the form of prompt injection except you have no way of architecturally distinguishing between instructions and data: underneath, the instructions and user input will be treated as one big bundle of text, and the language model will just suggest a way to complete that text. It's just generating text, after all, it's the mapping layer that turns it into an email/API call/deleted database that does the actual damage.
As Simon goes into, most of the mitigations being proposed don't really attack this core problem, but boil down to phrasing more clever instructions, or trying to validate the input or output in some form.
"The program can't distinguish your instructions from the user's" feels like the explain-like-i'm-5 explanation of SQL injections, but also a simple straightforward explanation of prompt injection