Sorry bro, I can't be amused by all those memes of Google's search AI giving insane answers, like Goku helping you check that your chicken is at a safe temp of 100°F. They're all fake, and you're being tricked into thinking these systems are less capable than they actually are and that we don't need to worry about the effect they'll have on the world.
You've got to understand that half of their danger lies in their subtle errors, not their obvious ones.
I really don't give a shit about your philosophical stance about how an LLM can't be creative or your misconception that they piece together chunks of the images they ingest or your "even critically engaging with LLMs is playing into their hands, if you're not ignoring them you're complicit" venting disguised as rhetoric.
Anthropic is already getting results isolating concepts as features in their models and tweaking them to intentionally change the behavior much more reliably than just by prompting. Imagine an LLM that can literally have the concept of LGBT people disabled so that it doesn't consider them when generating responses, in a way that may not be detectable from the prompt.
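To make that concrete, here's a toy sketch of the general idea behind feature steering: treat a "concept" as a direction in activation space and clamp the activation's component along it. This is a hypothetical, hugely simplified illustration with invented names — Anthropic's actual work uses sparse autoencoder features inside a real model, not random vectors.

```python
import numpy as np

# Toy illustration only: a "concept" modeled as a unit direction in an
# 8-dimensional activation space. Real interpretability work extracts
# such features from a trained model; everything here is made up.
rng = np.random.default_rng(0)

concept = rng.normal(size=8)
concept /= np.linalg.norm(concept)  # unit vector for the "concept" feature

def clamp_feature(activation, direction, scale=0.0):
    """Set the component of `activation` along `direction` to `scale`.

    With scale=0.0 the concept is suppressed; a large scale would
    amplify it instead.
    """
    component = activation @ direction
    return activation + (scale - component) * direction

h = rng.normal(size=8)                 # stand-in for a hidden activation
h_edited = clamp_feature(h, concept)   # concept "disabled"

print(float(h @ concept))          # some nonzero component before editing
print(float(h_edited @ concept))   # ~0 after suppression
```

The unsettling part the post is pointing at: the edit happens to the activations, not the prompt, so nothing in the input text reveals that the intervention occurred.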
I want to stay up to date on their capabilities so that when I have professional opportunities to organize against them I can do so. I don't think we can afford to ignore them, but the opposite of ignoring them is not necessarily embracing them.
I totally get where this is coming from, and I agree with it. It's through the subtle distortions that genAI is and will be harmful to society and to individuals, in a very real sense. But if I go and tell my neighbour or coworker or whatever, "genAI can use ideological shifts in language to alter your subjectivity and change your consciousness of material social conditions," they're not going to know what the fuck I'm talking about. We understand that problem. The best way to reach them about it is to break the illusion that it's competent.