Also like... you are not immune to propaganda, so I would caution you to consider whether your 'nuanced argument' is just an OpenAI talking point in disguise.
Specifically I'm talking about arguments about 'intelligence.' Now, intelligence is a slippery and ill-defined term, but I've seen people who I'd hope are well-intentioned talk about the "emergent behavior" of LLMs and suggest that that's a kind of "intelligence."
This is very much the Sam Altman position: 'intelligence' is a spectrum, with 'strong' artificial general intelligence on one end, LLMs on the other, and humans somewhere in between.
But the thing you're meant to believe from hearing this argument is that if only they can pump even more precious resources into an LLM, it will eventually become 'more intelligent'. And that's... simply not true. What an LLM is doing is just qualitatively different from what a human mind is doing. You can call that 'intelligence' if you want, I guess, but that's extremely misleading.
Or, to put it another way: you can say that I can cook, and you can say that a microwave oven can cook, but those are wildly different definitions of what 'cooking' means; and no advance in microwave technology can make it capable of producing, say, a caesar salad.
Ultimately OpenAI wants 'intelligence' to be a singular, obviously measurable thing, and they want you to believe that their product can be said to possess that thing. This is both an inaccurate view of the world and a harmful one; people seem to very easily miss how bound up in colonialist and racist thinking the AI boosters' view of 'intelligence' is.
Above all, it's extremely easy to be credulous about LLMs because the technology is frankly intended to trick people. The tech itself is propaganda for the tech. LLMs generate text that's statistically likely, text that resembles 'real' text written by someone; that's the trick. It is of course very difficult to tell a statistical average apart from the real thing, except in the ways that really matter when the rubber meets the road.
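You can see the 'statistically likely text' trick in miniature with a toy bigram model. This is a deliberately crude sketch, nothing like a real transformer (real LLMs use neural networks over subword tokens, not word-pair counts), but the principle is the same: count what tends to follow what, then sample accordingly, and out comes text with the surface shape of the real thing.

```python
import random
from collections import defaultdict

# Toy corpus; a real model trains on trillions of tokens, but the idea scales.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count how often each word follows each other word (bigram counts).
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(start, length, rng=random.Random(0)):
    """Emit up to `length` words, each sampled from the distribution
    of words observed to follow the previous one."""
    word, out = start, [start]
    for _ in range(length):
        followers = counts.get(word)
        if not followers:  # dead end: no word ever followed this one
            break
        words = list(followers)
        weights = [followers[w] for w in words]
        word = rng.choices(words, weights=weights)[0]
        out.append(word)
    return " ".join(out)

print(generate("the", 6))
```

Every sequence it produces is locally plausible because, by construction, every word pair occurred in the training text; but there is no model of cats, mats, or sitting anywhere in it, only frequencies.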
Pumping more electricity into these data centers and munging more text is not going to produce something that has the qualitatively different capabilities that a human mind has. This is like saying "cars are getting faster all the time so soon they'll be able to fly." Or thinking that VR headsets getting better will lead, merely through incremental improvement, to full dive experiences that are indistinguishable from real life.
