i find that i have difficulty telling whether articles about LLMs are written with the assistance of said LLMs. not because LLMs are good at writing, but because people writing about using LLMs are often bad at writing

I've seen people sharing stories written with the assistance of LLMs. They are always EXCEPTIONALLY BORING OMG. It goes beyond lacking creativity. Human language conveys tone through text in ways that are impossible to replicate unless the words are chosen by a human.
Things like varying sentence length to keep reader interest, intentionally creating run-on sentences, nesting ideas with commas and parentheses, adding tangents to explain concepts you just introduced. None of that is possible with these machines.
most of those things you suggested are primarily syntactic, so there's no reason they shouldn't be easily replicable, and yet the output has such an enduringly bad style
the more I think about it, the more I realize that the statistical model trying to find the most likely next word is, in and of itself, probably why the output just ends up boring after a while.
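That "most likely next word" point can be sketched as a toy example. The word counts below are invented purely for illustration, and real models are vastly more complex, but the basic mechanism is the same: when you weight choices by how common they are, frequent (bland) continuations crowd out rare (surprising) ones, and lowering the sampling temperature makes it even worse:

```python
import random
from collections import Counter

# Hypothetical frequencies of words observed after "the" in some corpus.
# (Invented numbers, just to illustrate the mechanism.)
next_word_counts = Counter({
    "same": 50, "best": 30, "usual": 15,
    "iridescent": 3, "unthinkable": 2,
})

def sample_next(counts, temperature=1.0):
    """Sample a next word; lower temperature sharpens the distribution
    toward the most common (least surprising) choice."""
    words = list(counts)
    weights = [counts[w] ** (1.0 / temperature) for w in words]
    return random.choices(words, weights=weights)[0]

# Greedy decoding (the limit as temperature -> 0) always picks the
# single most frequent word.
greedy = max(next_word_counts, key=next_word_counts.get)
print(greedy)  # "same"

# Even with sampling at a modest temperature, rare words almost never
# show up: over many draws the common word dominates.
random.seed(0)
picks = [sample_next(next_word_counts, temperature=0.5) for _ in range(1000)]
print(Counter(picks).most_common(1)[0][0])  # almost certainly "same"
```

With `temperature=0.5` the weights get squared, so "iridescent" goes from a 3% chance to well under 1%, which is one way of seeing why likelihood-driven text gravitates toward the most predictable phrasing.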
Sum up the internet. All the fucked up bits, all the informative segments, and even all the made up anecdotes. Now shove it through a content filter made to remove any semblance of opinion, politics, or creativity. You get LLMs.
The data fed to the model has been scrubbed so thoroughly for objectivity that what's left is this sterile language construct that can barely be novel, inventive, or even original.
basically it works great for corporate speak!
honestly I could see some people who are forced to write press statements for big companies pumping their fists that they no longer have to do their miserable job.
Ages ago I wrote a bit of fiction that was a metaphor for social justice and economic inequity, and recently I decided to feed it to ChatGPT to see what it'd do with it.
First I asked it to summarize the themes, which it did a surprisingly competent job of.
Next I asked it to evaluate the actions of the status quo enablers in the story, the ones who were causing harm through passivity and acceptance, and it failed to understand that they were even present and refused to evaluate their actions (or inactions).
Finally I asked it to write another story along the same lines, and it wrote out the most amazingly awful bit of milquetoast bootlicking drivel I've ever seen, with a style reminiscent of a third grader's essay on Christopher Columbus.
This shit only passes muster for people with absolutely no critical reading skills or who don't value anything about writing as an art form.