perfectform

#1 Cryptolithus Fan

  • ordovician limeshale she/they

But there is nothing there for Science. Editor, New York Review of Wasps.


It seems like a lot of people have only partially internalized that systems like ChatGPT are statistical models, not constructive reasoning models. You'll see quite compelling demonstrations that sexist or racist trends in the underlying corpus have been replicated in the model, followed by "asking" the text model to "explain" this element of its response to the prompt. But there's no reason to think that an "explanation" output by the model is any more than superficially related to the actual reason for the generated response: the reason would in fact not be a tidy-seeming description of a line of thought, but rather the transect of training data and reinforcing interactions feeding into that component of the original response. These outputs are only "explanations" in that they resemble explanations in the training corpus and previously-outputted "explanations" that met with user approval. The nature of the encoded biases is best understood through the biased responses themselves, not through generated follow-ups that statistically resemble real-life defensive follow-ups.
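The point can be made concrete with a deliberately tiny statistical model. The sketch below (a toy trigram-style sampler over a hypothetical six-sentence corpus, not anything like a real LLM) reproduces a gendered association baked into its training text. Note that the only faithful "explanation" of the output is the table of training counts itself; any further text the model generated would just be more sampling, with no access to those counts as reasons:

```python
import random
from collections import Counter, defaultdict

# Hypothetical toy corpus with a built-in gendered association:
# "nurse said" is always followed by "she", "doctor said" by "he".
CORPUS = (
    "the nurse said she was tired . the doctor said he was tired . "
    "the nurse said she was busy . the doctor said he was busy ."
).split()

# "Training": record which word follows each pair of words.
follows = defaultdict(list)
for a, b, c in zip(CORPUS, CORPUS[1:], CORPUS[2:]):
    follows[(a, b)].append(c)

def generate(prompt, n=3, seed=0):
    """Continue a prompt by sampling statistically likely next words."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(n):
        options = follows.get((words[-2], words[-1]))
        if not options:
            break
        words.append(rng.choice(options))
    return " ".join(words)

# The model replicates the corpus bias:
print(generate("the nurse said"))   # → "the nurse said she was ..."
print(generate("the doctor said"))  # → "the doctor said he was ..."

# The actual "reason" for those outputs is just these counts —
# nothing the model could generate consults them as reasons:
print(Counter(follows[("nurse", "said")]))
print(Counter(follows[("doctor", "said")]))
```

If you "asked" this model to explain itself by feeding it an explanation-shaped prompt, it would do exactly what it does with every prompt: emit whatever continuation the counts make likely. The same structural observation, scaled up, is the post's point about prompting an LLM for its reasoning.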

