chimerror

I'm Kitty (and so can you!)

  • she/her

Just a leopard from Seattle who sometimes makes games when she remembers to.


nex3
@nex3

strongly considering becoming one of those people who insists on never referring to a particular thing by its common name for political reasons because framing big probability models as "artificial intelligence" and pretending they're anything like HAL 9000 or Data is doing so so much damage to laypeople's understanding of a technology that primarily exists to make their lives worse


chimerror
@chimerror

Strongly agree with this, especially with the current topic of "artificial intelligence"/"machine learning" which is why I was very pleased to learn of the term "statistical learning".

Though "learning" still implies something more than is really there, I think the "statistical" part hits at something so important that it moderates the "learning".

That being said, I'm still very cautious of us being too certain that human brains are radically different, especially given the absolutely unthinkable reality that this type of statistical learning seems to work at least well enough that we do, in fact, see images in generated images.

And on the third, secret, hand I feel that getting in depth about the technical details here or even the relationship to art or if it's "real" or not is like focusing on the exact chemistry that makes guns work rather than the fact of who is granted access to guns (oppressors) and what purposes they use them for (oppression), and what necessity is there for those who don't have them (the oppressed) to gain them to be able to counteract their current use (pretty fucking high, sadly).

In conclusion: confused wailing and gnashing of teeth.



in reply to @nex3's post:

Blaming the matrices is a category error; the problem is the organizations that create them and how they are used.

I started to say the matrices are not fundamentally cursed but on reflection some of them do embed racism, copyright infringement, etc. But the problem is not the matrices, it's the systems and incentives that cause them to be created with disregard for such issues.

To be fair to the people working on those models, I think most are pretty hesitant to compare their models to true intelligence. I recall reading a Yann LeCun post where he pretty plainly states that ML models do not have the learning capacity of a baby, and that they are not even close to having it. The people who seem to brag the most about ML models are adjacent people like journalists and marketing folks, or people in other, tangentially related fields. As usual, the people doing the actual work are shouted over by less informed people with the most potential to profit.