shel

The Transsexual Chofetz Chaim

Mutant, librarian, poet, union rabble-rouser, dog, Ashkenazi Jewish. Neuroweird, bodyweird, mostly sleepy.


I write about transformative justice, community, love, Judaism, Neurodivergence, mental health, Disability, geography, rivers, labor, and libraries, through poetry, opinionated essays, and short fiction.


I review Schoolhouse Rock! songs at @PropagandaRock


Website (RSS + Newsletter)
shelraphen.com/

I do hope people understand that, like, OpenAI is trying to automate my job out of existence too by claiming GPT-4 can replace a librarian. I have some personal stakes in the AI fight too; it's not just freelance graphic designers on the line here. That people take that post as "pro-AI" is really disappointing to me. I'm trying to shift attention away from arguments that get lost in the Cartesian dualism vortex and toward arguments about securing good jobs and income for workers whose jobs are affected by this technology, and toward things like how GPT-produced content sludge spreads misinformation through our media landscape. What are the materially measurable things we can strategically target through things like union organizing (like the WGA did, imho a model of how we should be approaching the AI issue)? That feels more productive than arguing about whether living creatures infuse parts of their soul into art, which to me is like arguing about whether the wine literally becomes the blood of Jesus Christ or only spiritually becomes the blood of Jesus Christ.



in reply to @shel's post:

Yeah, I can see plenty of useful tools and small-scale uses of the tech for automation purposes, but like so many technologies it basically cannot exist under capitalism without being exploited to destructive ends for profit. It's a shame that the perfectly rational hatred of exploitation poisons the well when discussing the ways technologies can be used for people's betterment.

So I think what might've happened here is that everyone got themselves worked up about crypto, which has zero positive benefit and is basically the perfect vehicle for scammers. And now that crypto has basically evaporated and all those scammers have moved to AI, people are unable to wrap their minds around the idea that AI might have some nuance to it.

im glad! i think the difficulty of focusing on why interacting with these tools is an actual problem, beyond the emotional level, makes it easier for ppl to dismiss it when we rly shouldnt. i hope that as we get a better understanding of LLMs, stolen data, surveillance, model training, their impact, and their limitations, more ppl come to realize exactly why these programs shouldnt be making the decisions they already are!

Yeah I think also focusing on the tech that isn't generative "creative" AI generally makes it much more Immediately Clear what the harms of the technology are, as a technology. Most of my concerns with most AI I see have nothing to do with Art.

generally all engagement with it in its current state creates demand and feeds it data in some capacity (some forms worse than others), as well as normalizing it to the public or implementing it to avoid culpability or ethics regulation. the video i led off with makes a lot of good points around like ... needing to not normalize beliefs that computers cant lie or are more logical than us. especially when a particular tool isnt even a program we can open up and change and analyze (LLMs are black boxes. we dont know how they give us their results and cant figure out how they do it), all you can do is train it on data. but no data is ever enough, and it really struggles to understand nuance of any variety. but data surveillance and data-selling industries want to argue all it needs is more data ... u can see where the cycle goes

It does kind of seem like, for a decade and a half, companies have been using mass surveillance to collect and sell our data without an entirely clear sense of what some of that data is even useful for, just that it inherently has some value. And now these algorithms create the perfect little beast to feed it all to, in order to convert surveillance data into capital. Like how pigs convert garbage into calories.

The fallibility of GPT is one of the major things I'm constantly reminding people of. I've been impressed by GPT-4 giving surprisingly nuanced answers to intentionally loaded questions trying to get it to confirm biases, only to have it challenge those biases instead. But GPT-4 is currently pretty locked down behind paywalls, and the LLMs that the masses are using like search engines to generate nonsense that agrees with them are Bard, Bing, Notion, GPT-3, etc., which will basically just confirm anything you want no matter how made up. If you don't know that that's what it's doing, and you think you're just asking it questions and getting true answers, then we've replaced research with an alternate-reality bubble generation machine where everyone can "do research and ask the AI" and get any worldview validated.

What are the materially measurable things we can strategically target through things like union organizing (like the WGA did, imho a model of how we should be approaching the AI issue)?

I want to hear more about this! Because yes, I think it's worth it to gently educate the teenagers fooling around on Character.ai about the issues with the technology, but I have a hard time imagining that leading to material, systemic change, as opposed to organized pressure on the powers that be. Knowing something is bad isn't the same as doing something about it, or even knowing what to do about it.