bc i still remain primarily a twitter user, it's likely u have not been keeping up with my studying of "AI" and related algorithmic / automated systems as they already exist in our lives. here is a small selection of videos and articles i've been sifting through, for ppl interested in the shift across wider industries toward "AI" as well as its environmental impact. the primary one i'm going to recommend is the video at the start of this post, "AI does not exist but it will ruin everything anyway". it's about an hour long, by an astrophysicist who has worked with some forms of these tools in her job. it breaks down the limitations of "AI" and what it can and cannot do, and explains the basic science behind black box systems (which make up the majority of the tools we refer to as "AI").
its worth noting i do not care for copyright law and that is not going to be the focus of anything i share. my issues with this technology are based on how it already has been, and very well may continue to be, used to play a negative role in our lives through a) widespread false information, b) being used to make decisions, especially decisions about human lives, and c) what it means to presume we can automate processes at all when we can't even assess how a tool or program arrived at a conclusion.
-
"Machine Bias" and "How We Analyzed the COMPAS Recidivism Algorithm" two articles by ProPublica in 2016. they delve into the COMPAS program, which analyzes if someone in the prison system is likely to be a re-offender and whether or not they should be granted bail or parole. It has a massive racial bias against Black people as false positives, and inaccurately rates white people with higher rates of false negative results. it's possible this type of judiciary program for digitized and predictive policing serves some of the basis for more recent work on predictive AI used to arrest people before they commit crimes..
-
"Is AI Making Us Less Human?", 30 min video by Lily Alexandre exploring AI as an extension of "content sludge" and how it risks increasing interpersonal paranoia. she also touches on how the designs of different forms and interfaces of robots or "AI" tools is manipulative in premise, as a way to collect more facial and consumer data. Cracked Labs has a useful summary from 2017 on how corporate surveillance aims to collect this data and how and why they use it.
-
"The Pain Was Unbearable. So Why Did Doctors Turn Her Away?", a 2021 article from Wired about NarxCare: the program near-universally used by hospitals, prescribers, and pharmacies in at least 40 states. (The remaining states use something similar but not that exact program. only Missouri does not yet fully have such a program set up). NarxCare measures the "risk" of someone being likely to abuse "narcotics, stimulants, or sedatives" by using predictive and cumulative medical data to generate a "Narx Score" and requires medical professionals to refuse to serve people with scores deemed too high, or else they may be raided and lose their license. What the algorithms of this program contain specifically is impossible to verify.
-
"Pegasus: What you need to know about Israeli spyware". Pegasus as a program and it's impact on Palestinians in particular is so large I admit I don't know where to begin in sharing how this spyware works and the impact it has. Certainly it's what allows Israel to monitor private device use in ways that result in"Palestinian accused of blocking Israeli colleague on social media arrested by police". for awhile the company that created the program also sold and offered services for anyone who could pay for the setup fee of installing Pegasus and continuous monitoring of groups of people. "The spyware can monitor calls, capture text messages, track a user's location, and collect passwords, photos, and other data". This year The New Arab put out "Automated Apartheid: How Israel's occupation is powered by big tech, AI, and spyware". To be clear, I am not claiming Pegasus is a generative AI program, though it's used for their developing predictive policing programs. But it does play a role in the overall world of data collection, which you cannot separate from "AI" tools, because data collection provides what models are then trained on.
-
"How much water does AI consume? The public deserves to know", Shaolei Ren, associate professor, researched and published a general overview for OECD.AI; an international hub for AI public policy and its risks if used inappropriately. it goes into the two primary forms of water consumption (and water withdrawal) and how the uses of water in coolant systems contributes to environmental impact and emissions. this coincides with reports this year that Microsoft's water consumption rose 34%, and Google's rose 21% from 2021 to 2022. (Not to mention the issues with OpenAIs internal culture and that many of its team has been very vocally pro-Israel and pro "AI in warfare".) It's worth noting several of these deals for expansion may aim to be partnerships with the UAE, alongside the massive deal Intel has struck with Israel as of two days ago, to build a new plant in "southern Israel". The corporate push for this tech and development of it has large potential environmental impact, and also coincides with some very obvious geopolitical and military related aims in American, Israeli, and UAE collaboration in dominating and under-developing other nations in MENA/SWANA.
Some books high up on my list to read on these topics include "Surveillance Valley: The Secret Military History of the Internet" (Yasha Levine), "Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence" (Kate Crawford), & "Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor" (Virginia Eubanks). if you've got more, and even better if they have actual marxist or anti-imperialist analysis as part of their ethos, please send recs my way!
i don't like AI for a lot of reasons, but i think it's not super helpful to discuss this technology in inaccurate ways; any issues with the philosophy of a particular technological design are easier to articulate if you actually study and analyze how these things work and why you should actually be bothered by them. while i don't like how ai art tools impact me as an artist, my reason for not interacting with ai tools is to avoid handing them our data and possible training material, because from a political standpoint, that is objectively a lot worse!