kojote
@kojote

Good morning, fuzzies~ I hope that you’ve had a good weekend, and that you’re getting up to the best kinds of trouble this Monday—almost, somehow, the last day of February? Goodness. Anyway, I still don’t feel up to writing up my electric shock risk >.> so instead, let’s talk about AI.


kojote
@kojote

Good morning, fuzzies :P I promise I will post a picture of a dog tomorrow, but this has been nagging at me, so I’m going to Complain About Technology again, or at least expand on what I said the other day about AI, so feel free to skip it if you are not in the mood for a coyote being bitter about the tech industry and its willingness to shirk accountability.

It starts with me, asking OpenAI’s GPT-3 to write a news story about giant, winged pandas taking over an aircraft carrier. As before, prompt is white, everything in green is AI. Click to expand.

Blinken on Panda Hijack Video: “Outrageous”

By Peter Baker
March 2, 2023

WASHINGTON — In an emergency press pre-dawn press conference at the Pentagon, Secretary of Defense Anthony Blinken offered a strident response to reports that the USS Ronald Reagan had been hijacked by giant pandas following the widespread circulation of a video purporting to depict the incident. At-times blurry, and apparently taken with a cell phone, the footage appears to show a group of pandas emerging from the water around the aircraft carrier while its crew can be heard speculating in heated Esperanto about their intentions. Acting in coordinated fashion, the pandas then use their massive, leathery wings to blind the ship's defensive systems before taking control of the nuclear-powered vessel.


The video was posted to Facebook shortly after 2AM, gaining millions of views before being deleted, and has sparked widespread concern over the status of the Ronald Reagan, currently deployed on exercises in the South China Sea as the flagship of the US Navy's 7th Fleet. While he spoke carefully to avoid disclosing sensitive information about the ship's whereabouts and mission, Blinken nevertheless made clear that the US had taken decisive action in response to what he described as a "grave threat to the safety of both our nation and the international community”.

"We have deployed a large number of our most advanced aircraft to the region and have taken immediate steps to secure the USS Ronald Reagan, which is currently in safe hands," said Blinken. "Our forces have also identified the origin of the pandas and launched a targeted strike against their base of operations. We are confident that this outrageous incident has been successfully contained and are confident that the safety of our ship and its crew are ensured.”

Asked why the video was taken down from social media, Blinken said that the US government had requested its removal to prevent further dissemination of sensitive information. He also urged members of the public to be cautious when sharing information on the incident, as any speculation could potentially jeopardize US security and embolden the pandas. At this time, it is unclear what the pandas' motives are, though the Secretary did say he felt a "political statement" was likely behind their actions.

President Joe Biden is expected to address the nation later today to provide further information on the incident, which follows a series of increasingly hostile acts by China in the South China Sea in recent months. While Blinken declined to provide more details on the location of the pandas' base or the nature of the strike, he did confirm that the US had been in contact with Beijing in an attempt to "de-escalate the situation." The giant panda, an endangered species native to China, is considered a national symbol of the country, and the Chinese government has traditionally been protective of the species.

Steven Monfort, a biologist at the Smithsonian Institution's National Zoo, said that he was not surprised by the incident, noting that pandas are known to be "highly intelligent and capable of complex behavior." He added that while it is unusual to find pandas so far from their native habitat, and even more unusual to find them with wings, he believes the incident could be an indication that the species is adapting to its changing environment.

"This is a stark reminder of the importance of protecting our environment and preserving endangered species," said Monfort. "We must do better to ensure the safety and security of these animals, and the world's oceans and seas, or else incidents like this could become more frequent.”

Asked if the Zoo, which houses two giant pandas, was taking any additional precautions in light of the incident, Monfort said that they were monitoring the situation closely and had increased security around the pandas' enclosure. "We take the safety of our animals very seriously," he said, "but at the same time, we must remember that these animals are not our enemies. They are a part of our natural world and must be respected and protected.”

Anyway. For a start, I’d like to reiterate that GPT-3 is pretty stupid. Here we have a “Mister GAN, you could have saved this. I gave you all the clues” situation. The prompt specifies a “strident response to reports” and a video “purporting to depict the incident.” It describes giant pandas as aquatic, and also winged. Anthony Blinken is not the Secretary of Defense (that would be Lloyd Austin). I put the word “outrageous” right in the headline.

The excuse here would be to lean into what I meant when I said that these models are designed, somewhat unintentionally, to foster anthropomorphism. Maybe it’s just being creative! (But then why do the pandas go back to being normal giant pandas later, when they’re described as a national symbol of China, and living at the National Zoo?) It’s just writing an interesting speculative-fiction prompt! (Is this interesting? It goes full Tom Clancy pretty quickly.)

You know, a plausible speculative news story might have read (again, click to expand):
Speaking carefully to avoid disclosing sensitive information, Blinken labeled the video an “obvious hoax,” noting that giant pandas are not pack animals, are not known to possess wings, and that it would be “inconceivable” for them to seize control of a modern warship. He also pointed to a watermark in the bottom right of the video, identifying it as belonging to TikTok user my_computer_has_nightmares, a channel which posts CGI recreations of its owner’s dreams.

“Frankly, I’m surprised I had to get up at 4AM to deal with this,” the secretary continued. “Perhaps we need to have a conversation about how credulous the press is when dealing with new technology, especially as it becomes easier and easier to create something like this. The last thing we need is something raising tensions further in the region.”

Reached for comment, National Zoo panda keeper Steven Monfort seemed to be unclear as to what he was being asked. “You want to know if giant pandas have wings and hostile intent towards the American navy?”

Given the same prompt, GPT-3 took the story in a “this is clearly a fake” direction once in 50-odd generations, even after I changed the language from Esperanto to Klingon or Sindarin. The more reasonable take, I think, is that—as with the earlier story about the threat of nuclear dogs—this is gibberish. It is a string of English words, completely devoid of any actual meaning and yet appearing to contain it anyway.
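(For what it’s worth, the bookkeeping behind that “once in 50-odd generations” figure is trivial to sketch. Everything below is hypothetical — the function names are mine, the keyword check is a crude stand-in for actually reading each output by hand, and the completion call assumes the legacy pre-1.0 `openai` Python library that GPT-3 shipped with.)

```python
# A sketch of tallying how often GPT-3 takes the story in a
# "this is clearly a fake" direction. Hypothetical helper names;
# the keyword heuristic approximates reading each output manually.

DEBUNK_MARKERS = ("hoax", "fake", "fabricated", "doctored", "not real")

def looks_like_debunk(generation: str) -> bool:
    """Does this continuation treat the panda video as fake?"""
    text = generation.lower()
    return any(marker in text for marker in DEBUNK_MARKERS)

def sample_generations(prompt: str, n: int = 50) -> list[str]:
    """Ask GPT-3 for n continuations of the same prompt."""
    import openai  # imported here so the heuristic above runs without it
    response = openai.Completion.create(
        model="text-davinci-003",  # the GPT-3-era completions endpoint
        prompt=prompt,
        max_tokens=600,
        n=n,
        temperature=0.9,
    )
    return [choice["text"] for choice in response["choices"]]

def debunk_rate(generations: list[str]) -> float:
    """Fraction of generations that call the video out as fake."""
    hits = sum(looks_like_debunk(g) for g in generations)
    return hits / len(generations)
```

The point being: the measurement is easy; it’s the model that almost never takes the sensible branch.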

That is not really my point.

My point is that OpenAI quite possibly forbids this content. In a strict reading, I think it does; the terms of service proscribe “fraudulent or deceptive activity,” including “disinformation.” Which is liable to be their answer, because that’s the easiest way for them to respond: “Our model can be used to do certain things, but please don’t.”

In this instance—an obvious joke—I think that’s fine, but it gets to the heart of a rot in the move fast, break things tech industry that has grated on me for years, which is that it really embodies a desire to profit from moving fast, while disclaiming any and all responsibility for breaking things. At a corporate level they’ve done this for a long time. Somebody, tangible engineers and project managers and lobbyists with identifiable names, should’ve gone to jail for the 737 MAX debacle; they didn’t, and won’t.

But against this backdrop, machine learning offers a seductive, irresistible accomplice: a black box whose outputs are deterministic and offer the appearance of utility while being mysterious and unknowable. Who can say whether a Stable Diffusion image is infringing? If a 911 dispatcher or an air traffic controller makes a mistake and someone is injured or killed, maybe they get disciplined—maybe they are even held criminally liable. If a machine learning algorithm classifies a road as open and a car gets routed onto a closed, dangerous route, well, who’s to blame?

Or if a tool used by state welfare offices flags an “overwhelming” number of Black children compared to white children for mandatory investigation, whose fault is that? Who even knows why it’s doing that? It’s just a model! How could it possibly be up to the company responsible for the AI?

Except, of course, they’ll definitely be “responsible” enough for their product to get paid for it. Facebook has no responsibility for the users it platforms, or the pernicious effect of the advertisements it runs, and yet somehow we all know that if you ask Facebook to target a campaign at a particular demographic, they’re not going to say “sorry, we can’t help you. We have no idea what ads run to which users. We’re just a platform.”

I hope it’s clear that my problem is not “GPT-3 says untrue things and it is not possible to know when or why,” or “GPT-3 isn’t smart enough to know that it shouldn’t pretend to write a New York Times article about a panda attack on an aircraft carrier.” It’s that OpenAI knows that its products will do this, knows that they cannot ethically be used in any scenario with real-world implications, and also knows that the cultural and economic environment insulates them from even having to consider the question. Indeed, from treating the very idea as ridiculous.

There is nothing stopping machine learning companies from telling their customers: “you’re in the news business? Here is a model for which the training set has been validated as containing solely trustworthy information from a variety of channels according to this advisory panel. You’re making a customer service chatbot? Here is a model for which we have removed all the racial slurs and the explicit pornography. You want to experiment with medical applications? Here’s a list of 300 journals and publications—you probably have the expertise to tell us how they should be weighted.”

“Oh, also, it will cost you $20,000,000. Sign here and we can have the first version ready by October.”

What is stopping them is that it would be unprofitable, and also that it would undermine the convenient fiction that they’ve created a magical contraption for which none of that tedious work is necessary. And, of course, that they have no reason to.

Profit-driven companies are (possibly) poor stewards of what benefits humanity in the grand scale. Profit-driven companies delirious with the belief that disruption is an end in and of itself, who know that they can profit from that disruption and be exceedingly unlikely to face any consequences if it goes wrong—there is a reason why Theranos is such big news—are absolutely untrustworthy. They should be bound for eternity in the entrails of a lesser god like any other trickster.

This is not to say, you know, reject modernity/embrace tradition. I don’t hate technology. I have used neural networks at work before. My company has some expertise in L2/L3 automation; it’s fascinating and potentially beneficial. Hearing aids constructed with the benefit of machine learning are apparently quite remarkable. Someone in a chatroom yesterday mentioned how nice it will be when a self-driving Uber can’t see that he’s disabled and decide not to pick him up.

Disruption is not inherently bad. It is inherently consequential. Morally, I think, disruptors should be required not to consider those consequences mere externalities. Practically, at least, we as consumers and as citizens should treat that discussion as vital and necessary.

The actual thing that has been sticking in my head, to close this out, is not about OpenAI or giant pandas. It’s about an argument I heard, and hear, around the artwork and photography generators like Stable Diffusion. Setting aside discussion of overfitting, or whether images that reproduce watermarks are actually reproducing them or merely incidentally approximating the platonic ideal of a watermark, there was nothing stopping Stability AI from ensuring that they had obtained permission for the images used in their training set.

They chose not to because that would’ve been expensive and time-consuming, and they figured they didn’t have to. Cynically, in a “beg forgiveness” software culture, I’m sure there were discussions where they said the quiet part out loud—that, by the time the law caught up, the technology would be so commonplace that the cat would be out of the bag, anyway.

The argument, though, is that they didn’t have to. Sometimes this is a legal argument, but there is also a philosophical one: humans don’t have to ask permission from every person they look at and may later recall, or every photograph and movie that sticks in their head and influences their thoughts. If I watch The Expanse and play Homeworld and it puts me in the mood to write a sci-fi story, that’s just how human creativity works.

But Stable Diffusion is not human. It didn’t have to work that way; it was engineered to. Stability AI presumes to develop a transformative new technology which is transformative chiefly in the way it allows them to profit. It is not always like this. We take photos because silver halide and CCDs make for crisp recall, rather than accepting pictures where parts of the scene fade over time because the human visual cortex is imperfect. We build error correction into our computers because computers allow us that possibility—even if it comes at relative computational cost—rather than dismissing it as unnecessary because, after all, humans don’t calculate checksums in their head. We expect our banks to keep accurate ledgers because computers permit their recall to be perfect, rather than rolling our eyes and saying “you don’t know the exact value of all the spare change in your house, do you?”
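(That checksum line isn’t just a metaphor, incidentally: the whole point is that a few extra bytes of bookkeeping catch corruption a human would never notice. A toy illustration — this uses `zlib.crc32`, which only *detects* errors; real error-correction codes like Hamming or Reed–Solomon go further and repair them.)

```python
import zlib

ledger = b"balance: $1,204.17"
checksum = zlib.crc32(ledger)  # a 32-bit fingerprint of the exact bytes

# Flip a single bit in transit...
corrupted = bytearray(ledger)
corrupted[9] ^= 0b00000001  # the "$" quietly becomes a different character
corrupted = bytes(corrupted)

# ...and the mismatch is immediately detectable, at the cost of four bytes.
assert zlib.crc32(ledger) == checksum
assert zlib.crc32(corrupted) != checksum
```

We pay that cost because machines let us, not because humans could do it in their heads.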

Sometimes the outcomes are trivial, and sometimes they’re not. But if we want companies to build technology that will change the world, why should we allow them to skip the unprofitable parts, as if those were some irrelevant constant of the universe they shouldn’t be expected to change? They did this with the gig economy that got foisted on us, too. “Well, but existing companies aren’t required to...”

They’re not promising today, they’re promising the future. We shouldn’t let them get away with selling us one that sucks.
