bruno
@bruno

So, a company sets up some service that they claim is automated and runs on 'AI.' Behind the scenes, they are running some imperfect and noisy data through an ML model; this ML model, being imperfectly reliable, is backed up by some large volume of invisible, precarious human labor.

At which point do we call this a mechanical turk? What hit rate does the ML need to achieve before we're no longer allowed to say the service is "just some guy"?

More importantly, why would we cede discursive power to these companies by going along with their assertion that their 'AI' is doing something? Why would we join them in asserting that the labor that backs up this service is in some sense being 'done by' an AI, rather than adopting the posture that humans are doing this labor in a way that is mediated through an ML system?

Of course, companies like Amazon do not divulge the success rate of their systems; they don't divulge how much human intervention is required. In the case of 'Just Walk Out', it sounds like rather a lot of human intervention was required.

But whether human review was being used in 90% of cases, 50% of cases, 10% of cases or 1% of cases: the reality is that it is absolutely fair and correct to say that this 'AI' was Just A Guy. Because ultimately this system would not have worked at all without the human labor; automation was not actually removing the need for human labor, it was just... automating the more obvious parts of the task.

I need people to stop being so reflexively afraid of being 'unfair' to fucking Amazon that they lose sight of the actual realities at play here. Fundamentally, an ML system was being used to render human labor obscure and invisible, in the interest of disinforming everyone involved so that this labor could be rendered precarious and disposable. Fundamentally, claims of automation were being exaggerated by a corporation with an interest in building this type of potemkin technology.

It is also important not to get dazzled by technical bullshit. "Ah, but those workers were merely 'training' the system." That is Amazon's claim. There are two problems with the way people are taking this claim at face value.

First, it is an unsupported technical claim that this 'training' would ever lead to a system that would function without human intervention. Are we meant to simply take Amazon's word for it on this?

Given that Amazon themselves seem to be abandoning this model, perhaps we shouldn't take their claims about its future capabilities (not even current capabilities! future!) at face value.

Furthermore, very clearly the human labor was required for the system to function satisfactorily at all in its present state. We don't have to countenance bullshit claims about future capabilities.

One way or another, to 'train' an ML model is still meaningfully to do the labor required to make that model function. Once again, Amazon's framing of the circumstances here is meant to make those workers disposable; they are an on-ramp to full AI management of the system. When we accept Amazon's framing and decide that we need to be 'fair' to them in this way, we aid them in their intent to use up these workers and then discard them.

Fundamentally, when an Amazon spokesperson says something, you don't have to immediately assume that they are lying but you do have to recognize that they do intend to mislead you. They are trying to enforce an epistemology of the world that is favorable to them and disfavorable to humanity. Please remember who you are hearing from and remember that they are not a good-faith participant in discourse.

Amazon's intent was absolutely to get people to think (investors, consumers, regulators, etc) that this was a closed loop with no human intervention, that they had automated away the need for grocery checkout workers. And that is simply not true, and it is right and fair to point out that it is not true. You are not 'correcting misinformation' by pointing out that the ML system obviated the need for direct human intervention in some number of cases, you are missing the point entirely.

So yes, Just Walk Out was literally Just A Guy; to shriek at people that they Don't Understand the Technology for saying so just makes you a sophist doing Jeff Bezos's work for him.


bruno
@bruno

tl;dr I need people to stop thinking of 'AI' as a 'technology' and start thinking of it as what it is: a rhetorical strategy used to change how we perceive and talk about a related group of technologies.


mrhands
@mrhands

I see two main types of AI grifting going on right now:

  • Literally Some Guy
  • The Wise and Unknowable Oracle

When you see headlines about "AI", you can usually substitute one or the other:

  • Amazon uses Literally Some Guy to figure out what you're buying as you walk out of their stores
  • Lockheed-Martin deploys The Wise and Unknowable Oracle to find high-value targets on the modern battlefield


in reply to @bruno's post:

(moving because I realized I commented on the wrong post my bad)

So yes, Just Walk Out was literally Just A Guy; to shriek at people that they Don't Understand the Technology for saying so just makes you a sophist doing Jeff Bezos's work for him.

"Just A Guy", or the specific wording from the original post, "just some guy watching you on camera and totaling up what you take", makes it seem like there was no doomed-to-fail ML system there at all—if you don't understand that the system even existed how can you understand all the nuance you've outlined about it above?

The clarification wasn't anything close to "shrieking," though I can't go back and re-read it to confirm, because the OP rightfully saw this kind of reaction coming and deleted it. It got Gizmodo's article wrong, but unless I'm completely misremembering, I took the same points from it that you just made: that there was an ML system, that it could never really do what it claimed to, and that it needed tons of human labor.

You're being way too hasty in portraying someone as going to bat for Amazon just because they wanted to clarify what actually happened.

I, for one, understand that they had an ML system that they were aspirationally training. I don't think anyone is trying to make it seem that there wasn't. It was, for example, the thing they showed to investors, which is described in all the recent reporting about this I've seen.

The ML model is not an important thing to describe about how the system worked, because the ML model didn't work. The only part that worked was the people watching you on camera.

It is important, though. It's important to know that it didn't work vs that it was never there in the first place. It's critically important to know what the tech is and isn't capable of if you have ethical concerns about it.

The original post had no context whatsoever and was very much unclear about whether there even was an ML at all. I don't think that was intentional, nor do I think anyone is intentionally trying to claim that. What I am saying is that this pushback against even trying to understand the facts of the system is an overreaction.

The hopes and dreams of technologists are not essential context.

"Amazon operated a grocery store by watching what you took on cameras" is a true statement.

"Amazon operated a grocery store by watching what you took on cameras, and at the same time spent a lot of money on the aspiration of training an ML model" is another true statement that doesn't contradict the first.

You can keep adding context, like how they would sometimes test the non-viable ML model on customers with the effect of giving them free stuff, but adding that context is not our job, it's the job of their PR people.

Their story was that the ML model could tell what you bought. It couldn't. They only hoped for it to do that. Only the people watching on camera could tell what you bought.

I think policing the tone and framing of random internet users you agree with the important things on is part of the reason we still have these problems, frankly.

Like do what you want, I'm not a cop, but I just think this whole argument about semantics is more about ego than anything else.

To be fair, it's common in tech to announce a pretend product, sell it, and have just-some-guy™ pull the strings.

A company I worked for sold more than one product like this, pulling in millions while their salaried staff had to log in remotely every morning to every site that had purchased it and manually do the work the product was supposed to do. One product finally existed after more than a year, only for the next product to be sold before it was built, and so on.

No one saw anything wrong with this, because everyone connected to the business knew that many competitors were doing much the same (except the people who had to do it manually -- we hated it).

It's a viable business model because they rarely get caught. So while it's technically possible there was an ML model involved, it's quite likely they only started working on one after shipping the idea, to test whether it was worth putting more resources into and selling to others. And the result is effectively the same.

You're right! The whole point of clarifying that they did attempt, and fail, to build such an ML system is so that we understand they tried to build it and found it unviable. Otherwise we only know that Amazon lied, not whether the tech could be built, and abused, down the line or by another company.

in reply to @bruno's post:

given enough time our human weaver participants will no doubt teach our special wood-leveraging artificial intelligence which we've named "Spinning Jenny" to spin entirely unaided, so it's not really accurate to claim our industrial process is "just a new urban proletariat"

I really feel like we shouldn't be surprised when the company that has a division literally called "Mechanical Turk," specifically for outsourcing AI-like behaviour to workers in developing countries, claims to have AI-powered solutions we're not allowed to see inside, and it turns out to be mechanical turks all the way down.