
posts from @v21 tagged #ai


i saw an ad recently. it was about a guy who needed to make content for "National Avocado Day" despite thinking it was a fake and bullshit thing. he AI-generated a script, stumbled over the words despite reading off a teleprompter, then used transcript-based video editing tools to crop that bit out. it was a good ad - it felt vaguely relatable, it demonstrated how the tools worked, and it did some work in framing the shots to make a guy sitting in a chair using a mobile phone look somewhat dynamic.
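(an aside on mechanics: "transcript-based editing" just means the edit points come from word-level timestamps rather than from scrubbing a timeline. here's a rough sketch of the idea - the moviepy 1.x calls are real, but the filenames, timestamps, and words are all invented for illustration, and this is my guess at the general shape, not how Vimeo's tool actually works.)

```python
from moviepy.editor import VideoFileClip, concatenate_videoclips

# word-level timestamps, as a speech-to-text pass might produce them.
# these words and times are made up for the example.
transcript = [
    ("happy", 0.0, 0.4), ("national", 0.4, 1.0), ("avocado", 1.0, 1.6),
    ("day", 1.6, 1.9), ("uh", 1.9, 2.8),           # <- the stumble
    ("everyone", 2.8, 3.5),
]

def cut_words(video_path, drop):
    """remove the time spans of unwanted words and splice the rest together."""
    clip = VideoFileClip(video_path)
    spans = []
    for word, start, end in transcript:
        if word in drop:
            continue
        # merge with the previous span when they touch
        if spans and abs(spans[-1][1] - start) < 1e-6:
            spans[-1][1] = end
        else:
            spans.append([start, end])
    return concatenate_videoclips([clip.subclip(s, e) for s, e in spans])

cut_words("take_one.mp4", {"uh"}).write_videofile("take_one_clean.mp4")
```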

and it definitely struck me that it was an advert for Vimeo specifically. Vimeo used to have a brand for being the place where the good & cool video content went. the stuff that was well considered & had had serious time spent on it, not the content churn of YouTube. i understand they did not make much money from that, and have since pivoted to making tools, doing video hosting, generally providing "solutions". and the tools demonstrated in this ad were good! i mean, i'm not so keen on the text generation part, but i can see the use of it, and the other parts are, while not groundbreaking, genuinely useful parts of a workflow. i guess this is their new business model filtering through to consumers, and while i'm pretty sad that the part of their business which supported genuinely interesting culture is dying, i can understand the business case.

but.

it really does strike me how the thing that is being sold here is content that no-one wants to watch, not even the person creating it. it's "content" in its purest sense, something that fills up space, something that happens on a schedule, something that fills time but does not necessarily convey value. like "bullshit" as defined by Harry Frankfurt, content may possess value (convey meaning, express emotion, be a genuine creative act, etc), or it may not - the characteristic of content is not the absence of value but the way in which value's presence is entirely optional. spam and poetry on an equal footing.

and that's the use case for AI tools - when you are expected to say something, but you don't especially care about what you are saying. the appearance of effort (a person being friendly and personable, talking about avocados!) but the effort is entirely in the appearance.



I saw this Reddit post linked on Mastodon this morning, and I've gotten a little stuck on it.

The core of the thing is an uneasy feeling at AI being slipped into the taking of photos in such a way that its presence isn't apparent. This matters a lot when it's pictures of people's faces - when the selfie you take is hotter than you are, when your camera automatically lightens skin, when blemishes and pores are removed, leaving you to compare yourself against an artificial version of not even celebrities, but your friends posting candid shots.

But this isn't really a post about self-image, it's a post about the moon. The moon! It's such a particular thing. It's a singular object that everyone can see for themselves, it's the singular object we can all see together (ok ok, there's the sun & the planets & the stars, but they're more like points of light, they don't have features in the same way).

My friend Gab runs a Tumblr account where she collects the moon as depicted in videogames, and this kind of gets at the same feeling. The moon, the same moon, but shown in all these different ways, in all these different contexts, and in all these sometimes-fictional worlds.

And all of this kind of peaked when I read this article trying to understand whether the moon is fake when you take a picture of it with a new Samsung phone. I think the article is kind of... confused? But, like, actually very illuminatingly so? It's full of folk theories about how the software works, little conspiracy-minded rabbitholes of reverse engineering, and a continual redrawing of the lines of what is "fake". It asks people which side they come down on and keeps score, without ever really examining what those sides are.

Now that we’ve established that the Moon photos from the S21 Ultra are most definitely not fake, how is Samsung pulling off the seemingly impossible? How is the S21 Ultra’s 100x zoom taking a photo that bests even a $4,800 camera setup? Simple: AI.

The line seems to be: if it's compositing in a texture, then it's fake. If it's using AI to "sharpen details" based on identifying the object, then it's real. To which my literal-minded brain cries out: but the AI is just storing the same details, in a form that is less straightforward to access! Just because a process is harder to understand doesn't mean it's not happening. The camera sensor, the optics, they are literally not able to resolve these details, but they must be coming from somewhere. "Scenes and objects that aren’t recognized by the Scene Optimizer will likely look like grainy mush at 100x zoom."
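(A toy version of that argument, in code. Fit even the dumbest possible "detail enhancer" on a moon image and the moon ends up stored in its weights - less legible than a pasted texture, but stored all the same. Everything below is invented for illustration; it's the shape of the argument, not a claim about how Samsung's Scene Optimizer actually works.)

```python
import numpy as np

rng = np.random.default_rng(42)
moon = rng.random((16, 16))              # stand-in for a hi-res moon photo

def downsample(img):
    """what the optics can actually resolve: 2x2 block averages."""
    return img.reshape(8, 2, 8, 2).mean(axis=(1, 3))

# "train" a linear detail-enhancer on the moon. with a single training
# image this is pure memorisation - which is exactly the point.
x = downsample(moon).reshape(1, -1)      # 1 x 64   (the blurry view)
y = moon.reshape(1, -1)                  # 1 x 256  (the full detail)
W = np.linalg.pinv(x) @ y                # 64 x 256 of "learned" weights

restored = (x @ W).reshape(16, 16)
print(np.abs(restored - moon).max())     # ~1e-16: the detail came from W

# the version everyone agrees is fake stores the same information,
# just somewhere you can point at:
def composite(blurry_moon):
    return moon                          # paste the stored texture
```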

But the really interesting thing that's happening in this article is not really the technology, but seeing culture collectively trying to make sense of it. Trying to decide on the boundary lines, trying to define what a "photograph" is. Photography has never been a neutral process, it has always involved setting up lights & staging & pushing the exposure & dodging & burning & airbrushing & composites & all the fussing called "editing". And at the same time, it's always derived a lot of its power from its uneasy relationship to that promised neutrality, to that idea of the objective flat capture of the world. It's "Photoshop", the software manifestation of the place where photographs are developed, that has become the byword for manipulated images.



from Alex Hern's tech newsletter:

We’re at the crossroads of two very different AI futures. In one, the companies that invest billions in training and improving these models act as gatekeepers, creaming off a portion of the economic activity they enable. If you want to build a business on top of ChatGPT, for instance, you can – for a price. It’s not extortionate, a mere $2 for every 700,000 words processed. But it’s easy to see how that could one day result in OpenAI being paid a tiny sliver of a cent for every single word typed into a computer.

You might think that no company would give up such an advantage, but there’s a weakness to that world: it’s an unstable one. Being a gatekeeper only works while there is a fence around your product, and it only takes one company to decide (willingly or not) to make something almost as good available for free to blow a hole in that fence for good.

The other world is one where the AI models that define the next decade of the technology sector are available for anyone to build on top of. In those worlds, some of the benefit still accrues to their developers, who are in the position to sell their expertise and services, while some more gets creamed off by the infrastructure providers. But with fewer gatekeepers in play, the economic benefits of the upheaval are spread much further.

There is, of course, a downside. Gatekeepers don’t just extract a toll – they also keep guard. OpenAI’s API fees aren’t a pure profit centre, because the company has committed to ensuring its tools are used responsibly. It says it will do the work required to ensure spammers and hackers are kicked off promptly, and has the ability to impose restrictions on ChatGPT that aren’t purely part of the model itself – to filter queries and responses, for instance.

No such limits exist for Stable Diffusion, nor will they for the pirate instances of LLaMA spinning up around the world this week. In the world of image generation, that’s so far meant little more than a lot more AI-generated porn than in the sanitised world of Dall-E. But it won’t be long, I think, before we see the value of those guardrails in practice. And then it might not just be Meta trying to jam the genie back in the bottle.
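(A quick sanity check on the quoted pricing - assuming the roughly $0.002 per 1,000 tokens OpenAI charged for the ChatGPT API at the time, and the rule of thumb that 1,000 tokens is about 750 words:)

```python
price_per_1k_tokens = 0.002      # dollars; ChatGPT API pricing at launch
words_per_1k_tokens = 750        # rough rule of thumb

cost_per_word = price_per_1k_tokens / words_per_1k_tokens
print(f"${cost_per_word:.7f} per word")                      # $0.0000027
print(f"${cost_per_word * 700_000:.2f} per 700,000 words")   # $1.87, i.e. ~$2
```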

This is excellently put, imo, and gets to the biggest question I have around the future of AI. And I think that... I don't trust private companies to guard against abuse. They're terrible at it! And they're more concerned with weird apocalyptic "alignment risk" fantasies than everyday harms. So given that... fuck a platform, fuck structuring a marketplace so you get a little rake any time something happens, fuck, I guess I am saying, profiting due to your use of capital rather than of labour.

Anyway. That's why I'm glad that Stable Diffusion exists, despite the harms it's also causing, and why I'm glad for this LLaMA leak. Here's to software you can run on your own computer! Here's to making a weird tool built on a weird tool someone else has made.


 