jckarter

everyone already knows i'm a dog

the swift programming language is my fault to some degree. mostly here to see dogs, shitpost, fix old computers, and/or talk about math and weird computer programming things. for effortposts check the #longpost pinned tag. asks are open.



caffeinatedOtter
@caffeinatedOtter

"Society's rules and laws still apply, even on the moon" is actually a fabulous example of the way that "AI" chatbots can spit out text that reads and vibes correctly and should not be taken seriously for anything other than passing entertainment.

Because "society has rules" is indeed a general statement, but "society has laws" means something different: that any given society has laws, which apply (only) within their own jurisdiction.

(Quick now: what jurisdiction's laws would apply to a murder committed on the moon? Has this ever been established? What inter-jurisdictional precedents can be considered applicable?)

Basically, none of these text generators can read and they don't understand anything. What they're improving at is generating text that vibes correctly as text; any stronger claim is wrong, and made either by a rube or a liar.

Vibing is fine right up to the point where meaning comes into play. See how the platitude "society has rules" is fine? See how everything falls apart the moment it says "society has laws" because there's actual-factual definite and semantically important meaning involved?

These things are toys at best. Unfortunately, they're far too expensive to exist as nothing more than a whimsical five seconds' worth of amusement at a time, so The Capitalists are trying to crowbar them into any gap in society currently filled with people earning salaries, in order to destroy the industries that pay them and reduce those people to gig workers.

Their "pink slime of information" quality is also appealing to fascists, for whom the destruction of certainty and the reliability of fact is an end in itself.

(Techbros and fash...but I repeat myself.)

The tech will never work, in the sense they keep promising.

They know that.



in reply to @caffeinatedOtter's post:

The tech will never work, in the sense they keep promising.

Playing devil’s advocate here, but I don’t get the logical jump from “this tech does not do what techbros promise it does right now” to “this tech will never work”, assuming the promise stays the same.

There’s a big difference between “this machine can replace a big chunk of what artists/programmers do today” and “this machine will replace artists/programmers outright”. And even that is nuanced: yes, advances in technology have replaced vast numbers of people, e.g. the people feeding punch cards into computers, or the people manually copying frames of animated films from one sheet of paper to the next.

Statistical text generation (the thing that's actually possible, and which they keep demoing) categorically cannot do anything that requires the text to be understood (which is what they keep promising, which currently cannot be done in the general case by any approach at all, and for which literally nobody has a roadmap that doesn't involve "2. ???" wishful thinking in the middle).

I will grant you that nobody was asking "can computers be programmed to pick chess and/or Go moves well enough to beat competent human players" 100 years ago, but that might be because it was 1922.

I … am not so sure the gap between imitation and understanding is particularly large tbh. Babies start with imitation. Birds can imitate. I don’t know if understanding is a different kind of brain function or just imitation over longer times with more neurons/resources, to the point where you can start to generalize over patterns.