ireneista
@ireneista

is that it presupposes a very specific shape that the stuff will take - a quickening in pace of change, everything that has come before ceasing to matter, etc

and like

personally, we do fully believe that humanity will someday have computers that can talk with us as equals

no current technology is that. even as someone highly technical, with a lot more than average knowledge of how ML works, we do not feel qualified to say for sure whether current technologies could, with additional innovations, be part of that. but no current technology is that.

but that isn't even the point. the point is that our personal mental model of the possibility-space around such a thing being invented and the effects on society doesn't have anything singularity-like in most of the branches. it certainly doesn't have social problems or preexisting harms of ML going away.

also, humans don't tend to use our intelligence to become even more intelligent, so there isn't a ton of reason to expect that a computer would. also also, being intelligent does not tend to be highly correlated with being able to drive change in the world, except to the extent that that change starts and ends with building things, and the types of change that can be made by building things are quite limited really.

this is all stuff that we could be wrong about; this is our personal model, not any sort of guarantee. but that's the point: nobody else knows, either. we all need to be able to talk about this stuff, because humans are better at working through big questions when we do it together. (still pretty shitty, but people going it alone is even worse)

and this is all stuff that people who have worshiped this possibility are often ill-suited to reason about, because the level of excitement becomes a cognitive distortion. when you've gone your entire life thinking hey, wouldn't it be cool if someday... it is emotionally challenging to hear that actually the things you're talking about could play out in other ways, too.

all our love to people who are dealing with that. this is a period of history when this stuff does wind up mattering in real, practical ways, as you can see in the public debate around ML policy this past year. it is really important that people be able to hold their excitement and study what's actually happening, not just make assumptions based on what they want to happen.

thank you. <3


pnictogen-wing
@pnictogen-wing

a "cusp" in a function is a mathematical singularity, for example. it's a point where the curve isn't differentiable: the direction breaks suddenly, but the curve itself stays continuous, so all that really happens is a sharp change of direction. I take comfort in this realization, even though I don't know how to map mathematical singularities onto social ones; I feel like the lesson is that any sharp change in social direction can be construed as a "singular point," and such changes don't need to be catastrophic.
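to make that concrete: the simplest example I know how to write down is the absolute-value function (strictly speaking its singular point is a "corner" rather than a cusp, but the moral is the same):

```latex
% f(x) = |x| is continuous everywhere, but at x = 0 its derivative
% jumps from -1 to +1, so the curve is not differentiable there.
f(x) = |x|,
\qquad
f'(x) =
\begin{cases}
  -1 & \text{if } x < 0,\\
  +1 & \text{if } x > 0,\\
  \text{undefined} & \text{if } x = 0.
\end{cases}
% nothing blows up and nothing diverges: the "singularity" is just
% the curve changing direction at a single point.
```

~Chara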



in reply to @ireneista's post:

I'm pretty sure there's a path dependency problem: a capitalist bullshit singularity (in which we become unable to make any sense of the world because it's bullshit reacting to bullshit as fast as it can) is almost certainly possible before an intelligence singularity is! There are probably others I've missed.

I'm certainly wary of assuming it's exponential rather than an S curve! (see the sketch at the end of this comment.) Even superlinear growth can do plenty of damage, given how common it is to, well, model everything linearly.

Or to put it another way, I can think of a few historical precedents for something that smells a bit like "capitalist bullshit singularity" that thankfully didn't become what the relevant crowd call a hard takeoff.
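Since I leaned on the exponential-vs-S-curve point above, here's a minimal sketch of why I'm wary: a logistic curve and a pure exponential are nearly indistinguishable early on, and only diverge once the S-curve starts to saturate. (All the numbers - growth rate, starting value, ceiling - are invented for illustration.)

```python
import math

# Pure exponential vs. logistic (S-curve) growth with the same
# starting value x0 and the same early growth rate r; the logistic
# additionally saturates at a carrying capacity K.
K, x0, r = 1000.0, 1.0, 0.5

def exponential(t):
    return x0 * math.exp(r * t)

def logistic(t):
    # Standard logistic solution: equals x0 at t = 0 and tends to K.
    return K / (1 + ((K - x0) / x0) * math.exp(-r * t))

for t in range(0, 25, 4):
    e, s = exponential(t), logistic(t)
    print(f"t={t:2d}  exponential={e:11.1f}  s-curve={s:7.1f}  ratio={e / s:8.2f}")
```

Early on the ratio is essentially 1, so early data can't distinguish the two shapes; by t = 24 the exponential extrapolation is off by a couple of orders of magnitude.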

even if we could define a metric that might plausibly experience dramatic growth as a result of these technologies, which nobody has managed to do to our personal satisfaction, the rate of change would in our view be the wrong focus

even sublinear change can cause a lot of societal damage

We're certainly teetering on the edge of something in the UK, and I remember when the rate of change in various things looked sublinear, to the extent we can hand-wave and pretend there's anything continuous there (which, yeah, it's not really that shape of thing at all).

I've a nasty feeling the pandemic has reset people's ideas about what counts as a big enough number to care about, after what was done to the welfare system here in the '10s and the deaths that resulted.

I spend so much time in my day job trying to teach my (philosophy and game design) students to describe what is actually happening in whatever future or computational scenario they have before them, rather than recycling material from prevailing headline or marketing narratives.

The same question bears asking with 'intelligence' itself - when Kurzweilian/Moore's-Law types and their descendants talk about 'intelligence' 'increasing', what feature of the material world are they describing, if any? Sometimes you'll see a measurable quantity being brought in to stand in for it - 'total FLOPS available to humanity' or something - but the more quantifiable that property is, the weaker its analogy will be with ordinary applications of the word 'intelligence' to humans (precisely because in English we use the word 'intelligence' in an extremely inconsistent mess of different ways). When you say 'being intelligent does not tend to be highly correlated with being able to drive change in the world', is the problem really that 'intelligence' is too incoherent to be assessed for correlation with anything else?

oh well said

our observation about driving change is from our experience as an activist: it tends to be more about diligence, awareness of political terrain, relationship-building, decision-making processes that function smoothly and allow the person or organization to seize the moment as it arrives... and of course all of that is usually overshadowed by whoever has existing wealth and structural power in the relevant domain. being a subject-matter expert, having good deductive or problem-solving skills, etc., only becomes relevant once you've already solved all those other problems.

so there are really two meanings of intelligence at play in the comparison: the stuff humans are doing now, and the hypothetical stuff that future technologies might do (but are not presently doing). it's quite hard to speculate about the latter, and we would indeed call it incoherent. to the extent that it gets applied in the same ways that humans working for change apply ourselves, we would expect it to run into the same constraints, and our core observation with that remark is really about the constraints more than it is about intelligence.

dunno if that really addresses your question though :) thanks so much for asking it!