vogon
@vogon
This post has content warnings for: mention of sexual assault, mention of nuclear war.


(in re: eliezer yudkowsky's call that we globally track the supply chain for GPUs like they're, quite literally, enriched uranium)

the core enabling technology for large-scale machine learning systems as they're currently designed is computer processors which can do matrix multiplication quickly. that's it.

GPUs make a useful computational substrate for machine learning solely because, before machine learning, the closest thing we had to a task that needed to be accomplished at the same scale was the matrix transforms required to produce real-time 3D graphics. the combination of rising polygon counts and the desire for more sophisticated visual effects led engineers in the late '90s to redesign graphics cards from a so-called fixed-function architecture, which just did computer graphics, to an architecture which could execute general matrix arithmetic at a staggering degree of parallelism; if you've heard of "shaders", those are small computer programs which execute on one of the up to sixteen thousand individual processor cores on a modern graphics card.¹
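(a minimal sketch of that execution model, in Python rather than a real shader language -- everything here is illustrative, not how any actual GPU driver works. the point is just that one tiny program computes one output cell, and the hardware runs thousands of copies of it simultaneously:)

```python
# toy "shader": each invocation computes a single cell of C = A x B,
# the way one GPU core would run one instance of a compute shader.
def matmul_kernel(A, B, row, col):
    return sum(A[row][k] * B[k][col] for k in range(len(B)))

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

# on a GPU, every (row, col) pair below would execute at the same time
# on its own core; on a CPU we just loop over them.
C = [[matmul_kernel(A, B, r, c) for c in range(len(B[0]))]
     for r in range(len(A))]
print(C)  # [[19, 22], [43, 50]]
```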

however, they're quite expensive and power-inefficient -- machine learning applications don't need the high precision mathematics of 3D graphics, and the graphics-oriented feature set can be pared back substantially -- and all of the large cloud computing providers are already getting rid of GPUs in their AI data centers in favor of purpose-built AI coprocessors.
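(to make the precision point concrete, here's a hedged little numpy demo -- the matrix sizes and the float16/float32 comparison are just for illustration; real accelerators use formats like bfloat16 or int8, with their own tradeoffs:)

```python
import numpy as np

# the same matrix product in 32-bit and 16-bit floats. the answers
# differ slightly -- a problem for some numerical work, but mostly
# irrelevant to a neural net, which is approximate by design.
rng = np.random.default_rng(0)
A = rng.standard_normal((256, 256), dtype=np.float32)
B = rng.standard_normal((256, 256), dtype=np.float32)

full = A @ B
half = (A.astype(np.float16) @ B.astype(np.float16)).astype(np.float32)

rel_err = np.abs(full - half).max() / np.abs(full).max()
print(f"max relative error: {rel_err:.4%}")  # prints a small number
```

cutting the bit width cuts the silicon area and memory traffic per multiply substantially, which is a big part of the pitch for purpose-built AI chips.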

and again, this is all just matrix arithmetic. while there are implementation specifics that surely amount to a significant body of trade secrets, there are no secret theoretical techniques involved here; an effective containment regime would basically be tantamount to a general trade ban on computer processors, and if the US tried that, China would immediately institute a crash development program to make up the deficit. they've already released homemade GPUs which are in the same realm of performance, so this whole thing would win the US maybe 24 months of head start, much of which it would spend building up its industrial base to actually produce physical semiconductors in the country instead of other countries where labor is cheaper.

you would think people who work in practical computing for a living and have access to nearly limitless resources for research and devising industrial strategy would be aware of this, but apparently not.


  1. this is also why certain cryptocurrencies used GPUs: the core operation of cryptocurrency mining is to run a random number generator seeded on a piece of garbage data and see if the number it spits out is below a threshold -- or, as theophite on twitter famously said, "idling your car 24/7 to produce solved sudokus you could trade for heroin". if you can get that random number generator to run on one core of a GPU, you can run 16,000 instances of it in parallel! but similarly, if there's money in it and the cryptocurrency isn't explicitly designed to make this impractical, then once GPUs stop being dirt cheap, people spin up production lines to make application-specific cryptocurrency mining chips, because GPUs are so power-inefficient.
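(a toy version of that loop in Python, with sha256 standing in for whatever hash a given coin actually uses and an arbitrary difficulty -- illustrative only:)

```python
import hashlib

# hash "garbage data" plus a counter until the result falls below a
# threshold; the winning counter value is the "solved sudoku".
def mine(block_data: bytes, difficulty_bits: int = 20) -> int:
    threshold = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < threshold:
            return nonce
        nonce += 1

print(mine(b"example block"))
```

every nonce can be checked independently of every other, which is exactly why the work spreads across thousands of GPU cores -- and why a fixed-function chip that does nothing but this hash beats a GPU on power.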


Catfish-Man
@Catfish-Man

If you’re a) a weird computer nerd or adjacent in mindset (hi, there’s a lot of us on this website), and b) not already familiar, then you deserve a warning: LessWrong, Effective Altruism, Eliezer Yudkowsky, Urbit, AI Alignment (“AI Ethics” is separate and good, the alignment folks dislike the ethics folks, listen to Timnit Gebru and co), the online “Rationalism” movement (note: “rational” here is the antonym of “empirical”, not “irrational”), and everything else in their orbit are dangerous to you.

I cannot emphasize this strongly enough: DO NOT approach learning about this with an open mind. If you must read their stuff, go in shields all the way up, assuming everything you read is propaganda aimed at you specifically. Yes, most of you would probably be fine, but I can’t predict who won’t be, and I’ve already had to spend an evening talking someone in my communities back from this crap this year.

“Neoreaction” (aka “the dark enlightenment” aka NRX), the far right political movement driving these topics, is a basically-fascist cult, but it’s a fascist cult that looks very different from the mainstream ones, and LessWrong and friends are the entry funnels for the cult. Cognitohazards are real and this is one of them.

YOU ARE NOT IMMUNE TO PROPAGANDA


bethposting
@bethposting

there seemed to be a pretty specific set of norms that took common leftist/queer Bay Area techie culture and turned it up to 11. everyone was polyamorous in a way that actually turned me off of polyamory for a while, because one person would be dating between 10 and 40 people and it felt more like a way for them to constantly get positive attention and less like they had genuine affection for all of their partners. especially noticeable was that they seemed to get bored of a relationship after a while and would focus more on newer, more novel relationships.

there's also a strange relationship to drugs. some rationalists seem to think of their bodies as merely an inconvenient vessel for their brains, one they can manipulate by applying the right substances and stimuli. one time at a party i saw someone wash down pills (not sure what they were) with a half-and-half mix of orange juice and Everclear.

the worst part though is that anyone who is absolutely certain that they are morally superior to you (in this case, because they donated a lot of their tech job salary to "efficient" causes) essentially thinks they can do no wrong and that there's no need to treat the actual people around them with consideration and care. this way of thinking also rapidly leads to the conclusion that the richest people are the morally best because they give the most money to charity, and that sure doesn't lead anywhere good.

anyone who is genuinely convinced they've overcome the cognitive biases that bind all other humans thinks that they're smarter and righter than everyone else, which does not lead them to be at all pleasant to interact with. it also lets them dismiss criticism from outsiders as being the product of less enlightened thinking.

in reality, though, you're filtering for people with the kind of ego that can believe they're smarter than everyone else, which means you're actually filtering for people who may be intelligent but who are also incredibly lacking in self-reflection and wisdom, and who just kinda suck.


bruxisma
@bruxisma

I didn't know what any of this was back in like 2016 because I mostly refused to engage with techie culture at the time (and to a degree I still do), so when I was invited to a few dinners via a friend of a friend, I figured "why not, it's free pasta".

In hindsight, this was a mistake. The pasta was eventually not free: I was later sent an invoice, payable in Ethereum, for my share of the food across several dinners. I never paid, and ghosted them. I will count this as a win.

Everything @bethposting has said here was something I experienced in one way or another when talking to these people, the drug approach being the main thing. The number of "microdosing" discussions I ended up listening to, for whatever nootropic was the flavor of the week, was quite honestly higher than I would like to admit. There were some additional things here too that all set off alarm bells by the end of it all. An absolute disdain for anything that did not fall under "classical western art" (all I could think of when those "white statue" twitter accounts started popping up were the same things I would hear from these people). Discussing how they simply "could not" understand why anyone would find any form of entertainment beyond conversation to be engaging. Music? Waste of time, unless it was simply to set the tone of a conversation, of course. Art? Again, same as above, but you couldn't discuss the art piece, you had to discuss something of actual relevance. God help you if you mentioned anything about video games.

It is no surprise to me that most of these people I interacted with went on to work in cryptocurrency. Same lack of self-reflection in that general space.



in reply to @vogon's post:

i agree

and i think he chose to call it the way he did to appease his fellows into thinking a cult might get cooler when the more moderate wackos are gone.

anyway, the mods are asleep, let's post eggbug recipes XD

Oh?

I mean, I'm speaking as someone who was an intern at IBM in 2016, just watching devs twist themselves in knots over how to label something Watson while it was basically just... statistics and insights programs that were not connected to any one specific large system. They were all cloud solutions that operated independently of each other, sharing no data, outcomes, or even training methodology (if they even had any ML at all!)

the heat is rapidly approaching 55C and yet I find myself irresistibly drawn deeper. We only have coolant and rations for six days, ten if we stretch them. But I am sure, beyond all doubt, that we are about to break through.

to be pedantic, CTR (ibm before ibm) started doing business in germany in the 20s and acquired the subsidiary Dehomag, which conducted the "are you Really German" ancestry census in 1933, the same year hitler came into power. they were there right from the beginning of the third reich

it would be hilarious to watch all these people melting down over their poor little meow meoarkov chains if they did not have so much power and we were not on the precipice of something rather unfortunate

I love being lashed with 500 feet of chain to the twenty greediest, most deeply incurious people on earth as they all stand at the edge of a pit and talk themselves into a fervor about how the only salvation for all of us lies at the bottom

Me last week: what?? This is a Microsoft Research paper? This seems dubious but I guess I should at least consider it
Me 15 minutes later: oh I completely missed they gave OpenAI $11B that explains things

I had to look up "Roko's basilisk" and how is this the dumbest thing I've ever heard? How the fuck are people walking around this earth managing to eat and breathe while thinking a hypothetical future AI is capable of time travel and telepathy and uses that to... torture people? If it were capable of that, it could achieve its goal of existing earlier more easily by nudging people in the right direction. They've just invented shitty protestant god again.

FYI, Roko, the guy behind this spectacularly bafflingly stupid idea, has sexually harassed at least one person and been banned from places for it.

The entire "Less Wrong" community of "rationalists" has incredible cult vibes and a suicide linked to them as a result! It's really bad!

My favorite thing about it is that the exact same crowd likes to talk about "Pascal's mugging" to point at the problem with maximization of bullshit hypothetical utilities, which is exactly what "Roko's basilisk" is an instantiation of.

Because they are just so smart that having a name for being dumb means they aren't being it.

I'm going to make a very slight and probably meaningless disagreement about one minor point here.

in a blatant turn from their initial manifesto about how they thought open-source artificial intelligence was necessary for the good of humanity

To me, it's incredibly clear that they wanted to do this from the beginning, it was a matter of when not if. You get people (and honestly usually just businesses) to integrate it into their goings on, then you tighten the squeeze on them. The first hit is free.

I know this isn’t the most important part of this post, but this was how I learned that the LessWrong person is the same person who wrote the HP fanfic that made me go “huh, I don’t think I like fanfic” and bounce off of it entirely

even for a fanfic, hpmor had a few too many rape scenes for me to get pulled in.

which has always made me wonder about the cultists, because when i told this to my ex she kept insisting it was an important work and that "you can just ignore those". yes, and the Stanford prison experiment was important too, however,

To drop a crumb of context for those who don't know: openai did the same thing with gpt3. They promised it would be open source, kept very quiet about it until it was actually good at something, and then released it in the form we are familiar with.

in reply to @vogon's post:

Let's assume the US goes full nuts, dives head-on into silicon production with the intention of cutting China off, gains an 18-24 month head start on the next generation of chips, and can, maybe, get AI production rolling another two or three generations ahead of what we see today.

What do they end up doing with it? Is that head start enough to weaponize AI and win a conventional war, or use it to paralyze China in a cyberwar? Is there even any point when nuclear weapons are still the endgame result, and, like all nuclear exchanges, if someone fires everyone dies? Does the US really think it can edge out this advantage for a win militarily or economically?

It just doesn't make sense to me, but none of this international dick-waving ever does.

I have no fucking idea.

I am quite certain (95%+) that the AGI dweebs are completely wrong so, like, none of this matters anyway -- but assuming that they are right, their own doctrine suggests that their beneficent wisdom is going to cause the industrial productivity of the US to hockey-stick and enter us all into a golden age of plenty, thereby bringing an end to capitalism because capitalism successfully destroyed itself by being too good.

again, their angle on this is completely inexplicable.

Part of me wonders if they think we're hitting the exponential curve and will hit The Singularity before any of it becomes a concern.

As of yet, I don't see it. I don't doubt AI/LLMs/Neural nets/whatever the fuck they're being called today can have some amazing applications, but I don't see them giving rise to an exponential explosion of technology, not on the short scales we're talking about here.

I think we're falling down the black hole of capitalistic desperation - it's at end-stage, and it desperately needs a frontier of new growth to keep from consuming itself, because as is it's already created a society more stratified than the worst of feudalism and it isn't enough. Growth at all costs has hit the "at ALL costs" stage where the costs are unsustainable. And these capitalist AGI nerds are banking everything on the AI saving us from it.

honestly all of their war talk and apocalyptic chest thumping etc is just about making their own dicks hard. they are trying to manifest, through sheer force of hype, a reality in which the jenga tower of absurd ideology they've built somehow becomes the actual basis for a new geopolitical reality with them at the center. and in the backs of their minds they also know that the past few booms they've tried to call in advance (tablets, the gig economy, wearables, VR, crypto, the metaverse, AVs etc) haven't exactly materialized as promised (and/or been good for society), so there's the terror of having to finally put up or shut up clawing at their brains as well. it's going to reach a fever pitch in the next 10-20 months and the narrative will feel somewhat different after that. until then we'll be subjected to these people, and their credulous supporters in the press, very publicly and visibly losing their minds. and the scary thing is we don't know exactly what their influence will allow them to do during this period, what they'll be able to get away with, what they'll be able to talk governments into. we are deep into the Cool Zone.

they always think that they can gain some “edge” but i think we will see greater and greater investment in US-based silicon, because the greatest risk in a protracted conflict with china is taiwan’s spot as the chip manufacturer for basically the world.

also I think that even if we don’t have AGI, what we have been able to get projects enough of an illusion of capability that they’re betting it can produce the trappings of exponential advancement, which, if there’s any runaway effect (or even just a faster development/construction cycle), gets them closer to whatever pie-in-the-sky target they have, with a pretty big lead.

not to mention a large portion of defense analysts becoming certain that conflict with china over taiwan/south korea/japan is inevitable, especially given russia’s recent acceptance of chinese support.

it's all a big scam where the cartels that already control manufacturing capacity for ASICs can completely control almost any PoW cryptocurrency mining pool they feel like, other than the handful of big ones

also proof of stake is not better lmao

This is either a moral panic or a marketing coup. They made an incremental step in machine learning. We're not meaningfully closer to creating AI but this is a great opportunity for a cash grab.

in reply to @Catfish-Man's post:

the one caveat I'd add is that, while AI ethics is a field with some merit (unlike "AI alignment", which is dangerous bullshit), I am still deeply skeptical of much of what the field puts out because almost all of the research money comes directly or indirectly from organizations which are invested in not finding any ethical lapses which will prevent their product from hitting a ship date -- see what happened to Dr. Gebru and her colleagues the instant they said something that endangered Google's bottom line.

but that's true of industrial ethics in pretty much any lucrative field, to be honest, and it's not cognitohazardous the same way LW/NRX are.

Yeah no, I've watched someone who is an incredibly competent safety engineer slowly get wrapped up and pulled into the hype. Every working meeting we have now, he will go "man I wonder what GPT would say about this", and then a few moments later, "well I guess it doesn't quite fit, oh well". At least 3 times per meeting.

I'm talking about a man who, like, generally is very good at systemic thought and how different elements will mesh together. Someone who took a python script that was named "build pipeline" and actually turned it into a proper, well-documented build pipeline.

I do feel insane myself reading this stuff. Because it wants to pull you in to make you believe that they've done made the Asimov Positronic brain. But they haven't!

It's incredibly hard to look past what's happening because as always, tech media is just not showing us what's under the tarp. It is letting itself get caught up in the hype. In fact, I used to put up with some outlets like LTT because the information wasn't terrible, but recently they have started to believe in the hype, and they are constantly just parroting OpenAI and Microsoft with no end in sight. The guys who have always doubted MS and their bullshit, all of a sudden just let this one go. Insane.

I was involved with effective altruism for like two days in 2018 before I immediately started getting seriously bad vibes and ran for the hills. Everything I've heard about them since has validated that decision.

For the AI stuff I know nothing and understand less, I am here on cohost as the token rep for the technologically illiterate lol.

So I ask (genuinely, I actually want someone to help me with this because I'm autistic and sometimes need things explained very literally): is it a red flag for one of my friends that he keeps recommending I read Harry Potter and the Methods of Rationality? And if so, how can I explain to him why it's bad?

It’s not necessarily a red flag, but the author is not a good person at all*, and it could indicate trouble depending on whether that’s their only point of contact. Orange flag I would say.

*also true of Harry Potter in general, but that’s another matter

The fanfic itself is relatively harmless, albeit with some pretty nasty bits of cultural imperialism baked into the assumptions of the main characters. I read and enjoyed it, but my involvement with LW began and ended with reading the fic. The fic itself isn't necessarily a red flag; just be wary if he tries to get you involved with the community behind it, since it ends with the author going "wanna learn even more, check out LessWrong" and so on.

personally I think it's nice that the "AI alignment" cranks went out of their way to make it easy to distinguish them from the actual serious people doing AI ethics

they could have tried to co-opt "ethics" and totally muddle the field, but instead they made themselves easy to avoid, at least for those in the know

I've thought a lot about how if I had been exposed to "Rationalism" during my Insufferable Era (circa ages 14-19), it's very very likely I would've fallen into it, but luckily that stuff didn't quite exist yet back then, and it was only years later when I first heard of them (by which point the broad tenor of internet engagement with their ideas was mockery).

(note: “rational” here is the antonym of “empirical”, not “irrational”)

tf?

thank you for the warning, i thought they were just "too concerned about things with low likelihood"... but that explains a lot i think