anyway i would like to give a shout out to the much more impressionable jae of like 2011 for stumbling upon lesswrong and immediately identifying it as some of the dumbest shit ever written. dodged a bullet there, avoiding the Dark Jae Timeline
oh that's about when much-more-impressionable gwen stumbled across it
i... didn't assess it correctly at all, but fortunately was quickly distracted by more important things like ATLA marathons with all my friends crammed into a tiny dorm room
i personally think roko's basilisk should absolutely happen, except it applies exclusively to Roko's Basilisk Guys. they probably deserve to get tortured in android hell for all eternity anyway
Genuinely when I read about it as an impressionable teenager, one of the reasons I thought it was dumb was, "okay you're just making shit up. What if someone invents an AI that hates its creators, and so it tortures a perfect simulation of anyone that helped it come into existence, and gives eternal bliss to anyone that heard about it and didn't help? What if it just tortures everyone forever regardless? Wouldn't either scenario make it imperative to not make it happen? What if someone invents a super-intelligent AI that only tortures people whose first names have more vowels than their last names, or people who don't like black licorice? You're just saying shit, none of this fucking means anything."
(another reason is "why would it matter that a perfect simulation of me is being tortured forever? That's not me. Nobody should be hypothetically tortured forever, ig, but I don't know why this is supposed to scare me More, that's like threatening to shoot a random person if I don't give you a ransom." but the bigger reason ofc was that the threat was just nonsense)
yeah it's like. why would the AI even think to do that, yknow. why are you Dudebros so confident in this when this hypothetical AI could just do... literally anything else.
honestly wild how there's a bunch of tech people that are supposedly super into rationality and philosophy but don't even have enough of a grounding to recognize that all they did was just rephrase pascal's wager
Interestingly enough, the first time I ever heard of Roko's Basilisk was bc I saw someone being called out for making a really weird argument for Christianity that was basically just a combination of it and Pascal's Wager.