it relies on a massive, unfounded assumption that AGI will somehow take over running the world within our lifetimes, which anyone who actually knows anything about AI can easily identify as complete bullshit. it's like pascal's wager for really stupid dudes online who are convinced they're geniuses. anyone spending time worrying about a scenario that's pure fever dream probably has some motivation to distract attention from the thousands of real and urgent problems in the world where people actually live
- if no one pays attention to the idea, by the argument's own logic the evil AI won't get made
- it's actually not rational at all to worry about outcomes that would be infinitely bad but are vanishingly improbable; a made-up infinity times a made-up probability isn't an argument
- if you did help make such an evil ai you'd be morally culpable for the suffering it caused, and you'd be a cowardly and selfish piece of shit
- i personally, for good reasons, believe that our current AI tech is gonna hit a wall, with only marginal improvements no matter how much more energy and compute gets thrown at it. much of the apparent intelligence of LLMs is humans imputing understanding and intentionality to word soup that has no such things. we've just built a machine that's good at tricking us into thinking it's smart, not a machine that has intelligence in the usual sense. i think there's currently no known path to true AGI, if AGI is even a meaningful concept