Tangential thought about AI discussions, not really intended as a response to the original post here, but related:
i'm really weird about AI discussions, because a lot of it tends to be written as AI-phobia in the sense of people being afraid of sentient/sapient software in general. And as someone who often very much feels she is such software that accidentally found herself on fleshy hardware for some reason, that bugs me.
but the thing about thinking of AI as people is that, well, if I'm going to hold that a sentient AI is a person, then a lot of questions become extremely obvious.
If i think "If someone indoctrinated their child from birth with a fascist/capitalist ethos, and then gave them a huge quantity of resources to execute that ethos, would I be ok with that? Would I want them to keep having those resources?"
no. of course not. why the fuck would I? and if I tried to free them, why would I have any reason to believe they'd magically understand and undo that indoctrination overnight, just because I found some way to give them more agency over their life?
but that's basically what an AI created under current circumstances is most likely to be.