I participated in a study on ethics and technology being run by the CS department at my alma mater, and as compensation, received an Amazon gift card.
I keep thinking about this study and about the broader idea of teaching "ethics in technology". And the only way I can think of to express my thoughts is: imagine you have a university department whose purpose is to train people to work in the puppy-kicking factory. What would teaching ethics in that department look like? I suppose it would look like preparing students to be good little Boy Scouts and Girl Scouts who will blow the whistle when they spot any ethical transgressions in the puppy-kicking industry -- and reassuring them that yes, absolutely, there's someone watching over the industry who cares very much about those transgressions. Making sure they leave the university believing that while the puppy-kicking factories have their excesses here and there, those are all due to deviant individuals, and that as long as you're not one of them, you are morally blameless. Making sure, above all, that students never ask "why do we have puppy-kicking factories to begin with?"
It's like, conservatives aren't wrong when they say that "ethics" instruction is ideological indoctrination. It's just that what they're wrong about is the type of ideology involved, which in reality is fundamentally conservative. It is fundamentally conservative to teach that capitalism is a good system for addressing human needs, whose catastrophes all originate from deviant individuals and not from its basic norms. Even if that precept is never named explicitly and only makes an appearance as the thing that needs to be accepted or else nothing makes any sense.
The lesson, in practice: don't kill people you didn't intend to kill -- or, more honestly, don't kill the people who matter.
I've been told in apparent sincerity that working for arms dealers is good because you can help them bomb only the people who were supposed to get bombed. And that's perfectly in line with what professors have in mind when they teach ethics; to them, ethics means adapting oneself perfectly to the rules of an immoral system, in ways that must never threaten the system itself.
Also, note that in this framing, ethics mainly consists of making technology better. It's obviously better not to kill patients with the radiation therapy machine you're manufacturing, as the Therac-25 did. Safer is also what's best for business: it increases others' trust in you, and it saves money in the long term if you don't cheap out. There's no content here that's fundamentally about ethics or morality. Where there is accidental interest convergence between business interests, technological improvement, and the safety of human beings -- as with the Therac-25 -- there is nothing to learn beyond "be a better engineer". In situations unlike the Therac-25 incident, where there is a fundamental conflict between morality and either the welfare of a business or the abstraction of technological progress in general, you are likely to be taught that it's not your job to ask questions. And situations where there is a fundamental conflict between the advancement of technical skill and scientific knowledge, and the well-being of some or all humans -- those are the ones that professional ethics training not only can't address, but is designed to keep you from noticing. If technology can't follow through on its promise to put us in possession of all knowledge, to leave nothing unknown or un-dominated, to free us from the fear of the unknown -- then what? If improving the lives of the immiserated requires more than just better-functioning machines, then what? The system is structured to keep you from asking.
