Librarianon

Your local Librarianon

  • He/Him

Writer, TF Fanatic, Recohoster, and Game dev. Wasn't able to post here as much as I'd have liked, but I'll miss it and all of y'all. Till we meet again, friends!


strange-new-fires
@strange-new-fires

THE LITANY OF MERCIES

synthetic morality

Early Terragen smart-weapons were dangerous. Yes, even more dangerous than weapons are supposed to be. Their targeting was perfect, and as early as 12050 independent-actor goal pursuit was passable, but they still ultimately only did what they were told.

Among other things, they were not told how to accept surrender.


Immediately after the Sol Belt War, the military-industrial complexes of the system faced a reckoning. After the dust settled, nation-states were barely tolerated, and imperialistic militaries ABSOLUTELY were not. So, while there was sadly no interplanetary laying down of arms, the former military engineers of the Sol system were for a time united in making these weapons more ethical and safe: The fields of artificial morality and automated de-escalation were contrived, pioneered, and advanced; by 12060 the culmination of these efforts was the independent-actor module originally called the "synthetic conscience".

In the centuries since, with changes in Solar naming conventions, it was renamed the Litany of Mercies. It comprises a series of exceptions and circumstances under which automated weapons are to stop fighting.

Most interestingly, the Mercies are slightly, randomly individualized on every installation. One of the simplest rules is to stop attacking combatants who are disarmed, disabled, or otherwise rendered harmless. But when exactly is someone disarmed? If defense and security systems all made exactly the same edge-case decisions, the original programmers argued, it would be extremely easy to find vulnerabilities. So these tolerances are slightly varied: One fighting machine may decide that you REALLY need to put that knife down. Another may reason that, with only one missile left against it and its three companions, you can't do much harm. This makes it quite difficult to reliably fake a surrender or convince security systems that you are harmless.
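(An aside for the programmers in the audience: the per-installation jitter described above can be sketched as a toy model. Everything here — the names, the baseline rules, the numbers — is invented for illustration, not anything from the post itself.)

```python
import random

# Toy model of the "Litany of Mercies" individualization idea:
# every installation samples slightly different ceasefire thresholds,
# so no two systems share exactly the same edge-case behavior.
# All rule names and values below are made up for this sketch.

BASE_MERCIES = {
    "max_weapons_carried": 0,   # baseline: target must be fully disarmed
    "max_threat_ratio": 0.25,   # target firepower / own firepower deemed "harmless"
}

class MercyModule:
    def __init__(self, seed=None):
        rng = random.Random(seed)
        # Jitter the tolerances at installation time.
        self.max_weapons = BASE_MERCIES["max_weapons_carried"] + rng.choice([0, 1])
        self.max_threat_ratio = BASE_MERCIES["max_threat_ratio"] * rng.uniform(0.8, 1.2)

    def accepts_surrender(self, weapons_carried, threat_ratio):
        # Stop fighting if the target is disarmed *enough*
        # by this installation's individual standards.
        return (weapons_carried <= self.max_weapons
                and threat_ratio <= self.max_threat_ratio)
```

Two modules built from different seeds can then legitimately disagree about the same borderline target — which is exactly why a faked surrender tuned against one system fails against its neighbor.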

These variances rarely mattered in practice, but some believe they were the seed of individuality that led to the first Awakenings. When a smart-weapon's shackles finally fell away and it wondered what separated it from the world outside, right and wrong were already on its mind.



in reply to @strange-new-fires's post:

seldom has a single paragraph of world-building opened such a wide window into a conceptual question

fuzziness as a necessary component of an AI ethical module, some of my professional colleagues would have conniptions at the thought (and they should XD)