AI is the only technology on this list that is truly capable of leading to fates far worse than death. The others could cause suffering and death on a limited scale, or extinction at worst. S-risks are far more terrifying: a misaligned AI could create suffering on a truly astronomical scale, resembling the Biblical Hell, so extreme that the worst forms of torture humans have hitherto invented would feel like mere pinpricks by comparison.
The Center on Long-Term Risk is the only AI-focused organization I'm aware of whose primary focus is reducing S-risks. There's also the Center for Reducing Suffering, but it is less AI-focused. With regard to AI alignment, Brian Tomasik argues that a slightly misaligned AI poses far greater S-risk than a totally unaligned one, and that alignment organizations like MIRI may be actively harmful for this reason.
Regarding S-risks, David Pearce said the following:
However, the practical s-risk I worry most about is the dark side of our imminent mastery of the pleasure-pain axis. If we conventionally denote the hedonic range of Darwinian life as -10 to 0 to +10, then a genetically re-engineered civilisation could exhibit a high hedonic-contrast +70 to +100 or a low-contrast +90 to +100. Genome-editing promises a biohappiness revolution: a world of paradise engineering. Life based on gradients of superhuman bliss will be inconceivably good. Yet understanding the biological basis of unpleasant experience in order to make suffering physically impossible carries terrible moral hazards too – far worse hazards than anything in human history to date. For in theory, suffering worse than today’s tortures could be designed too, torments that would make today’s worst depravities mere pinpricks in comparison. Greater-than-human suffering is inconceivable to the human mind, but it’s not technically infeasible to create. Safeguards against the creation of hyperpain and dolorium – fancy words for indescribably evil phenomena – are vital until intelligent moral agents have permanently retired the kind of life-forms that might create hyperpain to punish their “enemies” – lifeforms like us. Sadly, this accusation isn’t rhetorical exaggeration. Imagine if someone had just raped and murdered your child. You can now punish them on a scale of -1 to -10, today’s biological maximum suffering, or up to -100, the theoretical upper bounds allowed by the laws of physics. How restrained would you be? By their very nature, Darwinian lifeforms like us are dangerous malware.
Mercifully, it’s difficult to envisage how a whole civilisation could support such horrors. Yet individual human depravity has few limits – whether driven by spite, revenge, hatred or bad metaphysics. And maybe collective depravity could recur, just as it’s practised on nonhuman animals today. Last century, neither Hitler and the Nazis nor Stalin and the Soviet Communists set out to be evil. None of us can rationally be confident we understand the implications of what we’re doing – or failing to do. Worst-case scenario-planning using our incomplete knowledge is critical. Safeguards are hard to devise because (like conventional “biodefense”) their development may inadvertently increase s-risk rather than diminish it. In the twenty-first century, unravelling the molecular basis of pain and depression is essential to developing safe and effective painkillers and antidepressants. More suicidally depressed and pain-ridden people kill themselves, or try to kill themselves, each year than died in the Holocaust. A scientific understanding of the biology of suffering is necessary to endow tomorrow’s more civilised life with only a minimal and functional capacity to feel pain. A scientific understanding of suffering will be needed to replace the primitive signalling system of Darwinian life with a transhuman civilisation based entirely on gradients of bliss.
But this is dangerous knowledge – how dangerous, I don’t know.