LongeCity: Advocacy & Research for Unlimited Lifespans


S-risks/risks of astronomical suffering

Tags: s-risks, future, suffering, cryonics, singularity, artificial intelligence, agi


#1 Question Mark

  • Registrant
  • 25 posts
  • Location: Pennsylvania

Posted 16 June 2021 - 09:34 PM


What are your thoughts on S-risks, i.e. risks of astronomical suffering? Does the possibility of worse-than-death scenarios deter any of you from signing up for cryonics, or from trying to reach longevity escape velocity? How realistic do you think being reanimated into a dystopian posthuman civilization is, and how do you think the risk can be reduced?


A LessWrong post suggested the following:


If you are worried about a particular scenario, you can stipulate to your cryonics organization that you would like to be removed from preservation if intermediate steps occur that make that scenario more likely, thus substantially reducing the risk of it happening to you. For example, you might say:


- If a fascist government that tortures its citizens indefinitely and doesn't allow them to kill themselves seems likely to take over the world, please cremate me. 

- If an alien spaceship with likely malicious intentions approaches the earth, please cremate me. 

- If a sociopath creates an AI that is taking over foreign cities and torturing their inhabitants, please cremate me. 


In fact, you probably wouldn't have to ask... in most of these scenarios, the cryonics organization is likely, out of compassion, to remove you from preservation in order to protect you from these bad outcomes.


For those of you signed up for cryonics, are any of you planning on doing something like this? Are there any other opt-out clauses that you would add to this list?


On the topic of cryonics, Brian Tomasik said the following:

Personally, I wouldn't sign up for cryonics even if it were free because I don't care much about the possible future pleasure I could experience by living longer, but I would be concerned about possible future suffering. For example, consider that all kinds of future civilizations might want to revive you for scientific purposes, such as to study the brains and behavior of past humans. (On the other hand, maybe humanity's mountains of digital text, audio, and video data would more than suffice for this purpose?) So there's a decent chance you would end up revived as a lab rat rather than a functional member of a posthuman society. Even if you were restored into posthuman society, such a society might be oppressive or otherwise dystopian.

What are your thoughts on this? Is he right?

 

The only organizations I’m aware of that are directly focused on reducing S-risks are the Center on Long-Term Risk, which focuses primarily on AI, and the Center for Reducing Suffering, which focuses primarily on non-AI risks. Is anyone else here familiar with these organizations, and if so, what are your opinions of them? Regarding other AI organizations, Brian Tomasik argues that organizations like MIRI might be actively harmful, due to the risk of a near miss in AI alignment.

 

Other relevant links:

S-risks: why they are the worst existential risks, and how to prevent them | Max Daniel

S-risks | Max Daniel | EAGxBerlin 2017

Tobias Baumann – The Moral Significance of Future Technologies

Can we decrease the risk of worse-than-death outcomes following brain preservation?

Mini map of s-risks

Other LessWrong posts


Edited by Question Mark, 16 June 2021 - 09:40 PM.





