




Singularity Introduction :: Eliezer Yudkowsky



#1 Bruce Klein

  • Guardian Founder

Posted 25 August 2002 - 06:02 AM


Eliezer Yudkowsky graciously granted BJKlein an interview about the Singularity and its implications for human immortality.
June 2002



[From: www.singinst.org] Eliezer Yudkowsky is one of the foremost experts on the Singularity, with more than a megabyte of his work available online. Yudkowsky is the author of General Intelligence and Seed AI and Creating Friendly AI, and has been one of the prime movers in the creation of an organized Singularitarian community. He serves as Secretary/Treasurer of the Singularity Institute.


BJKlein: What is the Singularity?
Eliezer: Sometime in the next few years or decades, humanity will become capable of breaking the upper limit on intelligence that has held since the rise of the human species in its current form. We will become capable of technologically creating smarter-than-human intelligence - perhaps through enhancement of the human brain, direct links between computers and the brain, or Artificial Intelligence.
This event is called the "Singularity" by analogy with the singularity at the center of a black hole - just as our current model of physics breaks down when it attempts to describe the center of a black hole, our model of the future breaks down once the future contains entities that are smarter than human.
Since technology is itself the product of intelligence, the Singularity is an effect that snowballs - the first smart minds can create smarter minds, and smarter minds can produce still smarter minds. That's why the Singularity is the most critical point in humanity's future.

BJKlein: Is the quest for immortality and the quest for the Singularity one and the same in your mind?
Eliezer: Certainly, I believe that the quest for Singularity is the most efficient way to pursue the quest for immortality, but they are not - quite - one and the same. The quest for immortality, as a human aspiration, has a much longer history than the quest for the Singularity. And the quest for the Singularity adds a lot of ideas that are not necessarily present in the quest for immortality. Some of these ideas, such as becoming smarter, are necessary implications of the quest for immortality, but don't appear in it historically because, well, necessary implications do get overlooked a lot of the time.

I see the quest for Singularity as a very efficient way of pursuing the quest for the good, and immortality as one kind of good to be pursued.
BJKlein: As we have discussed before, I’m of the opinion that Life is the only alternative. Therefore, the pursuit of immortality is the ultimate goal. I would also say that the goal of immortality trumps all other pursuits. How would you respond?

Eliezer: Well, I do tend to be a stickler about coming to the right conclusions for the right reasons, and from that perspective I am forced to disagree with your logic. The statement "Life is the only alternative" conveys no information except "I really, really dislike death", and disliking death isn't enough to prevent you from dying. Many people have absolutely refused to accept death, then died. Determination isn't enough. To defeat death you have to know what to do, and how. Death is a technological challenge, not an emotional one.

Similarly, it doesn't follow from the unacceptability of death that immortality is the only or primary goal. Love and laughter require avoiding death, but defeating death is a quest that eventually comes to an end - you've beaten death, you're finished, you don't have to worry about death any more and you can get on with love and laughter and the pursuit of mathematics. If immortality is the ultimate goal, what do we do after the ultimate goal is achieved? Saying that immortality is the ultimate goal gives death too much credit. Immortality is a bare necessity for any reasonable standard of living.


BJKlein: So what is your driving force for bringing about the Singularity?
Eliezer: The desire to accomplish as much good as I possibly can.

BJKlein: Good as in...
Eliezer: The usual. Truth. Happiness. Life. People getting what they want. An end to involuntary pain, death, coercion, and stupidity.

BJKlein: What portion of society will be aware of the Singularity in the days and weeks before it happens?
Eliezer: Depends on the form of the Singularity. I tend to be a fan of the "hard takeoff" theory of the Singularity, in which smart minds build smarter minds, and fast minds build faster minds - a very sharp ascent once it starts. It would certainly be possible for a mind in the process of doing that to reveal itself at any point, but even a smart, altruistic mind - one concerned with human welfare - might still decide to stay quiet until it had enough technology to usefully help people.
If immortality becomes available at a certain point, it would be really, really pointless to get killed in a riot a few days before that point. And if you delay that point for a week, you lose everyone who dies during that week - around 150,000 people per day. It isn't really my job to make that decision, but I can see how the moral decision might be to stay quiet during the process of a hard takeoff. I'd do it if my AI asked me to.
BJKlein: Right...
Eliezer: If so... the portion of society that will be aware of the Singularity in the days and weeks before it happens will be effectively zero. Fortunately, there are years ahead of us in which you can say anything you think needs to be said about the Singularity - as long as you get around to doing it today, instead of waiting until the last minute.

BJKlein: So, you are of the mind that an AI built by humans can be "trusted" to help us... as long as we get the Seed AI right?
Eliezer: My position on Friendly AI is a mix of "It's likely to be AI first no matter what we do, so we may as well make it Friendly AI" and "A human is a mind, an AI is a mind, there is no theoretical reason why an AI can't be at least as benevolent as any human who ever lived, maybe a lot more so." If you *really* *seriously* want to code *the* AI, then it takes more than one programmer - the Singularity Institute doesn't have the funding for that yet.

BJKlein: And are you working on code now for such an AI? If so, how is that project going?
Eliezer: Right now it's advance design work. And right this minute, I'm actually taking off from that, to do more site content for SIAI's website. (www.singinst.org)

BJKlein: Do you have plans for a book on the Singularity?
Eliezer: "Levels of Organization in General Intelligence" will appear as a chapter in the forthcoming book "Real AI: New Approaches to Artificial General Intelligence".

BJKlein: What can the readers of the "Immortality Newsletter" do to bring about the Singularity?
Eliezer: Well, when I asked myself that question, the answer I arrived at was "Just build a self-improving AI already and get it over with!" Hence the Singularity Institute. We take the direct approach.

BJKlein: So, the key is the AI, the code is the key?
Eliezer: Actually, my guess is that the code is around one-third to one-quarter of the work... but, anyway: I think it really is true that the direct approach maximizes effectiveness. There are a lot of more ordinary things that contribute peripherally to the Singularity, but many people are already doing those things for other reasons. To get to the Singularity most quickly and to maximize the effectiveness of any given effort, you strike for the heart and act on the Singularity directly - that's how it looks to me.
Aside from that, you can spread the word about the Singularity... become a cognitive science researcher... work for Intel or AMD... spread pro-technology ideas to your friends... but, at the end of the day, I still think that it makes sense to just go for the Singularity directly!

BJKlein: If writing the code is one-third the work, what is the other two-thirds?
Eliezer: Of the other two-thirds, one-third is experiential learning - guiding the AI through scenarios in virtual microenvironments, watching the patterns it forms, trying to teach it how to coordinate a thought internally - holding the AI's hand while it creates the content of its mind. The remaining one-third is a grab-bag: working to get the AI into a cycle of self-improvement that doesn't just peter out, Friendly AI experiential work, a certain amount of time spent talking to the AI once it knows how to talk (a lot of content comes out of that, but it requires less time and effort once you're at that stage), plus all the things we don't know about this far in advance.

BJKlein: Thank you. It has been a pleasure.
Eliezer: Likewise.



