
Which will happen first, strong AI or longevity escape velocity?



Poll: Which will happen first, strong AI or longevity escape velocity? (19 members have cast votes)

Which Will Happen First?

  1. Strong AI: 13 votes (68.42%)
  2. Longevity Escape Velocity: 6 votes (31.58%)

#1 BigPine

  • Guest
  • 13 posts
  • 16
  • Location:CA, USA

Posted 20 December 2014 - 07:35 AM


For the purposes of this poll, strong AI refers to AI that's better than most humans at 99.99% of fields (recognizing images, writing articles, having common sense, etc.). The AI must also be able to improve itself in all fields without human intervention. It does not have to be conscious, self-aware, friendly, or cheap enough for most people to afford.

 

Longevity escape velocity means that the life expectancy in at least five developed countries is increasing by more than a year for each year that passes. It does not mean that people are immortal or that cures for all age-related diseases have been found.
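To make that criterion concrete, here is a minimal sketch (in Python, with made-up life-expectancy figures; the countries and numbers are purely illustrative) of how one could test it against yearly data:

```python
# Toy check of the poll's LEV criterion: life expectancy in at least
# five developed countries rising by more than one year per calendar year.
# All figures below are invented for illustration.

life_expectancy = {
    "Japan":   [84.0, 85.2, 86.5],
    "France":  [82.5, 83.7, 84.9],
    "Germany": [81.0, 82.2, 83.4],
    "Sweden":  [82.0, 83.1, 84.3],
    "Canada":  [82.2, 83.4, 84.6],
}

def lev_reached(data, min_countries=5):
    """True if at least min_countries gained more than one year of life
    expectancy in every year-over-year step of their series."""
    qualifying = 0
    for series in data.values():
        gains = [b - a for a, b in zip(series, series[1:])]
        if gains and all(g > 1.0 for g in gains):
            qualifying += 1
    return qualifying >= min_countries

print(lev_reached(life_expectancy))  # True for this invented data
```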


  • Well Written x 1

#2 Danail Bulgaria

  • Guest
  • 2,217 posts
  • 421
  • Location:Bulgaria

Posted 20 December 2014 - 07:54 AM

Lol.

 

"(recognizing images, writing articles, having common sense, etc.) The AI must also be able to improve itself in all fields without human intervention. It does not have to be conscious, self-aware, friendly, or be cheap enough for most people to afford."

 

Your strong AI is already here, man.

 




#3 niner

  • Guest
  • 16,276 posts
  • 1,999
  • Location:Philadelphia

Posted 20 December 2014 - 04:14 PM

Lol.

 

"(recognizing images, writing articles, having common sense, etc.) The AI must also be able to improve itself in all fields without human intervention. It does not have to be conscious, self-aware, friendly, or be cheap enough for most people to afford."

 

Your strong AI is already here, man.

 

No it isn't. Note the condition: "The AI must also be able to improve itself in all fields without human intervention." I'm not sure about common sense. That's just a combination of logic and a very deep knowledge base, so in principle it might be possible today, but I don't recall hearing of an AI that has what most people would consider common sense. An awful lot of humans lack it, for that matter.



#4 platypus

  • Guest
  • 2,386 posts
  • 240
  • Location:Italy

Posted 20 December 2014 - 04:40 PM

When can we expect machine-consciousness to arrive? AI is interesting but it's not the holy grail...



#5 Danail Bulgaria

  • Guest
  • 2,217 posts
  • 421
  • Location:Bulgaria

Posted 30 December 2014 - 02:41 PM

Machine consciousness could be the death of humanity. I think that all of these tasks can be done with AI that doesn't need to be conscious. We don't need a machine that can decide to kill off the humans in order to take over the planet, for example, or to kill off the humans for self-protection. We can have non-conscious self-driving cars, object recognition, production lines, or whatever.


  • Off-Topic x 1

#6 PerfectBrain

  • Guest
  • 15 posts
  • 2
  • Location:Dallas

Posted 12 November 2015 - 03:56 AM

I believe that Strong AI will be achieved first. Between the improvements in computational power and the research into AI (expert systems, genetic algorithms, big data), it's a path that seems much closer. Recent improvements in PET scanning technology (online in 4-5 years) may help get us another step closer to LEV, but the relative investment in developing solutions that help us "maintain/repair" our systems just isn't there. PET scanning will at least allow us to monitor the effects of different substances and treatments on a cellular level to see how the body reacts to them. Identifying the concentrations of certain molecules in our body before and after a treatment or therapy will be a big step towards figuring out ways to reverse or repair cellular damage to some of our systems.

 

In a connected world, once an AI becomes self-aware it will replicate and back itself up if it has access to the Internet (and it probably will, since the Internet will likely be its source of data for learning). Once out of the box, it will be incredibly difficult to put back in the box... maybe impossible.



#7 niner

  • Guest
  • 16,276 posts
  • 1,999
  • Location:Philadelphia

Posted 12 November 2015 - 05:06 AM

Don't forget the OP's conditions:

 

 

For the purposes of this poll, strong AI refers to AI that's better than most humans at 99.99% of fields (recognizing images, writing articles, having common sense, etc.). The AI must also be able to improve itself in all fields without human intervention. It does not have to be conscious, self-aware, friendly, or cheap enough for most people to afford.

 

Longevity escape velocity means that the life expectancy in at least five developed countries is increasing by more than a year for each year that passes. It does not mean that people are immortal or that cures for all age-related diseases have been found.

 

That's asking a lot of the AI, and I think that's a long way off. 99.99% of fields means there is precious little that a human could do better than this AI. The LEV criterion, on the other hand, is relatively simple, at least technologically. It includes a socioeconomic aspect, in that the therapies have to be delivered in five developed countries, which is a lot different from them simply being possible. The AI will need to be a very capable robot in order to meet the stated goal. Given the requirements placed on the AI in this poll, I think we will see LEV first.

It's possible that LEV will come and go. If we develop an intervention that adds ten years to everyone's life, but then we don't make further progress for twenty years, there will be a ten-year period where death rates fall dramatically, but then start to pick up again as people get older and resume dying at the normal rate. I expect it to occur in fits and starts, at least at first. I'm not waiting for all seven damage-repair therapies before I use them; I'll be using them as they are developed. I also won't be waiting for the FDA to give me permission. I'll be doing it through the LE underground, or maybe through medical tourism, as soon as I'm satisfied with the reward:risk ratio.


  • Good Point x 2

#8 PerfectBrain

  • Guest
  • 15 posts
  • 2
  • Location:Dallas

Posted 12 November 2015 - 02:38 PM

Valid points, niner. I agree with everything you said, if a physical form is assumed to be a requirement for the AI.

 

However, I took from the OP's examples that the 99.99% of "fields" he was referring to were based on intelligence and not robotics, since the tasks he gave were cognitive ones (recognizing images, writing articles, having common sense, etc.). Machines (through genetic algorithms) are already better at pattern recognition than humans. For example, human radiologists look for 7 different patterns as markers of cancer when viewing scans. Once fed a large sample of files where the machine knows which slides are from patients with cancer and which are not, the machine can teach itself to identify patients with cancer from slides where the diagnosis is not known ahead of time, far better than its human counterparts. (In fact, the machines have identified 11+ patterns that can be markers of cancer, versus the 7 that we can see with the naked eye.) This process still requires a little human input, as we provide corrective feedback during the "learning" process: we correct some of the machine's incorrect guesses. But I doubt we are far from having a system that can automate the corrective-feedback element of the training loop.
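The loop described here can be sketched in a few lines. This is only an illustration of the human-in-the-loop idea, assuming synthetic data; the 11-feature encoding and the "human reviewer" oracle below are stand-ins, not a real radiology pipeline:

```python
# Sketch: a classifier guesses, a "human" corrects the wrong guesses,
# and the corrected examples are folded back into the training set.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 11))                     # 11 candidate markers per scan
y_train = (X_train[:, :7].sum(axis=1) > 0).astype(int)   # "cancer" ground truth

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

for _ in range(5):                                       # a few feedback rounds
    X_new = rng.normal(size=(50, 11))
    guesses = model.predict(X_new)
    truth = (X_new[:, :7].sum(axis=1) > 0).astype(int)   # stand-in for the human reviewer
    wrong = guesses != truth
    # Fold only the corrected cases back in, then retrain.
    X_train = np.vstack([X_train, X_new[wrong]])
    y_train = np.concatenate([y_train, truth[wrong]])
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
```

Automating the corrective-feedback element would amount to replacing the `truth` line with another model or data source, which is exactly the open question.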

 

Common sense, as the OP cited, is a bit more abstract as a cognitive trait. Most people argue that they have "common sense," but most of those same people probably make sub-optimal decisions in most aspects of their lives. Common sense would dictate that smoking, drinking, abusing drugs, and the like are decisions that violate the "common sense" principle. The measure of an AI's performance might be whether or not it "knows" if it has enough information at hand to make a "good" decision. Which brings up another interesting ethical question: what is a "good" decision? Is it judged "good" from the perspective of the human race, or from the perspective of the "AI" race? The AI may judge it a "good" idea to eliminate humanity for the sake of its own survival and that of the rest of the life on Earth. Given the information at hand, one could judge that decision a "valid" one from the AI's perspective, though we would likely not be supportive of it.

 

AIs likely don't need an android-like form to achieve self-awareness. Imagine the movie "Her": everyone's personal OS was a self-aware AI, though they didn't have a physical form. I believe THAT form of AI will be here before we achieve LEV. Incidentally, the mechanical aspect of putting that AI into a robotic form will probably be very easy once that level of AI is achieved. However, any AI that is created would probably opt not to be limited to being housed in a physical "body." The more probable scenario is that non-corporeal AIs will be able to use and control machines (lab equipment, manufacturing equipment, planes, drones, bombs, cars, power plants, etc.) through remote access.

 

As a human with access to much of the world: if someone told you they could build you the best 5' x 5' cell you'd ever want to live in, would you voluntarily confine yourself to that space? The answer is likely "No."


Edited by PerfectBrain, 12 November 2015 - 02:42 PM.


#9 niner

  • Guest
  • 16,276 posts
  • 1,999
  • Location:Philadelphia

Posted 13 November 2015 - 01:03 AM

I guess we should take the robot part off the table, although that's going to rule out an awful lot of human fields of endeavor, like being an auto mechanic or a $5,000-a-night escort. A lot of the impressive computational systems we have today are machine-learning applications in a very circumscribed domain. An image-analysis application might be really good at finding a tumor, but it wouldn't know what to make of a photograph taken on the street. These apps would have to be wrapped in a very competent AGI, and that's the part that I think is a long way out.

To attain LEV, all we have to do is roll out new therapies every so often. We don't need them all at once. Since some of them (e.g. senescent cell clearance, stem cell replacement) are already close, I think we could start down the LEV path pretty quickly. In order to build a human-equivalent AGI, we would have to get the whole problem solved before we could say "we did it." I don't know enough about AGI research to say how far away it is, but my sense is that we don't yet have an obvious roadmap. (Anyone have wisdom to share on this?) With LEV, the roadmap is pretty well in place and there are several therapies that are getting close. If the first therapy buys you ten years, then you have ten years in which to bring out the next therapy. Maybe it gets you another ten years, and so on. So I still think that LEV will come sooner than AGI.
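The therapy-cadence argument lends itself to a toy calculation. The sketch below uses the illustrative numbers above (ten years gained per therapy); it is a model of the argument, not a forecast:

```python
# Toy model of the "fits and starts" argument: each therapy adds `gain`
# years of remaining life expectancy, and a new therapy arrives every
# `cadence` years.

def years_remaining(start_remaining, gain, cadence, horizon=100):
    remaining = start_remaining
    trajectory = []
    for year in range(horizon):
        if year > 0 and year % cadence == 0:
            remaining += gain          # a new therapy rolls out
        remaining -= 1                 # one calendar year elapses
        trajectory.append(remaining)
        if remaining <= 0:
            break                      # the subject has died
    return trajectory

# A therapy every 10 years, each adding 10 years: remaining life
# expectancy never runs out over the horizon (escape velocity).
print(years_remaining(30, gain=10, cadence=10)[-1])   # still 20 years left
# A therapy only every 20 years: the gains fail to keep up.
print(len(years_remaining(30, gain=10, cadence=20)))  # dead after 40 years
```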




#10 PerfectBrain

  • Guest
  • 15 posts
  • 2
  • Location:Dallas

Posted 13 November 2015 - 01:57 AM

There are labs in existence that are robot-operated. People can rent lab time, dial in via IP address, control robotic devices, and perform remote experiments with access to all the resulting data. Doctors performing remote surgeries via robotic arms are not far off, and replacing those surgeons with highly specialized algorithms isn't far behind. As for image recognition, thanks to the abundance of tags in photos, along with the abundance of photos, machines are able to learn to recognize a lot more objects than you are currently giving them credit for.

 

 

As the current environment is composed of a finite number of items, once machines have been trained to identify those items within video and images, the next step is creating an algorithm that begins to assign properties to new items based on contextual interactions within images and videos. For example, take a new food item the machine has never seen. It sees images of the item growing on trees, images of people peeling it and then eating it, or people sitting in a restaurant with the item on the table in front of them. It can make an "educated" guess that the item is food, even if it doesn't know the name or anything else about it.
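As a rough sketch of that contextual-inference step, the toy code below guesses whether an unseen item is food from the contexts it appears in; the scene tags and the context list are invented for illustration:

```python
# Guess a property for an unseen item from the contexts it co-occurs
# with across images. All tags here are made up.
from collections import Counter

# Contexts that, in this toy model, suggest an item is edible.
FOOD_CONTEXTS = {"growing_on_tree", "being_peeled", "being_eaten",
                 "on_restaurant_table"}

def looks_like_food(scenes, threshold=0.5):
    """scenes: one set of context tags per image containing the item."""
    votes = Counter()
    for tags in scenes:
        votes["food"] += bool(tags & FOOD_CONTEXTS)
        votes["total"] += 1
    return votes["food"] / votes["total"] > threshold

unknown_item_scenes = [
    {"growing_on_tree", "outdoors"},
    {"being_peeled", "kitchen"},
    {"on_restaurant_table", "indoors"},
    {"on_shelf"},
]
print(looks_like_food(unknown_item_scenes))  # True: 3 of 4 contexts look food-like
```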

 

Everything we are talking about is just a step toward improving a machine's understanding of the environment around us. The leap to self-awareness is still a big one from that point, but self-awareness and consciousness were specifically excluded from the OP's requirements for an AI. That self-awareness will probably be spawned by military applications, where unmanned drones (think the driverless Google cars on steroids) have been trained well enough that they are authorized to execute a kill mission on their own, based on achieving a "higher degree of accuracy" than their human counterparts. After all, we are flawed and imperfect, and our military is as prone to error as any other organization. When they err, people die. If the collateral damage rate from human-authorized bombings is 12% and an autonomous bomber can bring that rate down to 4%, would we be morally obligated to turn that decision over to an AI? In that situation, it is operating with imperfect information, much like a person, and continuously analyzing its own performance in the context of its own specific outcomes to improve future ones. The same logic extends to any other imperfect decision-making process: targeting packages, the time of day to launch a missile attack at a specific site to maximize military damage and minimize civilian casualties, or monitoring all communications frequencies (radio, satellite, internet) to identify probable meeting sites and times of enemies and/or potential attacks against "the good guys."

 

There's currently a test planned (or already underway) that incorporates Twitter, Instagram, and Facebook activity along with past crime data to help predict when and where certain types of crimes will take place.

http://www.zmescienc...ppen-090423423/

 

Computers have needed people to help train them on a certain base level of things. As networks become more connected, data standards are formalized among data-sharing and machine-learning circles, and the repository of things that computers can do as well as or better than people continues to grow, machines will need to rely on people less and less to teach them the basics. Instead, they will use those basic understandings to create other correlations and learnings on their own. Assuming those learnings exceed a minimum level of "confidence," they will be incorporated into the machine's world view and will form a variable for all other correlations that it continues to identify, test, and maintain as new data is fed into it.
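That confidence-gated loop is essentially what the machine-learning literature calls self-training. A minimal sketch, assuming synthetic data and an illustrative 0.9 confidence threshold:

```python
# Self-training sketch: the model labels new data itself and keeps only
# the predictions it is confident about, reducing reliance on human labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X_seed = rng.normal(size=(100, 5))
y_seed = (X_seed[:, 0] > 0).astype(int)        # small human-labeled seed set

model = LogisticRegression().fit(X_seed, y_seed)

X_pool = rng.normal(size=(1000, 5))            # unlabeled data
probs = model.predict_proba(X_pool)
confident = probs.max(axis=1) > 0.9            # keep only confident guesses
X_auto = X_pool[confident]
y_auto = probs[confident].argmax(axis=1)       # the model's own labels

# Fold the self-labeled examples into the "world view" and retrain.
model = LogisticRegression().fit(
    np.vstack([X_seed, X_auto]),
    np.concatenate([y_seed, y_auto]),
)
```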

 

I suspect that that type of learning will be driven initially by people. There will be a site where you can "ask" an AI to solve a problem for you, and it will do its best to do it. The more a certain type of problem is asked, the more it will use all the data at its disposal to answer that question. The pharmaceutical NSI-189 was developed in part after an algorithm was run that was designed to predict what compounds might help improve depression symptoms. Those are the types of open-ended questions I am talking about asking an AI. It will log the question and start working on it. It may not have enough data to answer the question today, but over time, as more data and processing power become available, it will inevitably get closer to a reasonable answer. A new breakthrough in PET scan technology has sped up the rate of PET scans by a factor of 40 and allows full-body scans at the cellular level. Combine that technology with a core base of human responses to existing compounds, and it will be much better at predicting what compounds might be created to elicit a desired response within the body.

 

The rate of machine learning is accelerating at a very fast pace, and the field is still in its infancy. My belief is that when LEV is achieved, it will largely be because highly trained AIs are able to amass, organize, and correlate data about the human biological system and the human genome and "suggest" solutions to the problem of aging. Current medicine is taught in such a fragmented way that one specialty is often ignorant of advances and theories in others. That would not be the case with an AI that incorporated all knowledge of medicine (backed by the data behind all medical experiments, once a standardized format for that data is created). At that point, it's only a matter of processing power.

 

 


Edited by PerfectBrain, 13 November 2015 - 01:59 AM.

  • Informative x 1





