  LongeCity
              Advocacy & Research for Unlimited Lifespans





DeepMind’s Breakthrough AlphaStar Signifies Unprecedented Progress Towards AGI


22 replies to this topic

#1 theone

  • Life Member
  • 167 posts
  • 620
  • Location:Canada
  • NO

Posted 24 January 2019 - 09:57 PM


DeepMind shared that AlphaStar was trained by playing against itself for a total of 200 years' worth of game time, making use of accelerated parallel computing.
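The self-play idea can be sketched in miniature. This is NOT DeepMind's method (AlphaStar uses deep reinforcement learning and a league of agents); it is a toy analogue, fictitious play on rock-paper-scissors, where at each round the agent best-responds to the empirical mixture of its own past moves. The function names and the pseudocount prior are illustrative choices.

```python
# Toy sketch of training by self-play: the agent repeatedly best-responds
# to the record of its own past play, and its empirical strategy converges
# toward the game's equilibrium (1/3 rock, 1/3 paper, 1/3 scissors).

BEATS = {0: 2, 1: 0, 2: 1}  # rock(0) beats scissors(2), paper(1) beats rock(0), ...

def best_response(counts):
    """Pure best response to the empirical move mixture given by `counts`."""
    def value(a):
        # Expected payoff of action `a` against the observed frequencies.
        v = 0
        for b, c in enumerate(counts):
            if BEATS[a] == b:
                v += c          # `a` beats `b`
            elif BEATS[b] == a:
                v -= c          # `b` beats `a`
        return v
    return max(range(3), key=value)

def self_play(rounds=20000):
    counts = [1, 1, 1]          # pseudocount prior over past-self moves
    for _ in range(rounds):
        a = best_response(counts)   # play against the "league" of past selves
        counts[a] += 1
    total = sum(counts)
    return [c / total for c in counts]

print(self_play())  # each frequency ends up close to 1/3
```

The point of the sketch is only that self-play needs no external teacher: the opponent pool is the agent's own history, which is the core of the approach the post describes, minus the deep networks and the massive compute.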

 

Game Stats:

AlphaStar beat MaNa 5-1 across 6 Protoss vs Protoss games.

AlphaStar beat LiquidTLO 5-0.

 

 

I was extremely impressed with AlphaStar's style of play (I've been playing this game for nearly 20 years).  I might have learned something new today :)

 

Replays:



#2 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,373 posts
  • 2,000
  • Location:Wausau, WI

Posted 25 January 2019 - 08:44 PM

I have a hard time celebrating this accomplishment knowing there is currently little restraint in the "arms" race to develop AGI. To what end? Certainly not for the benefit of human beings.


  • Agree x 2


#3 theone

  • Topic Starter
  • Life Member
  • 167 posts
  • 620
  • Location:Canada
  • NO

Posted 26 January 2019 - 05:16 AM

A superior benevolent AGI entity would be preferable to weaponized advanced algorithms.


  • Cheerful x 1
  • like x 1
  • Agree x 1

#4 MichaelFocus22

  • Guest
  • 331 posts
  • -16
  • Location:San Jose
  • NO

Posted 05 February 2019 - 03:05 AM

It's obvious that the smart people here need to start outlawing this type of technology before it becomes dangerous. Let's be frank: if the capitalists do not need you, they will expend you. This is very straightforward. It's not quite there yet, simply because the AI is inefficient with its learning mechanisms; humans don't require 200 years of round-the-clock practice to reach this level of superhuman play. Next, for it to be AGI, it needs to be multi-modal and generalizable so that it can actually present a real threat. I'm not really concerned about the AGI itself, but I believe development will need to be outlawed once it reaches a certain level; it's utterly impractical to develop an intelligence without it being controllable. Finally, this isn't exactly real intelligence yet, because they're merely brute-forcing, except through learning rather than through game trees. Until these computers can ask "Why?", they will always be limited to inputs and outputs.

People need to start realizing the malevolence of intention that these tech companies have. Do you really want a tech company in charge of superhuman intelligence? Intelligence that is FREE, UNPAID FOR, UNREGULATED, and never gets tired? Who knows what they could do with this type of technology. A simple outright ban would be useful, along with a BREAKING up of the monopolies of Facebook, Google, Twitter, and other MNCs, who are getting far too powerful. I suspect this won't happen until my generation gets into power, but it seems development is moving far faster than anticipated. Which is bad for us.


  • Well Written x 1
  • Agree x 1

#5 platypus

  • Guest
  • 2,386 posts
  • 240
  • Location:Italy

Posted 05 February 2019 - 10:17 AM

I'm sceptical of the benevolence and sanity of an AGI. How do you ensure that the AI does not go berserk within some tens of milliseconds of wall time?


  • Good Point x 1

#6 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,373 posts
  • 2,000
  • Location:Wausau, WI

Posted 05 February 2019 - 04:25 PM

All of these university labs and corporations are working to develop the most dangerous weapon ever conceived. No laws. No regulations. No restraint. Unlike a nuclear weapon, AGI could wipe out all life on the planet.



#7 Dream Big

  • Guest
  • 70 posts
  • 90
  • Location:Canada

Posted 09 February 2019 - 02:28 PM

Because they DON'T fear it. They don't think it is nearby or real. Most people think too highly of themselves: they regard robots and algorithms as far beneath them, and as neither nearby nor plausible, even when they see them work. There's some sort of spiritual feeling in them; they won't accept that we are just machines following the laws of physics, with no control and no magic. And it's not here yet. And it's said to be extremely difficult to understand.

 

Even I sometimes, or often, want to be 'alive', even though I know this. All I can say is that I know it. Besides that, I want to have fun too.

 

But also, they don't know much about the AI field, and the higher-up people are fewer and just aren't regulating it.

 

However, I wouldn't regulate it too much; we need to invent AGI to stop the countless deaths and the pain of living creatures. And anybody at home can attempt to tackle it in some way.

 

At the end of the day, all humans care about is having fun and money.


Edited by Dream Big, 09 February 2019 - 02:37 PM.


#8 theone

  • Topic Starter
  • Life Member
  • 167 posts
  • 620
  • Location:Canada
  • NO

Posted 10 February 2019 - 07:54 PM

I am genuinely surprised that this post has provoked such a reaction.

Never forget that you’re going to die, and someday the people you love are going to die. Over 100 billion members of our species have already died.  We collectively have to act to accelerate the progress we have already made.

AGI will develop solutions to problems that seem impossible today.
It will help humans go beyond our biological legacy.


I am confident we can minimize risks and maximize benefits.


  • Agree x 2

#9 MichaelFocus22

  • Guest
  • 331 posts
  • -16
  • Location:San Jose
  • NO

Posted 10 February 2019 - 10:41 PM

I am genuinely surprised that this post has provoked such a reaction.

Never forget that you’re going to die, and someday the people you love are going to die. Over 100 billion members of our species have already died.  We collectively have to act to accelerate the progress we have already made.

AGI will develop solutions to problems that seem impossible today.
It will help humans go beyond our biological legacy.


I am confident we can minimize risks and maximize benefits.

 

 

I'm refreshed by your naivety, but this optimism is really dangerous once you realize the true power of an intelligence you aren't capable of understanding. Realistically, I'm not all that afraid of a general AI below our intelligence, and this is fine. Again, I'm more concerned about the people who are going to use it, especially the government. Don't be so naive as to assume that humans are going to end up in some type of utopia and all of our problems will be solved. Life doesn't work like that; we all have our own conditional self-interest that determines what we do and why we do it. Realistically, MNCs and governments have already proven that they will use technology to further enslave and to further monopolize the means of production and of controlling people. We already see it with algorithms that could easily write any fake story, or companies using bots to suppress dislikes on YouTube videos. This type of technology is capable of distorting reality so that it becomes impossible to decipher any kind of evident truth. This was all predicted in 1984, and we are only becoming more Orwellian by the day. If you think this isn't possible, then you have failed to realize that we are already in 1984; we just want to play ignorant about it. These globalists are evil people, and they're not going to fix your problems simply because they can.

Next, you're supposing that we can even fathom or comprehend such an intelligence; it's probable that its moves will be incomprehensible to us. Any intelligence beyond us will be indistinguishable from magic. So implying that there will be a situation in which we can control the outcome while minimizing the risks is, I'd say, far too optimistic. Again, though, we aren't even close to real intelligence. These algorithms are basically just brute-forcing probabilities so that it appears as if they are "learning", or have intelligence or strategy behind their outcomes.
It's not real intelligence, because there is no strategy and no active creation of process, and above all it needs an inner eye or awareness that can reflect upon its own internal states. This is what we call consciousness. Effectively, all they've done is brute-force learning by running hundreds of years of strategies and outcomes. Yet we humans are significantly more efficient with our learning process, and our process is dynamic and fluid; it's not constrained to inputs and outputs. We have dynamic domains of outcomes that we have to decipher from the real chaos around us. When a machine can do this, then it will be deadly. Yet they don't really want real AGI, when I speak in terms of the globalists. They just need something that is good enough to replace all the people upon whom they depend to make a mass profit, and then they will expend you in totality. Think about it. We have already seen that companies will make profit by any means necessary, which I applaud them for, but the collateral damage is the destruction of the human population. TBC.



#10 theone

  • Topic Starter
  • Life Member
  • 167 posts
  • 620
  • Location:Canada
  • NO

Posted 10 February 2019 - 11:52 PM

The majority of AI algorithms are publicly available and opensource. Anyone with the right skill sets can use them.

Far from enslaving us, these AI algorithms will level the playing field.  

Just imagine how lawyers might be replaced by artificial intelligence. This alone would further democratize the law and eliminate barriers for the lowest-status individuals.

 



#11 MichaelFocus22

  • Guest
  • 331 posts
  • -16
  • Location:San Jose
  • NO

Posted 11 February 2019 - 12:10 AM

Again, as I said, you don't address any of my points. We don't even know the implications of what will occur. Democracy in its present form would not exist in such a society. Don't fool yourself. Progress for progress's sake is a meaningless concept; just because you can do something doesn't always mean you should. You're already enslaved: look around you. Look at your phone. Look at the way society is increasingly going. Alex Jones was recently sacked; he's been basically neutralized as a former counter-opposition. Everything that was supposedly conspiratorial has been PROVEN correct as we go towards 2030. He was banned from all radio shows, and all his accounts were essentially neutralized. Now he's facing 4 lawsuits from the former traitors of the United States. Sex has become a weaponized agent against young men. Feminism is running rampant and is destroying huge swaths of society. Look around you: your optimism doesn't match reality. AGI will be abused, and it will need to be banned, or the concentric power circles will concentrate their power like never before. Look at the neoliberals: they are evil, they are anti-free-speech and against anything that doesn't suit their narrative of how they believe the world ought to be. We are at war right now, and it's one we are losing with every passing day. Look at your money; it's depreciating by the day as expenses rise faster than we can combat. This is just human exploitation; now imagine what they could do with AGI and what they could achieve. People said the same thing about the internet, and now it's been controlled and monopolized. The internet hasn't liberated anyone; it's only become a greater means of control and isolation. Google can track all that you search and do, and they funnel it into algorithms with which they can track what you say, do, and think. This is no trivial matter. Now extrapolate the nature of these people towards AGI and see what they would do.
You have young children being indoctrinated into STEM and paid disgustingly high salaries, like sheep, only for their inevitable expendability.



#12 Dream Big

  • Guest
  • 70 posts
  • 90
  • Location:Canada

Posted 11 February 2019 - 11:49 AM

Go through his whole site, read it all and learn.

https://sites.google...e/narswang/home

 

This is a robot scientist; it puts its predictions to the test.

https://www.ncbi.nlm...les/PMC2813846/

 

There are logical AGIs out there too, as I have shown here, that use language and knowledge in the form of text. They use deductive and inductive logic and try to generalize over their knowledge base.

 

See here for a surprisingly thorough introduction to it all:

https://en.m.wikiped...ientific_method

 

And if you want to see another and try one, click here for this one on Mondays:

http://artistdetective.com/arckon/

 

Start now though, hurry; I don't want to see this nightmare go on any longer. Too much death and pain for me.


Edited by Dream Big, 11 February 2019 - 12:09 PM.


#13 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,373 posts
  • 2,000
  • Location:Wausau, WI

Posted 11 February 2019 - 06:01 PM

I think narrow AI can definitely be a net bonus for human well-being.

 

When it comes to AGI, nobody knows. Because AGI would theoretically achieve levels of intelligence orders of magnitude beyond humans (in the blink of an eye), no one can predict what will happen. No one. Many advanced deep learning projects already act in ways that are not decipherable by the people who code them. This will only get worse.

 

It is likely that I will die at some point in the future. I would rather it be 1000s of years from now, rather than at the hands of an AGI produced by the careless engineers at Google in 2021.


  • Agree x 1

#14 theone

  • Topic Starter
  • Life Member
  • 167 posts
  • 620
  • Location:Canada
  • NO

Posted 11 February 2019 - 10:23 PM

It is likely that I will die at some point in the future. I would rather it be 1000s of years from now, rather than at the hands of an AGI produced by the careless engineers at Google in 2021.

 

 

I think that is where we have a difference of opinion and a difference in approach. I have my doubts that we will reach longevity escape velocity without AGI (within the next 10-15 years).

The real question is how to know if the risk is worth taking. The answer to this question is "it depends".  The answer is highly personal and depends on your age, lifestyle and health. Current age being the most important factor.



#15 QuestforLife

  • Member
  • 1,611 posts
  • 1,182
  • Location:UK
  • NO

Posted 13 February 2019 - 10:16 AM

I think that is where we have a difference of opinion and a difference in approach. I have my doubts that we will reach longevity escape velocity without AGI (within the next 10-15 years).

The real question is how to know if the risk is worth taking. The answer to this question is "it depends".  The answer is highly personal and depends on your age, lifestyle and health. Current age being the most important factor.

 

The question is: is it worth taking the risk and creating AGI?

 

Personally, I think the answer is NO. We can beat aging without AGI, probably in the next 20 years.

 

Who knows what AGI would do once created! It might solve aging, it might upload us all into an endless VR paradise, it might kill us. It might do all three, all on a whim. Far better would be to develop AGI slowly and carefully, only implementing it when we ourselves have gone far beyond human in terms of intelligence, and can handle and integrate with it. 

 

Sadly I don't think we can stop AGI now however, so we have to hope the form that is implemented will be benign. Or that the problem is much harder than we think and will take 50 or 100 years, by which time we just might be able to handle it.


  • Enjoying the show x 1

#16 theone

  • Topic Starter
  • Life Member
  • 167 posts
  • 620
  • Location:Canada
  • NO

Posted 14 February 2019 - 09:15 PM

There seems to have been a misunderstanding.  The post above describes an ASI (superintelligence). The transition from AGI to ASI may actually take more time.  According to Kurzweil: AGI in 2029, ASI in 2045.


  • Disagree x 1
  • Agree x 1

#17 Heisok

  • Guest
  • 612 posts
  • 200
  • Location:U.S.
  • NO

Posted 15 February 2019 - 09:41 PM

Perhaps the long term fears might inhibit beneficial leaps.

 

On the other hand, one could argue that the data being gathered could be used against individuals, given the ability of machines to analyze great amounts of "health" data and essentially narrow down to the individual level in spite of supposedly anonymized data. They can cross-reference many different data sets. A stake in 23andMe was purchased, allowing access to its data; users have already had access to more of their sequencing cut. Recently a criminal was apprehended at least partially because family members had their DNA run for ancestry purposes: investigators were able to compare a sample to what was available online and eventually pinpoint the suspect. Nothing can stop the same thing from happening with health- and longevity-related information.

 

AI Is Rapidly Augmenting Healthcare and Longevity
 

"When it comes to the future of healthcare, perhaps the only technology more powerful than CRISPR is artificial intelligence.

Over the past five years, healthcare AI startups around the globe raised over $4.3 billion across 576 deals, topping all other industries in AI deal activity.

During this same period, the FDA has given 70 AI healthcare tools and devices ‘fast-tracked approval’ because of their ability to save both lives and money.

The pace of AI-augmented healthcare innovation is only accelerating.

In Part 3 of this blog series on longevity and vitality, I cover the different ways in which AI is augmenting our healthcare system, enabling us to live longer and healthier lives."

In this blog, I’ll expand on:

  1. Machine learning and drug design
  2. Artificial intelligence and big data in medicine
  3. Healthcare, AI & China

https://singularityh...x10hq234uzomstj

 

"A Major Drug Company Now Has Access to 23andMe’s Genetic Data. ... Consumer genetic testing company 23andMe announced on Wednesday that GlaxoSmithKline purchased a $300 million stake in the company, allowing the pharmaceutical giant to use 23andMe’s trove of genetic data."

 

http://time.com/5349...xo-smith-kline/


Edited by Heisok, 15 February 2019 - 09:43 PM.


#18 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,373 posts
  • 2,000
  • Location:Wausau, WI

Posted 15 February 2019 - 09:46 PM

There seems to have been a misunderstanding.  The post above describes an ASI (superintelligence). The transition from AGI to ASI may actually take more time.  According to Kurzweil: AGI in 2029, ASI in 2045.

 

My understanding is that the transition from AGI to ASI will be in the blink of an eye, which is dangerous, especially considering that current human coders barely understand how their deep learning algorithms operate.



#19 Avatar of Horus

  • Guest
  • 242 posts
  • 291
  • Location:Hungary

Posted 16 February 2019 - 01:42 AM

It's obvious that the smart people here need to start outlawing this type of technology before it becomes dangerous. Let's be frank: if the capitalists do not need you, they will expend you. This is very straightforward. It's not quite there yet, simply because the AI is inefficient with its learning mechanisms; humans don't require 200 years of round-the-clock practice to reach this level of superhuman play. Next, for it to be AGI, it needs to be multi-modal and generalizable so that it can actually present a real threat. I'm not really concerned about the AGI itself, but I believe development will need to be outlawed once it reaches a certain level; it's utterly impractical to develop an intelligence without it being controllable. Finally, this isn't exactly real intelligence yet, because they're merely brute-forcing, except through learning rather than through game trees. Until these computers can ask "Why?", they will always be limited to inputs and outputs. People need to start realizing the malevolence of intention that these tech companies have. Do you really want a tech company in charge of superhuman intelligence? Intelligence that is FREE, UNPAID FOR, UNREGULATED, and never gets tired? Who knows what they could do with this type of technology. A simple outright ban would be useful, along with a BREAKING up of the monopolies of Facebook, Google, Twitter, and other MNCs, who are getting far too powerful. I suspect this won't happen until my generation gets into power, but it seems development is moving far faster than anticipated. Which is bad for us.

 
First, yes, something like this. I don't know the exact details of the current state of the project in this topic, but as it currently seems, this is just some brute-force algorithm, so it isn't real AI.
 

...
AI Is Rapidly Augmenting Healthcare and Longevity
 
"When it comes to the future of healthcare, perhaps the only technology more powerful than CRISPR is artificial intelligence.
Over the past five years, healthcare AI startups around the globe raised over $4.3 billion across 576 deals, topping all other industries in AI deal activity.
During this same period, the FDA has given 70 AI healthcare tools and devices ‘fast-tracked approval’ because of their ability to save both lives and money.
The pace of AI-augmented healthcare innovation is only accelerating.
In Part 3 of this blog series on longevity and vitality, I cover the different ways in which AI is augmenting our healthcare system, enabling us to live longer and healthier lives."
In this blog, I’ll expand on:

  • Machine learning and drug design
  • Artificial intelligence and big data in medicine
  • Healthcare, AI & China
...

 

 
Yes, this is true too IMO, and we'll probably need some kind of AI (see below) to achieve unlimited lifespans, given that biology as a whole is far too complex (by many orders of magnitude) for a human to comprehend.
 

My understanding is that the transition from AGI to ASI will be in the blink of an eye, which is dangerous, especially considering that current human coders barely understand how their deep learning algorithms operate.

 

I can do programming, so I don't agree with the second part of your argument: a programmer can understand their own code, and a good one, in turn, any code for that matter.
But the first part is correct, at least in the scenario of a technological singularity, since that is the very definition of it.

BTW, this is why I linked the following videos in another topic a while back; they are from a sci-fi movie, but what they describe is accurate in the real world too:

Transcendence: Humanity's Next Evolution


"For two million years the human brain has evolved more than any species in existence. But what if all our knowledge, all our accomplishments and all our achievements could be learned in mere minutes? What if humanity's next evolution wasn't human at all? What happens when artificial intelligence becomes self-aware? Is it the key to immortality or is it the path towards annihilation?"


I Call It Transcendence


"For one hundred and thirty thousand years our capacity for reason has remained unchanged. The combined intellect of the neuroscientists, engineers, mathematicians pales in comparison to even the most basic AI. Once online, a sentient machine will quickly overcome the limits of biology, and in a short time its analytical power will be greater than the collective intelligence of every person born in the history of the world. Some scientists refer to this as the Singularity, I call it Transcendence."

 

And regarding the type of AI and the dangers:

I think AGI, the general type, can be the real danger, because of the AGI > ASI / singularity transition above and the other things already mentioned in the topic: the unknown actions and reactions of a superintelligence. This direction should therefore be regulated, or actually even prohibited.

And instead, some mixed type of weak/strong AI should be allowed and focused on: one which has superhuman intelligence, or just understanding, but only in a narrow field of knowledge, and which doesn't have the ability to LEARN on its own (unsupervised learning, as it is called), only the so-called semi-supervised type. So it can only learn what you confirm. This AI would then be more like a super search engine, which can understand and answer topics and questions like: X disease is caused by this and this, because the Y gene and/or the Z protein does this and that, and so forth; or what aging is, what causes it and how, etc. (BTW, a while ago I started to write a short paper on this whole AI question for another project; I'll post it to the forum when it's finished.)
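The "only learn what you confirm" idea above can be sketched as a tiny gatekeeping pattern. This is a rough, hypothetical illustration (the class and method names are invented, and real semi-supervised learning is statistical rather than a simple gate): the system may propose facts, but nothing enters its knowledge base without external approval.

```python
# Toy sketch of "confirm-only" learning: proposals are free,
# but the knowledge base only grows through explicit confirmation.

class ConfirmOnlyLearner:
    def __init__(self):
        self.knowledge = {}  # confirmed facts only

    def propose(self, subject, relation, obj):
        """Suggest a new fact; it is NOT learned yet."""
        return (subject, relation, obj)

    def confirm(self, fact, approved):
        """A human supervisor gates every addition to the knowledge base."""
        if approved:
            s, r, o = fact
            self.knowledge[(s, r)] = o

    def query(self, subject, relation):
        """Answer like a search engine: only from confirmed knowledge."""
        return self.knowledge.get((subject, relation))
```

Under this design the system can never act on an unvetted inference, which is the safety property the paragraph is after; the cost is that its knowledge grows no faster than its supervisors can review.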



#20 Dream Big

  • Guest
  • 70 posts
  • 90
  • Location:Canada

Posted 16 February 2019 - 11:11 AM

Have you seen my post in the Kurzweil's predictions thread, near the top of the thread list? I post cool tech there.



#21 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,373 posts
  • 2,000
  • Location:Wausau, WI

Posted 16 February 2019 - 11:52 AM

 

I can do programming, so I don't agree with the second part of your argument: a programmer can understand their own code, and a good one, in turn, any code for that matter.
But the first part is correct, at least in the scenario of a technological singularity, since that is the very definition of it.

 

https://hbr.org/2018...-explain-itself

 

The problem is with complexity and emergent behavior/properties. As programming has moved away from step-by-step logical modes, and toward complex networks of code interacting to produce output, our understanding of the process diminishes. There are scores of papers highlighting how the programmers were "surprised" by the results. They set a complex non-linear process into motion and then just sit back to observe the results. The results could not be predicted ahead of time. Here is one very simple example that was reported last week: https://voxeu.org/ar...g-and-collusion

 

With programmers working on AGI or ASI it is akin to sitting in a room with a big red danger button and saying "The only way we can find out what this thing does is by pushing the button".

 

Paul brings up a good point though about the urgency of developing AI. The older you are, the more likely you would want to hurry the advances in computing, or try risky treatments. Like I mentioned earlier, I think the current narrow AI/expert systems are sufficient to bring rapid progress, without the dangers of developing AGI/ASI.

 


Edited by Mind, 16 February 2019 - 11:53 AM.


#22 Dream Big

  • Guest
  • 70 posts
  • 90
  • Location:Canada

Posted 16 February 2019 - 01:18 PM

I'm 23 and am in an extreme hurry to build AGI. Lol. Do I feel/worry about being old? Yes. So that may hold true to be honest.

 

Humans used to die around 25/30 back then.


Edited by Dream Big, 16 February 2019 - 01:21 PM.



#23 theone

  • Topic Starter
  • Life Member
  • 167 posts
  • 620
  • Location:Canada
  • NO

Posted 16 February 2019 - 05:38 PM

My understanding is that the transition from AGI to ASI will be in the blink of an eye

 

I think it depends on our understanding of the laws of nature. It might be that the laws we believe to be immutable are actually mutable.  If that's the case, then the transition from AGI to ASI can happen in the blink of an eye.

 

 


  • Disagree x 1



