From bjklein:
When will smarter-than-human intelligence be created?
Posted 23 August 2002 - 07:43 AM
Posted 25 August 2002 - 01:18 AM
Posted 25 August 2002 - 01:47 AM
Posted 25 August 2002 - 04:34 AM
"I do not think exponential growth takes place,"

Ehh?

"even if it did, exponential growth does not equal exponential development"

Development? As in what kind of development? What we're talking about with AI is improvement in processing speed and storage capacity. This has been increasing exponentially for a while now; just take a look at the computer you're using to read this post.

"even if it did, exponential development does not warrant exponential connectedness"

We only need one good program (a Seed AI) that has the ability to write and improve its own source code; no connectedness needed.

"even if it did, it would not get around the laws of physics"

Who says we need to break any laws of physics? Other than the heat death problem, which is a little ways off, there are no physical laws we need to break in order to reach the Singularity.

"even if it did, it is not feasible to occur in the next two decades (*hysterical giggle*)"

Well, I may agree with you, but I doubt it will take much more time after that.

"I am not saying that 'Singularity' is not feasible, or that it will never happen. I am not saying that the concept does not entail very interesting philosophical problems that are challenging (if not very useful) to explore... But to ask me and others to vote for at most '30 years+' is an insult."

Goodness.
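As a side note on what "write and improve its own source code" even means mechanically: the loop can be caricatured in a few lines. The sketch below is a toy hill-climber, not a real Seed AI; the "program" is just a list of polynomial coefficients, and the task, names, and numbers are all invented for illustration.

```python
import random

# Toy stand-in for "a program that improves its own source code".
# The "program" here is just three polynomial coefficients, and the
# task (match a fixed target function) is invented for illustration;
# only the propose-test-keep loop is the point.

def target(x):
    return 3 * x * x + 2 * x + 1  # the task the program is judged on

def score(coeffs):
    # Lower is better: total error over a fixed test suite.
    a, b, c = coeffs
    return sum(abs((a * x * x + b * x + c) - target(x)) for x in range(-10, 11))

def propose_edit(coeffs):
    # A small random change to one "line" of the program.
    edited = list(coeffs)
    edited[random.randrange(len(edited))] += random.uniform(-0.5, 0.5)
    return edited

program = [0.0, 0.0, 0.0]
for _ in range(10_000):
    candidate = propose_edit(program)
    if score(candidate) < score(program):  # keep strict improvements only
        program = candidate

print(program, score(program))  # drifts toward [3, 2, 1], error toward 0
```

The caricature captures only the shape of the loop: propose an edit, test it, keep it if it helps. Everything hard about a real Seed AI (representing actual code, judging "improvement" in general) is outside this sketch.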
Posted 21 September 2002 - 01:23 AM
Posted 21 September 2002 - 09:33 PM
Agreed! But still, the increase of universal computation is exponential.
I completely agree, but (no offense) I'm not sure you've gone over enough literature to have a clear sense of when the Singularity is coming! See the resources above. What date do you set for the Singularity, and why? Have you ever heard of "strong self-improvement"?
Posted 22 September 2002 - 04:16 AM
Mangala wrote:
Just because we can fit faster computers into smaller spaces does not mean that an AI will rise out of the internet randomly. In order to build a self-improving machine that actually knows what improvements are, we must organize the code and infrastructure specifically so as to create a working intelligence that is anything more than a simplistic insect brain. Increases in computation and increases in the organization of source code (or, for that matter, of advanced self-improvement code) do not necessarily go hand in hand.
Currently I imagine the first self-improving artificial intelligence will come about in the year 2025, and the first intelligence just capable of surpassing human intelligence in most fields will come about in... oh... 60 years.
I strongly believe it's all too easy to make technological intelligence seem a mere function of computational improvement.
Posted 22 September 2002 - 03:15 PM
For example, it only costs $31 million as of early this year to buy human-equivalent computer power (not software complexity!) in a Beowulf cluster.
Posted 01 October 2002 - 05:48 PM
...quickly skimming, as in my opinion it wasn't worth total scrutiny
Posted 01 October 2002 - 09:38 PM
Posted 02 October 2002 - 03:57 AM
Posted 02 October 2002 - 05:31 PM
Well Psycho, I find that we are basically making the same arguments at the same time, in different places. I also agree that it is a little silly to be taking or giving bets at this time about the fabled "Singularity".
But I, as I keep trying to emphasize to Michael, think that it is more than a little premature to rule out a quantum-level advance in human cognition from the game.
Posted 03 October 2002 - 03:22 PM
Omnido says:
Personally, I see no need for a singularity at all, but that's my opinion.
Posted 03 October 2002 - 03:50 PM
Sure we have lots of computing power, and sure it's growing pretty damn quick, and sure it's going to change the world and positively affect a lot of different fields... but come on... it is a well-known fact in the IT industry that software lags behind hardware - far behind, and where software in general lags, AI in particular lags even more. The notion that our development of a human-equivalent or able-to-upgrade-itself-to-human-equivalence cognitive system will coincide with our development of the hardware requisite to successfully implement this system is - in my honest opinion - ridiculous.
For example, it only costs $31 million as of early this year to buy human-equivalent computer power (not software complexity!) in a Beowulf cluster.
Anyone who thinks a greater-than-human intelligence is not coming before 2030 should re-read the above quote.
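For what it's worth, that $31 million figure is easy to reproduce as back-of-envelope arithmetic, under assumptions that are not from this thread: a Moravec-style estimate of roughly 10^14 operations per second for the brain, and early-2002 commodity-cluster pricing on the order of $300 per GFLOPS.

```python
# Back-of-envelope reproduction of the "$31 million" figure.
# Assumptions (mine, not the thread's): a Moravec-style brain estimate
# of 1e14 ops/sec, and roughly $310 per GFLOPS for commodity Beowulf
# hardware in early 2002. Both numbers are rough and contested.

brain_ops_per_sec = 1e14                     # ~100 TFLOPS
dollars_per_gflops = 310                     # assumed early-2002 pricing

gflops_needed = brain_ops_per_sec / 1e9
cost_dollars = gflops_needed * dollars_per_gflops
print(f"${cost_dollars / 1e6:.0f} million")  # -> $31 million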
My reasons for this are mostly due to the obvious obstructions to human advancement, i.e., social politics, religious dogma, and capitalistic self-interest.
Before we can expect to see a singularity, those issues will have to be addressed.
I don't see those issues being addressed anytime soon, due to many factors, including corruption, hedonism, greed, and selfishness.
Posted 03 October 2002 - 05:44 PM
Psychodelerium says:
Well, we may agree on a number of points, but I don't think this is going to be one of them. In all my ponderings about minds and consciousness, I've grown increasingly intolerant of that old Penrose axe about quantum minds, and now subscribe to the theory that the phenomena in question are to be explained solely in terms of system interactivity and computation. The quantum effects on human brains simply do not occur on a high enough level to mean anything interesting, if they occur at all. Penrose has suggested that information processing in biological neural networks is somehow influenced by quantum effects occurring in the microtubules of neural cytoskeletons, but there is no evidence for this, nor would this be evidence for anything else if it turned out to be true. Most of the talk about quantum minds is premised on a hunch that computation just isn't enough, rather than on any thorough analysis of the evidence, but I digress.
Posted 04 October 2002 - 01:55 AM
Plus there's the problem of age. Most of the senators we have in power today were born in 1950 or earlier. They still have the mindset of sci-fi movies and shows that seemed to say that if any technological version of intelligence were ever built, it would have human feelings, human wants, and human needs; that is, if an AI were ever government-funded and built, it would automatically try to seize power, and that is why there is absolutely no reason to build such a thing, because it is way too dangerous to humanity.
Do you really think that Tom Daschle would ever try to draft a bill calling for this kind of program to be implemented anytime in the near future?
Posted 04 October 2002 - 02:34 AM
Posted 04 October 2002 - 02:36 AM
Yup. What I agree with here, and I think Michael will too, as will many others on this list, is that we need a separate thread on the subject.
Posted 04 October 2002 - 02:47 AM
Lazarus Long wrote:
I also agree that it is a little silly to be taking or giving bets at this time about the fabled "Singularity".
To a certain extent. Much of what "the AI" will behave like depends on what causes ver cognitive system generates. I would also like to point out that we're not talking about an AI here, but a superintelligence. Totally different things. That said, I also think that to a certain extent we will reap what we sow, meaning that much of what the AI ultimately comes into existence as, and acts like, will be predicated on how we define this period of development.
But I, as I keep trying to emphasize to Michael, think that it is more than a little premature to rule out a quantum-level advance in human cognition from the game.
There we go, I have a better idea than Friendly AI. How about Optimistic AI?
Posted 04 October 2002 - 02:54 AM
OmniDo wrote:
I do not personally think that the "Singularity" will occur anytime soon. When I say "anytime soon," I mean not within the next century.
My reasons for this are mostly due to the obvious obstructions to human advancement, i.e., social politics, religious dogma, and capitalistic self-interest.
Before we can expect to see a singularity, those issues will have to be addressed.
Posted 04 October 2002 - 03:23 AM
Before we can expect to see a singularity, those issues will have to be addressed.
Why is an engineering problem dependent on social politics, religious dogma, or capitalistic self-interest?
And from the private sector: I have built houses for people who spent tens of millions of dollars on their toys, so a $31 million price tag is not beyond the scope of the more serious players. Those who can pay are going to begin to force developmental trends. The "Game" is competition. Now it becomes ever more a race between competing interests to see who gets to develop Seed AI first, and with what Ethical Architecture.
Posted 04 October 2002 - 12:58 PM
Also - my definition of the Singularity is the "creation of greater-than-human intelligence" - what's yours?
As already stated, such an intelligence can never objectively be realized.

And how many resources do you predict it will take to create greater-than-human intelligence?

However, the resources are not in question, and neither is the possibility of the advances that a "subjective singularity" could bring. It is a matter of what will be addressed below that will determine its success or failure.
Why is an engineering problem dependent on social politics, religious dogma, or capitalistic self-interest?
Posted 04 October 2002 - 02:54 PM
Posted 06 October 2002 - 07:16 PM
Posted 07 October 2002 - 07:34 PM
Also - my definition of the Singularity is the "creation of greater-than-human intelligence" - what's yours?
Then you are speaking of that which is a logical impossibility.
As I posted before, "Greater-than-human-intelligence" must first be quantified.
If you are referring to "greater-than-human speed of thought", then that's another subject.
As I previously outlined, anything that is a construct of humans will exist as a reflection, at least in part if not in whole, of human representative qualities.
To create something that is more intelligent than the humans who would use their own intelligence to create it is an impossibility. You might as well create a generator that requires 10 watts of power to run but yields 15 watts of power output. Such a generator could then power itself, and this is obviously an impossibility under current models of physics.
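Spelled out, the generator analogy is just an efficiency claim (a restatement of the quote above, nothing added):

$$\eta = \frac{P_{\text{out}}}{P_{\text{in}}} = \frac{15\,\mathrm{W}}{10\,\mathrm{W}} = 1.5 > 1$$

and an efficiency above 1 is what conservation of energy forbids. Note, though, that the analogy assumes intelligence obeys a conservation law the way power does; that premise is exactly what the replies below dispute.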
Posted 07 October 2002 - 09:29 PM
Posted 20 December 2002 - 03:43 PM
On the contrary, a computer cannot do that which we cannot; it can merely do what we can at a greater degree of speed. Speed is the issue here.

Mangala posted:
Humans have the ability to overcome their own intellectual and physical abilities simply because we create things that do the work for us that we cannot do.
Mind posted:
We have already built a machine that can beat the world's best chess player. No human mind can calculate fast enough to beat Deep Blue. We have created a machine that is better than any human at playing chess.
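Mind's chess example is, concretely, a claim about search speed: Deep Blue examined on the order of 200 million positions per second. The core technique is minimax search over the game tree; below is a minimal sketch on tic-tac-toe rather than chess (the board encoding and function names are mine, for illustration only), omitting everything that made Deep Blue fast in practice (alpha-beta pruning, heuristics, custom hardware).

```python
# Minimal minimax: exhaustive game-tree search, here on tic-tac-toe.
# Deep Blue layered alpha-beta pruning, evaluation heuristics, and
# custom hardware on top of this idea; the core is the same brute
# search, just vastly faster than any human.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    # Score for X with both sides playing perfectly:
    # +1 X wins, 0 draw, -1 O wins.
    w = winner(board)
    if w is not None:
        return 1 if w == "X" else -1
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0  # board full: draw
    scores = []
    for m in moves:
        board[m] = player
        scores.append(minimax(board, "O" if player == "X" else "X"))
        board[m] = " "
    return max(scores) if player == "X" else min(scores)

print(minimax(list(" " * 9), "X"))  # 0: perfect play is a draw
```

From the empty board it prints 0: the whole game collapses under brute-force search, which is the sense in which the machine wins by calculating faster, not by thinking differently.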
Posted 20 December 2002 - 04:45 PM
Posted 21 December 2002 - 01:47 AM
OmniDo wrote:
In my previous post, my intent was to outline that the creation of something greater than the sum of its parts is a logical contradiction, and warrants the label "nearly impossible." Perhaps "objective impossibility" is too strong a term, but the point is nonetheless logically valid. Let me attempt to explain...
On the contrary, a computer cannot do that which we cannot, it can merely do what we can at a greater degree of speed. Speed is the issue here.
While indeed humans fashioned a computer that defeated the world's greatest ... didn't "beat" him at all. The computer merely "out-thought" him in terms of speed.
There is no evidence that any machine could ever acquire "superior-than-human intelligence" in a sense that human beings could not eventually endow themselves with to the same degree. Granted, it would take far longer, but if we are talking about efficiency and "superiority", then the human has the machine beat, hands down.
Stop for a moment and think about how much energy it takes to sustain a human, indeed to cultivate and educate a human, versus a machine. Machines at present require orders of magnitude more power to operate than a human mind, but they also yield orders of magnitude greater numbers of calculations.
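Rough 2002-era numbers make this concrete (all of these are assumed round figures, not from the thread): a brain delivers perhaps 10^14 ops/s on about 20 W, while a contemporary supercomputer drew on the order of a megawatt for about 10^13 FLOPS.

```python
# Ops-per-joule comparison with assumed round numbers (not from the
# thread): brain ~1e14 ops/sec on ~20 W; a 2002-era supercomputer
# ~1e13 FLOPS on ~1 MW.

brain_ops_per_joule = 1e14 / 20      # 5e12 ops per joule
machine_ops_per_joule = 1e13 / 1e6   # 1e7 ops per joule

print(f"brain:   {brain_ops_per_joule:.1e} ops/J")
print(f"machine: {machine_ops_per_joule:.1e} ops/J")
print(f"ratio:   {brain_ops_per_joule / machine_ops_per_joule:,.0f}x")
```

On that accounting the brain is roughly five orders of magnitude more energy-efficient per operation, which puts the point in numbers.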
Once humans begin to augment themselves with artificial systems, organic bio-chips or whatever sci-fi equivalent, we too will become faster than our former selves, with increased abilities and capacities. Still, all we will have done is make ourselves faster, not "Superior".
Perhaps "superior" needs to be better defined. If superior is to equate to speed, then yes, it is possible to build superior intelligence. But from the attitudes of many posts, the perceived intent has been that "superior" means "beyond human cognition" or "beyond human capacity for understanding", which is, in my opinion, totally bunk.
Posted 21 December 2002 - 02:09 AM
Mangala wrote:
In our society engineers are paid to do their work; they don't just do it because they enjoy the work that they do (except for the retired ones). Every dime that goes into the building of a seed AI will have to come from somewhere, and that somewhere is either the government or the private sector. If no senators, CEOs, or cults want to build the AI, no one will. We alone cannot do the job, and that is why we've hardly done anything other than sit here and map out what others could do.
This reminds me of a great movie I saw the other day called "Contact," starring Jodie Foster. It is very relevant in terms of governments and companies around the world participating in the building of something that could be the best thing humanity has ever built, or the worst. Actually, now that I think about it, it has a whole lot to do with this subject; please rent Contact.
Plus I highly doubt you could even come close to buying the hardware needed to build a seed AI with $50 million. Not to mention the software is totally out of our league in this day and age. But believe me, I'd like to see the SI in 28 years or less as well, it just seems highly unlikely that anybody will take the idea of seriously building an AI and turn it into reality.
I have enough trouble explaining this thing to people without them waving it off as nothing but conjecture coming out of the mouth of someone "who's seen too many sci-fi movies."