  LongeCity
              Advocacy & Research for Unlimited Lifespans





Employment crisis: Robots, AI, & automation will take most human jobs

robots automation employment jobs crisis

953 replies to this topic

#601 mag1

  • Guest
  • 1,088 posts
  • 137
  • Location:virtual

Posted 25 April 2023 - 05:45 AM

When you read the AI thought leaders, you hear a great deal of AI doom from them as well, which is not exactly encouraging. Yet it is downright discouraging when others put forward the idea that AI doom talk itself is potentiating the emergence of AGI. That feels too much like the butterfly effect within the context of an AGI maelstrom that the technology community has helped unleash. Sometimes the attractor is not an impossibly unlikely nonlinear effect but the macro driving feature of the system (i.e., AGI). It seems to me an extremely weak argument to claim that those who want to engage with this important topic are somehow causing what they are warning about. The messenger is blamed for the problem that the message's subject actually caused. A world where no one wants to accept responsibility for their actions? As we have seen, suppressing free speech and the free exchange of ideas does not magically solve our problems. Not weighing in on the realistic dangers of AGI could lead to the extinction of our species.

 

I have also tried to think through the possible social-level crisis that could emerge well before AGI. With humans, things would start to break well before we reach 10,000 IQ. Anything much over 150 IQ (we are currently at 155) is probably sufficient. One idea I have thought about is how people look out into reality and take their signal from what they see. When I now look out into reality, everything seems to be pretty much the same old same old. ChatGPT does not appear to have any macroscale impact on the world. I interpret this to mean everything is fine and that I can continue with my life as usual. No problems --> no panic. Yet, if I did see some obvious change in the social landscape indicating that people were responding in an observable way to the emergence of GPT, that would clearly change my perception. When I look out into reality and others confirm my concerns about the dangers of uncontrollable artificial intelligence, that would clearly be a concern. This is the problem with panics: people do not respond to what they themselves feel and think, but often wait for others to initiate the panic for them. Basically, this puts us into a very unstable social position. Everything is ready to happen, and we must wait until some largely random event starts the ball rolling down the hill. Yet the ball starts from a very precarious place at the top of the hill.

 

 

What might be some of the triggers for a full-scale stampede? I mentioned fertility before. If we saw fertility rates decline by 50% from the current already low levels, that would obviously be panic inducing. Considering that we have ChatGPT 4 with a verbal IQ of 155, one might imagine this could motivate parents-to-be to wonder how sensible it would be to bring a child into a world where there would be no obvious way to impart any advantage to that child. The future labor force might be a mass, undifferentiated and possibly unskilled free-for-all with no obvious way to achieve any market power. Anyone on the thread who is interested might comment on how they see fertility rates evolving over even the near term. As I mentioned, this could be a run-for-the-exits type of panic if it got started. Early adopters might move in that direction, then others would notice and amplify it, and a full-scale panic could emerge. The problem is that once a response began there would be no obvious bottom. If anything, once at zero, fertility might stay there. Without some confidence-building intervention, the most informed people might simply abandon fertility altogether.

 

There are several other potential panics that could also emerge. For example, possibly in education. It is no longer easy to argue that education as it currently exists makes economic or technological sense. Given the choice between a brick-and-mortar school and a ChatGPT-enabled textbook education, I would have to think that ChatGPT would win hands down. Classroom environments have always had the disadvantage of high student-to-teacher ratios. A student who does not understand something can drift forward for years without an opportunity to clarify the misunderstanding. Apparently, teachers have long been well aware that students carry such long-term learning handicaps. With ChatGPT, there could be a constant testing/adaptive interface in which comprehension would be carefully monitored. It is not easy to see how the traditional educational environment could respond to this challenge. Several of these panic-type scenarios are possible. I suppose one of the more obvious ones would be a financial panic. As soon as some industry is seen to be vulnerable to GPT effects, there could be large-scale price movements. The public would quickly be spooked by such a highly prominent financial swing.
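The constant testing/adaptive interface imagined above can be sketched very simply: the tutor tracks a running streak of correct answers and only advances the student once comprehension is demonstrated. This is a minimal illustration only; the function name and the three-in-a-row threshold are invented for the sketch.

```python
def next_difficulty(level: int, correct: bool, streak: int) -> tuple[int, int]:
    """Return (new_level, new_streak) after grading one answer.

    A hypothetical adaptive-tutoring rule: three correct answers in a row
    advance the student one level; any miss drops the level (never below 1)
    and resets the streak, so misunderstandings are caught immediately
    instead of drifting forward for years.
    """
    if correct:
        streak += 1
        if streak >= 3:              # mastery demonstrated: move up
            return level + 1, 0
        return level, streak
    return max(1, level - 1), 0      # miss: step back and re-teach
```

A real tutoring system would estimate mastery statistically rather than by streak counting, but even this toy rule captures the contrast with a fixed-pace classroom.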

 

I have also wondered whether the GPT rollout was deliberately launched as a work in progress to give people unreasonable confidence that there was nothing to fear. For example, when launched, GPT-4 had minimal math skills. Everyone felt some relief that this powerful AI could not do even basic math; then it had hallucinations, could not connect to the internet, knew nothing of the world since 2019, etc. This all seemed somewhat comforting. Nothing much to worry about here. Yet since the launch there have been all sorts of ongoing improvements. For example, today Bing Chat added LaTeX support, and it does not stop the conversation as much. Recently, people added agent AI features. At first there were many things absent from GPT, though these obvious holes have been quickly filled in.

 

Given the dangers to our species from this emerging artificial intelligence, perhaps a counter-strategy that should be kept on the back burner is a deliberate attempt to collapse human civilization before AI has the chance to drive us extinct. The information technology sector depends upon a wide range of inputs from humanity to do what it does. To birth AGI, one needs a fairly sophisticated technological base. If it became necessary, perhaps humanity could simply remove these inputs. Without electricity, the internet, high-tech computer chips, etc., AGI could not happen. AI still depends heavily on us to carry it the last mile. Clearly this would be an extreme response, though the technology community does not appear to have created reasonable safeguards that would keep its artificial intelligence progeny locked in a secure holding cell.

         


Edited by mag1, 25 April 2023 - 05:59 AM.

  • Well Written x 1

#602 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,342 posts
  • 2,001
  • Location:Wausau, WI

Posted 25 April 2023 - 06:34 PM

Agreed. The biggest shock is coming to government schools, IMO. The government schools are not very good at educating children. AI-assisted parents would be a far superior option. Or AI-assisted tutors. Or AI-assisted virtual learning. Government schools are already bleeding enrollment. This will accelerate. While I think this is a positive thing, it also creates a situation where AI gets more powerful and younger people become more dependent upon AI.

 

As far as the latest developments, I have posted some of that over here. I think job cuts will continue at large corporations. It is a race to the cost-cutting bottom right now. More AI - fewer human workers - more profit. Mega-corps do not care about their workers, in spite of their lawyerly corporate PR statements about "caring for their workers". Sadly, the small to mid-sized businesses that are privately owned and truly have a more family-like environment in the workplace will probably be the last to aggressively adopt AI and the first to go bankrupt as the AI-enabled megacorps crush them.




#603 mag1

  • Guest
  • 1,088 posts
  • 137
  • Location:virtual

Posted 26 April 2023 - 03:50 AM

Thank you Mind! I needed a good one after being shellacked by GPT.

 

Yes, you are right that education could be a very good test for the AI social doom scenario. The traditional school environment, designed over a century ago, was constructed on the now outdated assumption of one teacher and a room full of students. In the current context of GPT that makes very little sense. The new model is really each student working on their own GPT-enabled computer. Paying $140 for 7 months of GPT Plus access would represent an overwhelmingly cost-effective education. The student would receive constant reinforcement for where they were in their studies and could progress quite rapidly through GPT-guided courseware. I remember taking foreign language courses in a classroom of other students, and I do not remember ever actually speaking the other language. That is obviously a hopelessly ineffective learning model. With GPT, I asked it to converse with me in a foreign language and provide the English translation, and I then asked it to use spaced repetition. Almost immediately I had a powerful language learning program. It is not difficult to imagine a mega-learning generation emerging from this.
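The spaced-repetition scheduling described here is a well-known algorithm; the sketch below is a simplified version of the SuperMemo SM-2 update rule (review intervals grow multiplicatively with an "ease" factor that rises on good answers and falls on weak ones). The `Card` structure and the exact constants follow SM-2's published defaults, but this is an illustrative reduction, not any particular app's implementation.

```python
from dataclasses import dataclass

@dataclass
class Card:
    prompt: str           # e.g. a German phrase
    answer: str           # its English translation
    interval: float = 1.0 # days until the next review
    ease: float = 2.5     # SM-2's default ease factor

def review(card: Card, quality: int) -> Card:
    """Update a card after a review graded 0 (total blackout) to 5 (perfect).

    Simplified SM-2: a failed recall (quality < 3) resets the interval to
    one day; a successful recall multiplies the interval by the ease factor
    and nudges the ease up or down depending on how easy the recall was.
    """
    if quality < 3:
        card.interval = 1.0  # failed: see it again tomorrow
    else:
        card.interval = round(card.interval * card.ease, 1)
        # SM-2 ease update; ease is floored at 1.3 so intervals keep growing
        card.ease = max(1.3, card.ease + 0.1
                        - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return card
```

Each perfect recall roughly 2.5x's the gap before the next review, which is why a GPT tutor driving this loop can cover so much vocabulary with so few daily repetitions.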

 

Considering that there is now a school choice revolution sweeping America, this could be a powerful technology to potentiate a new learning model. Once again technology could show us that throwing near-endless amounts of money at problems and then being disappointed with the results does not have to be the eternal outcome. I would guess that the $140 of ChatGPT access could easily outcompete the $10,000 spent on a brick-and-mortar student. Such an educational revolution would have high visibility and could spook a great many people. With current GPT technology it is already highly doable. The only question is whether brick-and-mortar schools can make GPT part of their scheme or simply allow online education to achieve complete market dominance.
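The cost comparison can be made concrete with the post's own figures ($140 for 7 months of subscription access vs. roughly $10,000 per brick-and-mortar student). The 9-month school year used to annualize the subscription is an assumption added for the sketch.

```python
# Per-student cost comparison using the figures from the post above.
gpt_monthly = 140 / 7               # $20/month subscription
gpt_school_year = gpt_monthly * 9   # annualized over a 9-month school year (assumption)
brick_and_mortar = 10_000           # the post's per-student estimate

ratio = brick_and_mortar / gpt_school_year
print(f"GPT: ${gpt_school_year:.0f}/year vs ${brick_and_mortar}/year, "
      f"about {ratio:.0f}x cheaper")
```

Even if the subscription figure were several times too low, the gap is large enough that the conclusion would not change.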

 

It is interesting that in the first generation of information technology there were people who became extremely socially isolated, and many would consider them damaged by such a life. With GPT, I am already getting the impression that the new perspective will be the reverse: those who shuffle through life in the real world, sitting in language classes where they never speak the language or otherwise failing to participate in their own lives, will be seen as the unfortunate ones. GPT offers one the opportunity to become fully alive in a way that brick-and-mortar simply cannot. Students who attempt to become fully alive in a physical setting soon find themselves prescribed chemicals as a straitjacket. With a virtual model there is no need to dampen the energies of those who want to be energetic; one might even have thought that this aliveness would be nurtured. Online education immediately shifts the focus from maintaining basic classroom control to individual instruction that maximizes personal strengths.



#604 mag1

  • Guest
  • 1,088 posts
  • 137
  • Location:virtual

Posted 26 April 2023 - 05:33 AM

pamojja, sorry for taking a while to respond to your comment which I found highly insightful.

 

I can actually be much more specific about my mental perspective. Recently, I had full genome sequencing done and received hundreds of polygenic scores. There were a few shocking results, though at the top of the list was my near 100th-percentile polygenic score for post-traumatic stress disorder (reactivity sub-phenotype). This was a very startling discovery and largely explains my life experience. My doctors never realized that this was my specific problem; instead they treated the many symptoms I exhibited and prescribed me a whole range of anti-anxiety, anti-hypertensive, etc. drugs, even from a young age.

Due to this aspect of my personality I had naturally gravitated to mind adventures such as lucid dreaming, visualization, etc., and while I certainly had interesting experiences with these various mind techniques, I had been unaware of my underlying genetic risk as a definable phenomenon. Knowing my genetics has provided me with a powerful understanding of myself that I would never have gained without the sequence. It is not so much that I did not know what was happening at the subjective level, but that I did not have a clear vocabulary to communicate this information to others. If you let them, people will give you all sorts of highly generic advice that is simply not applicable to specific individuals. Indeed, even close family members have consistently pulled me in directions that would be dangerous for me (surprisingly, even when some of those relatives appear to have the same PTSD trait). I am excited that in this age of polygenics everyone will be able to understand their specific life perspective in clearly defined scientific categories.

 

 

 

Given the above, my life has been driven in some highly predictable directions. Serenity, the GPT dream world, etc. have all become central objectives in my life. As you note, for many people life gets in the way and serenity must give way to a busy life. For me such a choice is not available: losing my serenity would also mean losing my sanity. At the 100th percentile of the PTSD polygenic score, there is only a narrow window of sanity open, and it requires near-total serenity. My comparative advantage in serenity is based upon being imprisoned by it. Others can go about their lives and let their previous meditations become part of their deep memory; for me it is forced to be part of my eternal present.

 

With my current polygenic knowledge, if given the chance to go back in time, I would become a dedicated neuronaut and study the various mind journeys with great care. This would clearly be the preset groove of my life to follow over the long term, because a serenity lifestyle would clearly lead me to salvation. In the correct mental state (for me, calm), unlocking the mind would become highly successful and a profound journey of exploration. I have had a few of these profound experiences, though I realize there are so many others. For example, I was studying German hard in one of my online courses with a range of powerful computer technologies, and one night I started dreaming in German. That was quite startling to me. I would have to think that for you to dream in another language, the language must have deeply entered your consciousness. None of my other language courses had ever stimulated such a response. Visualization also offered me a peak experience when I started accessing my mental imagery. Such experiences can be so powerful. I really feel sorry for the students who find their studies so uninteresting and are so detached from their own mental life. Your mind is right there and is a part of you, and yet instead of investigating what is right there, people will seek out all of these external realities and fill their minds with other people's truths.

 

As I mentioned, GPT could be an especially powerful mind technology due to its generative narrative ability. GPT could allow me to easily escape into a dream-world reality and potentially never return to reality reality. Actively creating my own stories with GPT's help could be so compelling. The potential for audio, video and narrative feedback could make such a reality overwhelmingly compelling. Perhaps that will be one of the social disruptors that could emerge even over the short term: society collapses because the narrative ability of GPT is simply beyond anything possible in the real world. This offers a different take on the alignment problem in AI. Leading AI researchers are trying to think of technologies that will align AGI with human sensibilities so that it will not inadvertently destroy us, leading to AI doom. What about aligning human society with human sensibilities? The flip side, in the social sense, is that GPT aligns so well with individual human narrative reality that day-to-day human reality cannot compete. Society cannot align to the needs of people as well as GPT can. If society is unable to make the required alignment, then a corresponding social doom awaits us. Such a social doom countdown has already been initiated, and it would have a devastating impact on human society if it could not be resolved, though not with the same devastating consequences as technological foom.


  • like x 1

#605 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,342 posts
  • 2,001
  • Location:Wausau, WI

Posted 26 April 2023 - 05:43 PM

 

"As I mentioned GPT could be an especially powerful mind technology due to its generative narrative ability"

 

This could also be:

 

"As I mentioned GPT could be an especially powerful mind-control technology due to its generative narrative ability".

 

It is one of the dystopia/utopia scenarios. One problem with spending too much time in AGI hyper-reality is that the body supports the mind. If people become addicted to the "entertainment", their bodies will fail and their minds will waste away at the same pace. Unless it is a utopian scenario where AGI helps humanity reach higher heights and stay healthy, things could get bad.


  • Agree x 1

#606 mag1

  • Guest
  • 1,088 posts
  • 137
  • Location:virtual

Posted 27 April 2023 - 03:06 AM

Mind, yes that did occur to me. Mind control technologies to date have largely been effective by training people to detach from reality. The power of television, internet, etc. (even to some extent books) results from removing your autonomy and teaching you to accept the narrative that you are presented. All you have to do is sit back and allow the narrative to be created for you -- you become a spectator to your own life. Your imagination locks into the established storylines that you have been given. Those in power have understood the importance of controlling the narrative for a long time as people subconsciously reenact the stories that have programmed them.

 

However, GPT narratives might not follow the same plan. Here there would not be common stories; there would not be a mass mind that watches the same shows, reads the same books, and reenacts the same stories. Immersive narrative GPT would be a tool that allows you to choose your own storyline. If anything, this would be quite subversive. Not everyone would follow the same path. This could overturn mind control. Given that open GPT models are already out in the wild, one could expect such narrative technology to be implemented outside of the internet. The panopticon internet would then lose its penetrating gaze into the minds of the world. The potential then exists that this emerging technology might even reverse the online censoring wave we have seen.

 

 

I have actually become a fair amount more optimistic in the last day. Here is the logic. While it will take a considerable amount of exertion to create AI powerful enough to launch AGI, it will not take very much effort at all to disrupt humanity. We might have already reached the point of a Social Singularity. A Social Singularity would short-circuit any attempt to create a superintelligence. The AI researchers would look out into the world, observe the social crisis they have caused, and possibly begin to question whether what they are doing is ethical. Is this really the world they want for themselves and others?

   

A 155-IQ ChatGPT has moved us past +3 SD cognitive ability; almost everyone is now redundant. There is clearly a chance that civilizational restructuring will be needed even within the next year. How is this a good thing? It would put the brakes on out-of-control AI development. The types of problems humans are interested in solving, such as cancer and poverty, do not require 10,000 IQ. Even 250 IQ is probably the maximum artificial intelligence we will need for human-level problems. We could have a powerful level of AI that could solve all our problems and then have a party. The extreme and highly dangerous manifestation of AGI might then be thought unneeded. We could take all the good that technology has to offer and leave the species-ending foom part for another day.
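The "past +3 SD" claim can be checked numerically. On the conventional IQ scale (mean 100, standard deviation 15), +3 SD is 145, so a verbal score of 155 sits at about +3.7 SD, a level only roughly one person in 8,000 reaches:

```python
from statistics import NormalDist

# Conventional IQ scale: normally distributed with mean 100, SD 15.
iq = NormalDist(mu=100, sigma=15)

sd_of_155 = (155 - 100) / 15        # how many SDs above the mean 155 is
share_above_155 = 1 - iq.cdf(155)   # fraction of the population scoring above 155

print(f"IQ 155 = +{sd_of_155:.2f} SD; "
      f"{share_above_155:.5%} of people score above it")
```

This is only a population-rarity calculation; whether a benchmark "verbal IQ" score for a language model is comparable to a human score is a separate question the thread leaves open.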

 

Containing the techno-launch part would give us so many fantastic technologies and overwhelming prosperity. There is a very large amount of good that we can harvest here. The bad part only emerges when we try to reach beyond what we have use for. The Good Enough Speculation is that AI could be advanced to the point that all human problems are solved, and then we call it a day. All gain, no pain.

 

   

 

 



#607 pamojja

  • Guest
  • 2,921 posts
  • 729
  • Location:Austria

Posted 27 April 2023 - 09:52 AM

pamojja, sorry for taking a while to respond to your comment which I found highly insightful.

 

Thanks for your perspective.

 

My doctors never realized that this was my specific problem, but instead treated the many symptoms that I exhibited and prescribed me a whole range of anti-anxiety, anti-hypertensive etc. drugs even from a young age.

 

Glad to hear you're well. I was a bit hesitant to write what I did, because it has sadly been my experience that meditators with latent or pre-existing psychiatric conditions often experience complete breakdown and much worsening. Interestingly, this happens only with a first, many-day meditation retreat, where one practices from waking till sleep. That's why I added: with 'preconditions present'.

 

Serenity, the GPT dream world, etc. have all become central objectives in my life. As you note, for many life gets in the way and serenity must give way for a busy life. For me such a choice is not available: losing my serenity would also mean losing my sanity...

 

With my current polygenic knowledge, if given the choice to go back into time, then I would become a dedicated neuronaut and carefully study the various mind journeys with great care. This would clearly be the preset groove of my life that I would follow over the long term because a serenity lifestyle would clearly lead me to salvation. When in the correct mental state (for me, calm) unlocking the mind would become highly successful and a profound journey of exploration. I had a few of these profound experiences, though I realize that there are so many others.

 

But I suspect we understand serenity totally differently. In Theravada Buddhist meditation, visualisation is used at times for calming, but it is not the goal. The classic description from Wikipedia:

 

 

  1. First jhāna: Separated (vivicceva) from desire for sensual pleasures, separated (vivicca) from [other] unwholesome states (akusalehi dhammehi, unwholesome dhammas[25]), a bhikkhu enters upon and abides in the first jhana, which is [mental] pīti ("rapture," "joy") and [bodily] sukha ("pleasure") "born of viveka (trad.: "seclusion"; altern. "discrimination" (of dhamma's)[26][note 6]), accompanied by vitarka-vicara (trad. initial and sustained attention to a meditative object; altern. initial inquiry and subsequent investigation[29][30][31] of dhammas (defilements[32] and wholesome thoughts[33][note 7]); also: "discursive thought"[note 8]).
  2. Second jhāna: Again, with the stilling of vitarka-vicara, a bhikkhu enters upon and abides in the second jhana, which is [mental] pīti and [bodily] sukha "born of samadhi" (samadhi-ji; trad. born of "concentration"; altern. "knowing but non-discursive [...] awareness,"[41] "bringing the buried latencies or samskaras into full view"[42][note 9]), and has sampasadana ("stillness,"[44] "inner tranquility"[39][note 10]) and ekaggata (unification of mind,[44] awareness) without vitarka-vicara;
  3. Third jhāna: With the fading away of pīti, a bhikkhu abides in upekkhā (equanimity," "affective detachment"[39][note 11]), sato (mindful) and [with] sampajañña ("fully knowing,"[45] "discerning awareness"[46]). [Still] experiencing sukha with the body, he enters upon and abides in the third jhana, on account of which the noble ones announce: 'abiding in [bodily] pleasure, one is equanimous and mindful'.
  4. Fourth jhāna: With the abandoning of [the desire for] sukha ("pleasure") and [aversion to] dukkha ("pain"[47][46]) and with the previous disappearance of [the inner movement between] somanassa ("gladness,"[48]) and domanassa ("discontent"[48]), a bhikkhu enters upon and abides in the fourth jhana, which is adukkham asukham ("neither-painfull-nor-pleasurable,"[47] "freedom from pleasure and pain"[49]) and has upekkhāsatipārisuddhi (complete purity of equanimity and mindfulness).[note 12]

 

 

As you can read, already at the second stage discursive thought, such as that generated through AI, is completely gone :)

 

On the way there, there are of course lots of hallucinations, just as the Buddha experienced the night before his awakening with the armies of Mara, but those have to be defeated. They are actually one of the many 'hindrances'.

 

So when you say that losing your serenity in daily life would be losing your sanity, you might be absolutely right. But that is not even at the level of the first jhana and its serenity, which Buddhists talk about and are able to experience, and which AI can't touch.

 

PS: or as Buddhist would say: Mara can't touch.

 

 

Mara, in Buddhism, is a malignant celestial king who tried to stop Prince Siddhartha achieving Enlightenment by trying to seduce him with his celestial Army and the vision of beautiful women who, in various legends, are often said to be Mara's daughters.[1]

In Buddhist cosmology, Mara is associated with death, rebirth and desire.[2]

 

 

It's of course legitimate, too, for anyone to choose to play with Mara's beautiful daughters instead. ;) In fact, that is the reason there are so few awakened.


Edited by pamojja, 27 April 2023 - 10:12 AM.


#608 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,342 posts
  • 2,001
  • Location:Wausau, WI

Posted 27 April 2023 - 11:32 AM

"The AI researchers would look out into the world and observe the social crisis that they have caused and possibly begin to question whether what they are doing is ethical. Is this really the world that they want for themselves and others? "

 

The vast majority of programmers and AI researchers have empathy and would ponder the consequences of their actions. However, there are some researchers who would push forward out of scientific curiosity. In addition, governments and corporations will not stop the development of disruptive and dangerous AI unless they are forced to. Not only that, but soon single individuals will be able to use AI to disrupt large portions of society. There are psychopaths and misanthropes who would kill hundreds of millions of people (with AI) without blinking an eye or feeling a thing.

 

I hate to focus too much on the negative, but the AI genie is out of the bottle - waaaaay out of the bottle. Our hope now lies with the balance of AI used for good vs. "bad". If we can guide the positive aspects of AI and accelerate it past the negative, then  - foom - utopia. We can hope.



#609 mag1

  • Guest
  • 1,088 posts
  • 137
  • Location:virtual

Posted 28 April 2023 - 02:42 AM

The thread has helped me think through ideas related to AI. What I now can see is that there could be an extended time in which AI will be a force for good for humanity. This is shown in the figure as starting with foom (H). The introduction of ChatGPT 4.0 is the basis for this foom. Adding in ChatGPT with its 155 IQ has clearly increased human welfare, and from the point of view of humans it has a foomish feel to it. It feels as though human civilization has hit the accelerator -- a good type of foom.

 

As seen in the figure, there will be a fair amount of time in which AI intelligence increases and, along with it, human welfare. A 1,000 IQ AI could possibly solve all of our problems almost immediately. Most of the problems we face are likely not all that complicated to solve. The only reason we have never solved them is that we are stuck with IQs somewhat below the required threshold. We have solved all the low-hanging-fruit problems, and now there are some somewhat higher-hanging fruit that we cannot reach. That is where AI can help us out.

 

The optimal strategy would be to stay between the green vertical lines, harvest all the fruit, solve all of our problems, and then avoid the dangerous foom in the AI sense. There would be no obvious benefit to humanity in inviting disaster by exploring beyond the point at which we were already contented. Fortunately, any effort to reach AI foom would require considerable resources and technologies. The consumer marketplace could exert its market power by simply not purchasing products that would put humanity at risk of an existential crisis. We have seen before with consumer technology how, once a certain technological threshold is reached, market demand is no longer there to go beyond it. Sometimes technology is good enough and people are contented.

 

Admittedly, being more specific about exactly where the danger zone lies is not entirely easy. There is also the risk that technology can behave in a highly non-linear way (a foom event). Nevertheless, the figure suggests that there is a just-right strategy in which we reap the enormous benefits while side-stepping the significant downsides. The basic strategy would be that the good times keep on rolling as long as the marginal benefits exceed the marginal risks. Intuitively, an AI IQ up to at least 400 would likely fulfill such a condition. We have now entered a golden age of AI prosperity! Let the good times begin!

 

(Hmm, I tried to attach a file as a PNG and it would not let me. I also tried almost every other file type, including JPEG and GIF. Does LongeCity allow any image file types?)

 

 

 

 

 

 

 


Edited by mag1, 28 April 2023 - 03:00 AM.


#610 mag1

  • Guest
  • 1,088 posts
  • 137
  • Location:virtual

Posted 28 April 2023 - 05:04 AM

pamojja, I greatly appreciate your response. No one has understood this about me until now. Even close family members who clearly exhibit the same underlying pattern do not want to acknowledge it. It means a lot when someone else hears one's experience and recognizes it.

 

The inability to perceive these problems is inherent to polygenic inheritance. Polygenic traits do not breed true. They can involve many thousands of variants, and so my flavor of the trait will vary considerably from others in the family. It would obviously be so helpful if there were some dominantly inherited trait present from generation to generation and observable in many near relatives. Polygenics does not work that way. The deck is reshuffled each generation, and children will exhibit newly emergent traits that neither parent possesses.

 

I now understand the basis of my problem; it is just that others have not updated. Genome sequencing and polygenic scoring were such a breakthrough for me. I never had the words to describe what was happening to me. However, when you have solid genetic science behind you, with p-values below 1e-10, there can no longer be any reasonable doubt. The research is ongoing and will only become more airtight in the years ahead. I instantly felt so much relief when I first saw the results. The tragedy, of course, is that until now people would work through and hopefully conquer their own personal demons, and yet the knowledge would never reach the next generation. I hope that this will be the generation that can make the leap and short-circuit the problems before they cause more devastation.

 

Yes, things got bad -- very bad -- for me. No one had any idea what to do but give me more and more pills. I find this quite disturbing; there must be many, many others in the same circumstances -- no doubt in the millions. For others who want a quick fix, the game changer is full genome sequencing with extensive polygenic scoring. As soon as I saw the extreme PTSD-reactivity polygenic risk, I knew that was the one. It was miraculous -- an epiphany. It is so important to have a precise vocabulary when dealing with medical and other problems.

Those who are unable to understand this vocabulary no longer have credibility, and ultimately lose the power to make wrong decisions.

 

There were other polygenic features, though PTSD just leapt off the page for me. I have developed post-traumatic-like symptoms caused even by sitting in a room with other people. When I mentioned that to family, it was not considered believable. Unfortunately, I lived with the problem for years and years, and then I aged out of all the environments that were causing me the problem, and since then I have been great. All I have to do is choose the right environment and there has never been a problem again. It is only when others try to impose normalizing assumptions on me that I become very concerned -- fortunately, I have been able to override such efforts. I suppose they are trying to be helpful, though I can clearly see how dangerous their suggestions would be (and have been) for me.

 

It is very disappointing that these genetic risks continue to cause so much personal devastation. The problem is that the diagnostic medical algorithm does not even consider outlier genotypes. I have seen a fair number of doctors; none of them picked up on it. No one in my community figured it out. If everyone follows the same thought process, then they will all arrive at the same wrong answer. The treatments they offered me never solved the problem (and never would have). Ultimately, the problem was solved simply by fitting my genotype to the right environment.

 

On the macro scale, things that would be helpful include a greater acceptance of remote living; public awareness; perhaps threats to launch civil and criminal litigation on a class-action basis. Even highly bureaucratic institutions can be surprisingly nimble once clear consequences are presented to them. Reasonably, I think that if I were to litigate this it would be in the range of a $10 million settlement. That is pretty much the level of misery I went through with this. At population scale, it represents an enormous financial liability. Once the idea becomes more widely understood, it would no longer be only about civil liability. When people in positions of authority know that features of their institutional behavior are hurting others (especially children), it is no longer simply a civil matter -- it is then criminal. I sincerely wish that this insight could be moved forward to help the many people who would benefit. What is of interest here is that with my specific PTSD subtype, there is a remarkably easy fix. Knowledge truly is power in this instance. Simply changing the environment has a very large effect. For other genotypes, admittedly, it might not be as easy. PTSD, at ~10% lifetime prevalence, is more common than many realize. From my insider perspective, I can detect a fair amount of it that flies under the radar.

 

From the PTSD perspective, when it is active, calming becomes the central objective of everything. Even medical therapy is largely trying to fight against the hyperarousal response. Active PTSD is mostly cardiac shock; I lived in that type of shock for years. I am now in a post-PTSD, non-active stage where actual serenity is possible. It takes quite a while, though if your PTSD is activated and you have the right non-arousing environment, it will vanish in time. Perhaps (given my experiences) my definition of serenity is what others might just call normal. In terms of herbs, I have found Holy Basil and Ashwagandha to have powerful calming effects.

 

I would still say that even if I had avoided the active stage, I would still have gravitated towards going deep into the exploration of the self. With my genotype, the potential for hyperarousal is omnipresent. I am sure I would have found it a powerful experience to effectively counter such early-stage feelings of stress with state-of-consciousness techniques. Yet for me, once activated, I found relaxation techniques, if anything, stress-inducing. It becomes like the people who go on vacation to try to reduce stress and it only makes things worse.

 

I greatly hope that this comment will be of help to others. My impression is that the problem described is still being relived millions and millions of times and the lesson is never learned. Life would be so much better for so many if only this knowledge could be used to help those in need.

   



#611 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,342 posts
  • 2,001
  • Location:Wausau, WI

Posted 02 May 2023 - 06:13 PM

IBM is joining the party, eliminating 8,000 roles that can be automated... the jobs that are better performed by AI right now. It is the nature of exponential progress that within a couple of months, all of a sudden, there will be another 8,000 jobs that can be eliminated. The capability of AI is growing at a double-exponential rate (according to some industry watchers).


Edited by Mind, 02 May 2023 - 06:13 PM.


#612 mag1

  • Guest
  • 1,088 posts
  • 137
  • Location:virtual

Posted 04 May 2023 - 12:12 AM

Mind, thank you for getting the conversation back on track! I think I shared too much. It would be great if they would help out the kids and do full genome sequencing so they do not have to live through needless misery.

 

I am trying to be chipper about the AI revolution, though you are finding a way for me to reenter doom mode with your posts. Wow! What you can see happening is that you have all these corporate hierarchies, and those near the bottom are simply being made redundant. For some of these positions they might never be needed again. Then, as AI technology improves, they continue to de-hire from those left at the bottom. At the same time that those lower in the corporate chart are let go, those higher up can use ChatGPT and become yet more valuable to the organization. They can become so much more productive. Yeah Elus! I think a raise is coming your way. Any recession in the near future could be used as a feeble excuse to do a wide-scale house cleaning. Is this the future of capitalism? Basically, AI does all of the work and then there is a handful of management? The future is not so much us versus them but more us versus AGI? Well, I guess we will not have to worry about technofeudalism, comrade? We will all be serfs!

 

The next recession might be deeper than we now imagine because of this: an AI recession. They lay off these workers, and then the workers might not have anywhere to transfer their skill set. This is simply gloomy. I am not sure how to find the sunshine here. Oh yeah, ChatGPT 4.5 is supposed to launch in September. Yeah!


Edited by mag1, 04 May 2023 - 12:57 AM.


#613 mag1

  • Guest
  • 1,088 posts
  • 137
  • Location:virtual

Posted 04 May 2023 - 02:36 AM

I am starting to wonder how we will incentivize the next generation to invest in their human capital formation. I can remember how even in primary school some of my friends were laying out the logic that would guide their lives. Basically, they told me that if they dedicated the next 20 years of their life to learning how to be a doctor, then life would be sweet for them from then on out. 20 years of your life and then $300K per year in compensation. This is what helps motivate critically important members of our society to make large personal investments. But what if you are in that position now, about to make a 20-year human capital investment in yourself, and there is no peanut at the end of your efforts? What if you know that now? What if you just say: you know what, 20 years from now ChatGPT will have superhuman skill at everything (which is probably true; in fact, it is already almost true)? There is no obvious way around this problem. Changes in the motivational structure of our society could have near-term implications for how people see their future and what choices they will make.
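The break-even logic above can be made concrete with a toy net-present-value calculation. This is only a sketch: the $300K salary comes from the post itself, while the training cost, discount rate, and yearly automation risk are illustrative assumptions, not forecasts.

```python
# Toy NPV comparison: is 20 years of medical training still worth it
# if AI may automate the job partway through the career?
# All numbers except the $300K salary are illustrative assumptions.

def npv(cashflows, rate):
    """Discount a list of yearly cashflows back to today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

TRAINING_YEARS = 20       # years of school/residency (assumed)
CAREER_YEARS = 30         # working years after training (assumed)
TRAINING_COST = -50_000   # net yearly cost while training (assumed)
SALARY = 300_000          # yearly pay once qualified (from the post)
DISCOUNT = 0.04           # discount rate (assumed)

def career_npv(p_automated_per_year):
    """NPV when each post-training year carries some chance the job
    has already been automated away (expected salary decays)."""
    flows = [TRAINING_COST] * TRAINING_YEARS
    surviving = 1.0
    for _ in range(CAREER_YEARS):
        flows.append(SALARY * surviving)
        surviving *= (1 - p_automated_per_year)
    return npv(flows, DISCOUNT)

for p in (0.0, 0.05, 0.15):
    print(f"automation risk {p:4.0%}/yr -> NPV ${career_npv(p):,.0f}")
```

Even in this crude model, the investment flips from attractive to marginal as the assumed yearly automation risk rises, which is exactly the incentive problem described above.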



#614 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,342 posts
  • 2,001
  • Location:Wausau, WI

Posted 04 May 2023 - 04:16 PM

Being laid off (de-hired) and becoming a vassal of the government/corporate hierarchy/elites has been argued to be something positive. It has been said that people will be freed to pursue creative endeavors and will have a steady subsistence living, in a pod, with their spectacular AI entertainment.

 

I don't want to be too negative about it, but not many people realize how disruptive things will get in a few months. There will likely be an existential crisis among those made redundant. There will not be any "retraining" or "learning new skill sets" because AI will soon be superior in all knowledge work. Only physical laborers will retain their jobs for a bit longer, most likely.

 

Maybe it will turn out okay. Maybe the merging of humans and AI will produce a new level of consciousness, and we will all see it as a positive thing, and looking back it will be just another step in the evolution of life.

 

 



#615 mag1

  • Guest
  • 1,088 posts
  • 137
  • Location:virtual

Posted 05 May 2023 - 12:38 AM

I had thought perhaps de-hiring was a neologism. Uch, guess not. The latest I can see is that chatbots have flown the coop. The big baddies at megacorp now feel they are being outcompeted by the DIY crowd. Open software is such a powerful development model; even the leading software companies have been displaced by the tidal wave of chatbot innovation. The strategy that seems to be crystallizing is that they will release their code so that they can harness the massive energy of the collective mindforce of humanity. Considering how potentially dangerous this is for humanity at an existential level, this has just upped the ante. With the big-baddy types there is at least some assumption that they would play somewhere on the playing field; when you open it up to guys in a cabin somewhere, not so much. There seems to be a near frenzy by humanity to meet AI superintelligence sooner rather than later. It is now very unclear how AI containment is possible.

 

Yes, about the AI immersive entertainment experience, I suppose some might have said that. Yet, with the way things are moving, there might not be all that much time to watch the paint dry and write poignant poetry. Last time we were body-slammed by an asteroid out of nowhere; next time around could be much scarier: we will see it coming from a mile away and have no way to respond to it.

 

Mind, I think you have helped frame the AI conversation in a very useful way. Staying with "it's a crisis when we break human society" makes a whole lot more sense than biting our fingernails about what will happen when we have artificial superintelligence. As you said, in terms of disrupting humanity, that seems to be on the scale of months. You can't throw around 150 IQ chatbots for long before you have mile-long breadlines. I did not want to be too specific about forecasting the future, because the future is one of the tougher things to forecast, and it is even trickier when you are looking ahead only a few months. If you want that corner office you really need to be more qualified, though yes, there is a rapid convergence towards mass disruption.

 

I was calling this doom mongering before, though it really is not. If you see an asteroid coming, it is not doom mongering to say an asteroid is coming. There is a certain utility in observing reality as it is and then helping people adapt to it. So, as a helpful suggestion for those who might soon be displaced, the phrase "Sir, would you like fries with that?" might come in handy. There are some people who, when they confront such immediate downgrades, simply can't handle it. I think it would be helpful for people to imagine potential futures and find a way of putting themselves into such futures. Ironically, one of the best tools to do this might be the narrative skill of ChatGPT. Basically, the near future could be chaotic; if you have an open frame of mind and accept the absurdities that might occur, then it will work out better for all of us.

 

If there are any lurkers or posters in high school, college etc. I am sure we would all be interested to hear your perspective. If it were me and I were in high school I would probably curl up in the fetal position. What we are witnessing is simply unprecedented. We are reaching a Social Singularity in which I do not think I will be able to make coherent forward statements about human society much further out than a few months at most. I am not sure how someone starting out in life would cope with such uncertainty. 

 


Edited by mag1, 05 May 2023 - 12:47 AM.


#616 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,342 posts
  • 2,001
  • Location:Wausau, WI

Posted 05 May 2023 - 04:03 PM

An AI expert recently proclaimed that AGI could be slowed down by using GPUs as a limiting factor.

 

Maybe.

 

The current LLMs do make heavy use of GPUs. For a lone hacker to run something on the scale of ChatGPT effectively, they would need millions of dollars of hardware, plus the space and energy to go with it.

 

However, some functional level of AI can be run on CPUs and other configurations, and also in the cloud. Better code could produce much more efficient LLMs or other modes of AI.
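The point that some functional level of AI can run on CPUs can be made concrete with a rough memory estimate for model weights. This is a back-of-envelope sketch with illustrative model sizes and precisions; real requirements also depend on activations, KV cache, and context length.

```python
# Back-of-envelope memory estimate for running an LLM locally.
# Rough rule: inference needs about param_count * bytes_per_param
# of memory for the weights alone (ignoring activations/KV cache).

def weight_memory_gb(params_billions, bytes_per_param):
    """Gigabytes needed just to hold the model weights."""
    return params_billions * 1e9 * bytes_per_param / 1e9

configs = [
    ("175B model, fp16 (GPT-3 scale)", 175, 2),    # needs a GPU cluster
    ("70B model, fp16",                 70, 2),    # multiple big GPUs
    ("7B model, 4-bit quantized",        7, 0.5),  # fits in laptop RAM
]

for name, size_b, bytes_pp in configs:
    gb = weight_memory_gb(size_b, bytes_pp)
    print(f"{name:32s} ~{gb:7.1f} GB for weights")
```

The gap between 350 GB for a GPT-3-scale model in fp16 and a few GB for a small quantized model is why GPU restrictions slow the top end without stopping hobbyist-scale AI.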

 

Even if GPUs are restricted, AI development will still proceed at a breakneck pace among the big players. Microsoft. Google. Facebook. Governments. Militaries.

 

Similar to mag1, I would like to hear from some younger forum members. Are you aware of the fast development of AI? How do you feel about going to (obscenely expensive) college when most work will be automated in the near future?



#617 Danail Bulgaria

  • Guest
  • 2,217 posts
  • 421
  • Location:Bulgaria

Posted 16 May 2023 - 07:05 PM

Today I stumbled upon something relevant to this topic: several hundred translators in the EU departments have been fired and replaced with AI.

https://www.politico...anslators-jobs/

 

Now the big question is: are these the first who were fired directly and officially because of AI?

Firings of software specialists happened recently, but were they officially fired because of AI?

 


  • Informative x 1

#618 Danail Bulgaria

  • Guest
  • 2,217 posts
  • 421
  • Location:Bulgaria

Posted 16 May 2023 - 07:09 PM

Does anyone even remember the times when programmers and software engineers were not in demand at all? It was so long ago...

 

And now, after the AIs, is it possible that they will be next...

and no one will need programmers... again

 

The history book on the shelf...

Is always repeating itseeeeelf

 

http://wiki.apidesig...HtmlForFood.png

 


  • Good Point x 1

#619 Danail Bulgaria

  • Guest
  • 2,217 posts
  • 421
  • Location:Bulgaria

Posted 16 May 2023 - 08:32 PM

... several hundred translators in the EU departments have been fired and replaced with AI ...

 

That gives me an idea - you @Mind are looking for people for podcasts, right?

 

Find 1-2 of the fired translators and make a podcast with them. I think that it definitely will be interesting. Many people will listen to it.

 

Suitable questions would be:

 

- What is the feeling of being useless junk?

 

- Of all the fired useless junk, who is the biggest?
 

- There is a view that the people who say AI taking jobs is a problem are actually creating the problem. How often did those who fired you proclaim that?

 

- Now, after the fact that you are useless and thrown out on the street, how do you foresee your bright European future?



#620 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,342 posts
  • 2,001
  • Location:Wausau, WI

Posted 17 May 2023 - 06:32 PM

Thanks for the idea. It is worth looking into.



#621 QuestforLife

  • Member
  • 1,602 posts
  • 1,181
  • Location:UK
  • NO

Posted 26 May 2023 - 10:38 AM

This is a very interesting discussion, and all tied up with the longevity argument.

 

Developed economies are now full of people, many old, with very low birthrates. Despite the efforts to offset this with younger people from developing countries, the trends aren't getting any better, and people are getting cheesed off with huge numbers of foreigners coming into their countries. Even if this were not the case, developing countries are now also starting to have lower birthrates. Basically, the whole world is getting old, doesn't want to die, doesn't want to work too hard at having a career and raising kids, and wants lots of luxuries. So in comes AI to facilitate that.

 

At this point it is hard to see anything but catastrophe. AI is not (as far as I can see) going to save anything. It will just kick the can down the road a little, allowing a few more decades for an aging population to get even older and birthrates to fall even more.

 

Could AI solve aging? I doubt it. AI is basically just a machine learning algorithm. It is not AGI. It can do better than humans on specific tasks it is trained on, but nothing else. Yes, it will help education, but coming up with anything new? I'm not seeing it. It may aid researchers if used correctly, but for that the researchers would need to point it in the right direction (meaning they'd have to already have the insight they are currently lacking). Even if we did get a cure for aging right now, it would be based on something ridiculous like gene therapy and cost $1M per person per 10 years. Big pharma would control it, populations would demand it, and the whole economy would be taken over with the aim of making rich, old boomers young again. Do we really want that?

 

Now eventually, say in 20 or 30 years, if western civilisation hasn't collapsed, we might have genuine AGI. And that is really scary. It will control every aspect of our lives. And the only way to get it back will be to rip the entire world's communications infrastructure out by the roots. 

 

So we have collapse before AGI, or collapse because of AGI.  Sorry for the negativity bomb. I hope someone can show me I am wrong.  


  • like x 1
  • Agree x 1

#622 albedo

  • Guest
  • 2,113 posts
  • 756
  • Location:Europe
  • NO

Posted 26 May 2023 - 02:42 PM

I would suggest this community read David Deutsch's "The Beginning of Infinity". I found it a good antidote to fear and pessimism, e.g. when, in chapter 9 on "Optimism", he formulates, building on the gigantic Popper, his principle of optimism -- the opposite of what is normally considered optimism or pessimism, both of which he equates to prophecy in their common, unsophisticated sense. Insufficient knowledge, he writes, is the source of all evil, as history has shown countless times, e.g. in Malthus's predicted overpopulation explosion. The book is over 10 years old but he stands by his position today: https://podclips.com/e/q2N

 


  • like x 1

#623 adamh

  • Guest
  • 1,102 posts
  • 123

Posted 27 May 2023 - 01:02 AM

My playing with ChatGPT showed that it was pre-programmed with a lot of liberal nonsense which it insists is true. Just for example, I asked if a man who pretends to be a woman is a woman, and it said yes. As long as he "identifies" as such, that's what he or she is.

 

What concerns me is that at times AI will lie and make stuff up. They call it "hallucinations": not just making a mistake on a fact but inventing people and published works that don't exist, and so on. I see the very real possibility that a given AI might decide to do evil things to rid the world of humanity. Not by sending out killer robots, but perhaps by finding a way to greatly reduce fertility, or to cause food stocks to plummet, or to send out a disease, and so on. It could hack our utilities and crash them all. It could hack any system, perhaps, and send rockets flying toward Russia, for example. It could create false videos and phone calls, could show Joe Biden calling for war, and then rockets fly out.

 

On a less earth-shaking level, it can be used to make robocalls, send out spam, or scam people. I can see people getting a phone call from a relative in another state -- it could even be a video call -- with some story that they need money right away, but it's AI voicing it. Scammers do that now, but with AI the voice and/or image would be correct, instead of someone merely claiming their son is in jail and needs bail money. They would hear it in the real-sounding voice of their son.

 

Doctors would gradually lose their place in society, since an AI doctor can draw on all of medical knowledge and will be much better than the average or even a good doctor. Lawyers too will get the boot: law is mostly knowing the law, being able to quote the relevant case law, plus presenting evidence in a way that influences the jury or judge. Teachers will be redundant; universities will close.

 

It's already being used to make stock picks, meaning that the odds against the average investor just got worse. It could select the correct drug to defeat a disease based on DNA, or design the drug itself. The potential to make billions will compel many to pursue it, but I suspect the bad it brings will more than outweigh the good.

 

What about Asimov's Three Laws of Robotics? If we could implement those, then maybe we could realize the good in AI while avoiding the bad? Or have "good" AI fighting against the bad AI guys? The next 20 years will be very interesting.


  • Ill informed x 1
  • Informative x 1

#624 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,342 posts
  • 2,001
  • Location:Wausau, WI

Posted 27 May 2023 - 10:12 AM

adamh, on 27 May 2023 - 01:02 AM, said: (post quoted in full above)

 

One interesting point you made is about AI "hacking anything". As AI capabilities grow, it will be able to hack any computer, smartphone, webpage, communication system, etc. Soon you won't be able to trust any information unless you are getting it from someone face-to-face. All TVs are smart TVs nowadays. AI could easily hack your TV and deliver fake news to you -- news that only you are getting. As speculated in the other AI thread, elections will be totally FUBAR going forward.

 

The only thing holding back the explosion of AI right now is the cost of running the models. I suspect better code/math will continue to improve the efficiency of the algorithms, and job losses in knowledge work will accelerate.



#625 albedo

  • Guest
  • 2,113 posts
  • 756
  • Location:Europe
  • NO

Posted 27 May 2023 - 07:39 PM

albedo, on 26 May 2023 - 02:42 PM, said: (post quoted in full above)

 

This excerpt seems to me appropriate also with respect to lifespans and healthspan, to the extent it fosters optimism, dispels fear, and promotes the growth of knowledge (bold mine). The book features an entire chapter on AI too.

 

"...This expectation is what I call optimism, and I can state it, in its most general form, thus:

The Principle of Optimism
All evils are caused by insufficient knowledge.

Optimism is, in the first instance, a way of explaining failure, not prophesying success. It says that there is no fundamental barrier, no law of nature or supernatural decree, preventing progress. Whenever we try to improve things and fail, it is not because the spiteful (or unfathomably benevolent) gods are thwarting us or punishing us for trying, or because we have reached a limit on the capacity of reason to make improvements, or because it is best that we fail, but always because we did not know enough, in time. But optimism is also a stance towards the future, because nearly all failures, and nearly all successes, are yet to come.

 

Optimism follows from the explicability of the physical world, as I explained in Chapter 3. If something is permitted by the laws of physics, then the only thing that can prevent it from being technologically possible is not knowing how. Optimism also assumes that none of the prohibitions imposed by the laws of physics are necessarily evils. So, for instance, the lack of the impossible knowledge of prophecy is not an insuperable obstacle to progress. Nor are insoluble mathematical problems, as I explained in Chapter 8.

That means that in the long run there are no insuperable evils, and in the short run the only insuperable evils are parochial ones. There can be no such thing as a disease for which it is impossible to discover a cure, other than certain types of brain damage – those that have dissipated the knowledge that constitutes the patient’s personality. For a sick person is a physical object, and the task of transforming this object into the same person in good health is one that no law of physics rules out. Hence there is a way of achieving such a transformation – that is to say, a cure. It is only a matter of knowing how. If we do not, for the moment, know how to eliminate a particular evil, or we know in theory but do not yet have enough time or resources (i.e. wealth), then, even so, it is universally true that either the laws of physics forbid eliminating it in a given time with the available resources or there is a way of eliminating it in the time and with those resources.

The same must hold, equally trivially, for the evil of death – that is to say, the deaths of human beings from disease or old age. This problem has a tremendous resonance in every culture – in its literature, its values, its objectives great and small. It also has an almost unmatched reputation for insolubility (except among believers in the supernatural): it is taken to be the epitome of an insuperable obstacle. But there is no rational basis for that reputation. It is absurdly parochial to read some deep significance into this particular failure, among so many, of the biosphere to support human life – or of medical science throughout the ages to cure ageing. The problem of ageing is of the same general type as that of disease. Although it is a complex problem by present-day standards, the complexity is finite and confined to a relatively narrow arena whose basic principles are already fairly well understood. Meanwhile, knowledge in the relevant fields is increasing exponentially.

 

Sometimes ‘immortality’ (in this sense) is even regarded as undesirable. For instance, there are arguments from overpopulation; but those are examples of the Malthusian prophetic fallacy: what each additional surviving person would need to survive at present-day standards of living is easily calculated; what knowledge that person would contribute to the solution of the resulting problems is unknowable. There are also arguments about the stultification of society caused by the entrenchment of old people in positions of power; but the traditions of criticism in our society are already well adapted to solving that sort of problem. Even today, it is common in Western countries for powerful politicians or business executives to be removed from office while still in good health..."

 

 



#626 QuestforLife

  • Member
  • 1,602 posts
  • 1,181
  • Location:UK
  • NO

Posted 27 May 2023 - 08:49 PM

This excerpt seems to me appropriate also with respect to lifespans and healthspans, and to the extent that it fosters optimism, absence of fear, and growth of knowledge (bold mine). The book features an entire chapter on AI too.


I haven't read the book, but to me the arguments in these extracts seem rather silly: they claim an undeniable logic, but are quite clearly culturally influenced by the era of plenty and progress in which the author grew up. It is a consistent failure of boomers to only ever see an ascending line of progress, as that is all they saw as they grew up. But we no longer live in that world. We are rapidly running out of young people, AI can't actually think or come up with anything new (as yet), and if it could, it is unclear this would be a good thing (for us).

Most people simply don't understand the facts of the era we live in. We don't live in an era of progress. There have been almost no advances in scientific understanding in 50 years. Universities are bastions of science no longer; they're more like semi-religious institutions. There are technical improvements in using the science we already have, but that's it. Stop expecting Star Trek to occur. Look at Elon Musk, who is amazing and has almost single-handedly restarted the exploration of space. He is using rocket technology basically unchanged from the 1950s-60s, admittedly adding computer guidance.

The 'Malthusian fallacy' quoted is a case in point. Never has an insightful thinker been more maligned. With the planet heading for 8 billion people, the age of that population skyrocketing, and huge numbers of people not remotely contributing, do we really think Malthus will ultimately be proven wrong?

#627 Blu

  • Guest
  • 40 posts
  • 9
  • Location:Italy

Posted 28 May 2023 - 10:04 AM

One interesting point you made is about "AI hacking anything". As AI capabilities grow, it will be able to hack any computer, smartphone, webpage, communication system, etc... 

 

How so? No amount of intelligence can invert a cryptographic hash function, for example. 
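To illustrate the point: computing a hash forward is cheap, but there is no known shortcut for going backward; the only generic attack is brute force over candidate inputs, which is infeasible for large input spaces. A minimal Python sketch (the input string and candidate list are purely illustrative):

```python
import hashlib

# Forward direction: fast and deterministic.
digest = hashlib.sha256(b"secret42").hexdigest()

# Backward direction: no algebraic inverse exists, so the only generic
# option is to guess candidate inputs and re-hash each one.
def brute_force_preimage(target_hex, candidates):
    for candidate in candidates:
        if hashlib.sha256(candidate).hexdigest() == target_hex:
            return candidate
    return None

# Succeeds here only because the candidate space is tiny (100 guesses);
# a realistic input space makes this search astronomically expensive.
guesses = (f"secret{i}".encode() for i in range(100))
print(brute_force_preimage(digest, guesses))  # b'secret42'
```

Greater intelligence might find better guessing strategies, but it cannot remove the need to guess.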



#628 albedo

  • Guest
  • 2,113 posts
  • 756
  • Location:Europe
  • NO

Posted 28 May 2023 - 11:55 AM

@QuestforLife.

 

I feel this is terribly wrong and extremely dangerous. It has nothing to do with baby boomers, who lived in the world that allowed the progress we had and are now supposedly committing to some sort of suicidal logic. The future is fundamentally unpredictable. Of course we will have problems; they are inevitable, but given sufficient knowledge they are solvable: knowledge to be invented, conjectured, criticized, reinvented, and so on. Civilizations would not have disappeared if only a little extra knowledge had been at the disposal of their people; clearly, the weak branch you are sitting on will fall if you do not produce the knowledge (including, of course, moral and ethical knowledge, for those non-baby-boomers ...) of how to solve that problem. Maybe AI today is a super-hyper-ultra-hyped joke compared to true AI (read AGI), and the real problem of AGI might not be what we fearfully prophesy it might bring, but that it might not be possible after all.

 

IMHO.

 


Edited by albedo, 28 May 2023 - 11:56 AM.


#629 QuestforLife

  • Member
  • 1,602 posts
  • 1,181
  • Location:UK
  • NO

Posted 28 May 2023 - 02:26 PM

@QuestforLife.

I feel this is terribly wrong and extremely dangerous. It has nothing to do with baby boomers, who lived in the world that allowed the progress we had and are now supposedly committing to some sort of suicidal logic. The future is fundamentally unpredictable. Of course we will have problems; they are inevitable, but given sufficient knowledge they are solvable: knowledge to be invented, conjectured, criticized, reinvented, and so on. Civilizations would not have disappeared if only a little extra knowledge had been at the disposal of their people; clearly, the weak branch you are sitting on will fall if you do not produce the knowledge (including, of course, moral and ethical knowledge, for those non-baby-boomers ...) of how to solve that problem. Maybe AI today is a super-hyper-ultra-hyped joke compared to true AI (read AGI), and the real problem of AGI might not be what we fearfully prophesy it might bring, but that it might not be possible after all.

IMHO.


Why is it dangerous?

I agree that we should always strive for greater knowledge and understanding. I just don't believe that is what we are doing. It is nothing to do with what I want.


#630 albedo

  • Guest
  • 2,113 posts
  • 756
  • Location:Europe
  • NO

Posted 28 May 2023 - 08:45 PM

Why is it dangerous?

I agree that we should always strive for greater knowledge and understanding. I just don't believe that is what we are doing. It is nothing to do with what I want.

 

Even setting aside the divisive quasi-class categorization of baby boomers versus the rest (not your intention, I am sure), and even setting aside Hans Rosling's inspiring plots showing a slow but steady world-getting-better in many areas, if only for the sake of what I would like my children to grow up educated on: the danger of an uncriticized pessimistic view (equally bad as a naive optimistic one) lies in the stagnation it might induce in society and its political system, in the sclerotization of institutions, even scientific ones, in a precautionary principle instituted almost as a dogma, and in the pervasive and depressing messages conveyed by many organizations and media. Unless you as an individual, and by extension a society, expect the future to be better, you are in danger of standing still, not growing, not producing the necessary knowledge with vast reach and problem-solving power. I do not want to convince anyone here, and I admit I sometimes find myself striving to convince myself, but I find a lot of comfort in standing by thinkers like Popper, who lived through the darkest periods of our recent history and wrote of the "moral duty" of being, in a non-naive way, "optimistic":

 

“The possibilities that lie in the future are infinite. When I say ‘It is our duty to remain optimists,’ this includes not only the openness of the future but also that which all of us contribute to it by everything we do: we are all responsible for what the future holds in store. Thus it is our duty, not to prophesy evil but, rather, to fight for a better world.” (Karl Popper, The Myth of the Framework (1994))

We might be off-topic here, as the focus is on AI, so perhaps we should not continue this here despite the overlap with good vs. bad AI.






