When you read the AI thought leaders, you hear a great deal of AI doom from them as well, which is not exactly encouraging. Yet it is downright discouraging when others put forward the idea that AI doom talk itself is somehow potentiating the emergence of AGI. That feels too much like invoking the butterfly effect within an AGI maelstrom that the technology community has helped unleash. Sometimes the attractor is not an impossibly unlikely nonlinear effect but the macro driving feature of the system (i.e., AGI). It seems to me an extremely weak argument to blame those who want to engage with this important topic for somehow causing the very thing they are warning about. The messenger is blamed for a problem that the message's subject has actually caused. A world where no one wants to accept responsibility for their actions? As we have seen, suppressing free speech and the free exchange of ideas does not magically solve our problems. Not weighing in on the realistic dangers of AGI could lead to the extinction of our species.
I have also tried to think through the possible society-level crises that could emerge well before AGI. Things would start to break long before AI reaches 10,000 IQ; anything much over 150 (we are reportedly already at 155 verbal IQ with GPT-4) is probably sufficient. One idea I keep returning to is how people look out into reality and take their cues from what they see. When I look out at the world right now, everything seems to be pretty much the same old same old. ChatGPT does not appear to have had any macroscale impact on the world. I interpret this to mean everything is fine and that I can continue with my life as usual. No problems --> no panic. Yet if I did see some obvious change in the social landscape indicating that people were responding in an observable way to the emergence of GPT, that would clearly change my perception. If, when I looked out at reality, others confirmed my concerns about the dangers of uncontrollable artificial intelligence, that would clearly be a concern. This is the problem with panics: people do not respond to what they themselves feel and think, but often wait for others to initiate the panic for them. Basically, this puts us into a very unstable social position. Everything is primed to happen, and we must wait until some largely random event starts the ball rolling down the hill. Yet the ball starts at a very precarious place at the top of the hill.
What might some of the triggers for a full-scale stampede be? I mentioned fertility before. If we saw fertility rates decline by 50% from current, already low, levels, that would obviously be panic inducing. Considering that ChatGPT 4 already has a verbal IQ of 155, one might imagine this could motivate parents-to-be to wonder how sensible it would be to bring a child into a world where there is no obvious way to impart any advantage to that child. The future labor force might be a mass, undifferentiated and possibly unskilled free-for-all with no obvious way to achieve any market power. Anyone on the thread who is interested might comment on how they see fertility rates evolving over even the near term. As I mentioned, this could become a run-for-the-exits type panic if it were to get started. Early adopters might move in that direction; others would notice and amplify the behavior, and then a full-scale panic could emerge. The problem is that once a response began, there would be no obvious bottom. If anything, once at zero, fertility might stay there. Without some confidence-building intervention, the most informed people might simply abandon fertility altogether.
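The "early adopters tip everyone else" dynamic described above is essentially a threshold cascade, in the spirit of Granovetter's classic collective-behavior model. The sketch below is purely illustrative (the function name, agent counts, and threshold values are my own assumptions, not anything from this post); it shows how an identical population can end in a full stampede or almost no panic at all, depending on one missing link in the threshold chain.

```python
def cascade_size(thresholds):
    """Toy Granovetter-style threshold model.

    Each agent has a personal threshold: it panics once the number of
    already-panicked agents reaches that threshold. Iterates to a fixed
    point and returns the final number of panicked agents.
    """
    panicked = 0
    while True:
        # Count everyone whose threshold is met by the current panic level.
        new_count = sum(1 for t in thresholds if t <= panicked)
        if new_count == panicked:
            return panicked  # fixed point: no one else tips
        panicked = new_count

# Thresholds 0..9: each new panicker tips exactly the next agent,
# so the whole population cascades.
print(cascade_size(list(range(10))))            # 10 (full stampede)

# Same population minus the single agent with threshold 1:
# the chain breaks immediately after the first mover.
print(cascade_size([0] + list(range(2, 10))))   # 1 (cascade fizzles)
```

The point of the toy model matches the paragraph above: the outcome hinges less on how worried each individual is than on whether the chain of "I'll panic once enough others do" happens to be unbroken, which is why the ball can sit precariously at the top of the hill for a long time and then roll all at once.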
There are several other potential panics that could also emerge, for example in education. It is no longer easy to argue that education as it currently exists makes economic or technological sense. Given the choice between a brick-and-mortar school and a ChatGPT-enabled textbook education, I would have to think that ChatGPT would win hands down. Classroom environments have always had the disadvantage of high student-to-teacher ratios: if you do not understand something, you can drift forward for years without an opportunity to clarify the misunderstanding. Apparently, teachers have long been well aware that students carry such long-term learning handicaps. With ChatGPT, there could be a constant testing/adaptive interface in which comprehension is carefully monitored. It is not easy to see how the traditional educational environment could respond to this challenge. Several of these panic-type scenarios are possible. I suppose one of the more obvious would be a financial panic: as soon as some industry is seen to be vulnerable to GPT effects, there could be large-scale price movements, and the public would quickly be spooked by such a prominent financial swing.
I have also wondered whether the GPT rollout was deliberately launched as a work in progress to give people unreasonable confidence that there was nothing to fear. For example, when launched, GPT-4 had minimal math skills. Everyone felt some sense of relief that this powerful AI could not do even basic math; and then it had hallucinations, and then it could not connect to the internet, and it did not know anything of the world since 2019, etc. This all seemed somewhat comforting. Nothing much to worry about here. Yet since launch there have been all sorts of ongoing improvements. For example, today Bing Chat added LaTeX support and no longer stops the conversation as often; recently, people added AI agent features, and so on. At first there were many things absent from GPT, though these obvious holes have been quickly filled in.
Given the dangers to our species from this emerging artificial intelligence, perhaps one counter-strategy that should remain on the backburner is a deliberate attempt to collapse human civilization before AI has the chance to drive us extinct. The information technology sector depends upon a wide range of inputs from humanity to do what it does, and birthing AGI requires a fairly sophisticated technological base. If it became necessary, humanity could simply remove those inputs: without electricity, the internet, high-tech computer chips, etc., AGI could not happen. AI still depends heavily on us to carry it the last mile. Clearly this would be an extreme response, though the technology community does not appear to have created reasonable safeguards that would keep their artificial intelligence progeny locked in a secure holding cell.
Edited by mag1, 25 April 2023 - 05:59 AM.