adamh, thank you for responding!
I think that this thread has framed the threat from AGI quite well.
Firstly, Mind invoked the analogy of the exponentially filling lake. This is a very good rough heuristic to have on hand. With such a lake, you notice nothing for a long while, and then, once you finally do notice, you are swamped almost immediately. This observation almost certainly applies to the present situation.
Until January 1st of 2023, most people had not really noticed much happening in AI. The breakthroughs were largely demonstration-type technologies. Examples such as IBM's Watson and AlphaGo were far from the consumer marketplace, and they evolved over multi-year timescales. Yet what we see with GPT is the launch of a consumer-grade technology that has clearly ramped up over the last year.
We can see the differential over a single year, and this differential will, if anything, accelerate in the years ahead: now that people have been alerted to this technology, it seems like everyone is in on the GPT Gold Rush. By the time you realize that the lake is filling, it is almost too late to do anything. For the first 90 percent of the time that the lake is filling, you notice nothing; when you do notice something, the lake is already very close to overflowing. Not only that, but we are only seeing things in the rear-view mirror. The leading edge of change is likely a year or more ahead of what we are seeing. The technology leaders have already said that they might delay deploying the next wave of technology because they are afraid of what releasing it might do to our society. Why argue with the people who have the best understanding of the technology? So, by the lake-filling analogy, we are probably pretty far up the creek.
Secondly, the thread has adopted a humanistic rather than a technologist position on what we mean by harm from technology. If you take the position that we only have to worry when we are all swimming in ten feet of green goo, then yes, I suppose we are just fine ... for now. But those of us with a more human-first perspective use a subtler yardstick of being OK: can very average people still successfully navigate the basic processes of life, such as education, work, and marriage? Clearly a superintelligence could affect all of these fundamental functions of a basic existence. When you move from the green-goo alarm stage to the average-person alarm stage, it would not be that radical to suggest that we have already reached an average-person crisis. A superintelligence would largely eliminate all jobs that an average person could perform. I do not entirely dismiss the idea that we could all watch TV all day and collect UBI, though that by itself would cause a massive social crisis. It is the question of whether average people can keep living their lives normally that we have put at the center of this thread, and considering that this might now be on the near-term horizon, a profound social crisis might no longer be that far off.
As you noted, my life has clearly gotten much better over the last year. There is a certain paradox involved: life has gotten better, so why worry? I do not deny that this seems paradoxical, though it is my apprehension about what might be in store over even the next year or two that has me so concerned. The problem with AGI is that it can start spinning the merry-go-round so fast that we would go very quickly from having a fun time to having a not-so-fun time. We have seen even more clearly this year than before that there will be no getting off the merry-go-round once things start to spin out of control. The LLMs have been released into the wild, and the competitive pressure to keep up with everyone else has created a seemingly unstoppable AI arms race. Even more worrisome: even if we apply the brakes now, what happens when we hit the accelerator again? The underlying logic of the machine already seems uncontrollable.
There is now a certain perverse rationality in the idea that if we don't run faster, other people with likely less benevolent intentions will run faster than us. There do not seem to be any great answers to the problems we are confronting. Perhaps the only plausible strategy is to build a space ark and escape from all of this technology.