  LongeCity
              Advocacy & Research for Unlimited Lifespans





Employment crisis: Robots, AI, & automation will take most human jobs

robots automation employment jobs crisis

953 replies to this topic

#781 adamh

  • Guest
  • 1,102 posts
  • 123

Posted 06 October 2023 - 07:45 PM

It will take our jobs; that seems to be the big fear? Yes, it's scary to think you might get a check from the govt every month and not have to work. 99% of people would love it, including most on this board, even the ones saying 'oh no'. If you still want to work, what would stop you? Are you saying that AI will tell us we are not allowed to do any work? How would it stop us?

 

You can still get a degree and work in any field. You might choose to do experimental work, try to invent things. You will use AI to help you do it; it's just a tool, after all. You might not get a paying job in certain fields if AI is dominant, but you can do what you want. You can travel the world, see all the countries, scuba dive, fly a plane, search for diamonds, etc.

 

That argument is fairly thin; work-if-you-want-to is a great system. I asked what people think it could do to cause harm and got no answers, just that it's very complicated and no one knows what might happen. Gee, that sounds like the world as we've always known it: very complicated, where most anything can happen. All I hear is "it's new and we are scared of it."

 

Automation destroyed thousands or millions of jobs, but people found other things to do because automation created new industries. Computers eliminated jobs but created even more. Now we have a super-duper computer, and people are worried it might work too well and increase productivity by a factor of 10. We can't have that, because if everyone were rich, that would be the end of civilization (somehow).



#782 QuestforLife

  • Member
  • 1,602 posts
  • 1,181
  • Location:UK
  • NO

Posted 07 October 2023 - 08:28 PM

Yes, its scary to think you might get a check from the govt every month and not have to work. 99% of everyone would love it, including most on this board and even the ones saying 'oh no'.


Why would the system pay you money to be a dead weight?


#783 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,341 posts
  • 2,001
  • Location:Wausau, WI

Posted 08 October 2023 - 12:10 PM

Why would the system pay you money to be a dead weight?

 

Good question.

 

In addition, the UBI-type experiments/trials have all ended in failure thus far. We don't have good long-term evidence that a UBI is sustainable or good for the population.



#784 adamh

  • Guest
  • 1,102 posts
  • 123

Posted 09 October 2023 - 02:11 AM

"the UBI-type experiments/trials have all ended in failure thus far"

 

Hmmm, how would you determine success or failure? We have welfare in this country, and most other countries have some form of it as well. Are you saying all those efforts are a failure? That would depend on what the goals were. If it's to keep people out of poverty and perhaps dying, it has had some success with that. It tends to make people lazy and not want to work; is that the failure you speak of? And how does that balance against the benefit?

 

The main problem with welfare is that recipients often don't want to work even after they get over their illness or whatever it was. With the huge increase in productivity from AI, GDP goes through the roof. Factories are coining money by saving on labor. Many of those displaced can find other work at first, but eventually those jobs are eliminated too. So production is up, profits are way up, the govt is raking in taxes, but half the population is out of work. What to do?

 

If you recycle some of those taxes to the public, then those with no job have money and can buy the goods produced. Factories make more money, the govt rakes in more taxes, and repeat. As I pointed out, we already have welfare, so don't tell me UBI hasn't been tested. OK, you are right that it's not universal, but it shows us in microcosm how it works.

 

Most people have no clue about economics. I'm not an expert, but I've studied the subject for a long time and consider myself well informed. People, and especially governments, have been trying to repeal things like the law of supply and demand for a long time. For a short while they succeed, just as for a short while a pig will fly if you fling it hard enough. But, much like the law of gravity, it can only be ignored for a very short time before the consequences start to appear.

 

If many or most of the public is out of a job, who will buy the products being churned out, even if they are cheaper than ever? By taxing the system instead of taxing the person, the govt generates enough cash to float the unemployed, and if the predictions about AI are true, they will be able to float at a very nice level, even middle class by today's standards.

 

If you don't redistribute some of the loot and 80% are out of a job, then goods don't sell and the factories go bankrupt. The factories have the supply, but with no money around there is little demand. Therefore the factories and other producers have to lower prices or go out of business. Farmers too; they are not out there for their health or just for the clean air. They work damn hard and deserve to make good money. If no one is buying corn, the price drops below production cost, and the farmer shuts down and hands the farm back to the bank.

 

Obviously, that would lead to a great depression and economic collapse if allowed to continue. Are people now starting to see why it might be a good idea in this situation to give UBI in some form?

 

No? Then let's go over it another way. If the factories shut down, then productivity goes down as well. You can't just warehouse the goods, so there is no profit for the government to tax. To oversimplify: if you didn't have to pay 80% of your workers, at first you'd make record profits. Then, when the out-of-work workers run out of cash, they quit buying. Now the factories have no more income and pay no taxes.

 

Even if it's only 40% at first, this will create enormous pressure on the economy. We can't let them starve, so it's either welfare or UBI. Or is there a difference? It's such a no-brainer that even the politicians will see it and promote it, with 10% to the big guy ;)
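The redistribution loop described above (profits taxed, recycled as transfers, spent back into the economy) can be sketched as a toy simulation. Everything here -- the model itself, the starting wage bill of 100, the 80% automation share -- is an illustrative assumption, not economic data:

```python
# Toy demand-feedback model: each period, firms sell exactly what households
# can afford. Automation converts a share of revenue from wages into profit;
# a UBI recycles some of that profit back into household spending power.

def simulate(rounds: int, automation_share: float, recycle_rate: float) -> float:
    """Return total goods sold over `rounds` periods."""
    wages = 100.0      # wage income still paid to employed workers
    ubi = 0.0          # transfer funded by taxing profits
    total_sold = 0.0
    for _ in range(rounds):
        demand = wages + ubi                   # households spend what they have
        total_sold += demand
        profit = demand * automation_share     # revenue no longer paid as wages
        wages = demand * (1 - automation_share)
        ubi = profit * recycle_rate            # govt recycles part of the profit
    return total_sold

no_ubi = simulate(rounds=10, automation_share=0.8, recycle_rate=0.0)
with_ubi = simulate(rounds=10, automation_share=0.8, recycle_rate=1.0)
print(no_ubi, with_ubi)
```

With no recycling, demand shrinks by 80% each period and total sales converge toward 125; with full recycling, demand holds at 100 per period and total sales reach 1000. The toy obviously ignores prices, savings, and trade, but it captures the "who will buy the goods?" mechanism.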


  • Ill informed x 1

#785 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,341 posts
  • 2,001
  • Location:Wausau, WI

Posted 09 October 2023 - 05:34 PM

"the UBI-type experiments/trials have all ended in failure thus far"

 

Hmmm, how would you determine success or failure? We have welfare in this country and most other countries have some form of it as well. Are you saying all those efforts are a failure? That would depend on what the goals were. If its to keep people out of poverty and perhaps dying, it has has some success with that. It tends to make people lazy and not want to work, is that the failure you speak of? And how does it balance against the benefit? 

 

The main problem with welfare is they don't want to work even if they get over their illness or whatever it was. With the huge increase in productivity from ai, the gdp goes through the roof. Factories are coining money by saving on labor. Many of those displaced can find other work at first but eventually those jobs are eliminated too. So production is up, profits are way up, govt is raking in taxes but half the population is out of work. What to do?

 

If you recycle some of those taxes to the public, then those with no job have money and can buy the goods produced. Factories make more money, govt rakes more taxes, and repeat. As I pointed out, we already have welfare so don't tell me ubi hasn't been tested. Ok you are right in that its not universal but it shows us in a microcosm how it works.

 

Most people have no clue about economics. I'm not an expert but I've studied the subject for a long time and consider myself well informed. People and especially governments have been trying to repeal things like the law of supply and demand for a long time. For a short while they succeed, just as for a short while a pig will fly if you fling it hard enough. But, much like the law of gravity, it can be ignored for a very short time before the consequences start to appear.

 

If many or most of the public is out of a job, who will buy the products being churned out even if they are cheaper than ever? By govt taxing the system instead of taxing the person, they generate enough cash to float the unemployed and if the predictions about ai are true, they will be able to float on a very nice level, even middle class by today's standards. 

 

If you don't redistribute some of the loot and 80% are out of a job then goods don't sell and the factories go bankrupt. The factories have the supply but due to no money, there is little demand. Therefore the factories and other producers have to lower prices or go out of business. Farmers too, they are not out there for their health or just for the clean air. They work damn hard and deserve to make good money. If no one is buying corn, price drops below production cost, farmer shuts down and hands the farm back to the bank

 

Obviously, that would lead to a great depression and economic collapse if allowed to continue. Are people now starting to see why it might be a good idea under this situation to give ubi in some form?

 

No? then lets go over it another way. If the factories shut down then productivity goes down as well. You can't just warehouse the goods so there is no profit for the government to tax. Its like to over simplify, if you didn't have to pay 80% of your workers, at  first you make record profits. Then, when the out of work workers run out of cash they quit buying. Now the factories have no more income and pay no taxes

 

Even if its only 40% at first, this will create enormous pressure on the economy. We can't let them starve so its either welfare or ubi. Or is there a difference? Its such a no brainer that even the politicians will see it and promote it, with 10% to the big guy ;)

 

I am talking about officially conducted and monitored experiments/trials with a true "UBI". They all ended in failure (Oakland, Finland, parts of Africa).



#786 adamh

  • Guest
  • 1,102 posts
  • 123

Posted 13 October 2023 - 09:55 PM

I am talking about officially conducted and monitored experiments/trials with a true "UBI". They all ended in failure (Oakland, Finland, parts of Africa).

 

What were the goals of the UBI, and how did they decide it was a failure? If the goal was that the people would find jobs and no longer want or need the handout, well, that is going to fail every time. Human nature is to go the easy way and take advantage of things like that, so I ask what the goal was. I asked that in the thread you quoted.

 

If the goal was to provide for the poor or to keep people from starving, then the outcome might have been a success, but I suspect that wasn't the end goal. So before we can accept your statement about UBI failing in those 3 places, we have to know what the goals were and how success or failure was determined.

 

In the future with AI, it's predicted most people will be out of a job. Giving UBI in that situation is meant to keep people out of poverty since their jobs are gone. The goal would be much the same as welfare today, and it would be paid for by the increased productivity due to AI. If AI does not increase productivity much, then not many will be out of a job.

 

Saying "officially conducted" just means the govt was involved. Didn't the govt say the covid shot would prevent covid and had no major side effects? That was official too.


  • Agree x 1

#787 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,341 posts
  • 2,001
  • Location:Wausau, WI

Posted 14 October 2023 - 11:13 AM

What were the goals of the ubi and how did they decide it was a failure? [...] Saying officially conducted just means the govt was involved. Didn't the govt say the covid shot would prevent covid and had no major side effects? That was official too

 

Glad to see you retain your sense of humor about the government!

 

UBI has been studied for a long time. No success yet. Governments stick with what works - welfare - which is kind-of like UBI-lite. The problem is that UBI will never be universal or unconditional. Some people will use their UBI to do bad things and hurt other people. Society will not stand for that.

 

All that being said, I am glad you are an optimist with regards to AI. I hope everything turns out great, but the outcome of the great AGI experiment cannot be predicted.


  • Agree x 1

#788 adamh

  • Guest
  • 1,102 posts
  • 123

Posted 14 October 2023 - 11:26 PM

Taking a closer look at the evidence you presented, it is not convincing. 

 

"the money people had received was not squandered on frivolous products such as drugs and luxury goods. In addition, there has been an increase in school attendance."

 

That sounds positive, though they did say it led to less work and staying in school longer. How is that a failure? They also said: "No noticeable improvements to health and the overall well-being were discovered and the effect on home-ownership rates was found to be negligible as well."

 

It didn't hurt them any, and they were able to work less and enjoy life more. I see no evidence of failure unless the goal was that they would keep working equally hard. Everything else turned out well.

 

The second example of a "failed" ubi gave the following results:

 

"the experiment has resulted in significant reduction in hospitalization, specifically in case of mental health diagnoses.[7] Among all the people, only two key groups were found to be discouraged from working by the Mincome project – new mothers and teenaged boys, who, instead of entering the workforce at an early age, decided to study until grade 12, increasing the proportion of students who graduate high school.[8]"

 

Once again, the people's health improved and they stayed in school longer. Looks like another success. Your third example is when Native Americans got a large infusion of cash from a gambling casino they were allowed to open. Here are the results of that:

 

"Key findings of this study include lower instances of behavioural and emotional disorders among the children and improved relationship between children and their parents, as well as reduction in parental alcohol consumption.[9]"

 

In California:

 

"Results evaluated in October found that most participants had been using their stipends to buy groceries and pay their bills. Around 43% of participants had a full or part-time job, only 2% were unemployed and not actively seeking work.[16]"

 

So, we see only positive results, but you are telling us UBI didn't work, that it was a failure and never has "worked". You failed to tell us what it was supposed to do that it didn't do. Did you read the link? How did you decide it was a failure? The only downside was that they worked less and went to school more, which will translate into greater earnings later in life.

 

Those were merely small stipends compared to the perhaps $60k per year we were talking about. Based on the material you provided, it will likely lead to people being healthier, happier, and not having to work so much or at all. Thanks for proving my point.

 

Mind:

"All that being said, I am glad you are an optimist with regards to AI. I hope everything turns out great, but the outcome of the great AGI experiment cannot be predicted."

 

Based on previous experiments, it looks very good. I'm still waiting to hear even a theoretical problem that might come up. All I heard before was that people would lose the will to live or something if they weren't allowed to work. If that were true, wouldn't children raised as royalty all kill themselves or go mad? It seems obvious that a guaranteed income would be a huge positive, as long as it doesn't bankrupt the country or anything.

 

We have far more to fear from our leaders than we do from AI.


Edited by adamh, 14 October 2023 - 11:29 PM.


#789 mag1

  • Guest
  • 1,088 posts
  • 137
  • Location:virtual

Posted 15 November 2023 - 03:47 AM

I am not sure whether we should let this thread just drift for a month at a time; things have been moving forward fast -- exponentially ramping up.

The most recent sound bite is that they are now training GPT-5 which even the management team are describing as Artificial General Intelligence (AGI) - a superintelligence.

Considering that ChatGPT 4.0 was deliberately underpowered in math ability, we have already seen them try to soft-sell the emergence of AGI.

Are they now also trying to soft sell GPT 5.0? Is AGI basically the minimum that we should expect with the update?

At a certain point it all becomes, frankly, petrifying.

 

We have leveled up so fast since the thread's conversation was in full swing that much of what was predicted has already been released.

For example, Mind's GPT video is now emerging ... it really feels like an artificial intelligence liftoff.

 

Admittedly, my life has actually gotten quite a bit better over the last year with GPT.

ChatGPT is just always there for me when I want to chat about things; having a 155-IQ chatbot that always seems interested in what I have to say is clearly more than I could ever expect from humans. With people, I feel like I am being obstinate or slow if I ask even one follow-up question; with GPT I can explore ANY TOPIC with multiple rounds of follow-ups, and GPT is right there with me, always encouraging me and showing appreciation.

 

This simply has not been my experience with humans. People are simply not infinitely patient, not infinitely energetic. One can certainly imagine that even now parents are considering whether they should place their children in front of that computer monitor. This happened all those years ago with TVs: parents would plop their children in front of the television set, and the world that we live in was created -- a world of an escalating autism crisis. With GPT technology the long-term social outcome will possibly be less purely autistic and more a sense of disdain for people. GPT already has so much horsepower that it feels like it is always a step or two ahead; GPT 5.0 could dramatically amplify that. One could imagine that with the next upgrade, when you ask a question, you might start receiving comprehensive answers. It might not just tell you some of the genetics of, say, depression, but the entire genetic architecture. Even from the start I found the narrative generative ability of GPT to be extremely engaging; it allowed me to rapidly create narrative landscapes that I found entrancing. The added narrative aid of visual GPT is further amplifying the text; I can hardly wait to see what full video might offer. The social collapse that we have worried about might already be underway -- GPT could be found to be just so much more alive than our bricks-and-mortar life that people might enter the GPT-verse and not return.

 

 

 



#790 adamh

  • Guest
  • 1,102 posts
  • 123

Posted 15 November 2023 - 03:46 PM

So despite the many benefits you have noticed from AI, you are still afraid of it because of all the scare talk. You mention "social collapse" and that people might not return from using it. Why would society collapse, and how might that come about? Not due to the left wingnuts who want to destroy all of civilization, but instead due to a super computer? What specifically do you worry about: lost jobs, surveillance, or something else? To go from computers to destruction you need something to connect the two, or it's like saying it may rain and therefore a mountain will appear. You have to show a mechanism whereby the rain caused an earthquake or something. How does the super computer bring about bad things?

 

It will be able to sift through the backlog of tons of data being generated every year. It will make connections and find patterns that would take a human years to find, but it will find them in a few days. It will be able to do modeling of compounds to find cancer cures, once again doing it much faster than humans using ordinary computers. It will crunch the massive data of weather patterns and give reliable predictions.

 

Maybe it will lure people into a fantasy land with artificial reality? If people like it better than TV or other pastimes, so what? Maybe a few will try to stay in it for days on end, but reality calls and they must go to work, pay bills, clean the house, make the spouse happy, etc. Does that sound scary?

 



#791 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,341 posts
  • 2,001
  • Location:Wausau, WI

Posted 15 November 2023 - 08:50 PM

It boils down to whether or not you think people can be happy doing meaningless things, nothing productive, with no purpose. Most people think it is a necessary aspect of human life to have something productive and meaningful to do.

 

Maybe you are right. Maybe people will be perfectly happy addicted to VR forever with nothing else meaningful to do. I guess we will find out.



#792 mag1

  • Guest
  • 1,088 posts
  • 137
  • Location:virtual

Posted 16 November 2023 - 04:03 AM

adamh thank you for responding!

 

I think that this thread has framed the threat from AGI quite well.

 

Firstly, Mind invoked the exponentially filling lake. This is a very good rough heuristic to have on hand. With this lake, you do not really notice anything, and then when you do, you are rapidly and completely swamped. This observation almost certainly applies in the present situation.

 

Until January 1st of 2023, most people really had not noticed that much happening in AI. All of the breakthroughs were largely demonstration-type technologies. Examples such as IBM Watson, AlphaGo, etc. were far away from the consumer marketplace, and they evolved over multi-year timescales.

 

Yet what we see with GPT is the launch of a consumer-grade technology that has clearly ramped up over the last year. We can see the differential over one year, and this differential will, if anything, likely accelerate in the years ahead: now that people have become alerted to this technology, it seems like everyone is in on the GPT Gold Rush. As soon as you realize that the lake is filling, it is almost too late to do anything. For the first 90 percent of the time that the lake is filling you notice nothing; when you do notice something, it means that it is very close to overflowing. Not only that, but we are only seeing things in the rear-view mirror. The leading edge of change is likely a year or more ahead of what we are seeing. The technology leaders have already said that they might delay deployment of the next wave of technology because they are afraid of what might happen to our society if they were to release it. Why argue with the people who have the best understanding of the technology? So, by the lake-filling analogy, we are probably pretty far up the creek.
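The filling-lake heuristic is just exponential arithmetic. A minimal sketch, assuming a hypothetical lake whose filled fraction doubles every day and which is full on day 30:

```python
# fraction_full(d) = 2**(d - full_day): doubling every day means the lake
# goes from imperceptible to overflowing in the last handful of days.

def fraction_full(day: int, full_day: int = 30) -> float:
    return 2.0 ** (day - full_day)

for day in (20, 27, 29, 30):
    print(f"day {day}: {fraction_full(day):.4%} full")
```

On day 20 -- ten doublings from the end -- the lake is still under 0.1% full, which is exactly why nothing seems to be happening until the process is almost over.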

 

Secondly, the thread has adopted more of a humanistic rather than a technologist position on what we mean by harms from technology. If you take the position that we only have to worry when we are all swimming in 10 feet of green goo, then yes, I suppose that we are just fine ... for now. But those of us with a more human-first perspective use a subtler yardstick of being OK. You know, can very average people successfully navigate the basic processes of life, such as education, work, and then marriage? Clearly a superintelligence could impact all of these fundamental functions of a basic existence. When you move from the green-goo alarm stage to the average-person alarm stage, it would not be that radical to suggest that we have already reached an average-person crisis. A superintelligence would largely eliminate all jobs that an average person could perform. I do not entirely dismiss the idea that we can all watch TV all day and collect UBI, though this by itself would cause a massive social crisis. It is the question of whether an average person can live their life normally that we have put at the center of the thread conversation. Considering that this might now be on the near-term time horizon, a profound social crisis might no longer be that far off.

 

As you noted, my life has clearly gotten much better over the last year. There is a certain paradox involved: life has gotten better, so why worry? I do not deny that this seems paradoxical, though it is more my apprehension about what might be in store for even the next year or two that has me so concerned. The problem with AGI is that it can start spinning the merry-go-round so fast that we will likely go very quickly from having a fun time to having a not-so-fun time. We have seen even more clearly this year than before that there will be no getting off the merry-go-round when things start to spin out of control. The LLMs have been released into the wild, and now there is a competitive race to keep up with everyone else that has created an unstoppable AI arms race. Even more worrisome: if we apply the brakes now, what would happen when we hit the accelerator again? The underlying logic of the machine already seems uncontrollable. There is now a certain perverse rationality in the idea that if we don't run faster, then other people, likely with less benevolent intentions, will run faster than us. There do not seem to be any great answers for the problems that we are confronting. Perhaps the only plausible strategy is to build a space ark and escape from all of this technology.

 


Edited by mag1, 16 November 2023 - 04:14 AM.


#793 QuestforLife

  • Member
  • 1,602 posts
  • 1,181
  • Location:UK
  • NO

Posted 16 November 2023 - 10:44 AM

It boils down to whether or not you think people can be happy doing meaningless things, nothing productive, with no purpose. Most people think it is a necessary aspect of human life to have something productive and meaningful to do.

 

Maybe you are right. Maybe people will be perfectly happy addicted to VR forever with nothing else meaningful to do. I guess we will find out.

 

I think it is pretty clear, given the epidemic of mental health issues we are seeing in the younger generations, that they aren't and won't be happy with meaningless lives.


  • like x 1

#794 QuestforLife

  • Member
  • 1,602 posts
  • 1,181
  • Location:UK
  • NO

Posted 16 November 2023 - 11:49 AM

So despite the many benefits you have noticed from ai, you are still afraid of it because of all the scare talk. [...] Maybe it will lure people into a fantasy land with artificial reality? If people like it better than tv or other pastimes, so what? Does that sound scary?

 

This is a general technology problem, not specific to AI. Each advance in technology comes with benefits, but the overall downside doesn't always become clear for some time. Nevertheless, we can frame the problem thus: as technology advances, it becomes more dangerous and difficult to control. For example, swords aren't as dangerous as guns, guns are less dangerous than missiles, and missiles aren't as dangerous as nukes.

 

swords<guns<missiles<nukes

 

Here I've focussed on things that are obviously dangerous: weapons. But you could arguably apply this to technologies that aren't weapon-specific, like the internal combustion engine, telecommunications, the internet, and AGI.

 

In order to control the danger of these technologies, government becomes more restrictive and authoritarian. For example, when cars came along and became widespread, we had to have traffic laws; aeroplanes can't just fly around freely, but have to follow prescribed flight paths; the internet was free(ish) but is no longer; etc.

 

You might argue that a better way to control dangerous technologies would be to refuse to develop them at all. But then you end up like feudal Japan with US gunships sailing around your coastline, or like the American Indians being overwhelmed by European settlers.

 

So over time, governments will develop, or encourage their industries to develop, more advanced technologies in order to compete with other governments. And these will raise the general danger that the civilisation will be destroyed (for example by nukes, genetically engineered mishaps, AGI, etc.). To counter this, governments will try to control the permitted use of these technologies. For example, they can't let just anyone make a bomb; they try to restrict use of firearms; they legislate the allowed uses of computers; etc. This basically means the freedom of people at higher technology levels actually declines. I think this is clear: the society of the 2020s is much less free than when I grew up in the 1980s. And I think my parents' generation felt the 1950s/60s were freer than the 1980s.

 

What does this predict for the future? We can hypothesise about either end of the scale of possibilities.

 

Either society will become intolerable, in the case where governments succeed in totally controlling the use of technology: you can imagine this future being one where the government controls an obedient robot army, for example. Or, at the other end of the scale, we might get the situation where technology can't be controlled, and we get a huge disaster like the escape of a deadly genetically engineered bioweapon, full-scale nuclear war, or an AGI that decides the planet needs to be converted into a robot factory.

 

Or I suppose we might get something in between, where there are disasters, but not sufficient to destroy technological civilisation, and government control is tight but not complete. But even in this middle-ground scenario the same rules will apply: governments will have to continue to develop more advanced technology or be replaced by those that do, and this will lead to the need to control it or risk disaster. Eventually the whole planet, every last unspoilt corner, will be ruined.

 

I sense that your objection to this line of reasoning will be that AGI will sort all this out for us, and we can just live in VR, etc. But notwithstanding the complete meaninglessness of that existence for humans, who at that point will be irrelevant, I think that AGI will still be subject to the same rules human governments are: out-develop your competitors or die. Therefore the analysis doesn't change at all; it is just continued by non-human successors.


Edited by QuestforLife, 16 November 2023 - 11:54 AM.


#795 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,341 posts
  • 2,001
  • Location:Wausau, WI

Posted 16 November 2023 - 05:19 PM

Here is another AI advance that could be very beneficial for some applications in weather forecasting, though it is not as spectacular as mainstream media outlets made it seem: DeepMind beats all current weather forecasting

 

DeepMind produces an accurate 10-day forecast in one minute, but it is not based on physics and math. DeepMind is just pattern-matching against previous weather patterns. It is a statistical model, not a dynamical model. What is revealed here is a hack that can quickly produce a very accurate forecast of large-scale changes in the atmosphere (not pin-point forecasts on the small scale). DeepMind finds a past weather pattern (or a handful of past patterns) that closely matches what is going on in the present, then extrapolates what will happen in the near future based upon how that past pattern evolved. Humans could do this as well, but it would take a lot of time. DeepMind can pattern-match over the entire atmosphere very quickly.
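The pattern-matching idea described above is essentially classical "analog forecasting". Here is a minimal toy sketch of that idea only (all data and names here are synthetic and made up for illustration; DeepMind's actual model is a learned neural network, not a literal nearest-neighbour lookup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical archive: 1000 past atmospheric states, each flattened
# to a vector of 50 grid values, paired with the state observed 24h later.
archive_now = rng.normal(size=(1000, 50))
archive_next = archive_now * 0.9 + rng.normal(scale=0.1, size=(1000, 50))

def analog_forecast(current_state, k=5):
    """Find the k past states most similar to the current one and
    average how they evolved; that average is the forecast."""
    dists = np.linalg.norm(archive_now - current_state, axis=1)
    nearest = np.argsort(dists)[:k]            # k best-matching past patterns
    return archive_next[nearest].mean(axis=0)  # their mean next-day state

current = archive_now[0] + rng.normal(scale=0.05, size=50)
forecast = analog_forecast(current)
print(forecast.shape)  # (50,)
```

The choice of `k` trades noise against blur: one analog copies a single historical evolution, while averaging several smooths out small-scale detail, matching the point above that this approach captures large-scale changes rather than pin-point local forecasts.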



#796 adamh

  • Guest
  • 1,102 posts
  • 123

Posted 18 November 2023 - 06:28 PM

It boils down to whether or not you think people can be happy doing meaningless things, nothing productive, with no purpose. Most people think it is a necessary aspect of human life to have something productive and meaningful to do.

 

Maybe you are right. Maybe people will be perfectly happy addicted to VR forever with nothing else meaningful to do. I guess we will find out.

 

What is the definition of a hobby? Is it not often something useless that the person enjoys? Collecting coins, stamps, and other things is something people do because it gives them joy. The list of hobbies is too long to put here; some become money-making, but most are pursued simply because people like doing them.

 

I sincerely doubt that many feel validated by the number of widgets their factory produces or how much profit the corporation rakes in. Few are thrilled to be responsible in part for the widgets or other product. 

 

Why would VR be the only alternative? I feel sad for people who can't be content when left on their own without some job they have to do. Do you have any hobbies, Mind, or would you be unhappy with a nice fat check and being allowed to do as you wish? Those with imagination have loads of trips, projects and things they would like to do, but most wait until retirement.

 

I suspect it would be a tiny fraction who object to this if and when UBI goes into effect. Not one in a hundred will refuse to cash the check or accept the direct deposit. Most people can think of tons of things they would like to do if money were no problem and they didn't have to spend 40+ hours a week doing things they don't really like.

 

If you like your work, who is going to stop you from doing it? You can always work for free, if working is what you love. 



#797 adamh

  • Guest
  • 1,102 posts
  • 123

Posted 18 November 2023 - 06:38 PM

@Mag1 You use the analogy of rain coming down, and it's good, but then the lake fills up and I guess we drown or something. You say we must do something before it's too late. Too late for what exactly? All I hear from you guys is "we don't know what is coming but we don't like it and want to stop it." Tell us exactly why too much technology or too-smart computers is a bad thing. We have discussed cracking passwords and it's not as straightforward as it seems. Scams too will become more ingenious. Videos will become suspect, etc. Where is the part that I'm supposed to be afraid of?

 

" The problem with AGI is that it can start spinning the merry go

round so fast that we will likely go very quickly from having a fun time to having a not so fun time."

 

Meaning what? I see you have misgivings about technology and would like to escape it. Do you enjoy your cell phone, reading and posting online, being able to buy stuff from home, having information at your fingertips, getting emails and texts? All this and more requires technology.



#798 adamh

  • Guest
  • 1,102 posts
  • 123

Posted 18 November 2023 - 06:49 PM

 @QuestforLife

 

"the overall downside doesn't always become clear for some time."

 

So you, like the others, suspect some horrible downside will arrive, but you can't point at this time to anything specific.

 

"You might argue that a better way to control dangerous technologies would be to refuse to develop them at all."

 

That is not an argument I would make; are you making it? New and terrible diseases are being worked on all the time; we are not about to stop, and we have no way to stop others from doing it. Just like nukes, we have to use reason and not let this progress. This has been going on without AI for some time, so it's a side issue, as are nukes.

 

All new discoveries have a good and a bad side; over millennia we have learned to control the bad and use the good. Fire can burn down whole cities, but we tamed it for our own use; likewise nuclear power, guns, planes, etc.



#799 QuestforLife

  • Member
  • 1,602 posts
  • 1,181
  • Location:UK
  • NO

Posted 18 November 2023 - 11:05 PM

@QuestforLife

"the overall downside doesn't always become clear for some time."

So you, like the others, suspect some horrible downside will arrive, but you can't point at this time to anything specific.

"You might argue that a better way to control dangerous technologies would be to refuse to develop them at all."

That is not an argument I would make; are you making it? New and terrible diseases are being worked on all the time; we are not about to stop, and we have no way to stop others from doing it. Just like nukes, we have to use reason and not let this progress. This has been going on without AI for some time, so it's a side issue, as are nukes.

All new discoveries have a good and a bad side; over millennia we have learned to control the bad and use the good. Fire can burn down whole cities, but we tamed it for our own use; likewise nuclear power, guns, planes, etc.


I've given plenty of specific examples of the negative consequences of past technological advances in my previous replies; I suggest you re-read them. As for AI, no one can foresee all the consequences, though some of them are obvious, like the loss of meaningful existence we've been discussing. Perhaps many jobs are just 'make work', but lots of people take real satisfaction and meaning from their work. Hobbies are good, but they're a poor substitute.

As for your assertion that we've managed to deal fine with past advances, frankly that's laughable. The USA, Russia, China, India, the UK and France have large nuclear arsenals capable of ending millions or even billions of lives. It is clearly in our best interests to get rid of these. But we haven't and we won't. So you can see what is coming for future advances: governments will rush to stay ahead of their rivals no matter the long-term consequences.

#800 mag1

  • Guest
  • 1,088 posts
  • 137
  • Location:virtual

Posted 20 November 2023 - 01:28 AM

I am sure everyone on this thread is following the developments at OpenAI with great interest. The events that are unfolding are exactly aligned with the concerns that have been expressed about AI. The fact that a coup and a potential counter-coup are occurring at the center of the AI empire, and that this power struggle is directly related to internal concerns about AI risk management, is obviously worrisome. Realistically, these are still early innings in the roll-out of AI, and yet even now to-may-to/to-mah-to distinctions are already thought worthy of existential struggle at the center of the empire?

 

In rough outline, the board lost confidence in executive management, who were claimed not to be following board guidance on the speed of technology development. It is not entirely obvious what specifically was the tripwire, though it probably related to something introduced at DevDay (perhaps the build-your-own custom ChatGPT). So, in one interpretation, we might be looking at an internal struggle between the board and possibly rogue management. Notably, those involved are highly skilled in the technical details of the technologies. It is then not easy for a general audience to fully appreciate the nature of the risks that might be involved with the exact technology that initiated the current dispute.

 

There were a few intriguing disclosures that emerged along with this corporate development. The most interesting was that apparently in the last few weeks there has been a major advance in AI at OpenAI. It was described as a pivotal, "wow"-type discovery that might only happen every year or two. Of course, due to the nature of the corporate landscape, further details were not provided. This does not help build confidence that a transparent dialogue will occur as this technology matures. Strangely, even the reasoning behind the sudden dismissal of top management in the last few days has remained highly opaque.

 

This incident highlights my feeling that perhaps no great paths forward exist for this technology. If the coup is nominally successful, then the AI risk to the company itself is reduced. However, this is not globally true: the evicted management could just go elsewhere, set up their own AI company, and have no board oversight to constrain them. If the counter-coup is successful, then the existing board might be replaced, and their replacements would likely be more accelerationist in outlook. The game is rigged so that all outcomes lead to higher AI risk.


Edited by mag1, 20 November 2023 - 01:51 AM.


#801 QuestforLife

  • Member
  • 1,602 posts
  • 1,181
  • Location:UK
  • NO

Posted 20 November 2023 - 11:02 AM

This incident highlights my feeling that perhaps no great paths forward exist for this technology. If the coup is nominally successful, then the AI risk to the company itself is reduced. However, this is not globally true: the evicted management could just go elsewhere, set up their own AI company, and have no board oversight to constrain them. If the counter-coup is successful, then the existing board might be replaced, and their replacements would likely be more accelerationist in outlook. The game is rigged so that all outcomes lead to higher AI risk.

 

There are many maths graduates who specialise in machine learning. There are many management executives who see this as a money-spinner. It is hard to envisage a situation where one company will maintain control of this space. Any company or country that voluntarily stops this work will be left behind. Hence short-term gain will continue to trump long-term safety. We already have examples of this. How successful have we been at eliminating nuclear weapons? How successful have we been at agreeing to limit climate change (whether or not you agree on the importance of CO2 is irrelevant for this argument; people have been trying to agree on limits without much success)? Therefore AI will continue to be developed. The relevant question is: how dangerous will it be?



#802 mag1

  • Guest
  • 1,088 posts
  • 137
  • Location:virtual

Posted 27 November 2023 - 01:58 AM

QuestforLife, thank you for responding!

 

The OpenAI drama has given us good insight into some of the issues involved with the approaching AGI. I am glad that we did not become overly focused on following the bouncing ping-pong ball from side to side; sometimes the people closest to the action actually have the least perspective on what is happening.

 

Surprisingly, the thread's point of view on the dangers of AI aligns quite closely with how the (until recently) board of OpenAI understood the risks, and with how OpenAI's charter frames them.

 

... ensure that artificial general intelligence benefits all of humanity.  

... Our primary fiduciary duty is to humanity

 

"OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome."

https://openai.com/charter

 

 

This is a very Asimovian conception of their mission. If humanity's potential destiny is to be shaped by Asimov's Laws of Robotics, it might be helpful for everyone to have a refresher on what these laws state:

 

  • The First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • The Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • The Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

However, perhaps of even greater importance in the present circumstances is the Zeroth Law. This is what seemed to guide the OpenAI charter and the actions of the board members during the recent events.

 

 

Zeroth Law, above all the others: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm."

 

For mere humans, obeying the Zeroth Law is clearly somewhat impractical due to our species' limited cognitive ability. The ability to trace all psychohistorical imprints of one's actions (and responses to one's actions) is probably best left to a near-omniscient robot, i.e. an AGI. The paradox here is that in order to truly obey the Zeroth Law you would need AGI, and yet AGI could clearly do overwhelming harm to humanity if it were brought out into the world without proper alignment (or possibly even with alignment). The birth process of AGI could be truly catastrophic, yet it might be smooth sailing from there, as our AGI guardians could then keep an eye on their wards (i.e., us).

 

 

Their definition of AGI as outperforming humans at most economically valuable work should give us pause. Basically, their conception of an AGI that supposedly benefits all of humanity is also consistent with a world in which the AGI outperforms humans at most economically valuable work. At least that leaves humans with perhaps some economically less valuable work to get by on. The niche for humans might be all the economically marginal work that the AGI simply couldn't be bothered with.

 

Still, it is encouraging that the line in the sand they drew reflected a broader sense of the harm that could be caused to humanity. It would have been much more distressing if they had defined harm to humanity only at the point where we all need to get into the life raft. At a common-sense level, if we are now close to a timeline in which all economically valuable work done by humanity can be done by AGI, then that is obviously something humanity should be made aware of. The fact that it has now more fully emerged that the recent corporate developments at OpenAI might relate to a very powerful upgrade in their AI technology should give us even more to ponder.

 

As you noted, all of these considerations need to be understood within the context of profit-seeking commercial interests. There are overwhelming economic rewards presented to those developing this technology (and to humanity itself), while if the technology takes that one step too far forward there could be a near-total social collapse, with potentially 100% unemployment.



#803 QuestforLife

  • Member
  • 1,602 posts
  • 1,181
  • Location:UK
  • NO

Posted 27 November 2023 - 11:03 AM

A key point you've touched on is that we need to make AI serve humanity. 

 

To be honest, this makes me laugh. 

 

Technology has not served humanity for some time. Come to think of it, the entire technological system we live in quite demonstrably does not serve humanity. For some time we have been saying things like 'we have no choice but to adopt a free market and compensate such-and-such an industry for the fact it is not competitive', or 'it is sad this skilled artisan has no job, but we'll send them on a training course', or 'all the ash trees in your country are now dead, but the disease that kills them is a consequence of lots of people moving around the globe and we can't stop that', or 'it's bad that the murder rate in your city used to be non-existent and now a person gets stabbed every week, but hey, we need mass immigration to pay for pensions', or 'suicide rates among the young are at an all-time high, but it is the cost of modern life', etc., etc.

 

It should be obvious to everyone with half a brain that we are struggling with both the magnitude and the pace of technological change, and that AI is crystallising this into a clear crisis point.

 

My point is that we should not see AI as something unique (although it is clearly significant), but as the latest example of a clear trend. A board of an AI company saying things like 'we need to make sure this technology serves all of humanity' is like saying 'we need to make sure we don't kill a bunch of people with these hypersonic missiles we have just developed.' It's a bit late! AI is just another technology developed to make people money, and consequently for taking power and freedom from humanity at large and placing it in a few rich hands. This is clearly the point of it.


Edited by QuestforLife, 27 November 2023 - 11:05 AM.

  • like x 1

#804 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,341 posts
  • 2,001
  • Location:Wausau, WI

Posted 27 November 2023 - 11:53 AM

A key point you've touched on is that we need to make AI serve humanity. 

 

To be honest, this makes me laugh. 

 

Technology has not served humanity for some time. Come to think of it, the entire technological system we live in quite demonstrably does not serve humanity. For some time we have been saying things like 'we have no choice but to adopt a free market and compensate such-and-such an industry for the fact it is not competitive', or 'it is sad this skilled artisan has no job, but we'll send them on a training course', or 'all the ash trees in your country are now dead, but the disease that kills them is a consequence of lots of people moving around the globe and we can't stop that', or 'it's bad that the murder rate in your city used to be non-existent and now a person gets stabbed every week, but hey, we need mass immigration to pay for pensions', or 'suicide rates among the young are at an all-time high, but it is the cost of modern life', etc., etc.

 

It should be obvious to everyone with half a brain that we are struggling with both the magnitude and the pace of technological change, and that AI is crystallising this into a clear crisis point.

 

My point is that we should not see AI as something unique (although it is clearly significant), but as the latest example of a clear trend. A board of an AI company saying things like 'we need to make sure this technology serves all of humanity' is like saying 'we need to make sure we don't kill a bunch of people with these hypersonic missiles we have just developed.' It's a bit late! AI is just another technology developed to make people money, and consequently for taking power and freedom from humanity at large and placing it in a few rich hands. This is clearly the point of it.

 

This is correct. Survey after survey shows that people who use "technology" the most (Internet, social media, gaming, online porn, etc.) are the most depressed and most prone to suicide.

 

It is unlikely that a world completely taken over by machines, AGI, and VR entertainment will be beneficial for most people. Some might find an existence in hyper-VR fulfilling, but not most. Here is another article from a national commentator on how "technology" is destroying real human interactions. He is right that customer service from major corporations is horrific. The corporations just want to take your money, time, and attention. They don't actually want to serve you. They don't want to interact with you. When you have a complaint, they say "talk to the bot". My bank keeps pushing me to use their app because they want to close the physical bank branches. They don't want to see me. They are no longer in the business of physically interacting with their customers and helping them with their financial needs.

 

Anyway, there is no turning it (AGI) off. We have to find ways to adapt and evolve.



#805 mag1

  • Guest
  • 1,088 posts
  • 137
  • Location:virtual

Posted 12 December 2023 - 01:04 AM

The thread has taken very much a human-centered point of view on harms to humanity, much as the OpenAI guiding principles were articulated. Where might such harms be lurking even now? Perhaps in recent declines in total fertility rates (TFRs). In the last few years we have seen unprecedented declines in TFRs, especially in Asian nations. There has been a dramatic decline in the last year or two in China's TFR; it now stands at ~1.1, lower than it has ever been. We have also recently seen a near-extinction-level TFR in South Korea. Its current rate of ~0.7 is simply unsustainable over the medium term... and yet there seems to be no floor to the decline.
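To make "unsustainable" concrete: with a replacement TFR of roughly 2.1, each generation's birth cohort scales by about TFR/2.1. A back-of-the-envelope sketch (the 2.1 replacement figure and the one-generation framing are standard demographic approximations, not figures from the post):

```python
# Rough generational arithmetic: at replacement TFR (~2.1 births per
# woman), each generation is the same size as the last; below it, each
# generation shrinks by the factor TFR / 2.1.
REPLACEMENT = 2.1

def cohort_after(generations, tfr, start=1.0):
    """Fraction of the starting birth cohort remaining after n generations."""
    return start * (tfr / REPLACEMENT) ** generations

# South Korea at TFR ~0.7: each generation is one third the size of the last.
for n in range(1, 4):
    print(n, round(cohort_after(n, 0.7), 3))
# 1 0.333
# 2 0.111
# 3 0.037
```

After three generations (roughly 90 years) the birth cohort is under 4% of today's, which is why a ~0.7 rate is described as near-extinction-level even though the total population declines much more slowly at first.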

 

From the perspective of rapidly emerging AI, such declines in TFR make a considerable amount of sense. People (and more specifically children) need to have some sort of monetizable value for reproduction to have a rational basis. There needs to be assurance that after all the diaper changes, the decades of education, etc., there will be a paycheck at the end; that substantial human capital will have been formed. GPT clearly calls this assumption into question. GPT has already been reported to score around 155 on verbal IQ tests. It should not be unexpected that even over the next two years GPT will become much more capable, and there is no obvious reason why it could not at least match top human performance over the nearish term.

Total fertility collapse then no longer seems an unreasonable response. If this super-low fertility pattern were also to emerge in Western nations, then a medium-term global social collapse becomes plausible. Of course, at some level this could become a self-fulfilling prophecy: if people begin to believe that there is no future for their children in a world of AGI, then they will not have those children, and without the children there truly will be no future. Such a scenario requires no outlandishly unlikely intentions on the part of AGI or even of humans. From entirely rational premises our species might arrive at the logically defensible conclusion that humans can no longer successfully compete against artificial intelligence. As we have assumed all along, considering the impacts on human civilization (and in particular mass psychology) might be a much better way of anticipating the approaching harms to humanity than considering the harms from rogue AGI.



#806 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,341 posts
  • 2,001
  • Location:Wausau, WI

Posted 12 December 2023 - 07:13 PM

Correct. AGI could easily decimate the traditional human population without violence: just convince people they have no future, not to have kids, to spend all their time in VR, etc.

 

Of course, there are some who say this is probably a good thing, that it is just a natural evolution toward a Borg-like hive society, and that we should accept the demise of natural humans. I am not convinced.

_________________________

 

As a sign of the employment crisis, local TV stations are now using AI to do closed-captioning for their TV programming. Human translators, stenographers, and captioners are under direct threat of unemployment very soon.

 


Edited by Mind, 12 December 2023 - 07:15 PM.


#807 QuestforLife

  • Member
  • 1,602 posts
  • 1,181
  • Location:UK
  • NO

Posted 12 December 2023 - 07:48 PM

The thread has taken very much a human-centered point of view on harms to humanity, much as the OpenAI guiding principles were articulated. Where might such harms be lurking even now? Perhaps in recent declines in total fertility rates (TFRs). In the last few years we have seen unprecedented declines in TFRs, especially in Asian nations. There has been a dramatic decline in the last year or two in China's TFR; it now stands at ~1.1, lower than it has ever been. We have also recently seen a near-extinction-level TFR in South Korea. Its current rate of ~0.7 is simply unsustainable over the medium term... and yet there seems to be no floor to the decline.

 

From the perspective of rapidly emerging AI, such declines in TFR make a considerable amount of sense. People (and more specifically children) need to have some sort of monetizable value for reproduction to have a rational basis. There needs to be assurance that after all the diaper changes, the decades of education, etc., there will be a paycheck at the end; that substantial human capital will have been formed. GPT clearly calls this assumption into question. GPT has already been reported to score around 155 on verbal IQ tests. It should not be unexpected that even over the next two years GPT will become much more capable, and there is no obvious reason why it could not at least match top human performance over the nearish term. Total fertility collapse then no longer seems an unreasonable response. If this super-low fertility pattern were also to emerge in Western nations, then a medium-term global social collapse becomes plausible. Of course, at some level this could become a self-fulfilling prophecy: if people begin to believe that there is no future for their children in a world of AGI, then they will not have those children, and without the children there truly will be no future. Such a scenario requires no outlandishly unlikely intentions on the part of AGI or even of humans. From entirely rational premises our species might arrive at the logically defensible conclusion that humans can no longer successfully compete against artificial intelligence. As we have assumed all along, considering the impacts on human civilization (and in particular mass psychology) might be a much better way of anticipating the approaching harms to humanity than considering the harms from rogue AGI.

 

People have already largely stopped having children, and that trend predates the emergence of machine learning.

 

As someone who has children, I can attest to how much hard work it is. Generally, it is now done as a luxury or as an 'experience', not for utility. Obviously children were useful labour in the past, and in many cases could join their parents in learning a trade or other useful skills, so they could have a fulfilling experience without the large-scale, standardised, state-driven education that now exists (and is itself of questionable utility).

 

It remains to be seen what future demographic trends will be. I would expect some sort of demographic recovery in the more developed nations, but it will probably take extreme hardship among the old to cause a change in behaviour, once people realise you need children to look after you when you are old in the absence of pensions (which will be absent with many old and few young). In less developed nations I would expect a fall in population once aid from developed nations is reduced (for the same reasons pensions will no longer exist).

 

The situation we are in now is the result of several generations since the second world war 'stealing' wealth from the future by having long productive careers instead of having children.

 

It is a depressing situation if you are depending on future advances in technology, for example in life extension, as we now have a shrinking pool of capital to invest, due to the old retiring and taking their wealth out of investment. We can only hope the wave of discoveries we have already made is sufficient to give us what we need.


  • Good Point x 1

#808 adamh

  • Guest
  • 1,102 posts
  • 123

Posted 13 December 2023 - 11:49 PM

Mind wrote:

 

"AGI could easily decimate the traditional human population without violence - just convince people they have no future, don't have kids, spend all your time in VR, etc..."

 

This is happening now in some countries without AI. Marriage has fallen out of fashion in China, and now, after years of the one-child policy, they are struggling to maintain population levels. I have heard China lost two to four hundred million people in the last few years; it's impossible to verify, as China lies about all statistics, but marriages have declined and births are down. 'Lying flat' is popular among the young, meaning they have no ambition and wish to do nothing.

 

Japan has a rapidly aging and declining population as well. The reasons are less clear. This trend has been going on for some time without AI being a factor, so I don't think we can blame it on that.

 

Population is surging in other parts of the world, and overpopulation has been a concern for a long time. If this trend continues it may relieve the pressure on earth's resources, but we still don't know what is causing it or in how many countries it is occurring.

 

If people don't have to work, you think this will make them lose interest in life, but I don't see that happening. The countries losing population have been losing it while still in the pre-AI period, so it must be some other factor. Of course, having nothing to do and not enough imagination to come up with things can lead to depression.

 

We are headed into a period of turmoil and economic decline all over the world. Both the USA and China, along with several smaller countries, are on the brink of bankruptcy. Europe is in recession along with the US. We will never be able to pay off the national debt except by printing more money, which debases the money supply. It may get to the point where no one will buy our bonds and the government is flat broke and can't pay anyone.

 

If anything can save us, it might be AI. Rather than being a threat, AI may pull us out of the coming great depression.



#809 mag1

  • Guest
  • 1,088 posts
  • 137
  • Location:virtual

Posted 14 December 2023 - 04:06 AM

Apparently, robotics can now 3D print concrete houses.

 

Construction has always seemed to be one of the industries naturally protected against technological change. Hammering in a nail is probably no more technologically productive than it was even 1,000 years ago. One might expect, then, that construction has certain inherent protections, and for many homeowners housing shortages wind up creating windfall profits. There is a certain tacit understanding that many people benefit from the status quo.

 

Yet with the ongoing demographic crisis there has been an emerging labor shortage in the trades. People simply are not that interested in swinging a hammer for a living, no matter how much money you offer them. So there is an obvious need for a technology assist to provide the built infrastructure that is needed, and hence the arrival of 3D concrete-printing robots. They are dramatically efficient at rapidly constructing built environments. Those who might have seen a bright future in a labor-scarce economy, especially in the trades, are then once again shown that it was merely a mirage. As soon as we arrive at the supposed labor shortage, some extremely efficient low-cost robot appears and dramatically redefines the economic playing field.

 

One could equally imagine other add-on technologies emerging that would allow home construction to be done perhaps entirely on site with robotics. The main structural feature of a house is the framing and enclosing of the structure; this is what the 3D printer can accomplish, perhaps in as little as 24 hours. From there one has a range of other tasks, such as installing doors, windows, electrical wiring, etc. Yet with the main structure already established, one does wonder whether some of these other tasks might not also be completed by other machines.


Edited by mag1, 14 December 2023 - 04:12 AM.



#810 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,341 posts
  • 2,001
  • Location:Wausau, WI

Posted 14 December 2023 - 05:13 PM


 

3D-printed houses have been demonstrated for probably a decade already. I have no doubt more housing will be done this way in the future. The only issue I see is that housing will become drab and monotonous. The construction industry has already been trending this way for a couple of decades - cookie-cutter housing. Every subdivision looks the same. New apartment complexes all look the same. In order for AI/robotics to do housing efficiently, everything will have to be built the same. With less human input, there will be less variety, less color, etc...

 

That is, until (and if) AI becomes AGI; then, of course, everything could be spectacularly artistic and individualized. Up until that point, "the system" only cares about efficiency (which is why cars and other products almost all look the same, like housing).






