  LongeCity
              Advocacy & Research for Unlimited Lifespans



AI soars past the Turing test

chatgpt turing test

123 replies to this topic

#91 adamh

  • Guest
  • 1,102 posts
  • 123

Posted 25 February 2024 - 08:52 PM

It's nice discussing things with you, Mind, but you never tackle the tough questions and just make a general statement. For example, no answer to the following:

 

Isn't that a good thing? Robots killing robots instead of humans?

 

How is it that "we could all end up dead"?

 

My impression of ChatGPT and the like is that they are not very intelligent. This is true of computers in general: good at processing tons of data and following instructions, but no original thought. AI in its present form just seems to add another layer of processing, making it able to follow more complex rules and giving the appearance of human thought at times.

 

Nobody, not you or anyone else, has explained how a machine could become "evil" and have a "desire" to do harm. Don't they simply follow their program? You can program a robot to shoot guns and fire rockets, etc., but it's no more evil than a gun that needs a trigger pull. That's why I constantly scoff at the anthropomorphizing that goes on: "rogue" robots, "killer" robots, and so on, implying consciousness and will independent of any programming.



#92 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,336 posts
  • 2,001
  • Location:Wausau, WI

Posted 26 February 2024 - 12:10 AM

If you check out the article about robots on the battlefield (linked in the previous post), it specifically mentions emergent behavior as just one way that things could go awry (military commanders are concerned about this as well). This is a common phenomenon in nature and in AI. Once a system reaches a certain level of complexity and interaction (beyond the mostly linear prediction/comprehension capabilities of the human mind), new behaviors can "emerge" that were not anticipated.
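Emergence is easy to demonstrate in miniature. Conway's Game of Life is the classic toy example (my own illustration, not from the linked article): the rules only say when a single cell lives or dies, yet a "glider" pattern travels across the grid, a behavior stated nowhere in the rules.

```python
from collections import Counter

def step(cells):
    """One Game of Life generation over a set of live (x, y) cells."""
    # Count how many live neighbors each nearby cell has.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {c for c, n in neighbor_counts.items()
            if n == 3 or (n == 2 and c in cells)}

# A "glider". The rules above say nothing about motion, yet after 4
# generations the same five-cell shape reappears shifted by (1, 1):
# movement "emerges" from purely local rules.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)

shifted = {(x + 1, y + 1) for (x, y) in glider}
print(state == shifted)  # True
```

Scale that up from a five-cell grid to swarms of interacting armed machines and you get the kind of unanticipated behavior the commanders are worried about.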




#93 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,336 posts
  • 2,001
  • Location:Wausau, WI

Posted 05 March 2024 - 08:40 PM

Interesting article about polling in the age of AI. Notably, simpler captchas/questions can now ferret out AI/bots: current AI is verbose and uses unnecessary superlatives when describing a simple picture.
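As a toy sketch of that detection idea: flag a respondent as a likely bot when its description of a simple picture is long-winded or packed with superlatives. The word list and thresholds below are invented for illustration, not taken from the article.

```python
# Hypothetical superlative list; a real screener would need a much larger one.
SUPERLATIVES = {"stunning", "breathtaking", "magnificent", "vibrant",
                "exquisite", "mesmerizing", "delightful", "remarkable"}

def looks_like_ai(answer, max_words=15, max_superlatives=0):
    """Return True if the answer is suspiciously verbose or flowery."""
    words = answer.lower().split()
    superlative_count = sum(1 for w in words if w.strip(".,!") in SUPERLATIVES)
    return len(words) > max_words or superlative_count > max_superlatives

print(looks_like_ai("A dog on a couch."))  # False
print(looks_like_ai("A breathtaking portrait of a truly magnificent golden "
                    "retriever lounging regally upon an exquisite couch."))  # True
```

A crude heuristic, obviously, and one the bots will learn to dodge; but for now, brevity is surprisingly human.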



#94 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,336 posts
  • 2,001
  • Location:Wausau, WI

Posted 14 March 2024 - 07:27 PM

A couple examples of the singularity-like explosion of AI and robotics.

 

Covariant is bringing LLM-type reasoning and conversation capabilities to factory robots.

 

The Figure robot shows off some amazing actions while conversing with a person. Figure started a mere 18 months ago, and now it arguably has one of the most advanced robots. What really makes it seem more "real" is the voice program, which makes the robot seem much more human and relatable.

 

Also,

 

AI is getting very close to being able to reprogram/improve itself. Cognition's AI can write, compile, and debug programs all on its own.

 

Graphic artists, digital designers, and others who do the difficult, meticulous work of producing beautiful media might be some of the first to lose their jobs, given how fast Sora (from OpenAI) is progressing.

 



#95 adamh

  • Guest
  • 1,102 posts
  • 123

Posted 15 March 2024 - 08:44 PM

In your first post you seem to say "we don't know what will happen, so let's be afraid" without giving any reasons beyond saying things 'could' go wrong.

 

In your second post you point out how it's easy to detect AI, at least for now. I have found zero intelligence in the somewhat primitive ChatGPT I have to work with. It can only repeat what it has been told and apply the simple rules it was given. The main niche for AI, at least as it stands today, is digesting the mountains of data that research produces. One example I saw recently: AI was able to look at hundreds or thousands of X-rays and MRIs and detect abnormalities or cancer much more accurately than humans, and in much less time.

 

In your third post you bemoan the fact that AI is improving and some people might lose their jobs. As a matter of fact, AI can aid artists and designers. Whether you want something as simple as a support wall in a structure or as complex as a reactor design, you simply start with the templates the program gives you, plug in your specifications, and you are well into designing it. If AI evolves even more, you could simply tell it what you need and it would crank it out. For now it's like a super helper that does all the boring and tedious work. Why do you wish boredom and tedium on us?

 

Artists too will benefit. AI needs at minimum a prompt telling it what to do, and the results look very AI-ish. Maybe you want a star-like figure in a portion of your painting, but not a typical star with points; maybe you want it to look a little like a splash, or to morph and change, pulsate, whatever. Then you modify it to your taste: colors, shimmer. This can be very artistic, and it does much of the tedious work in art production. Da Vinci and all the great artists had assistants who filled in areas that were not as important, perhaps in the background. Many of them went on to become recognized artists in their own right.

 

So, I encourage you and others not to be so morose, and to look at the likely positive things it can do. We have barely scratched the surface of the benefits to come, yet people put forward comic book theories of evil killer robots. AI will usher in an age of prosperity never seen before. It will make the industrial age look like a feeble warmup.



#96 pamojja

  • Guest
  • 2,921 posts
  • 729
  • Location:Austria

Posted 16 March 2024 - 03:27 PM

yet people put forward comic book theories of evil killer robots.

 

Comic book theories? Don't you realize that killer robots, drones, are a decisive and growing force in real, down-to-earth Ukraine, and have been causing countless real deaths for years already?
 

 



#97 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,336 posts
  • 2,001
  • Location:Wausau, WI

Posted 16 March 2024 - 03:59 PM

Comic book theories? Don't you realize that killer robots, drones, are a decisive and growing force in real, down-to-earth Ukraine, and have been causing countless real deaths for years already?
 

 

Killer autonomous drones were also used in the Armenian conflict, to kill humans/Armenians, not other robots. We are not in comic book land anymore.



#98 adamh

  • Guest
  • 1,102 posts
  • 123

Posted 16 March 2024 - 04:40 PM

The point I was making is that these devices are not sentient; they have no desire to do harm or good. What we interpret as good or evil is the machine simply following its instructions. The evil ones are those who program the machines of destruction.

 

If you cling to the "evil" hypothesis, then a simple land mine is an evil robot. It lies in wait, and when a machine or human activates it, it follows its instructions and blows up. Is that an evil device, or is it simply an object that does what it was made to do? Guns kill people, but it's the ones who use the guns who are doing the killing.

 

Yes, autonomous drones exist, but they are similar to a rocket. The drone or rocket is sent to destroy something; we aim the rocket at a certain spot, but the robot sometimes selects its own target. An AI-powered drone is scary, but what I reject is the claim that it's evil and somehow "decided" to kill and do harm on its own. That is a comic book theme.

 

Should we get rid of all our potentially evil machinery? No, because there are bad people, and they will steal what we have if not pushed back. A defenseless country is begging to be invaded and taken over. We need to have machinery that will do very bad things... to the enemy. The same logic applies to nukes, deterrence being the benefit of having them.


  • Good Point x 2

#99 pamojja

  • Guest
  • 2,921 posts
  • 729
  • Location:Austria

Posted 17 March 2024 - 12:10 PM

We need to have machinery that will do very bad things... to the enemy. The same logic applies to nukes, deterrence being the benefit of having them.

 

So will both sides. So far, gain-of-function research has only brought up a chimera about as harmless as the flu, and today's drones aren't infallible killer machines yet. All that changes with AI. And the former will probably bring the decision in this so-far-only-proxy world war between NATO and the eastern states, because its origins are so difficult to track.

 

 

A defenseless country is begging to be invaded and taken over.

 

The last time I checked, though already a decade ago, besides the superpowers, two nations' armies stood out in man- and machinery-power: Syria and Ukraine. On the contrary, I ironically feel sort of glad living in a neutral non-NATO state just a mile from the Swiss border. A state where probably the only thing worth destroying is its international infrastructure as a member of the EU, but nothing else really worthwhile.


Edited by pamojja, 17 March 2024 - 12:11 PM.


#100 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,336 posts
  • 2,001
  • Location:Wausau, WI

Posted 17 March 2024 - 01:34 PM

AI-enabled fraud is expanding rapidly around the world. As I mentioned earlier, elections are going to be FUBAR due to AI. I can certainly see "news" organizations like CBS, ABC, BBC, AP, Reuters, and NBC spreading fake videos without checking their sources.



#101 adamh

  • Guest
  • 1,102 posts
  • 123

Posted 19 March 2024 - 01:34 AM

Lol, Mind, I have to agree this time, but with a caveat. The so-called "news" organizations you mentioned would gladly run fake AI content if it was harming the Republicans. If it was attacking Biden, well, they would instantly call it false.

 

Elections in the USA have been FUBAR for some time now, perhaps you didn't notice? Particularly in 2020 and 2022. It is highly likely they will cheat again. Why not, it worked great before, might as well try it again. Has the GOP learned anything, or are they just fake opposition? I don't care for the GOP either.

 

pamojja

"today's drones aren't infallible killer machines yet. All that changes with AI. And the former will probably bring the decision in this so-far-only-proxy world war between NATO and the eastern states, because its origins are so difficult to track."

 

The Russians are bringing "decision" to the battlefield now. The war should be over by summer. There is no "who will win"; that part is already understood, even by the West. The origins are not difficult to track unless you believe the West's propaganda.

 

"A defenseless country is begging to be invaded and taken over." Me

 

"The last time I checked, though already a decade ago, besides the superpowers, two nations' armies stood out in man- and machinery-power: Syria and Ukraine"

 

So the USA and China didn't count? Ukraine was not invaded because it was defenseless. It had a large NATO-trained army equipped with the latest weapons. It was invaded because it was constantly shelling the Donbas region, which had mostly Russian speakers. Do you deny that? Russian spies found that they were massing large forces and were planning to attack those regions directly.

 

Despite the Western propaganda, Russia has no need for more territory. They have more territory now than the entire land mass of the surface of the moon. They have vast areas with very few people and lots of natural resources. They need people, not land.



#102 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,336 posts
  • 2,001
  • Location:Wausau, WI

Posted 20 March 2024 - 04:25 PM

Interesting article from the Nvidia CEO about AGI (human-level intelligence) being 5 years away. Interesting because they keep moving the goalposts. AI can already pass the Turing Test with flying colors, and score in the 99th percentile on every other type of college-level exam. AI can already program faster and better than humans. AI can already create high-quality video, audio, and prose faster than humans. AI can beat humans at any (mental) game. Now it seems none of this really proves that AI is anywhere near human-level intelligence. Pretty soon AI will do everything better than humans, except maybe solve the most difficult theoretical physics problems... and people will say, SEE! AI is nowhere near human level. Lol.

 

Some people are not concerned with the dangers of AI because current AI can't spread or control things without the aid of human programmers or high-bandwidth Internet. Well, problem solved. Here we have two robots explaining and showing each other how to complete tasks. Now if a killer robot figures out how to kill a human on its own, it can just show another robot how to do it. No need for the Internet. Now if a factory robot figures out a task, there is no need for a human to try and program or teach the other robots; they can just teach each other.



#103 adamh

  • Guest
  • 1,102 posts
  • 123

Posted 25 March 2024 - 09:41 PM

Sigh, you will never let go of the killer robot theme will you? 

 

"Now if a killer robot figures out how to kill a human on its own,"

 

You still have to explain why a piece of machinery would want to kill a human. I've asked before, and you always run from the question.



#104 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,336 posts
  • 2,001
  • Location:Wausau, WI

Posted 26 March 2024 - 05:28 PM

I am not talking about "a piece of machinery". I am talking about true AGI. We cannot predict what its motivations will be, just as a bacterium can't predict what a human will do.

 

As far as "a piece of machinery" goes, automated weapons were already killing people almost two decades ago.



#105 adamh

  • Guest
  • 1,102 posts
  • 123

Posted 26 March 2024 - 10:57 PM

Sometimes I think you are trolling

 

"We cannot predict what its motivations will be, just as a bacterium can't predict what a human will do."

 

 



#106 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,336 posts
  • 2,001
  • Location:Wausau, WI

Posted 29 March 2024 - 07:51 PM

What does everyone think about the new ASI organization?



#107 adamh

  • Guest
  • 1,102 posts
  • 123

Posted 29 March 2024 - 10:14 PM

They start out with this, which I think is true and also something we need to get over:

 

"Humans are hardwired to focus more on the existential threats from AGI than on the benefits."

 

It seems to be a primordial fear of the unknown. We even have a saying for it, 'better the devil you know than the devil you don't', so people will put up with all sorts of unpleasant, difficult, and even dangerous situations simply because they are familiar. The solution is often something new and strange, so people are fearful because it's unknown and might be worse.

 

This primitive, superstitious feeling is usually expressed in the belief that the machines, or at least the programming, will turn "evil" and develop desires to kill; you see the plot in many sci-fi movies. Normally intelligent people can easily fall victim to this way of thinking and become fearful. We can explain that the AI just follows the rules it is given to solve the problems presented to it. It has more rules and processes, which allow it to mimic human conversation.

 

Anything that seems to talk instantly seems alive and intelligent. Natives on distant islands heard the voices of people they knew in other towns and thought those people were inside the phone. Many to this day do not want their photo taken because they think their spirit will be captured and never be free. People still believe machines can develop independent will and turn evil.

 

It's not just the out-of-touch or uneducated who are affected by this. People develop phobias and become very upset over imaginary problems every day. If it's not AI, it's the Russians are coming, the aliens are coming...



#108 adamh

  • Guest
  • 1,102 posts
  • 123

Posted 06 April 2024 - 03:53 PM

This will cheer you up, Mind. Robots are being used more and more on the battlefield. Russia started using an autonomous ground robot that looks like a small tank; it has automatic grenade launchers, possibly a gun (though it wasn't mentioned), and of course vision and sound capabilities. It is armored, though not as heavily as a tank, and it can go where a soldier would be in great danger and attack locations. It can also carry ammo, food, and water.

 

One day, instead of human armies, we will have fleets of air, water, and ground drones. They will battle each other with rockets, artillery, FPV drones, and everything else. When one side wins, the other side has to agree to surrender terms. Few if any humans will be killed.



#109 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,336 posts
  • 2,001
  • Location:Wausau, WI

Posted 10 April 2024 - 04:24 PM

Out of Japan, a call for quick regulation of AI, or we risk a total collapse of the current social order. Glad I am not the only one trying to think about and head off the potential bad scenarios ahead.

 

I mentioned earlier that future elections are probably going to be FUBAR with AI manipulation of media, something that could wreck a lot of the "social order".


Edited by Mind, 10 April 2024 - 04:51 PM.


#110 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,336 posts
  • 2,001
  • Location:Wausau, WI

Posted 12 April 2024 - 06:20 PM

This will cheer you up, Mind. Robots are being used more and more on the battlefield. Russia started using an autonomous ground robot that looks like a small tank; it has automatic grenade launchers, possibly a gun (though it wasn't mentioned), and of course vision and sound capabilities. It is armored, though not as heavily as a tank, and it can go where a soldier would be in great danger and attack locations. It can also carry ammo, food, and water.

 

One day, instead of human armies, we will have fleets of air, water, and ground drones. They will battle each other with rockets, artillery, FPV drones, and everything else. When one side wins, the other side has to agree to surrender terms. Few if any humans will be killed.

 

Not really cheerful, sorry.

 

Russia is using their robots to kill people, not other robots.

 

Maybe in the future robots will fight robots, but there is no conquering a country unless the people are subjugated. Who cares if your robots kill your enemy's robots? That doesn't mean you rule the country. People still have to acquiesce. If they don't, are you just going to let them be? Lol. The robots will be sent in to kill people until they surrender.

 

In addition, even if the robots are still at least minimally controlled by humans, why not kill the people controlling the robots? It would likely be a much quicker way to win the war.

 

If AI is acting autonomously, then it might figure out the best way to win the war is to target the people behind the enemy robots (the programmers/manufacturers).

 

Israel is currently using AI to decide who is targeted in Gaza.

 

We can hope that AI is used for good instead of killing people, but that will take an honest discussion of the potential pitfalls, otherwise evil people will use it for nefarious purposes.


Edited by Mind, 16 April 2024 - 05:23 PM.


#111 adamh

  • Guest
  • 1,102 posts
  • 123

Posted 16 April 2024 - 03:47 PM

 

Israel is currently using AI to decide who is targeted in Gaza.

 

 

Do you believe that propaganda? Israel has made it clear it wants all Palestinians dead or gone. This would have you believe they can distinguish between Hamas and civilians. They can't, and they don't really care. To them, all need to be eliminated. When they bomb whole cities, destroying everything, do you think AI had anything to do with it?

 

For your information, Russia has always tried to avoid hitting civilians. They do kill a lot of enemy soldiers. Ukraine, on the other hand, hides behind civilians. They set up howitzers next to schools, apartment buildings, and shopping centers. Then the return fire hits the schools, etc.

 

There is no need to target civilians; that is terrorism. When the army is defeated, the people have no choice but to accept what happened. If the wars of the future are fought with robots, then no one gets killed.


  • Disagree x 2
  • Needs references x 1

#112 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,336 posts
  • 2,001
  • Location:Wausau, WI

Posted 25 April 2024 - 07:39 PM

This forum was one of the first places where AGI was discussed at length and in depth, over 20 years ago. Now we are in the exponential explosion of AI, and not many people seem to be aware of it. Just because the "media" is not covering AI, people get the perception that nothing is going on, but there is a lot going on. "Miniature" LLMs running on Raspberry Pis are nearing the capability of GPT-4. Unreal! OpenAI had better get a move on, or they will be eclipsed by the more genuine open-source community.

 

Either humans have some "special sauce" that cannot be replicated in silico, or the entire species will soon be made irrelevant.



#113 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,336 posts
  • 2,001
  • Location:Wausau, WI

Posted 17 May 2024 - 09:04 PM

Vitalik Buterin says current AI has passed the Turing test.

 

For the vast majority of people, AI passed the Turing test a couple of years ago: most people could not tell the difference between human and bot conversation unless they were clever in the way they asked questions and conversed. Now even clever people know that AI has passed the Turing test.

 

But is AI "generally intelligent"? Probably quite close.



#114 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,336 posts
  • 2,001
  • Location:Wausau, WI

Posted 25 May 2024 - 09:48 AM

Someone brings up an issue with the development of AI.

 

In the same manner that individuals will lose the skills they used to employ to live their lives, the government will lose collective "intelligence" and decision-making ability. As the government uses AI more and more, it will rely on AI's decisions, information, and guidance more and more. Some day my house will be flooded with sewage. I will ask the local government for help. They will ask AI about the problem. It will say there is no problem (or that nothing like that was predicted to happen), and I will be left standing in sewage.



#115 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,336 posts
  • 2,001
  • Location:Wausau, WI

Posted 28 May 2024 - 06:09 PM

At least one AI - Google's - is laughably failing the Turing test. The rest keep getting better and continue to impress.

 

Here is a disturbing story about how people are getting "hooked"/dependent on AI even at this early stage, when it still makes mistakes (a lot of mistakes, in Google's case, lol). Scientists find a lot of errors when ChatGPT is asked programming questions. Even when the AI answers are wrong, 35% of human coders still trust the AI answers, probably because they have lost real coding skill and knowledge. They have probably been using GitHub and AI too much without ever really understanding the logic/programming.



#116 adamh

  • Guest
  • 1,102 posts
  • 123

Posted 01 June 2024 - 07:13 PM

The problem is most people have not grasped the fact that AI is just fancy programming and training. There is no actual intelligence; it merely follows a sequence of steps, and if that arrives at the right answer, it succeeds. The programming seems to require that it produce an answer of some sort, so if it can't solve the problem, it makes up a solution that is false. The urge to answer is stronger than the prohibition against faking the answer. That can be solved via better programming: a final check on whether the answer is true and relates to something real should be in place.
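That "final check" could be sketched in a few lines. `generate_answer` and `verify_answer` below are hypothetical stand-ins (a model call, a unit test, a fact lookup); the point is only the control flow: verify before answering, and refuse rather than fabricate.

```python
def answer_with_check(question, generate_answer, verify_answer, retries=2):
    """Return a verified answer, or admit failure instead of faking one."""
    for _ in range(retries + 1):
        candidate = generate_answer(question)
        if verify_answer(question, candidate):
            return candidate
    return "I don't know."  # refusing beats inventing a false solution

# Toy usage: the "model" proposes guesses; the checker accepts only 4.
guesses = iter([5, 3, 4])
result = answer_with_check(
    "2 + 2?",
    generate_answer=lambda q: next(guesses),
    verify_answer=lambda q, a: a == 4,
)
print(result)  # 4
```

Of course, the hard part in practice is writing a `verify_answer` that is itself reliable; for code you can compile and run tests, but for open-ended claims there is no such easy oracle.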

 

"Even when the AI answers are wrong, 35% of human coders still trust the AI answers."

 

Lazy humans + lying programs = errors. What is your solution? I say better programming; you seem to imply let's get rid of it all.

 

You go on and on about people losing skills. What about the skills most of us have already lost, like throwing a spear, shooting a bow, or using a snare? You have not moaned or groaned once over the relatively ancient skills we have lost, so why the angst now? If a machine can code more accurately, why should humans have to do it all themselves? You never give a reason why these things are so terrible. When was the last time you butchered and cleaned an animal you killed? Not having to do hard labor is an improvement, not a step backward.



#117 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,336 posts
  • 2,001
  • Location:Wausau, WI

Posted 04 June 2024 - 10:11 AM

The problem is most people have not grasped the fact that AI is just fancy programming and training. There is no actual intelligence; it merely follows a sequence of steps, and if that arrives at the right answer, it succeeds. The programming seems to require that it produce an answer of some sort, so if it can't solve the problem, it makes up a solution that is false. The urge to answer is stronger than the prohibition against faking the answer. That can be solved via better programming: a final check on whether the answer is true and relates to something real should be in place.

 

"Even when the AI answers are wrong, 35% of human coders still trust the AI answers."

 

Lazy humans + lying programs = errors. What is your solution? I say better programming; you seem to imply let's get rid of it all.

 

You go on and on about people losing skills. What about the skills most of us have already lost, like throwing a spear, shooting a bow, or using a snare? You have not moaned or groaned once over the relatively ancient skills we have lost, so why the angst now? If a machine can code more accurately, why should humans have to do it all themselves? You never give a reason why these things are so terrible. When was the last time you butchered and cleaned an animal you killed? Not having to do hard labor is an improvement, not a step backward.

 

Almost all of the meat and fish I eat I catch/kill and butcher myself. Almost all of the vegetables and fruit I eat, I grow myself. I don't have cows or goats so I do have to buy dairy products.

 

The point about the coders is that they are becoming dependent upon the technology to the point that they don't even know they are getting erroneous results. Once dependent upon the technology/system, one has to hope that it perpetuates in your favor, because you will no longer have the skill to do anything on your own.



#118 adamh

  • Guest
  • 1,102 posts
  • 123

Posted 04 June 2024 - 05:28 PM

Almost all of the meat and fish I eat I catch/kill and butcher myself. Almost all of the vegetables and fruit I eat, I grow myself. I don't have cows or goats so I do have to buy dairy products.

 

The point about the coders is that they are becoming dependent upon the technology to the point that they don't even know they are getting erroneous results. Once dependent upon the technology/system, one has to hope that it perpetuates in your favor, because you will no longer have the skill to do anything on your own.

 

Good for you! If more people did that, we wouldn't have half the problems we do now. So you do know how to use a snare? Or at least shoot a gun, if not a bow. We have lost many of those old skill sets. Now we have to rely on others to do it for us.

 

Lying AI is a problem, but it can be solved. Why should coders do all their work by hand when a program can do it easier and faster? Should we walk instead of drive cars, because we might lose the ability to walk one day? Should we go back to writing letters and give up email? You haven't bemoaned the loss of many other old skills, but you are worried about coding? Riding horses and other animals was once an essential skill that had to be learned. Not so much anymore.



#119 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,336 posts
  • 2,001
  • Location:Wausau, WI

Posted 26 June 2024 - 05:37 PM

Here is a good podcast about the various forms of AI in use today. The host and guest agree that LLMs are nowhere close to "intelligent". However, they do have a good discussion (in the second half of the podcast) about the aspects of intelligence. LLMs can memorize most human knowledge and make connections that were not apparent before. This is a form of intelligence, as the podcast notes. A LOT of what humans do and know requires memorized knowledge and making new connections among those memories to solve problems. That is why everyone is blown away by the most recent LLMs.

 

The guest of the podcast says we need new, novel logic puzzles to test whether or not AI is "truly" intelligent. This reminds me of a point I made earlier: every time a new version of AI comes out and "blows (normal) people away" with how intelligent it seems, some academic comes out with a new and tougher metric for AI to achieve before they will say there is any intelligence within the bot/program.




#120 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,336 posts
  • 2,001
  • Location:Wausau, WI

Posted 06 July 2024 - 03:42 PM

Here is an instance of someone urging the use of AI to mislead the public during an election. They suggest that the Biden team use AI to make the President look vibrant and smart (instead of a guy with dementia who can barely walk).






