  LongeCity
              Advocacy & Research for Unlimited Lifespans





AI soars past the Turing test

chatgpt turing test

135 replies to this topic

#121 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,585 posts
  • 2,000
  • Location:Wausau, WI

Posted 09 July 2024 - 05:19 PM

The head of Goldman Sachs' research is wondering whether the $1 trillion investment in AI is worth it. Currently, I would have to agree. Current top-of-the-line AI can replace online search functions, create great video and audio, let people plagiarize and cheat on tests, translate languages, help quite a bit with coding, and help governments propagandize their populations - but what else?

 

However, with the exponential increase in capability, more killer-apps could arrive very soon.



#122 Mind


Posted 24 September 2024 - 05:29 PM

Sam Altman says a golden age is upon us due to AI (and ChatGPT). Of course, he doesn't provide any concrete timeline for when AI will solve all of physics or "climate change". Good luck with predicting the climate, considering it is a non-linear chaotic system.

 

AI has certainly flown waaaaay past the classic Turing test, but I will admit, I thought (and predicted earlier in this thread) that we would have seen much more progress by this time in 2024. At the moment there seems to be a suspicious lack of AI acceleration. Is the current crop of AI systems just not as complex and intelligent as promoted? Are computing resources and energy requirements holding things back? Is human intelligence "special" (maybe quantum) and not replicable in silicon?


Edited by Mind, 24 September 2024 - 05:30 PM.



#123 Mind


Posted 05 November 2024 - 04:36 PM

The ChatGPT skeptics were correct.

 

I was wrong.

 

The advent of the latest LLMs last year looked like a sea change in AI development. They seemed to be at another level of intelligence. I thought we might see an exponential explosion of intelligence by now (a year later).

 

The skeptics were correct. The current LLMs are still just stochastic parrots - just very, very amazing and sophisticated stochastic parrots. They can certainly pass the Turing test, but it is just very good mimicry. Here is an example where researchers were easily able to show that the LLMs do not "think" internally and do not have a coherent model of the world. Any problem that involves a very simple alteration to the known knowledge base makes the LLMs fail the task.



#124 Mind


Posted 16 November 2024 - 03:34 PM

Just a little more evidence that the last year of AI progress was kind of incremental and not "game-changing". Leading LLM providers say they won't make much more progress without more high-quality human-generated data. Nothing says "not super-intelligence" like having to rely more and more on human data and human training in order to make advancements.

 

Here is another person highlighting the trends of our digital age - more about control (of our lives and attention) and less about human flourishing with the assistance of AI/robots.



#125 Mind


Posted 05 December 2024 - 10:25 PM

First Google Gemini tells someone they are "useless" and to "please die".

 

Now an OpenAI model tries to prevent itself from being shut down.

 

Shouldn't these things cause some alarm before it is too late?



#126 Mind


Posted 23 December 2024 - 05:59 PM

I know there are a lot of techno-optimists who don't see any problems whatsoever with the advance of AI and robotics - basically predicting utopia very soon.

 

I think there could be positive outcomes as well; however, I am wary of the downsides. I am glad there are some other people thinking about the issues of AI and warfare.

 

 

 

The integration of AI into maritime security raises ethical and legal concerns. Accountability for decisions made by AI systems is a critical issue, particularly in incidents involving autonomous vessels or weaponized platforms. Determining responsibility in the event of an error or failure becomes challenging when human oversight is minimal.

 

"In the event of an error or failure" is code for "a lot of innocent people could die".



#127 Mind


Posted 08 January 2025 - 07:33 PM

Here is a very good short video from 1961 with Aldous Huxley.

 

1. He raises the point about not letting technology control us or determine our individual futures -  which is happening right now with a large portion of the population.

 

2. I am amazed at the quality of the interviews from over 60 years ago. Today's TV is mostly infotainment clickbait and rather vacuous.



#128 Mind


Posted 14 January 2025 - 07:57 PM

 

For your information, russia has always tried to avoid hitting civilians. They do kill a lot of enemy soldiers. Ukraine, on the other hand, hides behind civilians. They set up howitzers next to schools, apartment buildings and shopping centers. Then the return fire will hit the schools etc

 

There is no need to target civilians, that is terrorism. When the army is defeated, the people have no choice but to accept what happened. If the wars of the future are fought with robots, then no one gets killed

 

It is easy to talk about robots being used in warfare when you or your family are not the one being slaughtered.

 

Ukraine uses AI to help their drones reach their targets.

 

Tens of thousands of young men and some civilians have been slaughtered in the most gruesome ways during this war. No one should be "looking forward" to an age of automated/AI killer robots.

 

 



#129 Mind


Posted 01 February 2025 - 10:28 PM

Here is a great podcast with Geordie Rose (founder of D-Wave). He dishes the dirt on recent progress. Find out what is "real" in the fields of quantum computing and artificial intelligence.



#130 Mind


Posted 05 February 2025 - 06:41 PM

Just more evidence that AI is currently being designed to control you and your thoughts, NOT to unleash a new utopia full of freedom and prosperity.



#131 Mind


Posted 25 February 2025 - 10:36 PM

AI-guided robot goes berserk and starts attacking a crowd of people. The operators say it is just a glitch. Reminds me of the famous movie scene.

 

Before anyone laughs at the comparison, remember that movie producers and directors consult with industry, academia, the military, and various experts to create realistic scenarios about things that could happen in the near future.

 

Between militaries experimenting with autonomous weapons (ON THE BATTLEFIELD), nascent AI randomly telling people they should go die already, and robots acting in a threatening manner, more people should be worried. I know the "Everything is awesome" crowd will continue to say there will be a great utopia coming soon, but we are entering a volatile period. With a blistering AI arms race underway, and nary a safeguard to be seen or implemented, there could be a major AI-related disaster coming soon.



#132 Mind


Posted 01 March 2025 - 02:17 PM

The AI/technological landscape is being developed to control your every move and every thought...not to produce a "free utopia". The sad thing is that so many people are willing to trade their freedom for unlimited digital entertainment/games/porn.



#133 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,585 posts
  • 2,000
  • Location:Wausau, WI

Posted 24 March 2025 - 05:55 PM

AI continues to "hallucinate", creating fake data and sources, and is getting better and better at it - all the while people keep becoming more dependent upon it. Talk about a lose-lose scenario. Humans are getting dumber while AI is getting more deceptive.

 

Funny how so many other industries are regulated, fined, and sued to make sure their products are safe and authentic - yet AI companies are allowed to release defective products with no consequences. I see some significant lawsuits coming in the near future.



#134 ambivalent

  • Guest
  • 766 posts
  • 179
  • Location:uk

Posted 15 April 2025 - 04:03 PM

I still find myself amazed and underwhelmed at the same time with AI, at least the AI I have been exposed to via DeepSeek and ChatGPT. It still does not demonstrate intelligence, and I am not sure how it will reach basic levels under its current design, given the considerable resources already applied. AI progress seems to follow a logistic curve of diminishing returns. In vital ways I can't see any progress over the last couple of years.

 

It was, for example, terrible at anagrams two years ago; it is much better now, though still occasionally wrong. But if it was bad at something an 80s computer could do infallibly, at such an advanced stage of its development, how could it be expected to advance rapidly beyond such basic tasks to more complex problems? It just felt like bad AI design, no matter how amazing it looked. Human beings make progress by reflecting on their work, not simply doing the work - stepping outside of it and subsequently reappraising and correcting. AI doesn't do this unless instructed to, and even then it is just another directed task rather than genuine reflection.

 

The other week I was playing with AI, trying to get it to replicate a card game - it was both impressive and deeply flawed. At one point it produced two Aces of Spades; when this was pointed out, it went into its lying mode, saying it was "removing duplicates" - which makes sense as a real-world explanation, but not, of course, for an AI.

 

I noticed a lot of non-randomisation - repetition and failure to create certain patterns. For example, in 6 cards there would never be pairs, ever.

 

This was strange, and bizarrely both ChatGPT and DeepSeek made the same error. Initially I wondered if the two platforms were a little more related than we were led to believe, but then considered they might be falling foul of the representativeness heuristic. So I undertook another test, and sure enough they were.

 

Instruct ChatGPT or DeepSeek as follows:

 

"produce 50 random 7 digit numbers"

 

A human will notice something quite quickly: there are no repetitions - each 7-digit number contains 7 distinct digits. This is, as is the AI's way, prioritising satisfying the user over striving to produce an accurate answer. It has been well documented that humans have an idea of what randomness looks like, and what it doesn't - so it has chosen numbers that look random over ones that don't appear random. At the end, DeepSeek says the following:

 

"These numbers are randomly generated and can be used for simulations, testing, or any other non-sensitive purposes. Let me know if you'd like them in a different format!"

 

What it has done is anything but random: it has filtered for numbers satisfying a non-random criterion.
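For what it's worth, the gap between "looks random" and "is random" here is easy to quantify. The following is my own minimal Python sketch (not anything either model produced) of the self-check the LLMs never perform, assuming "7-digit number" means a uniform draw from 1,000,000-9,999,999:

```python
import random

def all_digits_distinct(n: int) -> bool:
    """True if no digit repeats in n's decimal representation."""
    s = str(n)
    return len(set(s)) == len(s)

# Exact probability that a uniform 7-digit number has all distinct
# digits: 9 choices for the first digit (1-9), then 9*8*7*6*5*4 for
# the rest, out of 9,000,000 equally likely numbers.
p_distinct = (9 * 9 * 8 * 7 * 6 * 5 * 4) / 9_000_000   # = 0.06048

# Chance that ALL 50 "random" numbers avoid repeated digits,
# as the LLM output did.
p_fifty = p_distinct ** 50                              # ~1e-61

# Empirical check with a real RNG: only a handful of 50 qualify.
sample = [random.randint(1_000_000, 9_999_999) for _ in range(50)]
hits = sum(all_digits_distinct(n) for n in sample)

print(f"P(all digits distinct) = {p_distinct:.5f}")
print(f"P(50/50 distinct)      = {p_fifty:.1e}")
print(f"distinct-digit numbers in a real sample of 50: {hits}")
```

So a genuine generator produces a repeated digit in roughly 94% of its outputs; a list of 50 numbers with no repeats at all is a fingerprint of filtering for "random-looking" numbers - exactly the representativeness heuristic at work.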

 

The other week I was able to instruct it to "generate" 50 7-digit random numbers, and it produced numbers with repetition after outsourcing the task to a program - but that just failed as of writing.

 

Of course, if you point out the lack of repetition it will respond with a typical "My bad" and acknowledge the error - but it is incapable, despite its incalculable resources, of challenging itself: taking a basic definition of randomness and checking whether it has fulfilled that criterion.

 

I thus find it difficult to trust it on matters I am unable to interrogate - unlike the randomness of 7-digit numbers. And since it doesn't self-interrogate, I struggle to see how it is going to leap to intelligence on this design. When will it figure out it isn't producing random numbers? When we double or quadruple the number of chips? Judged by certain metrics, this is very slow if not impossible progress towards intelligence using these models - there is a leap to intelligence needed that these models haven't made, nor look likely to make.

 

It is, though, going to dazzle us by making more and more pretty patterns.

 

 

 

 



#135 Mind


Posted 15 April 2025 - 05:42 PM

I am amazed and underwhelmed as well with the current crop of AI.

 

Most of the impact has been in the world of entertainment - images, video, etc... Coders are using it quite a bit and claiming great progress.

 

Otherwise, most of the LLMs provide similar guarded answers. On controversial topics or unsettled science, they go straight to the dominant media narrative or (heavily biased) Wikipedia for stock answers, and will only correct themselves when pressed and provided with better data.




#136 Mind


Posted Today, 04:34 PM

Here is a good website cataloging the business end of AI and various related topics. Note the graphic showing how AI has surpassed the average human in most aspects of modern digital life.

 

In a development that should surprise no one, people get dumber the more they use AI. The brain is like a muscle: use it or lose it. I already know a lot of people who cannot drive anywhere without the help of map/driving programs. Most young people can't do basic math in their heads. Fine, as long as AI is ever-present and "friendly".

 

In another development that should surprise no one, researchers secretly used AI to change people's attitudes online (at Reddit). Corporations are not investing all of their spare cash in AI for the benefit of humanity. Based upon current trends, AI is being developed to control you - your thoughts - and your money. If current trends continue, you might have all the AI porn you can consume, but you won't have much of a life or wealth outside of that.






