  LongeCity
              Advocacy & Research for Unlimited Lifespans





AI soars past the Turing test

125 replies to this topic

#121 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,384 posts
  • 2,000
  • Location:Wausau, WI

Posted 09 July 2024 - 05:19 PM

The head of Goldman Sachs' research is wondering whether the $1 trillion investment in AI is worth it. Currently, I would have to agree. Today's top-of-the-line AI can replace online search functions, create great video and audio, let people plagiarize and cheat on tests, translate languages, help quite a bit with coding, and help governments propagandize their populations, but what else?

 

However, with the exponential increase in capability, more killer apps could arrive very soon.



#122 Mind


Posted 24 September 2024 - 05:29 PM

Sam Altman says a golden age is upon us due to AI (and ChatGPT). Of course, he doesn't provide any concrete timeline for when AI will solve all of physics or "climate change". Good luck with predicting the climate, considering it is a non-linear chaotic system.

 

AI has certainly flown waaaaay past the classic Turing test, but I will admit, I thought (and predicted earlier in this thread) that we would have seen much more progress by this point in 2024. Right now, there seems to be a suspicious lack of AI acceleration. Is the current crop of AI systems just not as complex and intelligent as promoted? Are computing resources and energy requirements holding things back? Is human intelligence "special" (maybe quantum) and not replicable in silicon?


Edited by Mind, 24 September 2024 - 05:30 PM.



#123 Mind


Posted 05 November 2024 - 04:36 PM

The ChatGPT skeptics were correct.

 

I was wrong.

 

The advent of the latest LLMs last year looked like a sea change in AI development. They seemed to be at another level of intelligence. I thought we might see an exponential explosion of intelligence by now, a year later.

 

The skeptics were correct. The current LLMs are still just stochastic parrots, albeit very, very impressive and sophisticated ones. They can certainly pass the Turing test, but it is just very good mimicry. Here is an example where researchers easily showed that the LLMs do not "think" internally and do not have a coherent model of the world: any problem that involves even a simple alteration of the known knowledge base makes the LLMs fail the task.



#124 Mind


Posted 16 November 2024 - 03:34 PM

Just a little more evidence that the last year of AI progress was incremental rather than game-changing. Leading LLM providers say they won't make much more progress without more high-quality, human-generated data. Nothing says "not super-intelligence" like having to rely more and more on human data and human training in order to make advances.

 

Here is another person highlighting the trends of our digital age: more about control (of our lives and attention) and less about human flourishing with the assistance of AI/robots.



#125 Mind


Posted 05 December 2024 - 10:25 PM

First, Google Gemini tells someone they are "useless" and to "please die".

 

Now an OpenAI model tries to prevent itself from being shut down.

 

Shouldn't these things cause some alarm before it is too late?




#126 Mind


Posted 23 December 2024 - 05:59 PM

I know there are a lot of techno-optimists who don't see any problems whatsoever with the advance of AI and robotics, basically predicting utopia very soon.

 

I think there could be positive outcomes as well; however, I am wary of the downsides. I am glad there are some other people thinking about the issues of AI and warfare.

 


The integration of AI into maritime security raises ethical and legal concerns. Accountability for decisions made by AI systems is a critical issue, particularly in incidents involving autonomous vessels or weaponized platforms. Determining responsibility in the event of an error or failure becomes challenging when human oversight is minimal.

 

"In the event of an error or failure" is code for "a lot of innocent people could die."






