ChatGPT Has Passed the Turing Test
#179675
03/30/2023 02:31 PM
airforce
OP, Administrator, Senior Member
Joined: Jan 2002
Posts: 24,235
Tulsa
This, I think, is a Pretty Big Deal. Should we be worried? I'm not, at least not terribly. What would scare me? The AI that deliberately fails the Turing Test. ...Passing the Turing Test is a significant milestone, as it demonstrates the ability of an AI model to convincingly mimic human-like conversation, making it difficult for users to differentiate between AI-generated and human-generated responses....
The Turing test is an evaluation used in the AI industry to gauge the conversational credentials of an AI chatbot. Through this test, experts analyze whether the chatbot is capable of interacting and responding well enough to make them believe it is indeed a human. The test was proposed by British mathematician and computer scientist Alan Turing around seven decades ago.
The test typically places a human assessor in open-ended conversation with both the AI and a human. Based on the answers delivered, the assessor determines which respondent was the human and which was the AI.
If the judge is unable to reliably distinguish between the two sets of responses, then the machine passes the test. Passing the Turing test is a huge milestone for any AI chatbot, as it demonstrates the chatbot's capability to truly interact like a human.... Read the whole thing at the link.
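To make the protocol concrete, here is a minimal Python sketch of the imitation game as described above. It is only an illustration under assumed names: ask_human, ask_machine, and judge_guess are hypothetical placeholders for real conversations and a real assessor, not any actual chatbot API. The machine "passes" when the judge's guesses are no better than chance.

    import random

    def ask_human(question):
        # Placeholder: a real test would relay the question to a person.
        return "a human-typed answer to: " + question

    def ask_machine(question):
        # Placeholder: a real test would query the chatbot under evaluation.
        return "a machine-generated answer to: " + question

    def judge_guess(question, answer):
        # Placeholder judge: a real assessor reads the exchange and decides.
        # A coin flip illustrates the 50 percent chance baseline.
        return random.choice(["human", "machine"])

    def run_trials(questions, n_trials=100):
        correct = 0
        for _ in range(n_trials):
            question = random.choice(questions)
            # The judge sees one unlabeled respondent, chosen at random.
            actual = random.choice(["human", "machine"])
            answer = ask_human(question) if actual == "human" else ask_machine(question)
            if judge_guess(question, answer) == actual:
                correct += 1
        return correct / n_trials

    # If judge accuracy stays near 50 percent, the machine is
    # indistinguishable from the human and passes.
    accuracy = run_trials(["What did you have for breakfast?"])
    print(f"Judge accuracy: {accuracy:.0%} (chance is 50%)")

Onward and upward, airforce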
Re: ChatGPT Has Passed the Turing Test
[Re: airforce]
#179676
03/30/2023 02:39 PM
airforce
OP, Administrator, Senior Member
Joined: Jan 2002
Posts: 24,235
Tulsa
Elon Musk and others want an A.I. "pause." It wouldn't work, and it wouldn't do much good anyway. "AI systems with human-competitive intelligence can pose profound risks to society and humanity," asserts an open letter signed by Twitter's Elon Musk, universal basic income advocate Andrew Yang, Apple co-founder Steve Wozniak, DeepMind researcher Victoria Krakovna, Machine Intelligence Research Institute co-founder Brian Atkins, and hundreds of other tech luminaries. The letter calls "on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." If "all key actors" will not voluntarily go along with a "public and verifiable" pause, the letter's signatories argue that "governments should step in and institute a moratorium."
The signatories further demand that "powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable." This amounts to a requirement for nearly perfect foresight before allowing the development of artificial intelligence (A.I.) systems to go forward.
Human beings are really, really terrible at foresight—especially apocalyptic foresight. Hundreds of millions of people did not die from famine in the 1970s; 75 percent of all living animal species did not go extinct before the year 2000; and "war, starvation, economic recession, possibly even the extinction of homo sapiens" did not happen, since global petroleum production failed to peak in 2006.
Nonapocalyptic technological predictions do not fare much better. Moon colonies were not established during the 1970s. Nuclear power, unfortunately, does not generate most of the world's electricity. The advent of microelectronics did not result in rising unemployment. Some 10 million driverless cars are not now on our roads. As OpenAI (the company that developed GPT-4) CEO Sam Altman argues, "The optimal decisions [about how to proceed] will depend on the path the technology takes, and like any new field, most expert predictions have been wrong so far."
Still, some of the signatories are serious people, and the outputs of generative A.I. and large language models like ChatGPT and GPT-4 can be amazing—e.g., doing better on the bar exam than 90 percent of current human test takers. They can also be confounding.
Some segments of the transhumanist community have been greatly worried for a while about an artificial super-intelligence getting out of our control. However, as capable (and quirky) as it is, GPT-4 is not that. And yet, a team of researchers at Microsoft (which invested $10 billion in OpenAI) tested GPT-4 and in a pre-print reported, "The central claim of our work is that GPT-4 attains a form of general intelligence, indeed showing sparks of artificial general intelligence."
As it happens, OpenAI is also concerned about the dangers of A.I. development—however, the company wants to proceed cautiously rather than pause. "We want to successfully navigate massive risks. In confronting these risks, we acknowledge that what seems right in theory often plays out more strangely than expected in practice," wrote Altman in an OpenAI statement about planning for the advent of artificial general intelligence. "We believe we have to continuously learn and adapt by deploying less powerful versions of the technology in order to minimize 'one shot to get it right' scenarios."
In other words, OpenAI is properly pursuing the usual human path for gaining new knowledge and developing new technologies—that is, learning from trial and error, not "one shot to get it right" through the exercise of preternatural foresight. Altman is right when he points out that "democratized access will also lead to more and better research, decentralized power, more benefits, and a broader set of people contributing new ideas."
A moratorium imposed by U.S. and European governments, as called for in the open letter, would certainly delay access to the possibly quite substantial benefits of new A.I. systems while doing little to increase A.I. safety. In addition, it seems unlikely that the Chinese government and A.I. developers in that country would agree to the proposed moratorium anyway. Surely, the safe development of powerful A.I. systems is more likely to occur in American and European laboratories than in those overseen by authoritarian regimes. Onward and upward, airforce