Artificial General Intelligence (AGI) Is A Very Human Hallucination
In “Sparks of Artificial General Intelligence: Early experiments with GPT-4,” published on March 22, Microsoft researchers reported the results of their investigation of an “early version” of GPT-4, claiming that it exhibits “more general intelligence than previous AI models.” Given the breadth and depth of GPT-4’s capabilities, displaying close-to-human performance on a variety of novel and difficult tasks, the researchers conclude that “it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.”
The Microsoft researchers got an early peek at GPT-4 (released on March 14), possibly because Microsoft has invested $10 billion in OpenAI, the creator of GPT-4 and ChatGPT. Gary Marcus was not pleased. “It’s a silly claim, given that it is entirely open to interpretation (could a calculator be considered an early yet incomplete version of AGI? How about Eliza? Siri?),” Marcus argued, pointing to the continuing deficiencies of the current generation of large language models (LLMs): “The problem of hallucinations is not solved; reliability is not solved; planning on complex tasks is (as the authors themselves acknowledge) not solved.”
If by “AGI” we (including Marcus) mean getting a machine to be as “intelligent” as humans, why is it a problem to have hallucinations, a very human trait? Isn’t “AGI” or even just ordinary “AI” a very human hallucination?
One of the definitions of hallucination given by the Merriam-Webster dictionary is “an unfounded or mistaken impression or notion.” Obviously Marcus, like many other intelligent people today, thinks that a calculator is not AI or an incomplete AGI. But that mistaken notion (or hallucination) has been advanced by many intelligent people for many years.
In 1833, contemporaries of Charles Babbage called his mechanical calculator (or, as we would call it today, his mechanical general-purpose computer) a “thinking machine.” In 1949, computer pioneer Edmund Berkeley wrote in Giant Brains, or Machines That Think:
“Recently there have been a good deal of news about strange giant machines that can handle information with vast speed and skill... These machines are similar to what a brain would be if it were made of hardware and wire instead of flesh and nerves… A machine can handle information; it can calculate, conclude, and choose; it can perform reasonable operations with information. A machine, therefore, can think.”
Other computer pioneers such as Maurice Wilkes and Arthur Samuel thought that the question of whether machines can, or could ever, think is a matter of how you define “thinking.” Unfortunately, defining what you are talking about is today considered very old-fashioned thinking. For example, a recent two-year effort by a large group of prominent AI researchers to establish the baseline for “The One Hundred Year Study on Artificial Intelligence” declared that not having a clear definition of what they study is actually a good thing.
Posted November 4, 2023