Hanson and I talked about the idea of adding real intelligence to these evocative machines. Ben Goertzel, a well-known AI researcher and the CEO of SingularityNET, leads an effort to apply advances in machine learning to the software inside Hanson’s robots that allows them to respond to human speech.
The AI behind Sophia can sometimes provide passable responses, but the technology isn’t nearly as advanced as a system like GPT-4, which powers the most advanced version of ChatGPT and cost more than $100 million to create. And of course, even ChatGPT and other cutting-edge AI programs cannot sensibly answer questions about the future of AI. It may be best to think of them as preternaturally knowledgeable and gifted mimics that, although capable of surprisingly sophisticated reasoning, are deeply flawed and possess only a limited “knowledge” of the world.
Sophia and company’s misleading “interviews” in Geneva are a reminder of how anthropomorphizing AI systems can lead us astray. The history of AI is littered with examples of humans overextrapolating from new advances in the field.
In 1958, at the dawn of artificial intelligence, The New York Times wrote about one of the first machine learning systems, a crude artificial neural network developed for the US Navy by Frank Rosenblatt, a Cornell psychologist. “The Navy revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence,” the Times reported—a bold statement about a circuit capable of learning to spot patterns in 400 pixels.
If you look back at the coverage of IBM’s chess-playing Deep Blue, DeepMind’s champion Go player AlphaGo, and many of the past decade’s leaps in deep learning—which are directly descended from Rosenblatt’s machine—you’ll see plenty of the same: people taking each advance as if it were a sign of some deeper, more humanlike intelligence.
That’s not to say that these projects—or even the creation of Sophia—were not remarkable feats, or potentially steps toward more intelligent machines. But being clear-eyed about the capabilities of AI systems is important when it comes to gauging the progress of this powerful technology. To make sense of AI advances, the least we can do is stop asking animatronic puppets silly questions.