Editor's Note: In the wake of rising concerns about AI's potential impact after the introduction of ChatGPT and other generative AI applications, HPCwire asked Steve Conway, senior analyst at Intersect360 Research, to interview Paul Muzio, former vice president for HPC at Network Computing Systems, Inc., and current chair of the HPC User Forum, an organization Conway helped to create. At a recent User Forum meeting, Muzio gave a well-received talk chronicling the history of human concerns about artificial intelligence and questioning whether intelligence is limited to humans. A link to Muzio's presentation appears at the end of the interview.
HPCwire: Paul, people have been concerned for a long time about machines with super-human intelligence taking control of us and maybe even deciding to eliminate humanity. Your talk provided some examples from popular culture. Can you mention some of those?
Muzio: As I mention in my presentation, in my opinion the most profound prognostication of machines with super-human intelligence was presented in the 1956 movie Forbidden Planet. The movie foretells a global or "planetary" version of Google, the metaverse, machine-to-brain and brain-to-machine communication, and what might go wrong. I also mention R.U.R., a play written in 1921 by Karel and Josef Capek. The Capeks invented the word "robot". One line in that play grabbed me: "from a technical point of view, the whole of childhood is a sheer absurdity. So much time lost." This concept is also addressed in Forbidden Planet. There are many other works of science fiction I did not mention, such as I, Robot by Earl and Otto Binder, the movies Ex Machina and 2001: A Space Odyssey, and many others.
HPCwire: The impressive capabilities of generative AI have amplified concerns about where AI might be headed. In your opinion, how concerned should we be? You pointed out several times in your talk that, unlike humans, computers retain what they've learned forever, without the need to educate the next generation.
Muzio: It is easy to make mistakes; it is hard to guarantee correctness. But even correctness does not preclude unintended or adverse consequences. In The Complete Robot, Asimov describes a situation in which algorithms are developed iteratively until, after a number of iterations, no human can understand the nth algorithm. This is illustrated to a degree by DeepMind's development of AlphaGo. AlphaGo was played against itself and in the end not only developed superhuman capability but also evolved an algorithmic complexity beyond what humans could have designed. Recent experiments with developmental versions of GPT-4 have also produced some unexpected results. In fact, OpenAI had to "dumb down" GPT-4 prior to its general availability.
GPT, as a released product, does not in and of itself have memory; i.e., it does not have operational access to a global planetary library that contains all knowledge. But we are, at present, building huge decentralized libraries: libraries of human history and thought, libraries of biology, libraries of evolutionary trends, libraries of the universe. There are even data collections covering whom we communicate with, what our preferences and dislikes are, and our everyday interactions. We strive to protect, perpetuate, and share those libraries. With current computing technology, we are acquiring and preserving exabytes upon exabytes of data. And there is more sharing of that information than we are aware of. Right now, generative AI (G-AI) tools have access to some of this data for training purposes. What happens in the future if and when G-AI tools gain access to all of these decentralized libraries?
By the way, there are those who say that you have to "show" an AI millions of pictures for it to be able to recognize a cat, whereas a child learns to recognize a cat quickly. I argue that this claim fails to acknowledge that the child has also seen millions of images of diverse things, including cats. I think that when G-AI has access to all those libraries we are building, it too will learn quickly.
HPCwire: Generative AI is still at an early stage of development. It's generally still within the realm of so-called path problems, where a human provides the machine with a desired outcome and the machine obeys the human command by following a step-by-step path toward that outcome. At some future point, machines should be able to handle insight problems, where they pursue and sometimes achieve innovations without prescribed outcomes. That has great potential benefits for humanity, but is it also a cause for concern?
Muzio: I recently watched a presentation by Sebastien Bubeck, a very brilliant researcher at Microsoft. I think he clearly shows that an experimental version of GPT-4 has gone beyond the "path problem." Yes, he concludes that GPT-4 is not capable of planning, but that it has many attributes of intelligence. It is a really great presentation and analysis of where we are today. Watch it.
As I point out in my presentation, it took 5,000 years to go from the invention of the wheel to the building of an automobile. The world of computers and AI is only a few decades old. Where will we be a few decades from now? Forbidden Planet and other science fiction books and movies tend to present a bleaker future, and science fiction may actually foretell it. I would add the following: it is human hubris to assume that we are the pinnacle of evolution.
HPCwire: On a practical level, this whole topic might revolve around the human-machine interface, or HMI, and the possibility that at some point computers or other machines might sever that connection as something they no longer need, or even find annoying. Do you see that as a possibility?
Muzio: Certainly this is postulated in R.U.R. and the movie Ex Machina. I would expect it to be more evolutionary: as we become more dependent on intelligent systems, we become less capable of surviving in the world. I currently live out in Montauk, New York, which was long a quiet fishing community (the nearest traffic light to my house is 17 miles away). It is now inundated in the summer by Gen-Zers. Unfortunately, no one has taught Gen-Zers that when you walk on a country street with no sidewalks, you should walk facing traffic. I have a hunch that GPT-4 would know. In my presentation, I cite two books that address biological evolution with a crossover to AI. I highly recommend them.
HPCwire: AI is already being used to help design computer chips. You mentioned in your talk that this process could get out of human hands if it becomes self-sustaining and the chips design their even smarter successors. Should chipmakers be taking preventive measures?
Muzio: In my presentation, I mention that the chipmakers will not like what I say, but I believe the only preventive measure is to limit the further development of advanced chips. I guess I am not alone in this, as the U.S. government is restricting the export to the PRC of the technology needed to build advanced chips.
HPCwire: So far, we’ve been talking about two forms of intelligence, human and machine, but in your talk you referred to scientific evidence that humans aren’t the only natural creatures with intelligence. Can you say something about that?
Muzio: If you grew up with a pet or with animals, you recognized that they could think, plan, and feel, i.e., that they had intelligence. Two millennia ago, the ancient Romans recognized that octopodes were uniquely intelligent. Some birds are able to count. Researchers have found that plants can recognize insect threats and communicate. In my presentation, I mentioned two books, both published in 2022: An Immense World by Ed Yong and Ways of Being: Animals, Plants, Machines: The Search for Planetary Intelligence by James Bridle. Both books have extensive citations to refereed research publications, and both give you a different perspective on intelligence.
HPCwire: With AI, as with most transformational technologies, there can be a big difference between what can be done and what should be done. In 2016, Germany became the first country to pass a national law governing automated vehicles. Ethicists and even religious leaders were part of the group that developed this legislation. Is it time to require that training in ethics be added to AI-related university curricula?
Muzio: Ethics is important. Unfortunately, most ethics courses are poorly taught and not remembered. But yes, ethics should be taught in AI-related university curricula, and I would recommend that the required reading include R.U.R., Asimov's The Complete Robot, and the two books I cited above, along with a screening of Forbidden Planet and maybe my presentation, if teachers think it worthwhile.
HPCwire: A final question. The definitions of life I’ve seen are pretty broad. Do you think AI machines at some point may qualify as living things? Does that matter?
Muzio: The short answer to the first question is yes. The answer to the second question is more difficult. In Forbidden Planet, the goal was to build an eternal machine in which the Krell could live on intellectually forever. If that could be achieved, a lot of people would be very happy. If the goal were to dispense with people altogether, that would also matter. And if, in x billion years, the universe fades into nothing, it won't matter at all.
Presentation link (20-minute video)
This article first appeared on HPCwire.