In some ways, it’s hard to understand how this misalignment happened. We created all this by ourselves, for ourselves.
True, we’re by nature “carbon chauvinists,” as Tegmark put it: We like to think only flesh-and-blood machines like us can think, calculate, create. But the belief that machines can’t do what we do ignores a key insight from AI: “Intelligence is all about information processing, and it doesn’t matter whether the information is processed by carbon atoms in brains or by silicon atoms in computers.”
Of course, there are those who say: Nonsense! Everything’s hunky-dory! Even better! Bring on the machines. The sooner we merge with them the better; we’ve already started with our engineered eyes and hearts, our intimate attachments with devices. Ray Kurzweil, famously, can’t wait for the coming singularity, when all distinctions are diminished to practically nothing. “It’s really the next decades that we need to get through,” Kurzweil told a massive audience recently.
Oh, just that.
Even Jaron Lanier, who says the idea of AI taking over is silly because it’s made by humans, allows that human extinction is a possibility—if we mess up how we use it and drive ourselves literally crazy: “To me the danger is that we’ll use our technology to become mutually unintelligible or to become insane, if you like, in a way that we aren’t acting with enough understanding and self-interest to survive, and we die through insanity, essentially.”
Maybe we just forgot ourselves. “Losing our humanity” was a phrase repeated often by the bomb guys and almost as often today. The danger of out-of-control technology, my physicist friend wrote, is the “worry that we might lose some of that undefinable and extraordinary specialness that makes people ‘human.’” Seven or so decades later, Lanier concurs. “We have to say consciousness is a real thing and there is a mystical interiority to people that’s different from other stuff because if we don’t say people are special, how can we make a society or make technologies that serve people?”
Does it even matter if we go extinct?
Humans have long been distinguished by their capacity for empathy and kindness, their ability to recognize and respond to emotions in others. We pride ourselves on creativity and innovation, originality, adaptability, reason. A sense of self. We create science, art, music. We dance, we laugh.
But ever since Jane Goodall revealed that chimps could be altruistic, make tools, and mourn their dead, all manner of critters, including fish, birds, and giraffes, have proven themselves capable of reason, planning ahead, having a sense of fairness, resisting temptation, even dreaming. (Only humans, via their huge misaligned brains, seem capable of truly mass destruction.)
It’s possible that we sometimes fool ourselves into thinking animals can do all this because we anthropomorphize them. It’s certain that we fool ourselves into thinking machines are our pals, our pets, our confidants. MIT’s Sherry Turkle calls AI “artificial intimacy,” because it’s so good at providing fake, yet convincingly caring, relationships—including fake empathy. The timing couldn’t be worse. The earth needs our attention urgently; we should be doing all we can to connect to nature, not intensify “our connection to objects that don’t care if humanity dies.”