Geoffrey Hinton, a deep learning pioneer and a Google vice president and engineering fellow, is leaving the company after 10 years because of new fears he has about the technology's development.
Hinton, who has been called “the godfather of artificial intelligence,” says he wants to be open about his concerns and that part of him now regrets his life’s work.
Hinton told the MIT Technology Review:
“I suddenly changed my views on whether these things will be more intelligent than us. I think they're very close to it now, and they will be much more intelligent than us in the future. How do we survive that?”
He worries that extremely powerful AI will be used by bad actors, especially in elections and wars, to harm people. He is also concerned that once AI is able to chain together different tasks and actions (as we see with AutoGPT), intelligent machines may take harmful actions on their own.
This is not an attack on Google specifically. Hinton said he has plenty of good things to say about the company, but he wants to “talk about AI safety without worrying about how it interacts with Google's business.”
In episode 46 of The Marketing AI Show, Paul Roetzer, founder and CEO of the Marketing AI Institute, lays out what you need to know about this important development…
- Hinton’s concerns should be taken seriously. While Hinton takes an extreme view of the risks posed by increasingly advanced artificial intelligence, he is a major figure in AI research, and he brings a legitimate perspective to an area that deserves attention. Even if you don’t agree with his overall premise, it highlights a major issue with AI: “The need for more focus on ethics and safety is critical,” says Roetzer.
- But he is not the first or the only one to express these concerns. Researchers such as Margaret Mitchell and Timnit Gebru raised safety and ethics concerns at Google in the past, Roetzer notes. Unfortunately, their concerns were not heard by the company at the time, and both were fired from Google.
- And not all AI researchers share this concern. Many other AI leaders disagree with Hinton. Some share his safety concerns but don’t go so far as to suggest that AI could become an existential threat. Others, like Yann LeCun, strongly disagree with Hinton that increasingly sophisticated AI will pose a threat to humanity.
- Hinton is not, however, calling for a halt to AI development. Hinton has said publicly that he still believes AI development should continue. Artificial intelligence has “so many potential benefits,” he says, that it should keep being developed safely. “He just wants more time and energy put into safety,” says Roetzer.
Bottom line: Hinton is an important voice to listen to on AI safety, and one more voice in a growing chorus of researchers raising concerns.
Don’t be left behind…
You can stay ahead of AI-driven disruption, fast, with our Piloting AI for Marketers course series: 17 on-demand courses designed as a step-by-step learning journey for marketers and business leaders to increase productivity and performance with AI.
The course series includes 7+ hours of learning, dozens of AI use cases and vendors, a collection of templates, course quizzes, a final exam, and a professional certificate upon completion.
After taking Piloting AI for Marketers, you’ll:
- Learn how to advance your career and transform your business with AI.
- Have 100+ use cases for AI in marketing — and learn how to identify and prioritize your own use cases.
- Discover 70+ AI vendors across various marketing categories that you can start piloting today.