At one point during his congressional hearing, OpenAI CEO Sam Altman said, “My worst fear is that we’re going to do significant damage to the world.”
Lawmakers and the other two witnesses at the hearing — IBM executive Christina Montgomery and Gary Marcus, a leading AI expert, academic and entrepreneur — agreed.
During the hearing, they cited a common set of AI safety issues that keep them up at night, including:
- Election disinformation, or the ability of generative AI to create fake text, images, video and audio at scale, and to emotionally manipulate people who consume that content in order to influence the outcome of elections, including the 2024 US presidential election.
- Job disruption, or the possibility that artificial intelligence will cause significant, rapid unemployment.
- Copyright and licensing, or the fear that AI models are trained on material that is legally owned by other parties and is being used without their consent.
- Generally harmful or dangerous content, or the possibility that generative AI systems produce outputs that harm people. This can happen in a variety of ways, such as hallucination, where the AI fabricates information and misleads users, or inadequate safety training, where the AI gives users information they can use to harm themselves or others.
- Broader fears about the pace and scale of AI innovation, and our ability to control it. Experts and lawmakers fear that without proper safeguards in place, AI development could advance so rapidly that we unleash potentially harmful technology that cannot be adequately controlled — or, in some more extreme views, actually create machines that are far smarter than us and beyond our control (a scenario often associated with “AGI,” or artificial general intelligence).
Which of these risks should we and lawmakers take seriously?
In episode 48 of The Marketing AI Show, I spoke with Paul Roetzer, founder and CEO of the Marketing AI Institute, to find out.
- Congress’s focus on near-term issues is welcome. A lot of attention goes to doomsday headlines about possible superhuman AGI, and it’s important that thought leaders consider existential threats. But there are very short-term problems AI can cause that we need to focus on, Roetzer says.
- Job loss and election interference are the most immediate threats. AI-driven job losses (especially among knowledge workers) and AI-enabled interference in major elections are likely to be the biggest issues in the next 12 months, Roetzer says. He emphasizes that these problems are here today: “There is no advance in the technology needed to make all of this happen.”
- It is unrealistic to expect companies to police themselves. Companies like OpenAI are taking some strong steps to ensure AI safety, such as spending months on red-teaming (trying to find flaws in their systems) and fixing issues before releasing products. But there’s a problem, says Roetzer: “They have no incentive to prevent this technology from getting into the world.” The few big companies now driving AI innovation are financially rewarded for deploying the technology quickly, even if it causes problems. “Ethical concerns seem to be becoming secondary in some of these tech companies,” he says.
- And politicians have mixed motivations when it comes to regulation. On the one hand, lawmakers are taking a serious interest in AI safety, which is great. On the other, they also have a clear interest in using the technology to win elections and strengthen US economic competitiveness. So they face conflicting incentives when it comes to regulating the technology wisely.
Bottom line: There are very real imminent threats we face from AI, but there are no simple answers to deal with these threats or specific regulations to prevent them.
Don’t be left behind…
You can stay ahead of AI-driven disruption — and fast — with our Piloting AI for Marketers course series, a set of 17 on-demand courses designed as a step-by-step learning journey for marketers and business leaders who want to increase productivity and performance with AI.
The course series includes 7+ hours of learning, dozens of AI use cases and vendors, a collection of templates, course quizzes, a final exam, and a professional certificate upon completion.
After taking Piloting AI for Marketers, you’ll:
- Learn how to advance your career and transform your business with AI.
- Have 100+ use cases for AI in marketing — and learn how to identify and value your own use cases.
- Discover 70+ AI vendors in various marketing categories that you can start piloting today.