When Microsoft-funded lab OpenAI launched ChatGPT in November 2022, millions of people realized almost overnight what tech professionals have understood for a long time: Today's AI tools are advanced enough to transform daily life as well as an incredibly broad range of industries. Microsoft's Bing, long a distant second in search, leaped to a far higher profile. Concepts like large language models (LLMs) and natural language processing are now part of mainstream discussion.
However, with the spotlight also comes scrutiny. Regulators around the world are taking note of AI's risks to user privacy. The Elon Musk-backed Future of Life Institute gathered more than 1,000 signatures from tech leaders calling for a six-month pause on training AI systems more powerful than GPT-4, the model behind the latest version of ChatGPT.
As heady as the legal and engineering questions may be, the basic ethical ones are easy to grasp. If developers do take the proposed pause from advancing AI, will they shift their focus to making sure AI upholds ethical guidelines and user privacy? And at the same time, can we control the potentially disruptive effects AI may have on where ad dollars are spent and how media is monetized?
Google, IBM, Amazon, Baidu, Tencent, and an array of smaller players are working on — or already launching, in Google’s case — similar AI tools. In an emerging market, it’s impossible to predict which products will come to dominate and what the outcomes will look like. This underscores the importance of protecting privacy in AI tools right now — planning for the unknown before it happens.
As the digital advertising industry eagerly looks to AI for targeting, measurement, creative personalization and optimization, among other applications, industry leaders will need to look closely at how the technology is implemented. Specifically, we'll need to scrutinize the use of personally identifiable information (PII), the potential for accidental or intentional bias or discrimination against underrepresented groups, how data is shared through third-party integrations, and compliance with privacy regulations worldwide.
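One of those concerns, how data is shared through third-party integrations, lends itself to a concrete illustration. Below is a minimal TypeScript sketch of scrubbing obvious PII from text before it is handed to an outside AI service; the patterns and function names are illustrative assumptions, not a production-grade redactor.

```typescript
// A minimal sketch of stripping obvious PII from a prompt before it is
// shared with a third-party AI integration. The patterns and names here
// are illustrative assumptions, not a complete or production redactor.

const PII_PATTERNS: Array<[RegExp, string]> = [
  [/[\w.+-]+@[\w-]+\.[\w.]+/g, "[EMAIL]"],            // email addresses
  [/\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b/g, "[PHONE]"],  // US-style phone numbers
  [/\b\d{3}-\d{2}-\d{4}\b/g, "[SSN]"],                // US social security numbers
];

// Replace each matched pattern with a neutral placeholder token.
function scrubPII(text: string): string {
  return PII_PATTERNS.reduce(
    (acc, [pattern, token]) => acc.replace(pattern, token),
    text
  );
}

// Usage: redact before the prompt ever leaves your systems.
const raw = "Contact Jane at jane.doe@example.com or 555-123-4567.";
console.log(scrubPII(raw));
// -> "Contact Jane at [EMAIL] or [PHONE]."
```

Pattern matching like this only catches well-formed identifiers; a real pipeline would layer on named-entity detection and audit logging, but the principle is the same: redact before sharing.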
Search vs. AI: The great spend re-allocation?
As far as ad budgets are concerned, it's easy to imagine how a "search vs. AI" face-off might look. It's far more convenient to have all of the information you're seeking gathered in one place by an AI than to keep rephrasing search queries and clicking through links to zero in on what you're really after. If we see a generational shift in how users discover information (that is, if younger users come to treat AI as a central part of the digital experience), non-AI search engines risk fading into irrelevance. That would have a major impact on the value of search inventory and on publishers' ability to monetize search traffic.
Search still drives a significant share of traffic to publisher sites, even as publishers work to foster audience loyalty through subscriptions. And now that advertising is making its way into AI chat (Microsoft, for example, has been testing ad placements in Bing chat), publishers are questioning how AI providers will share revenue with the sites from which their tools source information. It's safe to say publishers face yet another set of data black boxes from the walled gardens they rely on for revenue. To thrive in this uncertain future, publishers need to lead the conversations that ensure stakeholders across the industry understand what we're rushing into.
Develop processes with privacy in mind
Industry leaders need to pay close attention to how they and their tech partners collect, analyze, store and share data for AI applications across all of their processes. Gaining explicit user consent for data collection, and providing clear opt-outs, must happen at the very start of an interaction with AI chat or search. Leaders should consider implementing consent or opt-in controls for AI tools that personalize content or advertising. However convenient and sophisticated these tools are, their cost simply cannot be paid in privacy risks to users. As the industry's history has shown, we should expect users to become increasingly aware of those risks. Businesses shouldn't rush the development of consumer-facing AI tools and jeopardize privacy in the process.
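To make that consent-first process concrete, here is a minimal TypeScript sketch of a consent gate for an AI chat session. Every name in it (ConsentRecord, startChatSession, revokeConsent) is hypothetical, chosen only to show the pattern: default to no collection, prompt for opt-in up front, and make revocation as easy as granting.

```typescript
// A minimal consent-gate sketch for an AI chat session.
// All names are hypothetical illustrations, not a real vendor API.

type ConsentRecord = {
  userId: string;
  dataCollection: boolean;   // explicit opt-in to data collection
  personalization: boolean;  // explicit opt-in to personalized content/ads
  grantedAt: Date | null;
};

// Default to no consent: nothing is collected until the user opts in.
function defaultConsent(userId: string): ConsentRecord {
  return { userId, dataCollection: false, personalization: false, grantedAt: null };
}

// Gate the session mode on what the user has actually agreed to.
function startChatSession(consent: ConsentRecord): { mode: string; promptForConsent: boolean } {
  if (!consent.dataCollection) {
    // No opt-in yet: run the session statelessly and surface the
    // consent prompt before any data leaves the user's device.
    return { mode: "anonymous", promptForConsent: true };
  }
  return {
    mode: consent.personalization ? "personalized" : "consented-minimal",
    promptForConsent: false,
  };
}

// Opting out must be as easy as opting in: revoking resets everything.
function revokeConsent(consent: ConsentRecord): ConsentRecord {
  return defaultConsent(consent.userId);
}

// Usage: a brand-new user always starts anonymous.
const session = startChatSession(defaultConsent("user-123"));
console.log(session); // { mode: "anonymous", promptForConsent: true }
```

The design choice worth noting is the default: consent is denied until explicitly granted, so a bug or an unprompted user can never silently fall into a data-collecting mode.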
At this point, with AI tools from Big Tech generating the most attention, we shouldn't be lulled into a false sense of security that the effects of this evolution will be Big Tech's problem. The recent layoffs at major tech companies are dispersing talent, which will in turn lead to AI advancements coming from the smaller companies that snapped that talent up. And for publishers who aren't looking forward to working with yet another walled garden in order to survive, there's a second front beyond the crucial question of privacy where their business interests are at stake. Industry leaders need to treat the rise of AI chat as the pivotal moment it is.
Let’s take this opportunity to prepare for a privacy-safe, transparent and profitable future.
Fred Marthoz is VP of global partnerships and revenue at Lotame.