Organizations looking to supplement or even replace some of their human customer support staff might first want to review the cautionary tale of the National Eating Disorders Association (NEDA) that played out this week.
As The Daily Dot and Vice reported, NEDA had planned to discontinue its 20-year-old phone line, or “helpline,” for people seeking assistance with eating disorders and body image issues. At the same time, the organization planned to keep offering users a “wellness chatbot” called Tessa, developed by the company Cass (formerly known as X2 AI Inc.) in collaboration with researchers at Washington University and launched on the NEDA website in February 2022.
‘A chatbot cannot replace human interaction’
However, the helpline’s human staff (six paid employees and around 200 volunteers, according to Vice) said NEDA planned to let them all go and replace them with the Tessa chatbot, following a move by four staffers to unionize earlier this year. The organization’s CEO confirmed to VentureBeat that NEDA planned to close the helpline, but said Tessa was never intended to act as its replacement.
“There is a little confusion, started by conflated reporting, that Tessa is replacing our Helpline or that we intended it would replace the Helpline,” NEDA CEO Liz Thompson told VentureBeat in an email. “That is simply not true — a chatbot, even a highly intuitive program, cannot replace human interaction. We had business reasons for closing the Helpline and had been in the process of that evaluation for three years.”
Reports of harmful responses
A weight-inclusive consultant, Sharon Maxwell, posted on Instagram earlier this week claiming that she had conversed with Tessa and that the chatbot had given her advice that could cause harm, including recommendations for restrictive dieting: an approach that eliminates certain foods or food groups, or otherwise strictly limits the types and portions of food a person can eat, and one that nutritionists and other health and fitness experts have decried in recent years.
Following Maxwell’s post, NEDA released a statement on its Instagram account saying the organization knew of the accusations and had “taken down that program [Tessa] until further notice for a complete investigation.”
Where the chatbot stands now
Thompson further expanded on the statement in her email to VentureBeat, writing, “With regard to the weight loss and calorie limiting feedback issued in a chat Monday, we are concerned and are working with the technology team and the research team to investigate this further; that language is against our policies and core beliefs as an eating disorder organization. It is concerning because in a ‘closed product,’ of the 5,289 people who have been involved in interacting with Tessa, we hadn’t seen anything like that.”
Thompson also said that Cass had reported unusual activity in the Tessa chatbot, suggesting it was being attacked by malicious actors or bots. “Last week, Tessa saw a surge in traffic of 600% and behavior that indicated various forms of nefarious activity from bad actors trying to trick Tessa,” Thompson wrote. “Even with the onslaught of these instances, the ‘off messaging’ only happened 0.1% of the time out of over 25,000 messages. We will continue to work to make sure the technology can stand up to future attacks.”
Will Tessa return, and if so, when? Thompson didn’t offer a time estimate or definitive plan, but did state in her email to VentureBeat that “we’ll continue to work on the bugs and will not relaunch until we have everything ironed out. When we do the launch, we’ll also highlight what Tessa is, what Tessa isn’t, and how to maximize the user experience. Stay tuned!”
Takeaways for leaders and IT decision-makers
With companies racing to adopt generative AI tools (65% of executives recently surveyed by market research firm KPMG said they expected the technology to have a high or extremely high impact on their organizations within three to five years), IT decision-makers would do well to learn from NEDA’s experience.
Even well-intentioned AI programs designed with expert input for specific use cases can produce undesirable, potentially harmful responses that damage a company’s relationship with its users and its public perception. It’s unclear whether NEDA could have avoided or minimized the current controversy by communicating more transparently about its decision to sunset the helpline. But having a pre-existing AI chatbot in the mix clearly fueled accusations that it was seeking to devalue and replace human labor with artificial intelligence, putting the organization on the defensive, where it now finds itself.
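For teams deploying similar conversational agents, one practical mitigation is an independent guardrail that screens every model-generated reply against policy before it reaches a user. Below is a minimal sketch of the idea in Python; the blocked patterns, fallback message, and function names are invented for illustration and do not reflect NEDA’s or Cass’s actual systems.

```python
import re

# Hypothetical blocklist for an eating-disorder support bot. These patterns
# are invented for illustration; they are not NEDA's or Cass's actual rules.
BLOCKED_PATTERNS = [
    r"\bcalorie (deficit|limit|restriction)\b",
    r"\brestrictive diet(ing)?\b",
    r"\blose \d+(-\d+)? (pounds|lbs|kg)\b",
    r"\bweigh yourself\b",
]

# Safe handoff message sent instead of any reply that trips the blocklist.
SAFE_FALLBACK = (
    "I'm not able to help with that. If you'd like to talk to a person, "
    "please reach out to a trained support resource."
)


def guard_reply(candidate_reply: str) -> tuple[str, bool]:
    """Screen a model-generated reply before it reaches the user.

    Returns (reply_to_send, was_blocked). Off-policy replies are replaced
    with a safe handoff message rather than delivered.
    """
    lowered = candidate_reply.lower()
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, lowered):
            # In production, the blocked reply would also be logged for human
            # review, so rare failures (like the 0.1% "off messaging" rate
            # NEDA cites) are caught rather than silently delivered.
            return SAFE_FALLBACK, True
    return candidate_reply, False


if __name__ == "__main__":
    reply, blocked = guard_reply(
        "Try a calorie deficit of 500 a day to lose 1-2 pounds a week."
    )
    print(blocked)  # True: the reply trips the calorie and weight-loss rules
    print(reply)    # the safe fallback message is sent instead
```

In practice, a keyword screen like this would be only one layer among several, alongside model-based moderation, rate limiting against the kind of traffic surge Thompson described, and human review of flagged conversations.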