There is a “fundamental misunderstanding” of what ChatGPT and AI can do
Insurance companies are increasingly eager to explore the benefits of generative artificial intelligence (AI) tools like ChatGPT for their businesses.
But are customers ready to accept this technology as part of the insurance experience?
A new survey commissioned by software company InRule Technology reveals that customers are not excited to find ChatGPT in their insurance journey, with nearly three in five (59%) saying they distrust or completely distrust generative AI.
Although advanced technology aims to improve the insurance customer experience, most respondents (70%) say they prefer to interact with a person.
Generational divide in AI attitudes
InRule’s survey, conducted with PR firm PAN Communications through Dynata, found significant generational differences in customer attitudes toward AI.
Most Boomers (71%) do not enjoy, or are not interested in, using chatbots such as ChatGPT. That figure drops to just a quarter (25%) for Gen Z.
The younger generation is also more likely to believe that AI automation will help provide stronger privacy and security through stricter compliance (40% of Gen Z, compared to 12% of Boomers).
Additionally, the survey found that:
- 67% of Boomers think automation is reducing human-to-human interaction compared to 26% of Gen Z.
- 47% of Boomers find automation impersonal, compared to 31% of Gen Z.
- 70% of Boomers say a data breach would make them less likely to return as a customer, compared to 37% of Gen Z.
Why don’t customers trust AI and ChatGPT?
Danny Shayman, AI and machine learning (ML) product manager at InRule, isn’t surprised by customers’ wariness of generative AI. Chatbots have been around for years and have produced mixed results, he points out.
“Overall, it’s a frustrating experience interacting with chatbots,” Shayman said. “Chatbots can’t do things for you. They can run a difficult semantic search on some documentation and get some answers.
“But you can talk to a person and explain it in 15 seconds, and an empowered person can actually do it for you.”
Additionally, AI-driven tools rely on high-quality data to be effective in customer service. Users may still see inaccurate results while interacting with generative AI, degrading the customer experience.
“Typically, if anything in that data set is wrong, incorrect, or misleading, the customer is frustrated. They feel like they’ve spent an hour getting nowhere,” said Rik Chomko, CEO of InRule Technology.
The Chicago-headquartered firm offers process automation, machine learning and decision-making software to more than 500 financial services, insurance, healthcare, and retail firms. It counts the likes of Aon, Beazley, Fortegra, and Allstate among its clients.
“I believe [ChatGPT] is going to be a better technology than we’ve seen in the past,” Chomko told Insurance Business. “But we run the risk of someone assuming [the AI is right], thinking that a claim would be accepted, and finding out that was not the case.”
The risks of connecting ChatGPT to automation
According to Shayman, there is a fundamental misunderstanding among consumers about how ChatGPT works.
“There is a big gap between producing text that says something and actually doing it. People are working on hooking up APIs so that ChatGPT can connect to a system and go and do something,” he said.
“But you end up with a disconnect between the capability of the tool, which produces text, and a system that executes tasks efficiently and accurately.”
Shayman also warned of a significant risk for businesses setting up automation around ChatGPT.
“If you’re an insurer and you have ChatGPT set up so someone comes in and asks for a quote, ChatGPT can write the policy, send it to the policy database, and do the appropriate documentation,” he said. “But that’s very dependent on ChatGPT getting the quote right.”
Ultimately, insurance companies still need human oversight of AI-generated text — whether that’s for policy quotes or customer service.
“What happens when someone realizes they’re interacting with a ChatGPT-based system and understands that you can change its output based on small changes in prompts?” asked Shayman.
“When you’re trying to set up automation around a generative language tool, you need validations on its output and safety mechanisms to make sure that someone can’t make it do what the user wants rather than what the company wants.”
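The validation layer Shayman describes can be illustrated with a minimal sketch: before any automation acts on generative-AI output, the output is checked against business rules the company controls, so a prompt-manipulated response cannot reach the policy database. All names, fields, and thresholds below are illustrative assumptions, not InRule's product or any insurer's actual rules.

```python
# Hypothetical guardrail for an AI-generated insurance quote.
# The LLM produces text that is parsed into a quote dict elsewhere;
# this layer decides whether automation may act on it.

ALLOWED_PRODUCTS = {"auto", "home", "renters"}   # assumed product catalogue
PREMIUM_RANGE = (100.0, 10_000.0)                # assumed sane annual premium bounds

def validate_quote(quote: dict) -> list:
    """Return a list of rule violations; an empty list means the quote may proceed."""
    errors = []
    if quote.get("product") not in ALLOWED_PRODUCTS:
        errors.append("unknown product")
    premium = quote.get("annual_premium")
    if not isinstance(premium, (int, float)) or not (
        PREMIUM_RANGE[0] <= premium <= PREMIUM_RANGE[1]
    ):
        errors.append("premium outside allowed range")
    if not quote.get("policyholder_name"):
        errors.append("missing policyholder name")
    return errors

def process_ai_quote(quote: dict) -> str:
    """Gate the automation: only rule-clean quotes reach the policy database."""
    violations = validate_quote(quote)
    if violations:
        return "escalate to human underwriter: " + "; ".join(violations)
    return "write to policy database"
```

In this sketch, a quote whose prompt was manipulated into a $1 premium fails the range check and is routed to a human underwriter instead of being written automatically, which is the kind of safety mechanism the quote above argues for.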
What are your thoughts on InRule Technology’s findings about customers and ChatGPT? Share your comments below.