“We want the conversation to be driven and determined by the insurance industry”
Legal and ethical concerns about artificial intelligence (AI) have increased in recent months, especially amid the rise of ChatGPT. As technologies continue to advance, one company believes that the insurance industry should come up with its own AI code of conduct.
AI-driven insurance intelligence provider Cloverleaf Analytics is calling for carriers and MGAs to manage the AI ethics conversation. It started a group called the “Ethical AI for Insurance Consortium” to help facilitate the conversation.
“One of the things we’re interested in is a code of conduct around the use of AI and the use of machine learning in insurance,” said Robert Clark, president and CEO of Cloverleaf Analytics (pictured right). “We recommend starting a working group to help move some of these ethics conversations forward.”
Consortium for ethical AI in insurance
Clark emphasized that ethical guidelines for the use of AI within the insurance industry will help companies stay ahead of the technology’s pitfalls, such as privacy and safety issues, bias and discrimination, and inaccuracy.
“It should be done ahead of time to ensure there are no inherent biases [in the AI technology] and check back and forth so you don’t have an issue,” Clark said.
Cloverleaf Analytics reached out to its customers and asked them to nominate individuals to form the consortium, Clark told Insurance Business.
“Our customers include program business carriers, MGAs, and direct underwriters, so we started there,” he said. “We’re happy to help start this, but ultimately, we want it to be driven and determined by the insurance industry, and not by a vendor.”
Data released by Sprout.ai reveals that more than half of insurers in the US and UK are already using generative AI such as ChatGPT in their organizations.
But many concerns are emerging amid the integration of AI in insurance that need to be addressed, according to Michael Schwabrow (pictured left), EVP of sales and marketing at Cloverleaf Analytics.
“Carriers and technology partners need to make sure that bias doesn’t enter the data and the models we use to look at rates, distribute coverage, etc.,” Schwabrow said. “Because once it’s there, and you’re not auditing and refining the AI, it’s going to get worse and worse. You can’t just set it and forget it.”
Cloverleaf Analytics is still going strong with AI advancements
Despite its calls for broader guidance on the use of AI in insurance, Cloverleaf Analytics remains committed to generative AI and other advances.
Enhanced capabilities allow insurers to use ChatGPT to generate code, build spreadsheets, and produce images, graphics, and designs to support their presentations.
“We’re incorporating OpenAI to help our insurance customers,” Clark said. “If you are an actuary, and you are trying to do the loss triangles or want to develop the loss development factors, but you are not as familiar with Python, we have integrated ChatGPT, which can write the code for you in Python.”
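To make the quote concrete for non-actuarial readers: a loss triangle records cumulative losses by accident year and development age, and loss development factors estimate how losses grow between ages. The sketch below is illustrative only, with invented figures; it is not Cloverleaf’s code or output, just the kind of calculation an actuary might ask ChatGPT to generate.

```python
# Illustrative cumulative loss triangle: rows are accident years,
# columns are development ages. Figures are hypothetical.
triangle = [
    [1000, 1500, 1800, 1900],  # accident year 1 (fully developed here)
    [1100, 1650, 1980],        # accident year 2
    [1200, 1800],              # accident year 3
    [1300],                    # accident year 4 (most recent)
]

def age_to_age_factors(tri):
    """Volume-weighted age-to-age (link ratio) development factors.

    For each development age, divide the total losses at the next age
    by the total losses at this age, using only accident years that
    have both ages observed.
    """
    n_ages = max(len(row) for row in tri)
    factors = []
    for age in range(n_ages - 1):
        num = sum(row[age + 1] for row in tri if len(row) > age + 1)
        den = sum(row[age] for row in tri if len(row) > age + 1)
        factors.append(num / den)
    return factors

print(age_to_age_factors(triangle))  # one factor per age-to-age step
```

With these made-up figures, the first factor is 1.5 (losses grow 50% from age 1 to age 2 across the observed years); real analyses would add tail factors and diagnostics on top of this.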
However, Clark emphasized that customer data is not exposed to the OpenAI engine. Cloverleaf is looking at using a private or shared version of ChatGPT to further upgrade its platform and leverage customer-specific data in the future.
The company also wants to use data for benchmarking insurance companies.
“We’re working with some rating bureaus and insurance departments, and one area they’ve expressed interest in working with us is providing feedback to carriers,” Clark said.
“Taking the privatized part [of ChatGPT] is the next step. But if that’s not the case, then it’s exploring other AI alternatives that we can use on our customers’ private data, because security is our number one priority.”