Risk mitigation strategies enable technological benefits
Insurance companies are moving quickly to capitalize on opportunities to use artificial intelligence, including rapidly expanding generative AI capabilities. To realize the benefits without falling prey to the risks, insurance industry leaders are working to reinforce cybersecurity and risk management practices so they can make the best use of the emerging technology.
In many ways, artificial intelligence is a perfect fit for the insurance industry. Insurance companies have always been in the business of using data. They create novel datasets and use volumes of data to pool risk and more deeply understand cause and effect. They base underwriting decisions on past loss experience and risk modeling, and they enable pricing that is better suited to expected risk and adjusted to policyholder behaviors.
Generative and predictive AI tools enable insurance professionals to take these capabilities to another level.
“Humans are increasingly giving more license to algorithms to make sense of all this complex data,” Grant Thornton Senior Manager, Transformation Kaitlyn Ramirez said. “And this is advantageous in many new ways as humans are embracing the algorithms’ ability to unveil hidden insights that were previously undiscovered.”
Generative AI can use these algorithms to add value to the entire life cycle of insurance services and products and create a competitive advantage for insurance companies that harness them successfully. Key opportunities to use AI in insurance include:
- Creating personalized insurance plans that meet individuals’ specific needs.
- Summarizing information provided in a claim.
- Automating repetitive, tedious manual tasks.
- Interacting with customers as “chatbots” to enhance support services.
- Summarizing complex contracts to enhance policy analysis and risk assessment.
- Performing underwriting tasks and allowing underwriters to focus on strategic tasks such as portfolio management.
“AI technology presents vast opportunities to enhance user productivity and make business operations more efficient,” said Grant Thornton Principal, Insurance Cybersecurity Jeff Witmyer.
But as the use of generative AI in particular grows, the cybersecurity and risk management challenges associated with the technology become more apparent.
5 AI-related threats
New technologies often result in new risks — and the old risks often don’t disappear, either.
“AI will continue to be weaponized by adversaries, and bad actors will continue to look for ways to interrupt the trustworthiness of AI,” Witmyer said.
Some of the most critical threats related to AI in the insurance industry include:
- Indirect discrimination. When proxy variables, identifiable data, or opaque algorithms are used to build profiles of people or groups for the purposes of risk pooling, there’s a danger that protected groups may be discriminated against in underwriting decisions. Regulators are paying close attention to discrimination that may occur through AI’s algorithms.
- Adversarial attacks. An undetected attacker may be able to feed crafted, bogus inputs into an AI system to achieve a malicious goal, such as causing the model to generate false results.
- Model poisoning. Attackers introduce malicious data into the training data that’s used to develop the AI capability. The poisoned data can cause behavior that’s significantly different from the outputs the AI model would have achieved if it had been trained with clean data.
- Privacy threats. AI models that incorporate personally identifiable information into the training process run the risk of creating models that inadvertently reveal sensitive information about individuals or groups. As AI models become more powerful and adaptable, they might also learn to extract sensitive information from users during the course of conversations.
- Lack of model explainability. The technical advancement of AI models makes it difficult to understand how the decisions they’re making relate to underwriting. To avoid regulatory risks and accusations of bias, insurance companies need clarity on how their AI platforms are arriving at decisions.
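To make the model poisoning threat above concrete, here is a minimal, hypothetical sketch using a toy one-dimensional classifier. The data, the "risk score" feature and the midpoint-threshold learner are all assumptions for illustration; real underwriting models are far more complex, but the mechanism is the same: mislabeled records injected into training data shift the learned decision boundary.

```python
# Toy illustration of training-data poisoning (all data is hypothetical).
# A simple classifier learns an approve/deny threshold as the midpoint
# between the mean risk scores of each class.

def fit_threshold(samples):
    """Learn a decision threshold from (risk_score, label) pairs."""
    approve = [x for x, label in samples if label == "approve"]
    deny = [x for x, label in samples if label == "deny"]
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(approve) + mean(deny)) / 2

def classify(threshold, risk_score):
    """Approve applicants whose risk score falls below the threshold."""
    return "approve" if risk_score < threshold else "deny"

# Clean training data: low-risk records approved, high-risk records denied.
clean = [(1, "approve"), (2, "approve"), (3, "approve"),
         (7, "deny"), (8, "deny"), (9, "deny")]

# Poisoned copy: an attacker injects high-risk records mislabeled as
# "approve", dragging the learned threshold upward.
poisoned = clean + [(9, "approve"), (9, "approve"), (9, "approve")]

applicant = 6  # a genuinely risky applicant
print(classify(fit_threshold(clean), applicant))     # deny
print(classify(fit_threshold(poisoned), applicant))  # approve
```

The poisoned model now approves an applicant the clean model would have denied, which is exactly the "significantly different behavior" the threat describes; this is why validating the provenance of training data matters.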
Risk mitigation starts with strong governance
Grant Thornton’s CFO survey for the fourth quarter of 2023 revealed that 32% of finance leaders are using generative AI, and another 57% are exploring use cases for the technology. But of those who are using generative AI, just 43% say their boards have taken an active role in understanding governance over the technology, and 55% say their organizations provide formal training on the use of generative AI.
Witmyer said it’s important for organizations and their boards to establish roles and responsibilities, adopt ethical AI development principles, promote transparency and improve internal user awareness and vigilance.
“It’s going to be very important to have proper acceptable use policies and training in place,” he said. “As with the advent of any new technology, you need to ensure that executives, developers, system engineers, users and others understand the appropriate uses and the risks of AI.”
Ramirez said insurance companies can mitigate risks by focusing on three main areas:
- Guiding principles. Insurance companies need to make sure their use of AI is consistent with their mission statements and organizational values. Ethical use of AI — and effectively communicating a company’s AI principles to consumers — builds trust in the AI and the insurance company.
- Technical design aspects. AI designers and developers have a responsibility to make sure AI platforms are using clean, verified, unbiased data and perform the tasks they are designed to do.
- Sociotechnical aspects. The perception of an insurance company's AI systems is an important reputational consideration. A model's explainability will be critically important to regulators, and insurance companies that are known for building trust with their personal, empathetic services may need to choose their AI use cases carefully.
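The "clean, verified, unbiased data" responsibility in the technical design point above can be made concrete with a small pre-training data check. The sketch below is hypothetical: the field names, the sample records, and the use of a disparate-impact ratio (a common fairness screen, often compared against a four-fifths rule of thumb) are illustrative assumptions, not a compliance standard endorsed here.

```python
# Hypothetical pre-training data checks: drop incomplete records, then
# compare favorable-outcome rates across a protected group attribute.

def validate_records(records, required_fields):
    """Keep only records where every required field has a value."""
    return [r for r in records
            if all(r.get(f) is not None for f in required_fields)]

def disparate_impact_ratio(records, group_field, outcome_field):
    """Min/max ratio of approval rates across groups (1.0 = parity)."""
    counts = {}  # group -> (approved, total)
    for r in records:
        approved, total = counts.get(r[group_field], (0, 0))
        counts[r[group_field]] = (
            approved + (1 if r[outcome_field] == "approve" else 0),
            total + 1,
        )
    rates = [a / t for a, t in counts.values()]
    return min(rates) / max(rates)

records = [
    {"group": "A", "decision": "approve", "income": 50},
    {"group": "A", "decision": "approve", "income": 60},
    {"group": "A", "decision": "deny",    "income": 40},
    {"group": "B", "decision": "approve", "income": 55},
    {"group": "B", "decision": "deny",    "income": None},  # incomplete
    {"group": "B", "decision": "deny",    "income": 45},
]

clean = validate_records(records, ["group", "decision", "income"])
ratio = disparate_impact_ratio(clean, "group", "decision")
print(f"disparate impact ratio: {ratio:.2f}")  # low values warrant review
```

A low ratio does not prove discrimination, and a high one does not rule it out (proxy variables can hide bias), but screens like this give designers an auditable checkpoint before a model is trained.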
The rapid growth and adoption of AI technologies is bringing significant advancements to the insurance industry. AI is streamlining processes, enhancing productivity, and enabling the delivery of products and services tailored to individual needs.
But as insurance companies reap these benefits, they also need to be vigilant about privacy, security and the potential for misuse of AI while maintaining strong ethical principles.
“As companies adapt their business strategies for new AI capabilities, they must also adapt their risk mitigation strategies,” Witmyer said. “Cybersecurity and data privacy are essential parts of mitigating AI risks.”