AI and Insurance: What’s in That Black Box?
Publication | 02.26.20
Artificial intelligence business solutions and other “cognitive” systems have the power to transform insurance. Here’s a sci-fi scenario for 2030, courtesy of the McKinsey consultancy: You’re using your mapping app when your digital personal assistant warns you that your planned route entails a high likelihood of accidents and auto damage. The assistant then offers a small reduction on your motor vehicle and life insurance premiums if you take its suggested route instead.
AI has already begun making its way into every aspect of the insurance business, including claims processing, fraud detection, risk management, marketing, underwriting, rate setting, and pricing. The potential for creating business efficiencies is enormous: Juniper Research predicts that cost savings to the insurance industry from AI will reach $2.3 billion by 2024.
AI leverages big data to find correlations, draw inferences, and make predictions—and to make recommendations on that basis. But this cutting-edge technology may prove to be a double-edged sword. “These systems are built through the harvesting of personal information from millions of people and are used to make decisions affecting millions more,” says Laura Foggan, a Crowell & Moring partner and chair of the firm’s Insurance/Reinsurance Group. “They’re exciting new business tools, but they also pose liability issues under existing laws and regulations. In addition, state and federal officials are considering new laws and regulations that are specific to AI systems.”
Data, Data Everywhere
More insurers today are mulling the use of “nontraditional” sources when assessing premium rates—sources that go beyond public or official filings. These include social media postings and data from sensors that can increasingly be found in our smartphones, vehicles, wearables, and elsewhere. Real-time collection of individualized data from these sensors opens the door to behavior-based policy pricing. The data mining and predictive modeling capacities of AI systems provide a way to turn billions of data points from nontraditional sources into more detailed and objective risk assessments. Some customers will gladly provide personal information in exchange for savings on their premiums.
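To make the idea concrete, here is a minimal sketch of how behavior-based pricing from telematics data might work. The data fields, weights, and discount tiers are hypothetical illustrations, not any insurer’s actual model.

```python
from dataclasses import dataclass

@dataclass
class TelematicsSummary:
    """Aggregated sensor data for one policyholder over a billing period.

    All fields and weights below are hypothetical, for illustration only.
    """
    miles_driven: float
    hard_brakes_per_100mi: float
    speeding_minutes_per_100mi: float
    night_fraction: float  # share of miles driven late at night

def behavior_risk_score(t: TelematicsSummary) -> float:
    """Combine driving behaviors into a 0-1 risk score (toy linear model)."""
    score = (
        0.05 * t.hard_brakes_per_100mi
        + 0.02 * t.speeding_minutes_per_100mi
        + 0.50 * t.night_fraction
    )
    return min(score, 1.0)

def premium_adjustment(base_premium: float, score: float) -> float:
    """Translate the score into a discount or surcharge on the base premium."""
    if score < 0.2:
        return base_premium * 0.90   # safe-driving discount
    if score > 0.6:
        return base_premium * 1.15   # surcharge for risky behavior
    return base_premium

summary = TelematicsSummary(miles_driven=820, hard_brakes_per_100mi=1.2,
                            speeding_minutes_per_100mi=3.0, night_fraction=0.05)
print(premium_adjustment(1200.0, behavior_risk_score(summary)))  # 1080.0
```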
AI systems can also vastly improve insurers’ ability to detect fraud. Advanced predictive modeling can generate red flags during the claims intake process, routing suspect claims to investigation while proper claims are paid more expeditiously.
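As a hedged sketch of what such red-flag routing could look like at intake, the snippet below diverts high-scoring claims to investigators while fast-tracking the rest; the scoring model and threshold are assumptions, not any vendor’s product.

```python
def triage_claim(fraud_score: float, review_threshold: float = 0.7) -> str:
    """Route an incoming claim based on a model-produced fraud score.

    The score is assumed to come from whatever predictive model the
    insurer has trained; the threshold here is purely illustrative.
    """
    if fraud_score >= review_threshold:
        return "route_to_investigation"  # red flag: hold for manual review
    return "fast_track_payment"          # low risk: pay expeditiously

# A high-scoring claim is diverted to investigators; the rest are paid quickly.
print(triage_claim(0.82))  # route_to_investigation
print(triage_claim(0.12))  # fast_track_payment
```

But these new capabilities also come with new risks, Foggan warns: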
- Privacy and security. Big Tech platforms have been plagued by high-profile controversies over the improper or disquieting use of data about their members, sometimes by unknown third parties. Insurers need to ensure they are complying with all privacy and data security laws and maintaining trust with their customers.
- Proxy discrimination. Even if they are never given data on protected classes such as race or religion, AI algorithms could seize on “proxy” criteria (such as ZIP codes or even social media habits) that are historically or commonly associated with people in these classes. If the resulting decisions have a disparate impact on protected classes, they could pose a liability risk; a simple statistical screen for such impact is sketched after this list. Some scholarly research suggests that AI algorithms are especially susceptible to proxy discrimination. “Going forward, almost any use of predictive algorithms that harms a definable group of consumers could, in theory, spark a class action lawsuit,” Foggan says.
- Transparency. When an AI-based system makes a decision to deny a claim or hike a premium, customers will want an explanation. But algorithmic reasoning can be hard to fathom, and third-party suppliers of algorithms may claim their inner workings are proprietary. When an algorithm operates as a “black box,” customers and regulators may be skeptical of its results. For example, an AI system could find a powerful correlation between a given characteristic and a risk of fraud, but unless an insurer can demonstrate a causal relationship, the resulting decision may be challenged as discriminatory.
"Going forward, almost any use of predictive algorithms that harms a definable group of consumers could, in theory, spark a class action lawsuit." |
Regulations Ahead
“Insurers should prepare for increased legislation and regulation in the use of data fueling AI in decision making,” says Kelly Tsai, senior counsel at Crowell & Moring and a member of the firm’s Insurance/Reinsurance Group. Today, the European Union is at the cutting edge of AI regulation due to a (nonbinding) provision of the General Data Protection Regulation, Recital 71. This says that individuals should have the right not to be subject to AI evaluations of personal characteristics that automatically result in a determination with legal impact, unless expressly authorized by law. It also mandates safeguards on such evaluations aimed at preserving due process and reducing discrimination.
"Insurers should prepare for increased legislation and regulation in the use of data fueling AI in decision making." |
Meanwhile, many voices are expressing support for individuals to have a “right to an explanation” of how algorithms are used in decisions. A British regulator, the Information Commissioner’s Office, has released draft guidance aiming to help organizations explain AI decisions about individuals. With the right to an explanation becoming a regulatory battleground in Europe and elsewhere, “insurers and others using AI should be thinking about whether and how AI-based decisions can be explained to those who are affected,” Foggan says. They should also begin thinking about how to respond to proposals for regulatory requirements of an explanation, she adds.
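What an “explanation” looks like is itself a technical design choice. One approach long used for adverse action notices in consumer credit is reason codes: report the factors that moved a particular score furthest in the adverse direction. A minimal sketch for a transparent additive model follows; the feature names, weights, and baseline are hypothetical, and genuinely black-box models would need post-hoc tools such as permutation importance or Shapley values instead.

```python
def reason_codes(weights, applicant, baseline, top_n=3):
    """For a transparent additive model, rank the features that pushed this
    applicant's score furthest in the adverse (higher-risk) direction,
    relative to a baseline applicant. All names and values are hypothetical."""
    contributions = {
        f: weights[f] * (applicant[f] - baseline[f]) for f in weights
    }
    adverse = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)
    return [feature for feature, contrib in adverse[:top_n] if contrib > 0]

weights   = {"claims_last_3yrs": 0.40, "vehicle_age": 0.05, "annual_mileage": 0.00002}
applicant = {"claims_last_3yrs": 2,    "vehicle_age": 12,   "annual_mileage": 18000}
baseline  = {"claims_last_3yrs": 0,    "vehicle_age": 6,    "annual_mileage": 12000}
print(reason_codes(weights, applicant, baseline))
# ['claims_last_3yrs', 'vehicle_age', 'annual_mileage']
```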
In the U.S., various industry-specific consumer protection laws such as the Fair Credit Reporting Act and the Fair Housing Act already apply to the collection and use of personal information. Other federal and state laws and regulations address the use of personal information in specific contexts, such as cybersecurity and medical information. Meanwhile, regulators and legislatures are starting to venture into more AI-specific domains.
For example, last year, New York became the first state to issue guidance on the use of external consumer data in underwriting for life insurance. Insurance Circular Letter No. 1 (2019) warns that some algorithms and models “may either lack a sufficient rationale or actuarial basis and may also have a strong potential to have a disparate impact” on protected classes. It warns insurers that they “may not use an external data source [or vendor or algorithm] to collect or use information that… they would be prohibited from collecting directly.” Nor could they rely on “the proprietary nature of a third-party vendor’s algorithmic processes to justify the lack of specificity related to an adverse underwriting action.”
Last July, New York formed a commission to investigate and study regulations on AI, robotics, and automation. The commission will investigate privacy, safety, and other legal issues in the use of these emerging technologies in the business, nonprofit, academic, and governmental sectors. Other states could soon follow New York’s lead. In addition, the National Association of Insurance Commissioners has formed an AI Working Group that is charged with developing regulatory guidance for presentation to its Innovation and Technology Task Force by NAIC’s 2020 Summer Meeting. Model laws or regulations proposed by NAIC are often widely adopted by states.
At the federal level, two Democratic senators introduced the Algorithmic Accountability Act last April, which would require entities to ensure that their algorithmic decision systems don’t expose consumers to unfair bias, inaccuracies, or privacy and security risks. Some entities would be required to produce studies of how their systems’ design and training could pose risks. If the Federal Trade Commission deemed a company’s decision systems high-risk, that company would be required to provide a cost-benefit analysis and a risk minimization plan.
The bill would encompass AI tools used in many industries, such as facial recognition, chatbots, recruiting tools, ad targeting, and credit calculations. Neither this bill nor a parallel House bill has advanced beyond committee, but they offer an early indication of the kind of scrutiny that algorithmic modeling may come under in 2020 and beyond. Indeed, insurers need to start thinking about AI’s impact not only on them but also on their policyholders, notes Foggan. Many policyholders are already using AI in their daily operations, thereby incurring risks such as discrimination suits that could result in losses.
As promising as AI and cognitive systems may be for their industry, insurers must take care when determining what kind of information could be used in underwriting algorithms, and be willing and able to look under the hood of new technologies. When deciding when or how to adopt new technologies, they must factor in potential liabilities related to privacy, security, discrimination, or transparency.