The Association of British Insurers (ABI) has called for new rules to reflect the increased use of AI and machine learning in the insurance industry, a reminder that a significant gap remains between the progress of technology and existing regulatory frameworks across the globe. The ABI urged regulators and the industry to establish clear ethical rules on the use of big data and artificial intelligence in order to prevent consumer harm, and warned that existing regulations may not be sufficient when claims and underwriting decisions are made by a computer.
IN DEPTH
This latest comment from the ABI follows an October 2019 report from the Financial Conduct Authority and the Bank of England on the use of machine learning in the financial services industry. Notably, every one of the insurers surveyed for the report confirmed that it had live machine learning applications in use. Some of the firms surveyed noted potential challenges arising from a lack of clarity about how existing regulations would apply to AI and machine learning.
Based on the results of the 2019 survey, the FCA (Financial Conduct Authority) and the Bank of England announced in January that they had established a “Financial Services AI Public Private Forum” with private firms and academics to explore, among other things, how “regulation and/or good industry practice could support safe adoption of AI/ML.” The Forum has yet to commence its work, as applications to participate were being accepted until February 21.
Essentially, UK regulators are studying the issue and considering the need for action, but have yet to issue any specific guidance on the use of AI and big data. Regulators in the United States are in a similar state of play: the National Association of Insurance Commissioners (NAIC) Innovation and Technology Task Force announced last summer the formation of an Artificial Intelligence Working Group. Thus far the NAIC AI Working Group has published draft high-level principles on the use of AI in the insurance industry.
While insurance regulators on both sides of the pond continue to learn about the implications of AI and big data in underwriting and seek comment from industry, the European Commission released a draft policy on the regulation of AI on February 19, with plans for laws to follow by year-end. These AI laws would apply to all industries, not just insurance, which carries the risk that the EU laws might not address the unique circumstances of the use of AI in a highly regulated industry such as insurance.
Meanwhile, it is in the United States that the lack of precise regulations will soon be felt most keenly. Personal lines insurance in the United States is heavily regulated when it comes to pricing and to terms and conditions, and it is precisely in this sector that AI may come to play its biggest role.
Policy terms and rates are generally subject to regulatory approval in the United States for life and health insurance and for homeowners and auto insurance. A US state insurance regulator may be inclined to question whether an insurer’s rates are actuarially justifiable if they are set by a computer using an algorithm that mines data invisible to the insured. A US regulator might also presume that an insurer using AI or other new technology has no means of avoiding unintentional yet unlawful discrimination in claims handling, rating and underwriting. There could also be potential issues under federal and state privacy laws governing the use of information, such as state laws prohibiting insurers from using genetic information in underwriting.
In one of the only concrete examples of regulation in this area to date, the New York Department of Financial Services (NYDFS) last year published Circular Letter No. 1 on the use of big data in life insurance underwriting. The NYDFS cautioned that:
“an insurer should not use external data sources, algorithms or predictive models in underwriting or rating unless the insurer has determined that the processes do not collect or utilize prohibited criteria and that the use of the external data sources, algorithms or predictive models are not unfairly discriminatory. The insurer must establish that the external data sources, algorithms or predictive models are based on sound actuarial principles with a valid explanation or rationale for any claimed correlation or causal connection. An insurer must also disclose to consumers the content and source of any external data upon which the insurer has based an adverse underwriting decision.”
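Requirements of this kind can translate into concrete checks in an insurer’s model-governance process, such as screening a model’s input features against prohibited criteria and testing outcomes for disparate impact. The following is a minimal, hypothetical Python sketch: the field names, the prohibited list, and the 0.8 benchmark (the informal “four-fifths rule”) are illustrative assumptions, not NYDFS guidance.

```python
# Illustrative sketch only: field names and thresholds are assumptions,
# not regulatory requirements.

# Criteria an insurer may be barred from using (illustrative list).
PROHIBITED_FIELDS = {"race", "religion", "national_origin", "genetic_info"}


def screen_features(model_features: set) -> set:
    """Return any model inputs that match a prohibited criterion."""
    return model_features & PROHIBITED_FIELDS


def adverse_impact_ratio(group_approval_rate: float,
                         reference_approval_rate: float) -> float:
    """Ratio of a group's approval rate to the reference group's.

    Values well below 1.0 can signal unfairly discriminatory outcomes;
    0.8 (the informal "four-fifths rule") is a common benchmark.
    """
    return group_approval_rate / reference_approval_rate


flagged = screen_features({"age", "zip_code", "genetic_info", "credit_score"})
print(flagged)  # {'genetic_info'}

ratio = adverse_impact_ratio(0.60, 0.80)
print(round(ratio, 2))  # 0.75 -- below the 0.8 benchmark, warrants review
```

A real compliance review would go much further, of course, including checking for proxy variables (such as zip code standing in for a prohibited criterion) and documenting the actuarial rationale for each feature, as the Circular Letter demands.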
Similarly, the California Department of Insurance (CDI) requires insurers to disclose any underwriting rules, including algorithms, when making a rate filing. According to a CDI Legal Division Opinion published in August 2018, such underwriting rules are public information, which will force carriers to disclose intellectual property that they might otherwise consider a trade secret. Other states may follow California’s or New York’s example, and of course the United States has a different regulator in every state, so each state can in theory develop its own unique interpretation.
If this occurs, then not only will insurers face additional challenges in demonstrating compliance with these varying criteria, but regulators will also face a significant obstacle of their own: how to evaluate and police the new requirements when considering rate filings from insurers. How will regulators ensure they have staff with the technical know-how to understand these algorithms and their potential impact? Will they devise tests to measure that impact? For now, regulators and insurers alike are relying on existing regulatory frameworks to the extent possible, but all signs point to imminent regulatory changes that may have a significant impact on the burgeoning insurtech industry in the United States.