
Today’s insurers use big data from myriad sources to underwrite more accurately, price risk more precisely and create incentives for risk reduction. From telematics that track driving behavior to social media activity that leaves a revealing digital footprint, new sources of data can now produce highly individualized profiles of customer risk.
Insurers are increasingly using artificial intelligence (AI) and machine learning to manage manual, low-complexity workflows, dramatically increasing operational efficiency.
The rise of AI-powered insurance is also driven by the ability to predict losses and customer behavior with greater accuracy. Some insurers say it gives them more opportunity to influence that behavior and even prevent claims before they happen.
Yet is there a risk that this new way of doing things could create unfairness, and even undermine the risk-pooling model that is fundamental to the industry, leaving some people unable to find cover at all?
After all, AI is not a neutral technology: it can be used in ways that reinforce its creators’ biases. Insurers therefore need to take particular care to develop and use AI and machine learning ethically, and to manage their customers’ data with watertight controls.