Overcoming AI’s Bias

Friday, January 8, 2021

Canadian Underwriter

If people intend to use data to help mitigate risk or save lives, that data must be trustworthy, says an artificial intelligence expert.

Sean Griffin, CEO and co-founder of Disaster Tech, a disaster risk data firm, pointed out during CatIQ Connect’s recent Quarterly Webinar Series that his company is part of the National Science Foundation AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography. The key word in that title, he emphasized, is ‘trustworthy.’

“We need these models to be trusted,” he said during his session entitled AI Applications in Catastrophes: Risks and Rewards. “Not just by the researchers who build them, but by the practitioners and by the public who rely on these models to make life-saving or risk mitigation decisions that may affect a community for decades and generations to come.”

One issue affecting our reliance on AI is that it carries built-in assumptions about data quality. Human influences bias artificial intelligence and the way it interprets data.

As Griffin observed, AI is something that humans train and assemble. That process is “going to bias [AI], naturally, based on the human’s understanding of the issue at hand,” he said.