A timely reminder that, in addition to having a vision and strategy that defines the role of AI, there also needs to be a plan to combat bias.
"I'm not concerned about AI super-intelligence “going rogue” and challenging the survival of the human race – that's science fiction and unsupported by any scientific research today. But I do believe we have to think about any unintended consequences of using this technology."
– Chief Scientist, Salesforce.com
Read how to anticipate and avoid bias in his article. There is also a valuable review by McKinsey based on the banking world that is relevant to insurance and other sectors.
Read: Derisking machine learning and artificial intelligence
And remember: under GDPR, enterprises can only use data for the purpose agreed by the data owner. Google and the NHS have already run into trouble over this. It is easy to do when you implement AI for uses other than those the data owner agreed to.
So how do we manage the threats of biased AI? It starts with humans proactively identifying and managing any such bias – which includes training AI systems to identify it. AI bias isn’t the result of the technology being flawed. Algorithms don’t become biased on their own – they learn that from us. So we have to take responsibility for helping to avoid any negative effects spawned by the AI systems that we’ve created. For example, a bank could exclude gender and race from its dataset when setting up an AI system to score the viability of applicants for loans.
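The bank example above can be sketched in code. This is a minimal illustration with hypothetical field names, not a real loan-scoring pipeline: it simply strips protected attributes from applicant records before they reach a model.

```python
# Hypothetical applicant records; field names are illustrative only.
PROTECTED = {"gender", "race"}

def strip_protected(applicants):
    """Return applicant records with protected attributes removed,
    so they cannot be used directly as model features."""
    return [
        {key: value for key, value in record.items() if key not in PROTECTED}
        for record in applicants
    ]

applicants = [
    {"income": 52000, "credit_score": 710, "gender": "F", "race": "A"},
    {"income": 38000, "credit_score": 640, "gender": "M", "race": "B"},
]

cleaned = strip_protected(applicants)
```

Note that removing protected columns is only a first step: other features (a postcode, for instance) can act as proxies for the excluded attributes, so bias still has to be tested for in the model's outputs.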
https://www.weforum.org/agenda/2019/01/ai-isn-t-dangerous-but-human-bias-is/