Homo sapiens strikes back, even if it was with the assistance of a computer that identified a critical weakness in a top-ranked AI system. There is a lesson here for all organisations seeking to leverage AI.

The discovery of this weakness points to a fundamental flaw in the deep learning systems that underpin today’s most advanced AI. 

These systems can “understand” only the specific situations they have been exposed to in the past, and for which there is sufficient, timely data to feed the algorithms. One likely reason the Go system KataGo was defeated is that the tactic exploited by Go player Kellin Pelrine is rarely used, so the AI had not been trained on enough similar games to realise it was vulnerable.

The Go-playing bot did not notice this weak point, even when the encirclement was nearly complete, Pelrine said. “As a human, it would be quite easy to spot,” he added. 

The same blind spots are likely to occur in business. If the world of insurance were predictable, claims inflation would have been predicted, reserves updated and premiums raised earlier. Instead, many carriers seem to have been wrong-footed, with share prices plummeting.

Counter-fraud systems rely on the assumption that fraudsters will not use AI technology to find weaknesses in their data and algorithms and then exploit them. That is exactly the kind of adversarial probing that beat KataGo.
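To make that concrete, here is a minimal sketch of the kind of probing this implies: an attacker who can query a fraud model repeatedly can binary-search the largest claim that slips under its referral threshold. The scoring function and threshold below are invented stand-ins, not any real counter-fraud system.

```python
# Hypothetical stand-in for a deployed fraud model: suspicion rises with claim size.
def fraud_score(amount: float) -> float:
    return amount / 10_000

THRESHOLD = 0.75  # assumed referral threshold: scores above this go to a human

# Binary-search the boundary between "waved through" and "referred".
lo, hi = 0.0, 50_000.0
for _ in range(40):  # 40 halvings pin the boundary to well under a penny
    mid = (lo + hi) / 2
    if fraud_score(mid) < THRESHOLD:
        lo = mid  # still waved through; push the claim higher
    else:
        hi = mid  # flagged; back off

print(f"Largest unflagged claim found by probing: £{lo:,.2f}")  # ~£7,500
```

The point is not the toy model but the query budget: a few dozen probes are enough to map a decision boundary that the carrier thinks is hidden.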

Predictions based on data are naturally dominated by the bulk of that data, i.e. the observations around the midpoint. There is very little data in the extremes at either end of a normal distribution. But if the extreme becomes the new normal, the AI software is in danger of being misinformed and its predicted outcomes wrong.
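A minimal sketch puts numbers on how thin those tails are; the claim-amount distribution below is purely hypothetical:

```python
# How little of a normal distribution lives in the tails.
import numpy as np

rng = np.random.default_rng(seed=42)
claims = rng.normal(loc=5_000, scale=1_500, size=100_000)  # hypothetical claim amounts

mean, std = claims.mean(), claims.std()
within_2_sigma = np.mean(np.abs(claims - mean) < 2 * std)
beyond_3_sigma = np.sum(np.abs(claims - mean) > 3 * std)

print(f"Share of claims within 2 sigma of the mean: {within_2_sigma:.1%}")     # ~95%
print(f"Claims beyond 3 sigma (the 'extremes'): {beyond_3_sigma} of 100,000")  # ~270
```

A model trained on those 100,000 claims has seen the middle tens of thousands of times, and the extremes only a few hundred.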

It is common to find flaws in AI systems when they are exposed to the kind of “adversarial attack” used against Go-playing computers. Despite that, “we’re seeing very big [AI] systems being deployed at scale with little verification,” said Stuart Russell, a computer science professor at the University of California, Berkeley.

Christopher Surdak is an award-winning, internationally recognized transformation expert. He has been Chief Transformation Officer at The Institute for Robotic Process Automation & Artificial Intelligence (IRPA AI) and, more recently, Acting Chief Transformation Officer and Program Office Director at The White House. I would take note of what he says about AI, including:

"The importance of training and data is dramatically underestimated, leading to exceptionally poor results. These poor results are not an anomaly, they are the predictable and inevitable result of an under-funded data engineering and training effort."

And again:

"Very few organizations manage to squeeze actual value out of AI, with many researchers pegging the success rate at five percent or less."

These quotes come from his recent article "The Four Horsepersons of the AI Apocalypse: Part One". I can't wait for Part Two.

Surdak urges us to flip datasets on their head: instead of over-analyzing the predictable middle data, over-represent the rare boundary cases in order to teach AI what to do when rare events occur. After all, if an event is predictable, why do you need AI?
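As a hedged sketch of what "flipping the dataset" might look like in practice, the snippet below over-samples a rare boundary event until it makes up half the training set. The data and the rarity threshold are illustrative assumptions, not Surdak's actual method:

```python
# Over-represent rare boundary cases by re-sampling them into the training set.
import numpy as np

rng = np.random.default_rng(seed=0)
X = rng.normal(size=(10_000, 4))     # hypothetical feature matrix
y = (X[:, 0] > 2.5).astype(int)      # rare "boundary" event, ~0.6% of rows

rare_idx = np.flatnonzero(y == 1)
common_idx = np.flatnonzero(y == 0)

# Re-sample (with replacement) so rare cases make up half the training set
# instead of less than 1%.
boosted = rng.choice(rare_idx, size=len(common_idx), replace=True)
train_idx = np.concatenate([common_idx, boosted])
rng.shuffle(train_idx)

X_train, y_train = X[train_idx], y[train_idx]
print(f"Rare-event share before: {y.mean():.2%}, after: {y_train.mean():.2%}")
```

Class weighting achieves the same effect without duplicating rows; either way, the model is forced to spend its capacity on the cases that matter rather than the predictable middle.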

In fact, AI is already proving its value in automating predictable and repetitive tasks, freeing up people to focus on higher value-add work for which human intuition and experience are well suited.

One example is tackling the problem of surfacing and analysing unstructured data appearing in, say, invoices, brokers' slips, or medical reports. Time and money can be saved by delivering concise data to claims adjusters and/or to decisioning systems.
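As an illustration of the kind of extraction involved, here is a minimal sketch that pulls a few key fields out of free-text invoice lines. The sample text and regex patterns are invented, and real documents would need OCR and far more robust parsing:

```python
# Extract concise, structured fields from unstructured invoice text.
import re

invoice_text = """
Invoice No: INV-2024-0117
Date of loss: 03/11/2024
Replacement laptop ............ £1,249.00
Policy ref: HH-889-221
"""

fields = {
    "invoice_no": re.search(r"Invoice No:\s*(\S+)", invoice_text),
    "loss_date":  re.search(r"Date of loss:\s*([\d/]+)", invoice_text),
    "amount":     re.search(r"£([\d,]+\.\d{2})", invoice_text),
    "policy_ref": re.search(r"Policy ref:\s*(\S+)", invoice_text),
}

# Collapse the matches into the concise record an adjuster (or a
# downstream decisioning system) actually needs.
summary = {k: (m.group(1) if m else None) for k, m in fields.items()}
print(summary)
# {'invoice_no': 'INV-2024-0117', 'loss_date': '03/11/2024',
#  'amount': '1,249.00', 'policy_ref': 'HH-889-221'}
```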

Omni:us is a company to keep an eye on, balancing the use of AI with human intuition.

“AI is one of many tools that you can have on your desk to be more effective with the task at hand, which may be assessing a new insured. On top of that, you have your own gut feeling developed from the physical touch points from communicating with them directly. And when you are using omni:us technology for claims handling, you can override its decisions,” CEO Hauschild said.

"The AI makes recommendations to the claims adjuster, and it may be there are occasions when it recommends returning the process back to the handler. 

“It's about identifying the sweet spot within claims – where a carrier is comfortable with using automation for decision making, and where its governance structure requires a human for that,” Hauschild said.

Following a 10-week analysis of 760 home claims processed on the RightIndem platform, it was found that 87% of claims submitted were for single items only and, when analysed post-claim, could have been processed via straight-through processing (STP). Other findings:

  • 33% of customers submitted claims outside office hours, and this share is increasing.
  • 80% of claims received had photos or videos attached.
  • 35% of photos/videos received had referral indicators (metadata flags; see the sketch after this list).
  • 83% of customers who started the digital journey completed it.
  • Handlers settled claims 66% faster because they had more valid information with which to validate.
  • The fastest submission time was 4 minutes; the average was 14 minutes.
  • 91% of customers completed the CSAT (customer satisfaction) survey.
  • 82% of customers would recommend the RightIndem platform.
  • 86% said it was easy to use.
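As promised above, here is a hedged sketch of what a metadata-based referral check might look like, using Pillow to read EXIF tags from a submitted photo. The specific rules (software-edited, missing timestamp) are illustrative assumptions, not RightIndem's actual indicators:

```python
# Flag referral indicators in a claim photo's EXIF metadata.
from PIL import Image
from PIL.ExifTags import TAGS

def referral_indicators(photo_path: str) -> list[str]:
    exif = Image.open(photo_path).getexif()
    # Map numeric EXIF tag ids to readable names like "Software", "DateTime".
    tags = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

    flags = []
    if not tags:
        flags.append("no EXIF metadata (possibly screenshotted or scrubbed)")
    if "Software" in tags:
        flags.append(f"edited with {tags['Software']}")
    if "DateTime" not in tags:
        flags.append("missing capture timestamp")
    return flags

print(referral_indicators("claim_photo.jpg"))  # hypothetical file
```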

Chris Andrew, vice-president for sales in the Europe, Middle East and Africa West region, says omni:us avoids the risk of AI-led automation failing, even given the complex nature of certain insurance product lines. That is because omni:us builds a minimum viable product (MVP) for a preferred peril. The client then tests this against the “desired type of loss peril first” to determine the value and accuracy it would derive from the MVP, and then outlines the way it wants to work with omni:us.

Daren Rudd, Vice President Consulting and Head of Insurance Business and Technology Consulting, sums it up this way:

" Firstly we need to be more mindful of the potential blind spots for any AI solution. That’s going to be a hard one to manage, but going in with that as a design principle should help to think through the issue.

I'm not sure I agree with the suggestion that AI should be focused only on what humans can't predict. Part of the value of AI, as I see it, is to remove humans from the drudge tasks, which are predictable and repetitive. That frees up people to spend time on the edge cases where human intuition and adaptive thinking come to the fore.

And training an AI for the edge cases and the unusual is going to be tough, considering most models need lots of data; by definition, that data is not going to be there!

I think, as usual, it comes down to a balanced approach where we don't become over-reliant on a single technology, we expect there to be failures and build ways to capture them, and we build in ways to constantly assess and adapt. Easy to say, much harder to deliver in practice!"

Further Reading

From Prediction to Transformation across Claims

Stop Tinkering with AI

Carriers can 'markedly improve claims handling with AI'

Despite progress in AI-powered claims handling, human judgment remains ‘key,’ Mitchell says

What P&C insurers can do as claims inflation pressures results in Europe

Ecclesiastical, claims and fraud