‘CIOs are under increasing pressure to generate business value from generative AI, but Salesforce CIO Juan Perez is among those IT leaders who believe such impatience from the C-suite could doom many projects … The enterprise landscape is littered with Version 1.0 generative AI proof of concept projects that did not materialize into business value and have been dumped. While the industry remains in the preliminary stages of uncovering and implementing AI use cases into business workflows, the blueprint for success remains elusive for many CIOs.’
CIO.com, October 2024
2024 may well have been the year of generative AI pilot purgatory, but what does 2025 offer? Without the firm foundations and infrastructure to leverage AI, a clear business strategy, and a culture and resources that encourage a ‘test and learn’ approach, success will remain elusive. The cautionary message of the Three Little Pigs fairy tale should be applied.
Three Little Pigs and the Big Bad Wolf
Three little pigs were sent out into the big world by their mother to make their fortunes. Two opted for instant gratification and built flimsy houses so they could move fast and not miss the vast opportunities awaiting them. The third pig took the time to build a house of brick on firm foundations before seeking its fortune.
Along came the Big Bad Wolf.
Arriving at the first pig’s house made of hay the wolf huffed and puffed and blew the house down quickly before eating the pig for dinner. The next day it sought out the second pig’s house made of sticks and, just as before, blew it down and ate the pig for dinner.
Looking forward to another meal the wolf trotted over to the third pig’s house and blew as hard as ever. The house was sturdily built of brick and, despite the wolf’s best efforts, easily withstood the onslaught. In a rage, the wolf decided to enter via the chimney, but the pig boiled a large pan of water which the wolf fell into and died.
Pig number three was able to start to make its fortune.
Printed versions of this fairy tale first appeared in the 1840s but it originated far earlier. The morals of the story have become embedded in Western culture, including:
• Planning and foresight are vital to success.
• Execute plans on firm foundations and infrastructure.
I picture the big bad wolf as those AI vendors and consultants publicising generative AI and LLMs, urging you to experiment with AI without the normal discipline, planning, and strategic vision needed to evaluate and learn effectively.
As an insurer, will you be one of the piggies whose houses were blown down, or the winning pig who acted with foresight and effective planning? There are several steps to take to achieve the latter. This list from Chris Surdak is useful.
1. Executive adoption decisions based on something other than FOMO
2. Compelling use cases that tackle real problems and/or leverage significant opportunities
3. An honest and complete business case with ROI
4. Systems engineering thinking
5. Effective test campaigns
6. User buy-in to the vision and the experiments
7. Understanding of the technology’s true capabilities and limitations
8. A backup plan if the experiment fails
9. Self-reflection on prior failures: what has changed in our approach this time?
10. Have the technology partners you chose ‘eaten their own dog food’ and proved they can deliver the outcomes you desire using the tools they promote?
To adopt these steps, let us consider:
• The current and trending status of AI
• Whether you have a plan to experiment or not, you are already doing so
• What are the firm foundations and infrastructure that help ensure success?
Current and trending status of AI
In the world of Generative AI and LLMs, the shockwaves that DeepSeek generated explain everything. Investors and highly valued AI vendors, from Anthropic and Meta to Mistral and OpenAI, were blindsided by the newcomer from China that claimed to deliver more for less. Hardware, software, and cloud infrastructure companies saw stock prices plunge dramatically, and there were even cries of ‘foul’. It is still a Wild West landscape, with every week bringing new products, releases, and claims of enabling transformation.
AI is, of course, far more than Generative AI, Large Language Models, and Agentic AI. It has its roots in the mid-twentieth century, and today many mature AI tools are deployed analysing data and predicting outcomes. The launch of ChatGPT by OpenAI heralded the hyped promise of Generative AI and trained LLMs: automating work, reducing costs, improving customer service, completing research, and generating innovative plans. Vendors claim we are approaching the point when it will perform as well as, or even better than, humans: the point of Artificial General Intelligence (AGI).
Teams of agents would replace much of human teamwork, automatically answering questions, guiding purchase decisions, underwriting risk, settling claims, and changing insurance forever.
Those who rushed in without proper planning and preparation have found this to be a hypothetical world rather than a real one.
On the other hand, progress is real. These new tools are already augmenting humans, and the rate of advance is almost bewildering. DeepSeek was downloaded in unprecedented volumes. Alibaba launched QwQ-32B soon afterwards, and OpenAI responded with o3-mini and Deep Research (does that show some paranoia about DeepSeek?). You need people working full-time tracking the pros and cons of each relative to the problems you wish to solve and the opportunities you wish to leverage. More on that below.
It does show that you need a coherent strategy to experiment, learn, and choose optimal use cases that offer you a competitive advantage. Like it or not, you are experimenting already.
Insurers are already drawn into experimenting with Generative AI, Agentic AI and LLMs
Many of the point solutions that insurers adopt already embed AI tools in their products.
RightIndem embeds AI in its digital claims platform to (a sketch of one such check follows the list):
• Prompt claimants registering claims via eNOL to fill in missing, vital details
• Automatically categorise clauses and subclauses to triage claims faster
• Detect inconsistencies in claims, e.g. described weather versus actual weather conditions
• Validate claims against policy wording, inclusions, and exclusions
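To make that inconsistency check concrete, here is a minimal Python sketch of the idea, not RightIndem’s actual implementation; the weather lookup and data are hypothetical stand-ins:

```python
# A minimal, hypothetical sketch of the kind of inconsistency check described
# above: compare the weather a claimant describes with recorded conditions.
# In production the lookup would call a historical weather data provider;
# here it is stubbed with canned data purely for illustration.
import datetime as dt

OBSERVED = {("SW1A 1AA", dt.date(2024, 6, 1)): {"clear", "dry"}}

def observed_conditions(postcode: str, date: dt.date) -> set[str]:
    """Stand-in for a historical weather API client."""
    return OBSERVED.get((postcode, date), set())

def weather_inconsistent(claim: dict) -> bool:
    """True if the described weather conflicts with recorded conditions."""
    observed = observed_conditions(claim["postcode"], claim["loss_date"])
    return bool(observed) and claim["described_weather"] not in observed

claim = {"postcode": "SW1A 1AA",
         "loss_date": dt.date(2024, 6, 1),
         "described_weather": "storm"}
print(weather_inconsistent(claim))  # True: a storm reported on a clear, dry day
```

Claims that fail such a check are not rejected automatically; they are triaged for human review.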
Insurers should embrace this development of AI in the context of the entire insurance value chain, just like The AA.
Hyperexponential automates underwriting processes, allowing underwriters to spend less time on admin and more time underwriting. It leverages Generative AI and other tools. Underwriting and Claims are two parts of the value chain that insurers should surely connect more closely. If you deploy both RightIndem and Hyperexponential, you need the tools and support to orchestrate the intersections between the two apps: an AI and data orchestration layer.
Cytora digitises risk for the commercial insurer, leveraging LLMs and multi-agents to process higher volumes of submissions at lower marginal cost and with greater control. Some clients will combine Hyperexponential and Cytora for risk management.
This picture expands to all parts of the value chain, from product planning, through distribution, to supply chain management, counter-fraud, and automating payments. Every time you start a trial with one or more apps/platforms that leverage AI, you are experimenting with AI.
Far better to be the successful little piggie and plan with foresight for these pilots and deployments to be part of the overall business and AI strategies.
And don't forget Quantum Computing!
Microsoft’s recent announcement of its Majorana 1 chip makes it possible that quantum computing applications will mature within 3-5 years. And these will supercharge Generative AI and Agentic AI potential.
Simon Torrance wrote that:
"🚀 𝗤𝘂𝗮𝗻𝘁𝘂𝗺 𝗖𝗼𝗺𝗽𝘂𝘁𝗶𝗻𝗴 + 𝗔𝗜: 𝗪𝗵𝘆 𝗘𝗻𝘁𝗲𝗿𝗽𝗿𝗶𝘀𝗲 𝗟𝗲𝗮𝗱𝗲𝗿𝘀 𝗦𝗵𝗼𝘂𝗹𝗱 𝗣𝗮𝘆 𝗔𝘁𝘁𝗲𝗻𝘁𝗶𝗼𝗻.
Recent breakthroughs have pushed quantum computing from theoretical promise to practical potential. Combine it with AI and things get really interesting for forward-thinking enterprises and visionary leaders...
🛸 Google's Willow quantum processor recently achieved what was previously thought impossible: it completed a benchmark computation in minutes 𝙩𝙝𝙖𝙩 𝙬𝙤𝙪𝙡𝙙 𝙩𝙖𝙠𝙚 𝙩𝙤𝙙𝙖𝙮'𝙨 𝙛𝙖𝙨𝙩𝙚𝙨𝙩 𝙨𝙪𝙥𝙚𝙧𝙘𝙤𝙢𝙥𝙪𝙩𝙚𝙧𝙨 𝙢𝙞𝙡𝙡𝙞𝙤𝙣𝙨 𝙤𝙛 𝙮𝙚𝙖𝙧𝙨. (Read that sentence again!)
💥 And Microsoft this week launched its Majorana 1 chip to produce more reliable and scalable qubits and unlock breakthroughs in medicine and material science."
That puts more pressure on insurers to implement suitable infrastructures.
Firm foundations and infrastructure
Too many insurers have deployed AI as a patchwork of unconnected point solutions rather than as a strategic enabler within a sound and well-connected infrastructure.
High-quality data is the bedrock of AI success; AI is as subject to ‘garbage in, garbage out’ as any other technology or process. You will have heard of hallucinations: generative AI sometimes delivers complete rubbish because it has been trained on vast datasets that contain rubbish. Add to this that LLMs are contextually blind and have no empathy, and you can see where this is going.
The LLMs that have trawled data from public sources will combine true and useful facts with conspiracy theories and lies, and will predict cause and effect where there is none. There are tools and processes to help weed these out, and we are always warned that professionals should validate outputs for accuracy.
A firm foundation is vital to validating training, measuring output accuracy, and correcting mistakes until consistently acceptable accuracy is achieved.
Ensure automated data integrity through external verification, using tests for code and definitive answers for maths problems. This process acts as a filter, excluding low-quality inputs. Organisations can replicate this rigour by developing domain-specific verification systems, such as knowledge graphs that map factual relationships and give context to the relationships between people, companies, professions, locations, etc. (e.g., verifying used-vehicle pricing against established and current research).
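As a minimal illustration of such a verification layer (the reference data and names below are hypothetical, not any vendor’s method), model output is accepted only when a curated source corroborates it:

```python
# Hypothetical sketch of a domain-specific verification step: an LLM-extracted
# used-vehicle valuation is accepted only if it falls within a curated
# reference range. The reference data is illustrative, not real pricing.

REFERENCE_PRICES = {  # (make, model, year) -> (low, high) in GBP
    ("Ford", "Fiesta", 2019): (7_500, 11_000),
}

def verify_valuation(make: str, model: str, year: int, llm_value: float) -> bool:
    """Accept the model's output only when a trusted source corroborates it."""
    bounds = REFERENCE_PRICES.get((make, model, year))
    if bounds is None:
        return False  # no reference data: route to a human reviewer
    low, high = bounds
    return low <= llm_value <= high

print(verify_valuation("Ford", "Fiesta", 2019, 9_200))   # True: corroborated
print(verify_valuation("Ford", "Fiesta", 2019, 25_000))  # False: flag for review
```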
Fine-tuning small, specialised language models on proprietary data offers another route to accuracy, but you will also want to ingest external data. Add to that the stream of historical and real-time data coming from the many platforms and applications deployed in an insurer, and you can see the need for data curation, categorisation, and orchestration.
Data curation
Curating data that is relevant, accurate and as complete as practical is an essential task.
1. Which key processes and decisions need improving to achieve strategic goals?
2. What data is required?
3. What mix of AI tools will best accomplish desired outcomes?
4. Configure tools to test, learn, and apply successful outcomes
5. Build out issue by issue within a longer-term strategic plan
The data collected will invariably come from multiple sources, will be mostly unstructured, and must be categorised to make it AI-tool ready, creating a Universal Data Layer (a categorisation sketch follows this list):
• Named entity recognition
• Term matching
• Contextual awareness
• A resulting focus on relevant data
• Real-time data for dynamic decision-making
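As a rough sketch of the first two steps, here is named entity recognition plus term matching using the open-source spaCy library; the claim note and policy terms are invented for the example:

```python
# Rough sketch of named entity recognition plus term matching on an
# unstructured claim note, using the open-source spaCy library
# (pip install spacy, then: python -m spacy download en_core_web_sm).
# The policy terms and note are invented for the example.
import spacy

nlp = spacy.load("en_core_web_sm")

POLICY_TERMS = {"escape of water", "subsidence", "accidental damage"}

note = ("Claimant reports escape of water at the insured property in Leeds "
        "on 3 March; loss adjuster from Acme Adjusting Ltd attended.")

doc = nlp(note)
entities = [(ent.text, ent.label_) for ent in doc.ents]  # people, orgs, places, dates
matched = {t for t in POLICY_TERMS if t in note.lower()}  # simple term matching

print(entities)  # e.g. [('Leeds', 'GPE'), ('3 March', 'DATE'), ('Acme Adjusting Ltd', 'ORG')]
print(matched)   # {'escape of water'}
```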
There is a lot of hard work involved in this process: the foresight and planning of the surviving pig from the fairy tale. It is demanding work, which can be augmented with data automation tools such as the Aiimi Insight Engine. This can be leveraged by an insurer’s own data teams and supplemented by Aiimi’s own vast team of data scientists, engineers, and architects. Similarly, Palantir is moving into insurance from its strengths in military, government, and healthcare.
Data curation and categorisation are just too important not to collaborate with AI and data technology partners.
That is before you consider data orchestration.
Data Orchestration
To become a customer-centric organisation that personalises products and services requires managing the many relationships across entities, people, and assets, joining past and present with predicted future outcomes. Adding to the complexity, these relationships are spread over multiple core systems of record, policy admin systems, point solutions, and data sources from data warehouses to silos.
Managing and orchestrating data for just one outcome is challenging enough, so managing for multiple outcomes increases the complexity geometrically. This will involve a rigorous experimentation and ‘test & learn’ phase in which the accuracy of outcomes is measured until full confidence is achieved in every outcome.
This leads to the world of Agentic AI: agents within the insurer connecting to, and transacting with, external agents, making decisions that optimise outcomes for both customers and insurers.
Suffice it to say that if enterprises have found it a challenge to turn single-use-case pilots into successful deployments, Agentic AI is a step too far. Achieving success by applying the steps described above to single use cases will allow you to move on. The pressure to advance from the traditional policy-centric business model of insurers to become customer-centric will put extreme pressure on the legacy IT systems that form the underlying infrastructure of most insurers: the other part of the infrastructure challenge.
The Systems Infrastructure Conundrum
Insurers still rely on legacy technology: mainframes and decades-old software reliably powering transactions and unchanging processes. Global insurers even more so, because M&A activity has left them with different technology stacks in each country.
Those that invested in core systems like Guidewire, Duck Creek, and Majesco invariably did so in one line of business, like motor, whilst their home and contents business runs on older technology. Additionally, Guidewire can hardly be labelled an agile and flexible core system. This has mattered less whilst most policies remain annually renewed products with little innovation. Things are changing, though. New competition, the changing needs of younger generations, and unmet protection gaps have put pressure on insurers to be faster to market with personalised service and products.
That opens the potential for new MACH-architected core systems that enable personalisation, support dynamic innovation, and provide the data orchestration capabilities described above. We see insurers often choosing these platforms for new product launches to add flexibility, fluidity, and the ability to apply the ‘test & learn’ experimentation we have discussed. In alphabetical order, these include:
• EIS
• FintechOS
• Genasys
• ICE
• Ignite
• Instanda
• Novidea
Consider this scenario by Rory Yates, published by Insurance Thought Leadership.
“You're involved in a car accident and press your car's "help" button. Instantly, emergency services and roadside assistance are alerted and dispatched to your location. Your family receives an automated text, reassuring them that you are safe, while your car begins interacting with a network of connected systems. The car initiates your insurance claim, retrieving policy details, assessing damage via integrated sensors and pre-filling forms for you.
As the process unfolds, you receive real-time notifications: repair options tailored to your location, transparent updates on claim progress and reminders about next steps. The system anticipates your needs, suggesting rental car services or alternative transport arrangements, ensuring your journey is minimally disrupted.
This is the power of data orchestration -- an intelligent and connected data ecosystem working together to transform stressful moments into smooth, supportive experiences.”
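Behind such an experience sits event-driven orchestration. Here is a deliberately simplified, hypothetical Python sketch of the pattern: one first-notification-of-loss event fans out to independent handlers, each standing in for a system of record or external service:

```python
# Deliberately simplified, hypothetical sketch of event-driven claim
# orchestration: one first-notification-of-loss ("help" button) event fans
# out to independent handlers. All names and handlers are illustrative.
from typing import Callable

HANDLERS: dict[str, list[Callable[[dict], None]]] = {}

def on(event: str):
    """Register a handler for an event type."""
    def register(fn: Callable[[dict], None]):
        HANDLERS.setdefault(event, []).append(fn)
        return fn
    return register

def publish(event: str, payload: dict) -> None:
    """Fan the event out to every subscribed handler."""
    for handler in HANDLERS.get(event, []):
        handler(payload)

@on("fnol.reported")
def dispatch_assistance(e: dict) -> None:
    print(f"Roadside assistance dispatched to {e['location']}")

@on("fnol.reported")
def notify_family(e: dict) -> None:
    print(f"Reassurance text sent to contacts of {e['policyholder']}")

@on("fnol.reported")
def open_claim(e: dict) -> None:
    # A real handler would pre-fill the claim from policy and sensor data
    print(f"Claim opened on policy {e['policy_id']} with sensor data attached")

publish("fnol.reported", {"policyholder": "A. Driver",
                          "policy_id": "MOT-123456",
                          "location": "M25 junction 14"})
```

In a real deployment each handler would be a separate service or agent subscribing to an event bus; the pattern, not the code, is the point.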
Now consider that in a home and contents escape-of-water claim. It will be twice as hard to deliver similar quality outcomes in multiple lines of business. What about pets, CAT events, commercial property, life, and health insurance? Without that orchestration it becomes impossible to deliver consistently high-quality customer experiences, from buying protection to settling claims.
This is not just about AI, Agents, and the goals of achieving artificial general intelligence as articulated by OpenAI, Meta, Anthropic, and other generative AI innovators.
It is about the urgent need to pivot to customer-centricity, the effective deployment of ecosystems, and the ability to compete with companies unhindered by unconnected, policy-centric systems.
Generative AI, Agentic AI, LLMs, and SLMs – these are catalysts for the changes in business and systems models that insurers must experiment with and adopt if they are to retain the dominance they currently enjoy. That dominance will not last forever, and maybe not even for long, unless they change.
Prioritise data curation, orchestration, and categorisation. Ensure you have core platforms and Policy Admin Systems that offer the infrastructure required to be customer-centric, innovative, and competitive.
Further Reading
CIOs under pressure to deliver AI outcomes faster
Why Chinese AI has stunned the world
AI-Native Companies Are Growing Fast and Doing Things Differently
Priorities CIOs Must Address in 2025, According to Gartner’s CIO Survey
Underwriting trailblazers outgun mainstream insurers
How technology in claims processing is changing consumer choices in insurance
Appendix: MACH-architected cores
What is a MACH-architected system?
- Microservices
- API-first
- Cloud-native
- Headless
A microservices architecture means that you build a software product as a set of independent components (microservices), where each component operates on its own and interacts with others through APIs. As a result, teams can deploy, change, and improve separate software components without disrupting the rest of the system.
API-first architectures are more flexible, allowing teams to choose the most appropriate frontend technology to solve priority business problems. Developers can unify logic across touchpoints and avoid duplication of development work, as well as eliminate channel silos.
APIs allow for fast communication between components, meaning that businesses can reduce the time to implement new touchpoints and accelerate speed-to-market processes.
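As a toy illustration of the microservices and API-first principles, here is a hypothetical quote service built with the open-source FastAPI framework; the endpoint and rating logic are invented for the example:

```python
# Toy illustration of a MACH-style microservice: a self-contained quote
# service exposed only through its API, independently deployable and
# replaceable. Built with FastAPI (pip install fastapi uvicorn); the
# endpoint and rating logic are hypothetical.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="quote-service")

class QuoteRequest(BaseModel):
    product: str        # e.g. "motor" or "home"
    postcode: str
    sum_insured: float

class Quote(BaseModel):
    premium: float

@app.post("/quotes", response_model=Quote)
def create_quote(req: QuoteRequest) -> Quote:
    # Placeholder rating logic; a real service would call rating engines and
    # enrichment services, each sitting behind its own API.
    rate = 0.01 if req.product == "home" else 0.03
    return Quote(premium=round(req.sum_insured * rate, 2))

# Run with: uvicorn quote_service:app --reload
```

Because the service is exposed only through its API, any frontend, or another microservice, can consume it, which is exactly what the headless principle below relies on.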
Cloud-native platforms use public, private, and hybrid clouds as part of a cloud migration strategy to develop scalable and dynamic data solutions. Insurers will be more resilient to the performance issues that often plague on-premises systems.
Headless - far from being clueless! Insurers that employ headless architecture don’t have a default frontend system that defines how content is presented to end users. You will be able to deliver personalised experiences through whichever frontend best suits each channel and customer.
For example, if insurers hope to adopt AI as a foundational tool for delivering smarter, more adaptive customer experiences, the business and supporting technology model will need to change substantially. Harnessing multi-agent AI and applying it responsibly and securely across operations requires a robust data model: one that centres data on the customer and the dynamic world they inhabit, processes it in a fluid, near real-time manner, and then integrates the resulting insights into personalised customer experiences.
