This article is about how many projects take on a life of their own, overriding their originators to the detriment of enterprises. I start in a different industry, one of 'Fruit and Nut' cases.
When I wrote my thesis back in the mists of time, I chose the launch of a new bar by Cadbury called Aztec. I was a management trainee, spending six months gaining work experience at Cadbury and six months at college and university. Paid all the time, which was a godsend.
A six-month spell was in marketing, working for the Group Product Manager of what were termed count lines, which included Cadbury's Flake. Cadbury's core product was Cadbury Dairy Milk, launched in 1905 and promoted for purity and quality with the slogan "A glass and a half of full cream milk in every half pound", introduced in 1928 and still referenced today.

Mars launched Galaxy as a direct competitor in 1960 with slogans like "Why have cotton when you can have silk?" Chocolate is frequently an impulse purchase, so campaigns, merchandising, PR and marketing are key to influencing purchase decisions in a ruthlessly competitive market. More recently, Cadbury promoted the message that "There's a glass and a half in everyone", emphasising generosity and human goodness, with Mars promoting "Choose pleasure", a direct message of indulgence.

Cadbury was increasingly concerned at Galaxy's growing penetration of the market and decided to hit Mars where it would hurt: the hugely profitable Mars Bar, whose slogan "A Mars a day helps you work, rest and play" evolved into "Pleasure you can't measure", that self-indulgent motive again.
Hence the launch of Aztec by Cadbury in 1968.

The initial strategy was to develop a distinct, preferable taste, less sweet than the Mars Bar and with an extra benefit. Pepper was added, caffeine to stimulate, and numerous other ideas were tried. The trouble was that adding enough of each ingredient made the bar taste awful, and the final Aztec product came to taste similar to a Mars Bar, both being layers of nougat and caramel covered in milk chocolate. Instead of a real difference in taste, the difference was rooted in brand promotion. Chocolatl was originally made by the Aztecs, so the launch featured sumptuous television advertising, shot on location, of Aztecs in rich and colourful finery feasting on this indulgent "Feast of a Bar".
A brand-new, highly automated and expensive production line had been built for Aztec, and early sales in a rolling regional launch showed dynamic growth and suggested a successful future. Unfortunately for Cadbury, after initially high trial rates, consumers could not detect an indulgent difference and went back to eating Mars Bars. Aztec was withdrawn in 1977.
My thesis was that the quest for differentiation took over the whole project because the goals had been built into a compelling story. The process took so long, with no success, that the project took on its own life. It drove the product managers and senior management with the strategic desire to hit Mars' profits. The production line was completed and the product launched on emotion rather than logical analysis. The project had a life of its own.
You can see this in other fields. Take the sad saga of Britain's High Speed 2 rail infrastructure, originally planned to link London with Birmingham in the middle and Leeds and Manchester in the north. A mix of over-specification, cost-plus pricing, Britain's sclerotic planning system and other bureaucratic delays took its toll. Long after it was obvious that the project would hit all the fabled failures, i.e. 'over time, over budget and, in this case, over-spec', it was curtailed and will now only link London and Birmingham.
Which brings me to chatbots: OpenAI's ChatGPT, Anthropic's Claude, Google's Gemini and all the tools being rapidly launched and replaced just as fast by new versions. Are the hyped slogans from Sam Altman about current generative AI capabilities and the future deployment of 'Artificial General Intelligence' and 'Artificial Super Intelligence' plausible? Cadbury and Mars claimed to offer goodness, purity and self-indulgence, whilst I eventually left realising we actually offered tooth decay, potential obesity and health challenges. TBH, I still eat chocolate, but only occasionally; nothing like a high-chocolate-content product melted in a fresh, hot baguette from a Parisian boulangerie!
Have insurers emotionally invested money, people and resources into Generative AI and Agentic AI projects through the fear of missing out (FOMO) rather than making decisions based on ambition and rigorous logical motivation and analysis? Has FOMO made them dive into POCs without taking the time to think WHY they want and need these new tools before tasking teams to decide how and with what vendors and partners?
Some US insurers, including Cigna and Humana, have found themselves the targets of class actions in which they have to justify why acceptance or rejection decisions were made on clients' risks. The lawsuits seek to use the discovery process to obtain auditable evidence of how these algorithmic systems make decisions, with plaintiffs demanding transparency about the algorithms' logic, training data and decision-making criteria.
The inherent faults of LLMs and their probabilistic predictions mean they are best suited to boring, mundane, high-volume and well-understood tasks, e.g. analysing unstructured data from multiple sources so that underwriters can spend less time on admin and more on actual underwriting.
Some enterprises, including Salesforce and Klarna, terminated large numbers of staff as AI projects were implemented faster than was practical, only to have to replace them as the downsides of these new AI tools became evident. Pilot purgatory, or worse, deployment before the enterprise had the knowledge, capacity and resources to manage projects that had taken on a life of their own, resulted in high failure rates and disillusion.
The same can be said for investors in the big AI boom, who are beginning to question the viability of LLM vendors and any hope of a decent ROI. ChatGPT may be able to create promotional slogans, but can the claims of LLM vendors be trusted?
Sam Altman: "…Humanity is close to building digital superintelligence, and at least so far it's much less weird than it seems like it should be." Mmmm….
In 2024, Zuckerberg announced a sweeping reorganization of Meta’s AI division, pledging billions in funding and recruiting what he called a “superintelligent” team to achieve the goal of superintelligence- human-like reasoning, learning, and adaptability across virtually any domain. But as the project takes shape, critical questions emerge about its feasibility, ethics, and Zuckerberg’s track record of overpromising on transformative technologies.
"However, Daniela Amodei, the president and cofounder of Anthropic, suggested that the term itself (AGI)— a shorthand to describe when machines might reach human-level intelligence — may no longer be a useful way to think about where AI is headed.
"AGI is such a funny term," Amodei told CNBC in a recent interview. "Many years ago, it was kind of a useful concept to say, 'When will artificial intelligence be as capable as a human?'"
Today, she said, that framing is breaking down.
The AI field: full of innovative promise
I am not pessimistic about AI, a field that has been with us since the 1950s, but I am trying to put a different framing on the current tools compared with the field of AI as a whole. LLMs and transformers are prone to known inaccuracies that make them unsuitable for life-and-death decisions. They have proved highly effective for pharma companies in reducing the time needed to research new drugs, but as soon as the lab work is complete the options are critically evaluated by expert human clinicians (see Further Reading).
Chris Surdak, at the end of 'The Care & Feeding of BOTS', urged readers: "Keep in mind, the pace of change will never again be as slow as it is right now. Hence, if you choose not to embrace the new, the cost of eventually giving in will only go up rapidly. And rest assured technology will win in the end." At ReLeaf Financial, he and the ReLeaf team have done just this, applying AI tools as part of the blockchain, crypto-mining, proof-of-intent validation platform that will transform global payments, royalty payments, notarisation and many other fields. But only where it is viable, whilst experimenting with future use cases.
My point is that enterprises and humans just don't seem to learn from history which is a valuable way to look at change, or how civilisations, businesses and people managed change. Sometimes effectively but often disastrously. GenAI, Agentic AI and Artificial Super Intelligence are all about change and emotional humans too often do not learn from past mistakes or success.
Chris Surdak, CEO of ReLeaf Financial, worked with AI tools on NASA projects and wrote a whole book on the high failure rates of RPA projects:
An Owner's Manual for Robotic Process Automation
Just a glance at the Amazon summary shows that Surdak explained how to avoid failure whilst also encouraging the adoption and deployment of RPA. Cut and paste GenAI into the contents and you have a manual for achieving success with AI tools.
RPA deployment for far too many enterprises was a mirror image of GenAI projects, with 80% to 95% failure rates. But it need not be so. Can't we learn from previous change-management failures and successes how to get innovation and transformation right? Can we avoid commitment to the project ruling the roost, as with the Aztec bar? Can we choose compelling and viable use cases to experiment with AI tools and platforms, ruthlessly rejecting those that are not viable and pruning project options down to a manageable number?
Jim Swanson, Chief Information Officer at Johnson & Johnson, said back in mid-2025 that the company allocates resources only to the highest-value generative AI use cases, while it cuts projects that are redundant or simply not working, or where a technology other than GenAI works better (see Further Reading).
Enterprises and leaders need to experiment now to gain the corporate knowledge and the trained, resourced professionals required to operationalise and scale viable projects. Not rush in where angels fear to tread, but have the courage of convictions built on a rigorous select-test-learn-iterate cycle that accepts or rejects tech projects.
This is not only important for insurance companies but, more widely, for humanity and civilisation itself. AI can be deployed for the good of humans or for power and control by bad actors. I fervently hope the first goal wins over the second.
Further Reading
Allianz and AI: ambitions kept in check Research Asorya.io
Most AI Initiatives Fail. This 5-Part Framework Can Help HBR
Johnson & Johnson pivots its AI Strategy WSJ
Beyond AI: Preparing For Artificial Superintelligence Forbes Magazine
The Adaptability Newsletter Rory Yates on Macro Trends for insurers
An AI revolution that is proving valuable and viable Insurtech World
An AI revolution in drug making is under way The Economist
When generative AI first gained traction, many leaders raced to fund pilots. Yet too many failed to scale or create measurable value. Why? Because they lacked the organizational scaffolding to bridge technical potential and business impact. Technology enables progress, but without aligned incentives, redesigned decision processes, and an AI-ready culture, even the most advanced pilots won’t become durable capabilities.
https://hbr.org/2025/11/most-ai-initiatives-fail-this-5-part-framework-can-help