"AIG, Great American and WR Berkley are among the groups that have recently sought permission from US regulators to offer policies excluding liabilities tied to businesses deploying AI tools including chatbots and agents."
"One exclusion WR Berkley proposed would bar claims involving 'any actual or alleged use' of AI, including any product or service sold by a company 'incorporating' the technology."
This stems from conflating LLMs with AI systems as a whole, which is the result of excessive hype, false claims, and the very poor security of LLMs. It's also a good example of how a few noisy bad apples can damage a large and critically important industry.
"Even Mosaic, a speciality insurer at Lloyd’s of London marketplace which offers cover for some AI-enhanced software, has declined to underwrite risks from large language models such as ChatGPT."
"Ericson Chan, chief information officer of Zurich Insurance, said when insurers evaluated other tech-driven errors, they could 'easily identify the responsibility'. By contrast, AI risk potentially involves many different parties, including developers, model builders and end users. As a result, the potential market impact of AI-driven risks 'could be exponential' he said."
Source: Financial Times
"A warning to my contacts -- regardless of current levels of success and fame. At times like this, empires can fall very rapidly. There is a fair amount of fraud occurring in the LLM bubble, and regulators are watching closely (rightfully so). Reputations are in the process of being damaged and destroyed. I would not be shocked to see successful criminal fraud convictions within the next few years. So be very careful with words and claims. When people and institutions lose trillions of dollars, they tend to demand justice.
This is the latest example of how the risk profile in LLMs is evolving, just as I and others predicted and warned. This is not difficult to predict for those with experience in disasters, risk management, and liability (particularly prevention, one of our core strengths). Insurers are very good at risk -- they would not survive otherwise."
Mark Montgomery, CEO, KYield
That's something we are increasingly familiar with, especially in the 'emerging risks' category.
This is about risks related to the surging adoption of artificial intelligence worldwide, and it's interesting to see it covered in a non-specialist newspaper.
The article starts with "Major insurers are seeking to exclude artificial intelligence risks from corporate policies", which makes sense, as there are multiple challenges surrounding these AI risks.
First and foremost, they are not really known. They didn't exist in the past (as a reminder, ChatGPT was launched in November 2022), so there is no historical data insurers could leverage to model the risks in a traditional way. They are surging by design as AI adoption grows across industries. And of course, not every insurance player is covering them yet.
Against that background, the article lists players that are exploring how to exclude AI-related risks from commercial insurance policies.
Another way to look at this retreat is to consider how to build... Resilience! Such a market gap could open the door to dedicated policies covering these AI-related risks.
In that case, the usual challenges of tackling 'emerging risks' would apply: spotting and accessing relevant data sets; making sense of that data through algorithms; and unlocking insurance capacity, as incumbents become able to model, assess, and price these risks (a toy illustration of that pricing step follows below).
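To make that modeling-and-pricing step concrete, here is a minimal, hypothetical sketch of how an actuary might price an emerging risk from sparse incident data: a classic frequency-severity expected-loss calculation with an explicit loading for parameter uncertainty. All figures, names, and loading factors are illustrative assumptions, not drawn from the article.

```python
import statistics

def price_emerging_risk(incident_counts, loss_amounts, exposure_years,
                        uncertainty_loading=0.5, expense_ratio=0.3):
    """Toy frequency-severity premium for an emerging risk (illustrative only).

    incident_counts: observed claims per policy (sparse for new risks)
    loss_amounts:    observed loss severities, in currency units
    uncertainty_loading: extra margin because historical data is thin
    """
    frequency = sum(incident_counts) / exposure_years  # claims per policy-year
    severity = statistics.mean(loss_amounts)           # average loss per claim
    expected_loss = frequency * severity               # pure premium
    # Load heavily for parameter uncertainty: emerging risks have little
    # history, so this margin dominates the technical price.
    risk_loaded = expected_loss * (1 + uncertainty_loading)
    return risk_loaded / (1 - expense_ratio)           # gross premium

# Illustrative data: 4 AI-related claims observed across 200 policy-years.
premium = price_emerging_risk(
    incident_counts=[1, 0, 2, 1],
    loss_amounts=[120_000, 45_000, 300_000, 80_000],
    exposure_years=200,
)
print(f"Indicative annual premium per policy: {premium:,.0f}")
```

The point of the sketch is the `uncertainty_loading` parameter: with so few observed AI claims, the margin for not knowing the true frequency and severity can outweigh the expected loss itself, which is exactly why insurers hesitate.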
There is also a systemic challenge with the models themselves: if one makes a mistake ('hallucinates'), it could damage players across many industries that rely on that model or on applications built on top of it.
LLMs are flawed, and while considerable effort and cost are invested in trying to guardrail hallucinations and drift, can we ever create a silk purse from a sow's ear?
On 27 October, Professor Yann LeCun stated, "Existing large language models (LLMs) will become obsolete within five years," and emphasized, "Text-based learning alone cannot achieve human-level artificial intelligence (AI); understanding the world through sensory input is essential."
| Source: Asiae (아시아경제) | https://www.asiae.co.kr/article/2025102711524920018
He made these remarks during his keynote speech at the "AI Frontier International Symposium" held at Dragon City in Yongsan, Seoul, on the same day. Professor Yann LeCun is one of the "four leading scholars in AI" and co-director of the Global AI Frontier Lab in New York, United States.
He began by saying, "Several more revolutions are needed to create truly intelligent machines," and went on to explain the "World Model" concept he has long advocated.
The World Model is similar to the way a baby, just a few months old, forms concepts by observing the world. According to Professor LeCun, to reach human-level intelligence, AI must be equipped with the ability to understand the physical world through various senses, such as vision, hearing, and touch, just like a baby.
The World Model is also a concept that predicts the next state (t+1) from the current state (t). Predicting t+1 enables AI to learn and simulate physical laws, causality, and environmental changes. The goal is not simply to memorize data, but to create AI that understands and predicts how the world works (a toy sketch of this idea follows below).
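As a rough illustration of the t → t+1 idea, a world model can be thought of as a learned function mapping the current state and an action to a predicted next state, trained only on observed transitions. The sketch below assumes deliberately trivial linear dynamics and a linear model; it is a conceptual toy, not LeCun's actual proposal (which centers on joint-embedding predictive architectures rather than raw state prediction).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "physics" (illustrative): a ball in 1D, state s = [position, velocity].
# True dynamics, unknown to the model: x' = x + v*dt, v' = v + a*dt.
DT = 0.1
def true_step(state, action):
    x, v = state
    return np.array([x + v * DT, v + action * DT])

# World model: a linear map predicting s_{t+1} from (s_t, a_t).
# W has shape (2, 3): next state = W @ [x, v, a].
W = rng.normal(scale=0.1, size=(2, 3))

# Train on observed transitions, the way a world model is learned:
# minimize || predicted next state - observed next state ||^2.
lr = 0.05
for _ in range(2000):
    s = rng.normal(size=2)
    a = rng.normal()
    target = true_step(s, a)
    inp = np.array([s[0], s[1], a])
    pred = W @ inp
    grad = np.outer(pred - target, inp)  # gradient of the squared error
    W -= lr * grad

# The trained model can now *simulate*: roll out predictions of future
# states without touching the real dynamics, i.e. predict t+1 from t.
s = np.array([0.0, 1.0])
for t in range(3):
    s = W @ np.array([s[0], s[1], 0.0])  # action = 0 (coasting)
    print(f"t={t+1}: predicted state {s.round(3)}")
```

The rollout at the end is the key property being described: once the model predicts t+1 from t, it can chain those predictions to simulate causality and environmental change, rather than merely memorizing data.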
He remarked, "Current AI architectures are extremely poor. Today's AI systems are not even comparable to the intelligence of humans or animals," and added, "In terms of understanding the physical world, they are not even as smart as a house cat."
He also noted that human intelligence is not general but specialized, which is why he does not favor the term "artificial general intelligence (AGI)." He stressed, "What we should aim for is advanced intelligence capable of understanding the world and performing physical reasoning."
"The current risks from LLMs, GenAI and AgenticAI evolve swiftly as models are upgraded on a regular basis (see ChatGPT 5.1 or Gemini 3.0 Pro released in the last few days). This is a challenge for insurers in case they would consider modeling and assessing these risks. This would probably emphasize the benefits of dynamic pricing !
As AI risks are related to tech infrastructures, should they be embedded into cyber insurance policies? Should the insurance industry wait for AI adoption to plateau before offering dedicated policies? Could new players enter the insurance market by tackling these risks with technology (data & algorithms)? Should the insurance regulator push for broader coverage and define a standard policy?
👉 How do you perceive these AI risks: threat or opportunity for insurance players?"
Florian Graillot, Investor, Astoria VC
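To make the dynamic pricing point above concrete: one way to handle risks that shift with every model upgrade is to re-rate the policy whenever the insured's AI stack or observed incident rate changes. The sketch below is purely hypothetical; the rating factors, model names, and numbers are invented for illustration, not an actual rating model.

```python
from dataclasses import dataclass

@dataclass
class RiskSignal:
    model_version: str     # which model the insured currently deploys
    incident_rate: float   # observed incidents per 1,000 API calls
    guardrail_score: float # 0..1, higher = stronger mitigations

BASE_PREMIUM = 10_000.0  # illustrative annual base rate

def dynamic_premium(signal: RiskSignal) -> float:
    """Re-rate the policy each time the insured's AI stack changes."""
    # More observed incidents -> higher premium; better guardrails -> discount.
    frequency_factor = 1.0 + 50.0 * signal.incident_rate
    mitigation_factor = 1.5 - 0.5 * signal.guardrail_score
    return BASE_PREMIUM * frequency_factor * mitigation_factor

# Premium drifts as the insured upgrades models and controls mature.
for s in [
    RiskSignal("model-v4", incident_rate=0.010, guardrail_score=0.4),
    RiskSignal("model-v5", incident_rate=0.004, guardrail_score=0.7),
]:
    print(f"{s.model_version}: premium {dynamic_premium(s):,.2f}")
```

The design choice this illustrates is the shift from an annual, static rate to a premium recomputed from live signals, which is one plausible answer to risks that evolve faster than a policy year.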
Whatever you do, make sure you engage with technology partners that understand the dangers inherent in LLMs, and ensure you only deploy projects for use cases where the dangers are controlled. And keep an eye on the technologies that will supersede LLMs.
https://www.msn.com/en-us/money/companies/ai-is-too-risky-to-insure-say-people-whose-job-is-insuring-risk/ar-AA1QZS2p
unknownx500