
Shiznizzle "They" want to make this product and compete like crazy for the "best" LLM AI. "They" wont mind making tons of cash if it does ever make any money. But "they" do not want to be responsible for their AI bot telling some suicidal person when it tells them to "go on and do it as the time is right"? Actual language used. The fact that it is insurers that want this is irrelevant as it could be AI makers pushing them for these exclusions so they do not have to be responsible for their failures Reply
valthuer It’s hard to blame insurers for being cautious here. AI is evolving faster than our legal, technical, and financial systems can keep up, and recent incidents have shown that even small mistakes can trigger massive, unexpected liabilities. What this really highlights is the need for clearer standards, more transparency from AI developers, and smarter risk-sharing models. At the end of the day, companies, insurers, and regulators all want the same thing: reliable systems that don’t create unpredictable fallout. Until we get there, everyone is going to tread carefully—and that’s not panic, it’s just responsible risk management. Reply
TechieTwo My insurance agent tells me that insurance companies only want to insure concrete bridges located under water… for fire hazards. 🙁 Reply
SomeoneElse23 valthuer said: It’s hard to blame insurers for being cautious here. AI is evolving faster than our legal, technical, and financial systems can keep up, and recent incidents have shown that even small mistakes can trigger massive, unexpected liabilities. What this really highlights is the need for clearer standards, more transparency from AI developers, and smarter risk-sharing models. At the end of the day, companies, insurers, and regulators all want the same thing: reliable systems that don’t create unpredictable fallout. Until we get there, everyone is going to tread carefully—and that’s not panic, it’s just responsible risk management. Every day that I use an LLM I find mistakes. And they always act 100% confident in their responses, despite being 100% wrong. It's a scary thought that someone may want to put this behind financial transactions, killing machines, robots, cars, or, well, anything really. Imagine buying an AI car and having to sign off that "Your AI agent makes mistakes". That'll go over real well with insurance! Reply
valthuer SomeoneElse23 said: Every day that I use an AI LLM I find mistakes. And they always act 100% confident in their response, despite being 100% wrong. It's a scary thought thinking someone may want to put this behind financial transactions, killing machines, robots, cars, or, well, anything really. Imagine buying an AI car and have to sign off that "Your AI agent makes mistakes". That'll go over real well with insurance! You’re absolutely right — the gap between how confident LLMs sound and how reliably they perform can definitely be unsettling. Your examples highlight why so many people are cautious about putting this technology into anything high-stakes. At the same time, that’s exactly why the insurance angle is important: it pushes everyone to be more realistic about the current limitations and to put proper safeguards in place before relying on AI in critical areas. We’re still in the phase where human oversight and clear accountability are essential. I think we’re ultimately on the same page: AI has potential, but it needs stronger guardrails and more predictable behavior before it can be trusted in situations where mistakes carry real-world consequences. Reply
-Fran- valthuer said: It’s hard to blame insurers for being cautious here. AI is evolving faster than our legal, technical, and financial systems can keep up, and recent incidents have shown that even small mistakes can trigger massive, unexpected liabilities. What this really highlights is the need for clearer standards, more transparency from AI developers, and smarter risk-sharing models. At the end of the day, companies, insurers, and regulators all want the same thing: reliable systems that don’t create unpredictable fallout. Until we get there, everyone is going to tread carefully—and that’s not panic, it’s just responsible risk management. They're not being cautious. They know this will be a legal fight they can't win. Think about why there's a "CAREFUL HOT" label on hot coffee cups nowadays. Spoiler: it's not because the law would side with makers and companies when a civil lawsuit can demonstrate insufficient controls to prevent harm. Current LLMs and even potential AGIs are (for all intents and purposes) non-deterministic. This means a manufacturer can't ensure "safe operation" for anything that depends on "AI" as a means of controlling machinery. Imagine a house-robot that stabs you with a knife because it computed your hand was a piece of ham or something stupid like that. Which, let's be honest, isn't a wild assumption to make. Insurers know they would have so many claims that it's just not profitable. Ergo: they know AI-backed stuff will just fail, sometimes horribly. Regards. Reply
valthuer -Fran- said: They're not being cautious. They know this will be a legal fight they can't win. Think about why there's a "CAREFUL HOT" label on hot coffee cups nowadays. Spoiler: it's not because the law would side with makers and companies when a civil lawsuit can demonstrate insufficient controls to prevent harm. Current LLMs and even potential AGIs are (for all intents and purposes) non-deterministic. This means a manufacturer can't ensure "safe operation" for anything that depends on "AI" as a means of controlling machinery. Imagine a house-robot that stabs you with a knife because it computed your hand was a piece of ham or something stupid like that. Which, let's be honest, isn't a wild assumption to make. Insurers know they would have so many claims that it's just not profitable. Ergo: they know AI-backed stuff will just fail, sometimes horribly. Regards. You’re raising a legitimate concern — the legal landscape around AI is extremely unforgiving, and you're right that non-deterministic systems create a level of uncertainty that traditional liability models just aren’t built for. That’s exactly why insurers are reacting so defensively: even if AI doesn’t always "fail horribly", the fact that they can't guarantee it won’t is enough to trigger serious risk-avoidance. Where I think there’s some middle ground is that this isn’t necessarily a verdict on AI itself, but on the gap between current technology and the safety guarantees the law requires. Until that gap closes — through better guardrails, standards, and transparency — insurers will keep treating AI as an unpredictable exposure. So I understand your argument. I just think the situation is less about AI being doomed to fail and more about the ecosystem around it not yet being mature enough to manage the risk responsibly. Reply
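To make the non-determinism point from this exchange concrete, here is a minimal sketch in plain Python. The token names and scores are made up and no real model or vendor API is involved; it only illustrates the mechanism: with temperature sampling over a probability distribution, the same input can produce a different next token on every run, which is why "same prompt, same answer" cannot be guaranteed without extra constraints.

```python
import math
import random

def sample_next_token(logits, temperature=0.8, rng=None):
    """Pick one token from a softmax over toy logits at the given temperature."""
    rng = rng or random.Random()
    scaled = [score / temperature for score in logits.values()]
    peak = max(scaled)
    weights = [math.exp(s - peak) for s in scaled]  # numerically stable softmax
    # random.choices accepts relative weights, so no need to normalize to 1.
    return rng.choices(list(logits.keys()), weights=weights, k=1)[0]

# Hypothetical scores for the next word after "the object in my hand is a ..."
toy_logits = {"ham": 2.1, "hand": 2.0, "glove": 0.5}

# Same input, five runs: sampling can land on a different token each time.
for run in range(5):
    print(run, sample_next_token(toy_logits))
```

Greedy decoding (temperature effectively zero) removes this particular source of randomness, but it does not stop a model from being confidently wrong, which is the other failure mode raised earlier in the thread.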