The Red Lines: What Not to Do

AI is powerful, but power without guardrails is dangerous. These are the ten mistakes that can cost you your reputation, your compliance standing, and your policyholders' trust. Learn them before you make them.

Each entry below is based on real incidents, documented regulatory actions, or well-established risks. This is not theoretical fear-mongering — it is practical guidance grounded in what has already gone wrong in the insurance and financial services industries.

1. Don't Submit AI Output Without Reading Every Word

AI generates text that sounds authoritative and professional. This makes it dangerously easy to assume the output is correct without actually reading it carefully. In insurance, unverified AI output can result in incorrect coverage determinations, regulatory violations, and financial exposure.

Risk

Incorrect coverage decisions, regulatory non-compliance, financial losses, professional liability, damaged client relationships

Real-World Example

In the landmark Mata v. Avianca case (2023), a New York attorney submitted a legal brief containing AI-fabricated case citations — cases that simply did not exist. The attorney was sanctioned by the court. While this occurred in the legal field, the exact same risk applies to insurance: an AI could generate a non-existent regulation, misquote a policy provision, fabricate actuarial data, or invent an industry standard. If you submit it without reading it, the liability is yours.

What to Do Instead

Read every word of every AI output you intend to use professionally. Verify all factual claims, regulation references, policy citations, and data points against primary sources. If you don't have time to verify an AI output, you don't have time to use it.

Insureversia's Take

This is rule number one for a reason. AI is the most convincing unverified source you will ever encounter. It writes beautifully, sounds authoritative, and is wrong just often enough to be dangerous. The Mata v. Avianca attorney didn't submit fabricated citations because he was careless — he submitted them because they looked completely real. Your AI-generated coverage analysis will look just as real. Read it. Verify it. Every time.

2. Don't Paste Policyholder Data into Public AI Tools

Consumer-grade AI tools (free ChatGPT, free Gemini) may use your inputs for model training. Entering policyholder personally identifiable information, claims data, medical records, or proprietary business information into these tools creates serious privacy and compliance risks.

Risk

Data privacy violations (GDPR, CCPA, state insurance privacy laws), regulatory penalties, policyholder trust breach, competitive intelligence exposure, potential data breach liability

Real-World Example

In April 2023, Samsung banned employee use of ChatGPT after engineers inadvertently uploaded proprietary source code and internal meeting notes to the platform. Once submitted, that data was outside Samsung's control and eligible for retention and use in model training. In the insurance context, imagine policyholder medical records, claims histories, or proprietary pricing algorithms being processed by a public AI tool — the privacy implications under HIPAA, GDPR, and state insurance privacy laws would be severe.

What to Do Instead

Use enterprise-grade AI tools with appropriate data processing agreements for any work involving sensitive data. Establish a clear data classification policy that defines what can and cannot be entered into AI tools. When in doubt, anonymize or use synthetic data.
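That "anonymize before you paste" step can be partially automated. Below is a minimal sketch in Python, assuming simple U.S.-style identifier formats; the patterns, the `POL-` policy-number format, and the placeholder labels are all illustrative, not a vetted de-identification standard. Real anonymization needs a reviewed tool and human spot-checks — note that names, for instance, are not caught by patterns like these.

```python
import re

# Hypothetical identifier patterns -- illustrative only, not exhaustive.
# Free-text names and addresses need NER-based tooling, not regexes.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
    "POLICY_NO": re.compile(r"\bPOL-\d{6,}\b"),  # assumed policy-number format
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with typed placeholders before the
    text leaves your network."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Claimant SSN 123-45-6789, policy POL-884213, call 555-867-5309."
print(redact(note))
# prints: Claimant SSN [SSN], policy [POLICY_NO], call [PHONE].
```

The point of the typed placeholders is that the AI can still reason about the structure of the record ("the claimant's SSN", "the policy number") without ever seeing the values.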

Insureversia's Take

This is where most organizations get caught. Someone copies a claims file into ChatGPT to 'quickly summarize it,' and suddenly policyholder medical records are floating through OpenAI's servers. The person meant well — they were trying to be efficient. But they just created a privacy violation that could cost the organization millions. Get enterprise tools. Train your people. Make the right thing the easy thing.

3. Don't Assume AI Understands Your Jurisdiction

Insurance regulation varies dramatically by jurisdiction. AI tools often default to general principles or U.S.-centric responses without distinguishing between state-specific requirements, international frameworks, or the interaction between different regulatory regimes.

Risk

Regulatory non-compliance, incorrect coverage determinations based on wrong jurisdiction's rules, fines and penalties, market conduct violations

Real-World Example

Insurance regulation in the United States is primarily state-based, with 50+ different regulatory frameworks. An AI asked about 'insurance regulations' might cite NAIC model laws that have not been adopted in your state, reference requirements from a different jurisdiction, or fail to account for state-specific variations in standard policy forms. The EU's insurance regulatory framework (Solvency II) differs fundamentally from U.S. state-based regulation, and AI tools frequently conflate the two.

What to Do Instead

Always specify the exact jurisdiction in your prompts. Verify jurisdiction-specific claims against the actual state department of insurance website or regulatory database. Never rely on AI for jurisdictional compliance without verification.
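One practical way to "always specify the exact jurisdiction" is to bake it into a reusable prompt template rather than relying on memory. The sketch below is a hypothetical prompt builder — the field names and wording are illustrative, not a prescribed format — but it shows the idea: jurisdiction, line of business, and an as-of date travel with every question instead of being left for the model to guess.

```python
def build_prompt(question: str, state: str, line: str, as_of: str) -> str:
    """Wrap every regulatory question in an explicit jurisdictional frame.
    Illustrative template -- adapt the wording to your own workflow."""
    return (
        f"Jurisdiction: {state} (answer under {state} law only; do not "
        f"generalize from NAIC model laws unless adopted in {state}).\n"
        f"Line of business: {line}. Analysis as of: {as_of}.\n"
        f"Flag any point where {state}-specific authority is uncertain.\n\n"
        f"Question: {question}"
    )

print(build_prompt(
    "What are the notice requirements for policy cancellation?",
    state="Texas", line="personal auto", as_of="2024-06",
))
```

The template does not make the answer trustworthy — you still verify against the state DOI source — but it removes the most common failure mode, which is never telling the model which of the 50+ frameworks you meant.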

Insureversia's Take

Insurance is one of the most jurisdiction-specific industries in the world. A coverage determination that's correct in New York might be completely wrong in Texas. The NAIC publishes model laws, but each state adopts, modifies, or rejects them independently. AI tools don't understand this nuance — they see 'insurance regulation' as a single, coherent body of law when it's actually 50+ overlapping and sometimes contradictory systems. Always lock your jurisdiction in the prompt, and always verify against the actual state regulatory source.

4. Don't Use AI to Replace Professional Judgment

AI can analyze data, identify patterns, and generate recommendations. But it cannot exercise the professional judgment that comes from years of experience, deep understanding of context, and the ethical obligations that define insurance practice.

Risk

Poor underwriting decisions, unfair claims outcomes, regulatory non-compliance, professional liability, erosion of core competencies

Real-World Example

In 2021, the Stanford Digital Economy Lab published research showing that AI-driven insurance pricing models can inadvertently discriminate against protected classes through proxy variables — ZIP codes that correlate with race, credit scores that correlate with socioeconomic status, and health data that correlates with disability. Regulators in Colorado, New York, and the EU have explicitly stated that 'the algorithm made the decision' is not an acceptable defense for discriminatory outcomes. Human judgment remains the required check on algorithmic recommendations.

What to Do Instead

Use AI to inform your professional judgment, not replace it. AI generates analysis and recommendations; you make the decisions. Always review AI-generated recommendations against your professional experience, ethical obligations, and regulatory requirements before acting on them.

Insureversia's Take

Here's the uncomfortable truth: AI can process more data faster than you ever will. But it cannot understand what a claim means to the person filing it. It cannot weigh the ethical implications of a pricing decision. It cannot sense when an underwriting guideline, while technically correct, would produce an outcome that violates the spirit of insurance as a social good. That judgment is yours. AI is the most powerful research assistant you've ever had. But it is never the decision-maker. You are.

5. Don't Hide Your AI Use When Disclosure Is Required

Hiding AI use when transparency is required — by regulators, policyholders, or industry partners — creates trust violations that are far more damaging than the AI use itself. The regulatory trend is clearly toward greater transparency, not less.

Risk

Regulatory penalties, loss of license, damaged reputation, policyholder litigation, reinsurance treaty violations, market conduct findings

Real-World Example

In 2024, multiple state insurance departments began requesting information about insurers' use of AI in underwriting and claims decisions as part of regular market conduct examinations. Insurers that could not demonstrate transparent AI governance — including documentation of how AI was used, what oversight was in place, and what disclosure was provided to consumers — faced additional scrutiny and, in some cases, corrective action requirements.

What to Do Instead

Develop a proactive transparency strategy. Know your disclosure obligations in every jurisdiction. Document your AI use systematically. When in doubt, disclose more rather than less — transparency builds trust.

Insureversia's Take

Here's the thing about hiding AI use: it only works until it doesn't. And when it stops working — when a regulator asks, when a policyholder challenges a decision, when a reinsurer audits your processes — the cover-up is always worse than the original use. AI use in insurance is not inherently problematic. Hiding it is. Build disclosure into your process from day one, and you'll never have to explain why you didn't.

6. Don't Treat AI Output as Verified Data

AI generates text based on statistical patterns, not verified facts. When it produces actuarial figures, loss ratios, market statistics, or regulatory citations, these may look authoritative but could be entirely fabricated. Treating AI output as verified data is a fast path to costly errors.

Risk

Incorrect pricing decisions, flawed reserve calculations, inaccurate regulatory filings, audit failures, financial misstatements

Real-World Example

Multiple documented instances exist of AI tools generating plausible but fabricated statistics. When asked about insurance market data, AI might produce specific loss ratios, market share percentages, or premium volumes that sound precise but are not grounded in actual data. In actuarial work, even small data inaccuracies can cascade into significant pricing errors — a 2% error in loss ratio assumptions could translate to millions in under- or over-reserving across a large portfolio.
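The cascade is easy to see with concrete numbers. The sketch below works through the 2-point loss ratio error on a hypothetical $500M book — the premium volume and ratios are illustrative figures chosen for the arithmetic, not market data.

```python
# Hypothetical book of business -- figures are illustrative only.
earned_premium = 500_000_000  # annual earned premium
assumed_lr = 0.65             # loss ratio used in pricing/reserving
actual_lr = 0.67              # true loss ratio, 2 points higher

expected_losses = earned_premium * assumed_lr
actual_losses = earned_premium * actual_lr
shortfall = actual_losses - expected_losses

print(f"Reserve shortfall: ${shortfall:,.0f}")
# prints: Reserve shortfall: $10,000,000
```

A 2-point error that an AI invented — or that you failed to verify — becomes a $10 million reserving gap on this book. That is why AI-generated figures are treated as unverified claims, never as inputs.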

What to Do Instead

Never use AI-generated numbers, statistics, or citations in any professional output without verifying them against primary data sources. Treat every AI-generated data point as an unverified claim that must be checked. Use AI for analysis and interpretation, but source your data from verified databases.

Insureversia's Take

Here's what makes this particularly dangerous in insurance: our industry runs on numbers. Loss ratios, combined ratios, reserve adequacy, claim severity trends — these aren't decorative statistics, they're the foundation of pricing, reserving, and strategic decisions. When AI invents a plausible-sounding '78.3% loss ratio for the Southeast commercial property market in 2023,' it's not just wrong — it could cascade through your models and affect real financial decisions. Use AI to analyze your verified data. Don't let AI be your data source.

7. Don't Use AI for Tasks Requiring Emotional Intelligence

Insurance touches people at their most vulnerable moments — after a car accident, a house fire, a health crisis, the death of a loved one. AI cannot provide empathy, read emotional cues, or respond with the human sensitivity these moments demand.

Risk

Damaged policyholder relationships, complaints, E&O exposure, regulatory scrutiny for unfair claims practices, reputational harm

Real-World Example

In 2023, the National Eating Disorders Association (NEDA) replaced its human helpline counselors with an AI chatbot named 'Tessa.' Within days, users reported that Tessa was providing advice that could worsen eating disorders — including recommending calorie counting and weight loss to people seeking help for anorexia. NEDA shut Tessa down within a week. In insurance, the parallel is clear: using AI chatbots or automated communications for claims involving serious injury, death, or catastrophic loss carries the same risk of producing responses that are technically fluent but emotionally inappropriate.

What to Do Instead

Reserve tasks that require empathy, emotional intelligence, and human connection for human professionals. Use AI for background analysis and drafting that supports these interactions, but never as a substitute for the human interaction itself.

Insureversia's Take

Your policyholder just lost their home in a fire. Everything they owned. Their children's photos. Their grandmother's jewelry. They call to file a claim. Do you want them talking to a chatbot? The AI might process the claim faster. It might ask all the right questions. But it cannot understand what that policyholder is going through. It cannot pause, soften its tone, and say 'I'm so sorry — let's take this one step at a time.' Use AI to do the paperwork so your people can do the people work.

8. Don't Ignore Regulatory Guidelines on AI

Insurance regulators across the United States and internationally are issuing AI-specific guidance at an accelerating pace. These are not suggestions — they are requirements that carry real consequences for non-compliance.

Risk

Regulatory penalties, market conduct violations, increased examination scrutiny, license restrictions, forced corrective action plans

Real-World Example

Colorado's SB 21-169 (enacted 2021, with implementing regulations phased in from 2023) requires insurers to test AI and algorithmic systems for unfair discrimination and submit governance frameworks to the Division of Insurance. The NAIC Model Bulletin (December 2023) establishes expectations for AI governance across all insurance functions. Multiple state departments of insurance have begun incorporating AI governance questions into market conduct examinations. Insurers that cannot demonstrate compliance with applicable AI governance requirements face corrective action, increased examination frequency, and potential penalties.

What to Do Instead

Identify and review all AI-related regulatory guidance applicable to your jurisdictions immediately. Subscribe to regulatory updates from NAIC and your state DOIs. If your jurisdiction hasn't issued specific AI guidance yet, apply existing fair practices, data privacy, and consumer protection regulations to your AI use. Document your compliance.

Insureversia's Take

I know — 'read the regulatory guidance' sounds about as exciting as 'read the policy jacket.' But here's the reality: these regulations are your playbook. When something goes wrong with AI — and for someone, eventually, it will — the first question the regulator will ask is 'Did you follow the applicable guidance?' If your answer is 'I didn't know there was any,' you've already lost. The NAIC Model Bulletin, Colorado SB 21-169, and the EU AI Act are the big three right now. Read them. Build your governance around them. They're actually quite reasonable.

9. Don't Let AI Write Your Entire Report

AI-generated text has a distinctive quality: it's grammatically perfect, structurally consistent, and frequently generic. Submitting AI-written reports, analyses, or communications without substantial human editing signals to readers that you didn't actually do the work, and the output may contain errors that a knowledgeable professional would have caught.

Risk

Credibility damage, missed analytical nuances, generic analysis that doesn't address specific circumstances, potential errors in coverage-critical documents, erosion of professional skills

Real-World Example

In the legal profession, courts have begun identifying AI-generated briefs not through any technical detection, but through the writing style — overly broad statements, lack of specific case analysis, and a polished but generic quality that lacks the precision of practiced professional writing. The same applies in insurance: regulators, reinsurers, and sophisticated policyholders can often identify AI-generated analysis by its characteristic breadth without depth and its failure to address the specific nuances of a particular risk or claim.

What to Do Instead

Use AI for first drafts and structured outlines. Then rewrite substantially — add your specific analysis, remove generic filler, include details that only a knowledgeable professional would know, and ensure the final product reflects your actual understanding and judgment.

Insureversia's Take

The tell for AI-generated work isn't grammar errors — it's the absence of specificity. AI writes beautifully about insurance in general. But it can't tell your reader that this particular risk has three factors that make it unusual, or that this claim pattern is consistent with what you saw in the 2019 hailstorm season but with a twist. Your specific knowledge, your pattern recognition from years of experience — that's what separates professional analysis from AI-generated content. Use AI to build the skeleton. You write the substance.

10. Don't Assume Today's Limitations Are Permanent

AI capabilities are advancing at an exponential pace. What AI cannot do today, it may do competently tomorrow. Insurance professionals who dismiss AI based on its current limitations risk being blindsided by rapid improvement.

Risk

Professional obsolescence, competitive disadvantage, failure to adapt practice to evolving capabilities, inability to serve policyholders effectively as the industry transforms

Real-World Example

In 2020, AI in insurance was primarily limited to simple chatbots and basic claims triage. By 2023, AI could analyze complex policy documents, generate detailed underwriting assessments, detect fraud patterns across large datasets, and process claims from photos. In 2024, multimodal AI began assessing property damage from images with accuracy that, in some evaluations, approaches that of human adjusters. The pace of change consistently outstrips expert predictions, with capabilities arriving years ahead of expectations.

What to Do Instead

Stay informed about AI developments through trusted sources. Reassess your AI capabilities and workflows quarterly. Build adaptability into your practice — treat AI competence as an evolving skill, not a one-time learning event. Invest in continuous education and experimentation.

Insureversia's Take

In 2020, the 'smart' take was that AI couldn't handle insurance complexity. By 2024, it was writing underwriting memos and detecting fraud. If you build your career strategy on the assumption that AI can't do X, you'd better have a plan for the day it can. I'm not saying AI will replace insurance professionals — I genuinely don't think it will. But it will absolutely reshape what insurance professionals do. The ones who thrive will be those who stopped asking 'Can AI do this?' and started asking 'How can I do this better with AI?'

Now Learn What You Should Do

Knowing what to avoid is only half the equation. Our "What to Do" playbook gives you the positive, practical guide to using AI effectively and responsibly.

Ready for structured learning? Explore the Learning Program →
