NAIC’s AI Model Bulletin: A Field Guide for Agent Training and Compliance QA

On Dec. 4, 2023, the NAIC Membership voted to adopt a Model Bulletin on the Use of Artificial Intelligence Systems by Insurers at the NAIC 2023 Fall National Meeting in Orlando. While it’s not a model law or regulation, it’s a clear signal of where state insurance regulators want consistency: if AI supports a decision that affects a consumer, the insurer still has to meet existing insurance laws (including unfair trade practices), and regulators may expect governance, testing, and documentation to back it up.

For licensing candidates, CE students, and agency managers, the practical takeaway is simple: AI doesn’t replace accountability. Your training and QA processes should assume AI-assisted underwriting, pricing, claims triage, marketing, and service decisions will be examined through the same compliance lens—plus added scrutiny on bias, accuracy, and data handling.

What Changed and How Fast

This bulletin moved quickly through the NAIC process in 2023: an initial draft was presented June 29, followed by two public comment periods ending Sept. 5 and Nov. 6, and then adoption on Dec. 4. The drafting work came through the NAIC Innovation, Cybersecurity, and Technology (H) Committee, chaired by Maryland Insurance Commissioner Kathleen A. Birrane, with Colorado’s Michael Conway and Iowa’s Doug Ommen as co-vice chairs.

Operationally, the “change” isn’t a new exam topic called “NAIC AI Bulletin.” The change is that more states may align around similar expectations for insurers’ AI governance and risk management. That affects what producers and insurer staff should document, what managers should supervise, and what CE/compliance training should reinforce: outcomes matter (fairness, accuracy, lawful conduct), and the organization should be able to show its work.

Frontline Talking Points for Agents

Use these as short scripts for client conversations and internal handoffs when AI or “automated” decisions come up (underwriting outcomes, rate impacts, claim handling steps, or marketing offers):

  • “AI can support decisions, but it doesn’t change the rules.” Consumer-impacting decisions still must comply with applicable insurance laws and unfair trade practice standards.
  • “We focus on accuracy and fairness.” The NAIC bulletin flags risk areas like inaccuracies and unfair bias/discrimination—so we verify inputs, document key facts, and escalate anomalies.
  • “Data handling matters.” Data vulnerabilities are a highlighted risk area; treat client data collection and sharing as a controlled process (only what’s needed, only through approved channels).
  • “If something doesn’t look right, we can review it.” Set expectations that unusual outcomes or mismatches between known facts and an automated result should be reviewed through your normal QA/escalation path.

Training note for licensing candidates: these talking points map to core exam themes you already see—unfair discrimination, producer responsibilities, and ethical sales/service conduct—now applied to AI-assisted workflows.

Manager/Compliance Leads: Supervision and QA Steps

Build this into your weekly operating rhythm. The bulletin describes expectations such as a written AI Systems (AIS) Program aligned to NAIC's 2020 AI Principles, and it notes that regulators may request documentation in investigations or examinations. Even if your agency isn't building AI models, you may be using carrier/third-party tools that influence consumer outcomes. Your job is to supervise the use and the documentation trail.

  • Inventory where AI shows up in your workflow. List the tools and touchpoints: lead scoring, marketing targeting, call scripts, quoting assistance, underwriting pre-screens, claim status tools, and chat/virtual assistants.
  • Define “consumer-impacting decisions.” In your SOP, flag steps where an automated output could change eligibility, price, coverage options, claim handling speed, or communication content.
  • QA for outcomes, not just process. During drafting, the bulletin's emphasis shifted toward outcomes. Add spot checks for "does this outcome match the documented facts?" and "could this create unfair bias/discrimination risk?"
  • Third-party controls. The drafting updates included revised language on third-party contracting and testing/validation. Translate that into your vendor/carrier tool governance: approved-tool list, change notifications, and a process for reporting suspected errors.
  • Documentation readiness. Assume regulators can ask what you knew, what you relied on, and what you did when something looked wrong. Train teams to capture: client-provided facts, key eligibility/underwriting notes, and escalation tickets.
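None of this requires special software, but it helps to think of the inventory and documentation trail as structured records rather than scattered notes. The sketch below is purely illustrative: the class names, field names, and example tools ("QuoteBot", "LeadRank") are hypothetical, not drawn from the bulletin.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIToolTouchpoint:
    """One entry in the agency's AI-touchpoint inventory (hypothetical schema)."""
    tool_name: str
    vendor: str
    use_case: str                # e.g. "quoting assistance", "claims triage"
    consumer_impacting: bool     # could the output change eligibility, price, coverage?
    approved: bool = True        # on the agency's approved-tool list?

@dataclass
class EscalationTicket:
    """Documentation-readiness record: what was known, relied on, and done."""
    opened: date
    tool_output: str             # what the automated tool said
    verified_facts: str          # client-provided facts checked against it
    action_taken: str            # correction, annotation, or escalation path
    resolved: bool = False

inventory = [
    AIToolTouchpoint("QuoteBot", "Example Carrier", "quoting assistance", True),
    AIToolTouchpoint("LeadRank", "Example Vendor", "lead scoring", False),
]

# Consumer-impacting tools are the ones to flag at decision points in your SOP.
flagged = [t.tool_name for t in inventory if t.consumer_impacting]
print(flagged)  # → ['QuoteBot']
```

Even if you keep this in a spreadsheet instead, the same fields apply: the point is that a regulator asking "what did you know, rely on, and do?" can be answered from the record.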

Student Exam/CE Practice Tasks (Turn the Bulletin into Study Reps)

To keep this practical for TSI National learners, treat the bulletin as a “scenario generator.” You’re not memorizing NAIC committee names for the exam; you’re practicing how to apply licensing/CE rules when technology is involved.

  • Practice set: unfair trade practices + AI. Write 5 mini-scenarios where an automated tool recommends a coverage change or denies a quote. For each: identify the compliance risk (misrepresentation, unfair discrimination, inadequate disclosure) and the next best action (verify facts, document, escalate).
  • Accuracy drill. Take one underwriting/claims scenario and list the minimum data points that must be correct for a fair decision. Then list what you’d do if a data point appears wrong (re-collect, correct, annotate, escalate).
  • Bias red-flag checklist. Create a short list of “this needs review” indicators: inconsistent outcomes for similar risks, unexplained eligibility changes, or marketing outreach that appears to exclude protected classes (even unintentionally).
  • Data vulnerability habits. For CE/compliance: map where client data enters your process (forms, email, uploads, CRM notes). Identify one change you can make this week to reduce exposure (approved channels only, tighter access, cleaner notes).

These tasks fit the TSI training philosophy: concept clarity → focused drills → realistic practice tests → targeted remediation. Use your miss-log: when you miss an ethics/unfair practices question, rewrite it as an AI-assisted scenario and re-answer.

Escalation Triggers and Follow-Up Cadence

Because the bulletin highlights inaccuracies, bias/discrimination, and data vulnerabilities, your escalation triggers should mirror those risks. Keep it simple and repeatable:

  • Inaccuracy trigger: automated output conflicts with verified client facts (DOB, driving history, property characteristics, loss history, coverage elections).
  • Fairness trigger: pattern of inconsistent outcomes for similar risks, or a tool recommendation that appears to create unfair discrimination concerns.
  • Data trigger: tool requests excessive data, data is transmitted through unapproved channels, or you suspect a data exposure.

Cadence: (1) same-day internal ticket, (2) manager review within 48 hours, (3) weekly trend review (are these isolated, or systemic?), (4) monthly refresher micro-training using real, de-identified cases.
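The triggers and the 48-hour review step above can be expressed as a simple check, which is useful if your agency tracks escalation tickets in any system that can export records. This is a minimal sketch under assumed field names ("trigger", "opened", "manager_reviewed"); nothing here is prescribed by the bulletin.

```python
from datetime import datetime, timedelta

# The three trigger types mirror the bulletin's highlighted risk areas:
# inaccuracies, unfair bias/discrimination, and data vulnerabilities.
TRIGGERS = {"inaccuracy", "fairness", "data"}

def needs_manager_review(ticket: dict, now: datetime) -> bool:
    """Flag a ticket that is overdue for the 48-hour manager review step."""
    if ticket["trigger"] not in TRIGGERS:
        raise ValueError(f"unknown trigger: {ticket['trigger']}")
    overdue = now - ticket["opened"] > timedelta(hours=48)
    return overdue and not ticket["manager_reviewed"]

ticket = {
    "trigger": "inaccuracy",              # automated output conflicted with verified DOB
    "opened": datetime(2024, 1, 8, 9, 0), # same-day internal ticket
    "manager_reviewed": False,
}
print(needs_manager_review(ticket, datetime(2024, 1, 11, 9, 0)))  # → True
```

The weekly and monthly steps (trend review, refresher micro-training) then operate on the accumulated tickets rather than on individual ones.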

Manager Action Checklist

  • Build and maintain an AI-touchpoint inventory for your agency (tools, carriers, vendors, use cases).
  • Update SOPs to mark consumer-impacting decision points and required documentation at each point.
  • Create a QA sampling plan focused on outcomes: accuracy checks, fairness red flags, and data-handling compliance.
  • Publish an approved-tool list and a change-control step for new/updated vendor features.
  • Implement escalation tickets with required fields: what the tool output said, what facts were verified, what action was taken.
  • Run a monthly trend review of escalations and feed results into coaching and CE topic selection.
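For the QA sampling plan in the checklist above, a reproducible random sample beats ad hoc selection: it avoids cherry-picking and makes the review defensible. The 10% rate and fixed seed below are illustrative choices, not regulatory requirements.

```python
import random

def qa_sample(records: list, rate: float = 0.10, seed: int = 42) -> list:
    """Draw a reproducible outcome-focused QA sample of at least one record."""
    rng = random.Random(seed)  # fixed seed so the sample can be re-drawn later
    k = max(1, round(len(records) * rate))
    return rng.sample(records, k)

# Spot-check questions for each sampled record:
#   1. Does the outcome match the documented client facts?
#   2. Any unfair bias/discrimination red flags versus similar risks?
#   3. Was client data handled only through approved channels?
week_records = [f"case-{i:03d}" for i in range(1, 41)]  # 40 cases this week
sample = qa_sample(week_records)
print(len(sample))  # → 4
```

Feeding the misses from these samples into coaching and CE topic selection closes the loop described in the monthly trend review.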

Learner Action Checklist

  • Convert one ethics/unfair practices topic into 5 AI-assisted scenarios and answer them in writing.
  • Use timed practice: complete 20 questions on unfair trade practices/ethics, then build a miss-log that notes “what fact was missing?”
  • Practice a client explanation (30 seconds): AI may support decisions, but the insurer must still comply with insurance laws and fairness standards.
  • Adopt a data-minimization habit: only collect/share what’s required, and use approved systems for transmission and storage.
  • Memorize your escalation triggers: inaccurate output, fairness concerns, or data vulnerabilities—then escalate and document.

Build these AI-era compliance habits into your licensing prep or CE plan with TSI National’s practice-focused training options at https://www.tsinational.com/.


Educational information only; verify requirements with your state Department of Insurance.