Insurance Coverage and Bad Faith

The lawyers at Houston Harbaugh have built a strong reputation over the past several decades representing insurance companies facing the full spectrum of complex legal challenges. No matter how big or critical the challenge, clients turn to the attorneys in Houston Harbaugh’s Insurance Coverage and Bad Faith practice group for our legal and business insights.

The Accountability Baseline: Why the "Human-in-the-Loop" is Your Newest Discovery Risk in Insurance Claims Handling

Defending the Algorithm™ Newsletter: Insurance AI Insights - Edition 1

This is the first installment of a new insurance claims-focused series within Defending the Algorithm™, written and edited by Pittsburgh, Pennsylvania Insurance and Commercial Litigation Attorney Christopher M. Jacobs, Esq., with research assistance from Gemini 3.0. This series explores the practical, legal, and regulatory challenges facing claims organizations as they integrate artificial intelligence into their daily workflows, and is produced in cooperation with the DRI Center for Law and Public Policy AI Task Force.


I. The Efficiency Trap: From Innovation to Potential Liability

For the past decade, the insurance industry has been sold a vision of "frictionless" claims handling. The promise was simple: use computer vision to assess property damage, predictive models to value bodily injury, and automated triage to route files. In 2026, those tools are no longer "emerging"—they are the standard.

However, as many insurance carriers are now discovering, efficiency is a double-edged sword. While AI has drastically reduced cycle times, it has also created a new kind of legal exposure. When an algorithm influences a coverage determination or a claim valuation, the insurer’s duty of good faith doesn’t disappear—it simply shifts focus. The professional who once relied on their "gut" and experience is now being asked to defend a decision made by a "black box" they may not fully understand.

II. The 2026 Regulatory Shift: Arizona, Colorado, and Beyond

We have officially moved past the era of "voluntary ethical guidelines." In the first half of 2026, a cascade of state-level AI mandates has fundamentally changed the rules of the road for claims adjusters.

  • The Arizona Mandate (HB 2175): Effective June 30, 2026, Arizona law requires a licensed professional to individually review and sign off on any claim denial involving medical judgment, specifically prohibiting "sole reliance" on recommendations from any other source—including AI.
  • The Colorado Compliance Deadline (SB 24-205): With full compliance required by July 1, 2026, Colorado has implemented a strict governance framework for "consequential decisions," requiring insurers to provide consumers with a plain-language explanation of how AI was used in their claim adjudication.

The message from regulators is clear: you cannot outsource your accountability to a software vendor. In a coverage action involving extracontractual claims, the "proprietary" nature of a tool is rarely a valid shield when a policyholder’s benefits have been impacted by an unexplainable automated decision.

III. The "Rubber Stamp" Deposition Risk

In bad faith litigation, the most dangerous witness is the "Rubber-Stamp Adjuster": the claims professional who receives an AI-generated valuation, inputs it into the system, and issues a disclaimer without reviewing the underlying file data.

During a deposition, plaintiffs' counsel will focus on the "gap" between the AI’s recommendation and the adjuster’s independent judgment. If the adjuster cannot explain why the AI reached its conclusion, or how the carrier verified that conclusion, the "Human-in-the-Loop" becomes a legal fiction. To defend these claims, your file must reflect active, documented human discernment. The algorithm should be the assistant, not the supervisor. This is not AI claims handling; it is AI-enhanced claims handling.

IV. Four Practical Guardrails for the 2026 Claims Desk

To stay ahead of both the regulators and the plaintiffs' bar, we suggest claims leaders implement these four concise rules immediately:

  1. Understand the Tool: Know exactly what data your AI system uses (and what it ignores) before relying on its output for a high-impact decision.
  2. Explain the Logic: Ensure that your adjusters and corporate deposition representatives can describe, in plain language, the "reasoning" behind an AI-assisted valuation.
  3. Document the Deviation: The most defensible file is often the one where a human overrode the machine. Record those overrides, and all human oversight, with care—they are your best evidence of independent professional judgment.
  4. Supervise the Vendor: Contractually require your AI vendors to supply "explainability" documentation. If you can’t see under the hood, you shouldn't be driving the car.

V. Looking Ahead: A Collaborative Perspective

The goal of this series is not to discourage the use of AI, but to ensure that it is deployed responsibly. In the editions to follow, I will be joined by colleagues from across our firm to discuss how these technologies are impacting underwriting, fraud detection, claims decisions, and the broader landscape of commercial litigation.

As technology changes the means of claims handling, our focus remains on the measure of the duty we owe to the insured and the public trust.


Contact & Disclaimer

Houston Harbaugh's insurance and AI litigation team continues to monitor developments in algorithmic decision-making and emerging liability frameworks. For questions regarding enterprise AI risk management or policy development, please contact Christopher M. Jacobs, Esq. at jacobscm@hh-law.com or call 412-288-4019.

This post represents the author's personal views and does not constitute legal advice. Defending the Algorithm™ is a mark of Houston Harbaugh, P.C.

About Us

We’re committed to staying on top of the issues of today and tomorrow, such as the ever-changing landscape involving bad faith, cyber-insurance, and insurance for advanced technology sectors, including artificial intelligence and machine learning companies and autonomous vehicle manufacturers and users.


Alan S. Miller - Practice Chair

Alan has more than thirty-eight years of experience in complex litigation and counseling, concentrating in the areas of environmental law, insurance coverage and bad faith, and commercial litigation. He chairs the firm’s Environmental and Energy Law practice and the Insurance Coverage and Bad Faith Litigation Practice.

Alan’s environmental law practice has involved counseling, litigation and alternative dispute resolution of matters involving municipal, residual, and hazardous waste permitting and compliance, contribution and cost recovery actions under CERCLA and related state statutes, claims for natural resource damages, contamination from leaking underground storage tanks, air and water pollution regulatory permitting and enforcement actions, oil and gas drilling compliance and transactions, and real estate transactions involving contaminated and recycled industrial sites.