Insure Your Agent · Operator Edition · No. 001
01

Do you have documentation of what your agents do and what guardrails are in place?

Write down, on a single page, every AI agent running in your business today. For each one, note what it is allowed to do, who it interacts with, which tools or APIs it can call, what data it can read and write, and which decisions require a human approval step. If you cannot fill this out in an afternoon, you do not have documentation and you cannot answer this question.

Good looks like a short operating manual that a new hire could read in fifteen minutes and understand the entire AI footprint of the company. It is boring, written in plain language, and updated every time someone ships a change to an agent. It lists the known failure modes and the guardrails that catch them. It names an owner per agent.
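The one-page inventory described above can also be kept as a structured record so it stays reviewable and diff-able. A minimal sketch in Python, where the field names and the sample agent are illustrative assumptions, not a standard schema:

```python
from dataclasses import dataclass

# Illustrative sketch of a per-agent record. Field names are assumptions,
# chosen to match the questions in the text, not any standard schema.
@dataclass
class AgentRecord:
    name: str
    owner: str                          # the named human accountable for the agent
    purpose: str                        # what it is allowed to do, in plain language
    interacts_with: list[str]           # customers, staff, other systems
    tools: list[str]                    # tools and APIs it can call
    reads: list[str]                    # data it can read
    writes: list[str]                   # data it can write
    human_approval_required: list[str]  # decisions that need a person
    known_failure_modes: list[str]
    guardrails: list[str]               # the controls that catch those failures
    last_reviewed: str                  # ISO date of the most recent review

# Hypothetical example entry for a support-triage agent.
support_agent = AgentRecord(
    name="support-triage",
    owner="Jane Doe (Head of Support)",
    purpose="Drafts replies to inbound tickets; cannot send unaided.",
    interacts_with=["customers via the ticketing system"],
    tools=["ticketing API (read)", "knowledge base search"],
    reads=["ticket text", "public help articles"],
    writes=["draft replies only"],
    human_approval_required=["any refund", "any promise of a future feature"],
    known_failure_modes=["invents policy details", "over-promises timelines"],
    guardrails=["human review before send", "refunds blocked in tooling"],
    last_reviewed="2026-01-15",
)
```

A record like this makes the afternoon exercise honest: any field you cannot fill in for a production agent is itself one of the red flags below.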

What you are looking for in your own answer is whether a reasonable person outside your company, reading the document, would conclude that the business knows what it is running. If the honest answer is "some of it lives in our heads," that is a red flag. If the honest answer is "our CTO keeps meaning to write it down," that is also a red flag. Regulators, insurers, and plaintiffs' lawyers will all ask for this document. You want it to exist before they ask.

Red flags

No single list of agents. Guardrails configured in prompts but not written down. No named owner per agent. No record of when last reviewed. Production agents running under a developer's personal API key.

Next step: the certification pathway
02

Have you reviewed your current insurance policies for AI exclusions?

Pull every policy your business holds: Errors and Omissions, Cyber, General Liability, Directors and Officers, any media or tech liability wrapper. Send them to your broker with one question: "Please tell me in writing how each of these policies responds to a claim arising from an AI agent deployed in our business." Then actually read the answers.

Good looks like a written confirmation from the broker on each policy, referencing specific clauses, with any AI exclusions called out by number and page. If the broker answers verbally and tells you it will be fine, that is not an answer. If the broker cannot tell the difference between your AI agent and a piece of custom software, you need a different broker. This area is moving fast enough that the quality of your broker matters more this year than it did last year.

The thing you are watching for is silent coverage becoming explicit exclusion. A policy that quietly covered you in 2024 may exclude AI at renewal in 2026. A policy that never addressed the question will hinge on how a claims adjuster reads the wording after the event. You want both the written position and the renewal trajectory on every line.

Red flags

Broker cannot produce a written response. Any AI-related clause that uses the word "exclusion" without a corresponding buy-back option. Policies up for renewal in the next ninety days that have not been re-read. Anyone using the phrase "we have always been covered for everything."

Next step: policy review checklist
03

Do you have an incident response plan if an agent causes harm?

Imagine the following phone call at 9:04 on a Tuesday morning. A customer tells your support team that your AI agent promised them something yesterday that the company cannot honour, and they have already acted on it. The loss is not catastrophic but it is real, and the customer is furious and threatening to post publicly. Who in your business picks up the call, who makes the decision to pause the agent, who informs affected customers, and who decides whether to notify a regulator?

Good looks like a one-page playbook with named roles, a technical kill switch that any on-call engineer can flip without a deployment, a template communication for customers, a template communication for regulators in your jurisdiction, and a documented decision tree for when to involve outside counsel. It has been rehearsed at least once, in a tabletop exercise, with the actual people who would be on the call.
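The kill switch in that playbook can be as simple as a flag the running service re-reads on every request, so flipping it requires an edit, not a deployment. A minimal sketch, assuming flags live in a small JSON file any on-call engineer can change (the file layout and function names here are illustrative, not a real product's API):

```python
import json
from pathlib import Path

def agent_enabled(agent_name: str, flags_path: Path) -> bool:
    """Re-read the flag file on every call so a flip takes effect immediately."""
    try:
        flags = json.loads(flags_path.read_text())
    except FileNotFoundError:
        return False  # fail closed: a missing flag file means no agent traffic
    return bool(flags.get(agent_name, {}).get("enabled", False))

def handle_ticket(agent_name: str, text: str, flags_path: Path) -> str:
    """Route a request through the agent only if its flag is on."""
    if not agent_enabled(agent_name, flags_path):
        return "AGENT PAUSED: routed to human queue"
    # Hand-off to the model would happen here; stubbed for the sketch.
    return f"agent draft for: {text}"
```

The design choice that matters is failing closed and re-reading the flag each time: pausing the agent becomes a one-line file edit that any on-call engineer can make at 9:04 on a Tuesday morning, with no build, deploy, or approval chain in the way.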

The purpose is not to eliminate the incident. Incidents will happen. The purpose is to contain the harm, document your response honestly, and demonstrate to any regulator or insurer later that the business acted reasonably. A reasonable response with an imperfect outcome is defensible. No plan at all is not.

Red flags

No written kill switch procedure. No named incident owner. No template communication drafted in advance. Legal counsel not briefed on the AI footprint. "We would figure it out on the day."

Next step: incident response scaffolding
Further reading

Articles that go deeper on each question.

If one of the three questions landed harder than you expected, these articles take it apart properly.

What your answers mean

If you answered "yes, with evidence" to all three, you are ready for coverage.

The three questions are not a grade. They are a readiness check. Insurers writing new AI policies in 2026 will ask for exactly these artefacts before quoting. If you have them, the next step is straightforward. If you do not, the next step is to build them, which is also straightforward. Either way, you move.

Go to the coverage pathway
A. All three yes, with evidence. Move to the coverage pathway. You are a candidate for the first wave of AI liability policies.

B. Gaps on one or two. Work through the pathway steps. The certification process produces the evidence you are missing.

C. Gaps on all three. Start with question one. You cannot review policies or plan a response without knowing what the agents actually do.