Take this page literally. Read each question, answer it honestly for your own business, then read the guidance. If you can answer all three well, you are ahead of ninety percent of operators we talk to. If you cannot, the next step for each question is right there.
Write down, on a single page, every AI agent running in your business today. For each one, note what it is allowed to do, who it interacts with, which tools or APIs it can call, what data it can read and write, and which decisions require a human approval step. If you cannot fill this out in an afternoon, you do not have documentation and you cannot answer this question.
Good looks like a short operating manual that a new hire could read in fifteen minutes and understand the entire AI footprint of the company. It is boring, written in plain language, and updated every time someone ships a change to an agent. It lists the known failure modes and the guardrails that catch them. It names an owner per agent.
What you are looking for in your own answer is whether a reasonable person outside your company, reading the document, would conclude that the business knows what it is running. If the honest answer is "some of it lives in our heads," that is a red flag. If the honest answer is "our CTO keeps meaning to write it down," that is also a red flag. Regulators, insurers, and plaintiffs' lawyers will all ask for this document. You want it to exist before they ask.
The red flags:

- No single list of agents.
- Guardrails configured in prompts but not written down.
- No named owner per agent.
- No record of when each agent was last reviewed.
- Production agents running under a developer's personal API key.
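The register described above is just structured data, and keeping it as structured data makes the red flags checkable. Here is a minimal sketch in Python; the field names and the `AgentRecord` class are illustrative assumptions, not a standard schema — a spreadsheet or YAML file with the same columns works just as well.

```python
from dataclasses import dataclass, field

# One row per agent in the one-page register. Every field mirrors an item
# the text says the register must capture.
@dataclass
class AgentRecord:
    name: str
    owner: str                          # named human owner, per agent
    allowed_actions: list[str]          # what the agent is allowed to do
    interacts_with: list[str]           # who it interacts with
    tools_and_apis: list[str]           # which tools or APIs it can call
    data_read: list[str]                # what data it can read
    data_write: list[str]               # what data it can write
    human_approval_required: list[str]  # decisions needing a human step
    known_failure_modes: list[str] = field(default_factory=list)
    guardrails: list[str] = field(default_factory=list)
    last_reviewed: str = ""             # ISO date of last review

register = [
    AgentRecord(
        name="support-triage-bot",
        owner="Head of Support",
        allowed_actions=["draft replies", "tag tickets"],
        interacts_with=["customers", "support team"],
        tools_and_apis=["helpdesk API"],
        data_read=["ticket history"],
        data_write=["ticket tags"],
        human_approval_required=["refunds", "any promise to a customer"],
        last_reviewed="2026-01-15",
    ),
    AgentRecord(
        name="invoice-chaser",
        owner="",  # missing owner: exactly the red flag above
        allowed_actions=["send payment reminders"],
        interacts_with=["customers"],
        tools_and_apis=["email API"],
        data_read=["invoice ledger"],
        data_write=["email outbox"],
        human_approval_required=["payment plans"],
    ),
]

# Surface the gaps the red-flag list warns about: no owner, no review date.
for agent in register:
    if not agent.owner or not agent.last_reviewed:
        print(f"RED FLAG: {agent.name} has no owner or no review date")
```

Running the check prints a flag for `invoice-chaser`, which has neither an owner nor a review date. The point of the structure is that "do we have documentation?" stops being a feeling and becomes a query.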
Pull every policy your business holds: Errors and Omissions, Cyber, General Liability, Directors and Officers, any media or tech liability wrapper. Send them to your broker with one question: "Please tell me in writing how each of these policies responds to a claim arising from an AI agent deployed in our business." Then actually read the answers.
Good looks like a written confirmation from the broker on each policy, referencing specific clauses, with any AI exclusions called out by number and page. If the broker answers verbally and tells you it will be fine, that is not an answer. If the broker cannot tell the difference between your AI agent and a piece of custom software, you need a different broker. This area is moving fast enough that the quality of your broker matters more this year than it did last year.
The thing you are watching for is silent coverage becoming explicit exclusion. A policy that quietly covered you in 2024 may exclude AI at renewal in 2026. A policy that never addressed the question will hinge on how a claims adjuster reads the wording after the event. You want both the written position and the renewal trajectory on every line.
The red flags:

- Broker cannot produce a written response.
- Any AI-related clause that uses the word "exclusion" without an alternative buy-back.
- Policies up for renewal in the next ninety days that have not been re-read.
- Anyone using the phrase "we have always been covered for everything."
Imagine the following phone call at 9:04 on a Tuesday morning. A customer tells your support team that your AI agent promised them something yesterday that the company cannot honour, and they have already acted on it. The loss is not catastrophic but it is real, and the customer is furious and threatening to post publicly. Who in your business picks up the call, who makes the decision to pause the agent, who informs affected customers, and who decides whether to notify a regulator?
Good looks like a one-page playbook with named roles, a technical kill switch that any on-call engineer can flip without a deployment, a template communication for customers, a template communication for regulators in your jurisdiction, and a documented decision tree for when to involve outside counsel. It has been rehearsed at least once, in a tabletop exercise, with the actual people who would be on the call.
The purpose is not to eliminate the incident. Incidents will happen. The purpose is to contain the harm, document your response honestly, and demonstrate to any regulator or insurer later that the business acted reasonably. A reasonable response with an imperfect outcome is defensible. No plan at all is not.
The red flags:

- No written kill switch procedure.
- No named incident owner.
- No template communication drafted in advance.
- Legal counsel not briefed on the AI footprint.
- "We would figure it out on the day."
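The kill switch the playbook calls for does not need to be sophisticated. Here is a minimal sketch of the pattern, assuming the flag lives in an environment variable named `AGENT_KILL_SWITCH`; in practice it would more likely be a feature-flag service or a file on shared storage, anywhere an on-call engineer can write without a deployment. The function names are hypothetical.

```python
import os

def agent_enabled() -> bool:
    # The agent checks the flag before every action. An on-call engineer
    # pauses it by setting AGENT_KILL_SWITCH=on; no deployment needed.
    return os.environ.get("AGENT_KILL_SWITCH", "off") != "on"

def run_agent(message: str) -> str:
    # Stand-in for the real agent call.
    return f"agent reply to: {message}"

def handle_request(message: str) -> str:
    if not agent_enabled():
        # Fail safe: route to a human instead of acting autonomously.
        return "Agent paused by operations. A human will respond shortly."
    return run_agent(message)
```

The design choice that matters is where the check lives: at the single entry point every request passes through, so flipping one flag stops every autonomous action at once rather than requiring each integration to be shut down separately. That is also what makes the procedure short enough to write down and rehearse.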
If one of the three questions landed harder than you expected, these articles take it apart properly.
- Question one: the pre-deployment checklist that turns the documentation question into something you can actually answer in an afternoon.
- Question two: a clause-by-clause look at E&O, cyber, general liability, and D&O, with the exact language to send to your broker.
- Question three: the incident response lessons from the tribunal decision every operator should read before writing their own playbook.
The three questions are not a grade. They are a readiness check. Insurers writing new AI policies in 2026 will ask for exactly these artefacts before quoting. If you have them, the next step is straightforward. If you do not, the next step is to build them, which is also straightforward. Either way, you move.
Move to the coverage pathway. You are a candidate for the first wave of AI liability policies.
Work through the pathway steps. The certification process produces the evidence you are missing.
Start with question one. You cannot review policies or plan a response without knowing what the agents actually do.