Five questions to ask before deploying an AI agent in your business.
A pre-deployment checklist for founders and operations leads who want to move fast without shipping an exposure they cannot defend. None of this requires lawyers. All of it is cheaper now than after an incident.
Key takeaways
- Most AI agent problems are operational, not technological. They come from unclear scope, unclear ownership, and missing logs.
- A good pre-deployment check takes about a week of part-time work from one person and saves months of uncertainty later.
- The EU AI Act's first operator obligations start on 2 August 2026. An SME deploying an agent in the second half of 2026 is doing so under the new regime.
- The five questions in this article match the evidence that certification bodies and insurers will ask for once dedicated AI coverage opens in Europe.
Why five questions and not fifty
Most AI deployment checklists you can find online are either too technical for an operator to use, written by vendors trying to sell something, or so generic that they read like compliance theatre. What is actually useful for an SME founder is a short list of questions they can answer honestly in an afternoon and use to decide whether the agent is ready for production. That is what this article is.
If you can answer all five with a yes and evidence, you are ahead of the majority of operators running AI agents in Europe today. If you cannot, each question points at a specific fix. The questions sit at the right level for someone running a business with ten to two hundred people, not a Fortune 500 with a dedicated AI governance team.
Question one: what exactly is this agent allowed to do?
The first question sounds obvious and is almost always answered badly. Write down, on one page, what the agent is allowed to do in production. Not what it can technically do. What you have decided it is allowed to do. The list should include the actions it can take (send emails, issue refunds, update records, call APIs), the data it can read, the data it can write, the systems it integrates with, and the explicit limits on each of those.
A good version of this document is written in plain language that a non-technical director could read and understand. It names the agent, gives it a version number, and lists the approved use cases. It also lists what the agent is not allowed to do, which is usually the more important section. An agent that has a specific list of out-of-scope actions is much easier to supervise than one that has only a list of approved actions and an unlimited fallback.
Red flags on this question include scope that lives only in prompts, no version control on the scope document, and a scope that has not been reviewed since the last model upgrade. If the answer to "what does our agent do" depends on who you ask, you do not have a scope.
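A scope that lives only in prompts is hard to enforce. One way to close the gap is to mirror the one-page document in a version-controlled allow list that the agent runtime consults before every action. A minimal sketch, assuming a Python runtime; the agent name, action names, and field names are illustrative, not a prescribed schema:

```python
# Illustrative scope config. It mirrors the one-page scope document and
# lives in version control next to the agent code, so every change to the
# scope is reviewed and dated. All names here are hypothetical.
AGENT_SCOPE = {
    "agent": "support-agent",
    "version": "1.3.0",
    "allowed_actions": {"send_email", "update_ticket", "lookup_order"},
    "forbidden_actions": {"issue_refund", "change_price", "delete_record"},
}

def action_permitted(action: str, scope: dict = AGENT_SCOPE) -> bool:
    """Deny anything explicitly forbidden, then deny anything not
    explicitly allowed. There is no unlimited fallback: an action the
    scope does not name is out of scope by default."""
    if action in scope["forbidden_actions"]:
        return False
    return action in scope["allowed_actions"]
```

The design choice that matters is the default: an action missing from both lists is refused, which matches the article's point that the out-of-scope list does most of the supervisory work.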
Question two: who owns this agent and how do they prove it?
Every agent needs a single named owner inside the business. Not a team. A person. The owner is the one who approves changes, signs off on the monthly review, and is the escalation point when something goes wrong. In an SME this is usually an operations lead, a technical founder, or a senior engineer, with a reporting line to a director who holds the commercial risk.
The way to tell whether the ownership is real is to ask the owner three questions. What did the agent do last week? When was the scope last reviewed? What is the kill switch? If any of those answers is fuzzy, the ownership is nominal rather than operational. That is a problem, because regulators, insurers, and tribunals all assume there is a human on the other end of an AI system, and if they cannot find that human the responsibility falls upward to the board.
Good looks like a name, an email, a one-line job responsibility in the scope document, and a monthly report. Bad looks like "the CTO keeps an eye on it."
Question three: what happens when the agent is wrong?
Every production agent will produce a wrong output eventually. The question is not whether the wrong output will happen but what the business does about it. The answer should include three things.
First, logging. Every conversation the agent has with a user, every tool call it makes, and every output it generates should be stored somewhere you can retrieve them. If you cannot reconstruct what the agent said yesterday, you cannot defend the business today. The logs need to be retained for long enough to cover your response deadlines under GDPR, consumer protection law, and any sectoral regulator that might ask.
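For a small deployment, the logging requirement can be as simple as an append-only JSON Lines file, one record per event. A sketch, assuming Python; the field names are illustrative and a database or log service works the same way at larger scale:

```python
import datetime
import json

def log_agent_event(log_path, agent, version, event_type, payload):
    """Append one agent event as a single JSON line.

    event_type might be "user_message", "tool_call", or "output".
    Appending (mode "a") keeps the file append-only, so yesterday's
    record cannot be silently overwritten by today's process.
    """
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent,
        "version": version,
        "type": event_type,
        "payload": payload,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```

Timestamping in UTC and recording the agent version with every event is what later lets you answer "what did the agent say, and which scope was it running under" in one query.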
Second, monitoring. Someone should be reviewing a sample of agent output on a regular cadence. Weekly is ideal for anything customer-facing. The review looks for hallucinated policies, wrong prices, over-promises, and missed escalations. An SME does not need a dedicated AI quality team for this. One hour a week from the named owner is enough for most deployments.
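The weekly review is easier to keep honest if the sample is drawn mechanically rather than by skimming the most recent conversations, which over-represents quiet periods and whatever the reviewer already expects to find. A sketch of a deterministic sampler, assuming logs are stored as JSON lines with a `type` field (an assumption for illustration, not a required format):

```python
import json
import random

def weekly_review_sample(log_lines, k=20, seed=None):
    """Draw up to k agent outputs at random from a week of JSONL log
    lines for human review. Passing a seed makes the draw repeatable,
    so the same sample can be re-pulled during an incident review."""
    records = [json.loads(line) for line in log_lines]
    outputs = [r for r in records if r.get("type") == "output"]
    rng = random.Random(seed)
    return rng.sample(outputs, min(k, len(outputs)))
```

Twenty outputs a week is a plausible starting sample for the one-hour review described above; scale it with traffic rather than with headcount.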
Third, the kill switch. A technical mechanism that any on-call engineer can use to pause the agent without needing a deployment or a senior engineer's approval. Kill switches are boring until the morning you need one. A deployment that does not have one is not ready for production.
Question four: does this agent touch anything the EU AI Act cares about?
The EU AI Act divides AI systems into categories by risk. Most SME deployments are not high-risk, but all deployers have transparency, human oversight, and record-keeping obligations starting on 2 August 2026. The question every operator should answer before go-live is whether the agent falls into any of the following categories that require extra care: credit scoring, employment decisions, education, critical infrastructure, access to essential services, or anything involving biometric data.
If the answer is no, you still have obligations under the general provisions, but the compliance effort is proportionate. Document the scope, log the conversations, give users a clear way to know they are talking to an AI, maintain human oversight. If the answer is yes, the compliance work is larger and you should start it before the agent goes live, not after. The Act is clear that deployers, not only providers, carry responsibility. An SME that deploys a third-party model for a high-risk use case cannot point at the vendor and walk away.
For the full regulatory context, read our Why It Matters page. For the practical pathway from compliance to coverage, see Get Covered.
Question five: what does your insurance actually say?
The last question is the one operators leave until it is too late. Your errors and omissions, cyber, general liability, and D&O policies were almost all written before autonomous agents existed. Insurers are now adding explicit AI exclusions at renewal, and the policies that sound relevant often respond differently than operators expect.
The fix is simple but requires some effort. Send your broker a written request on each of the four policies. Ask specifically how each one responds to a claim arising from an AI agent deployed in your business, reference any AI-related clauses by number and page, and request confirmation that the most recent renewal endorsements have been reviewed. Do not accept a verbal answer. We wrote a full companion article on this: Does your business insurance cover AI mistakes? Probably not.
Dedicated AI agent liability policies are being built right now by specialty carriers with reinsurance support from firms including Munich Re. The first binding European policies are expected in the third quarter of 2026. Certification is the entry point. The Future Proof Certified methodology at agentcertified.eu is designed to feed this underwriting, and the waitlist for the first wave is at agentinsured.eu.
How to use this checklist
Block a week of part-time work from the person who will own the agent. Day one: write the scope document. Day two: confirm ownership and draft the kill switch. Day three: set up logging and monitoring. Day four: map the agent against the EU AI Act. Day five: send the broker request and start the certification enquiry. At the end of the week you should have a five-page folder that your board can read in thirty minutes, and that an insurer can read in fifteen.
None of this is complicated. What is complicated is doing it after the fact, under pressure, with a customer on the phone and a tribunal deadline coming up. Moffatt v. Air Canada in 2024 and Mata v. Avianca in 2023 are the cases that explain what that looks like. The five questions above are what stops you from being the next entry on that list.
Frequently asked questions
What should I document before deploying an AI agent?
Scope in plain language, tools and APIs the agent can call, data it can read and write, a named owner, and the approval step required for any action that touches money, legal commitments, or personal data.
Who should own an AI agent inside an SME?
One named person at the operational level, with a reporting line to a director or senior manager. The owner approves changes, signs off on the monthly review, and is the escalation contact when an incident happens.
Do I need legal review before deploying a customer-facing AI agent?
For any agent that can make commitments on price, policy, or terms with customers, yes. The Air Canada tribunal decision confirmed that statements by an agent bind the business. A short legal review is cheaper than defending a claim.
Does the EU AI Act apply to an SME running a basic AI agent?
Most SME agents fall outside the high-risk category, but all deployers have obligations on transparency, human oversight, and record keeping starting 2 August 2026. Map your agent against the Act before go-live, not afterwards.