A founder who has been actively considering AI for her firm for the past quarter has, in her notes app, a list of questions she has not been able to answer. The list looks something like this. Which parts of our business would AI actually touch? What processes would it change? How do we control quality and accountability? What happens when it gets things wrong? Who owns it internally?
She thinks she needs more technical knowledge to answer these. She has watched several explainer videos. She has read two long articles about how large language models work under the hood. She has tried ChatGPT three times. None of it has helped her answer the list. She is starting to suspect AI is harder to assess than she thought, and she is privately a little embarrassed to be this stuck.
The reason has nothing to do with technical understanding. Her list, on closer reading, is a list of business questions. They look technical because they involve a technology, but every one of them is about process ownership, accountability, quality control, fallback handling, or governance. Naming the question type changes who needs to be in the room, and what answering the question actually involves.
Where do these questions actually come from?
That list is not invented. It comes verbatim from a 2025 LinkedIn piece called “AI Isn’t the Problem for SMEs. Not Knowing Where to Start Is.”, which captured what owner-managed firms were actually asking themselves about AI. The same questions show up again and again in qualitative research, because they reflect what an owner is genuinely trying to assess. The original list is worth reading slowly, in the founder’s own voice.
If a founder lists these questions and tries to answer them as technical questions, the answers are wrong. The answer to “what happens when AI gets things wrong” is not a model-card spec sheet. The answer to “who owns it internally” is not a feature. The answer to “how do we control quality and accountability” is not a settings panel. The answers are operational and organisational, and they involve people, processes, and oversight that already exist in the business.
What founders are missing, in the version of this conversation I find most useful, is permission to treat the list as a list of business questions. Once that move is made, the questions stop looking unanswerable.
What kind of question is each one, actually?
Take the five and re-classify them. “Which parts of our business would AI actually touch?” is a process audit question. The right room is the owner plus whoever knows the workflows in detail, often the operations lead. “What processes would it change?” is the same audit, going one level deeper into how the work currently flows. Neither needs an AI engineer present.
“How do we control quality and accountability?” is a governance question. It belongs to the same category as “how do we control quality and accountability for client work that gets delegated to a junior”. The founder has answered that question many times before, just not for a non-human contributor. The principles transfer. The right room is the owner plus the senior leader who currently owns quality.
“What happens when it gets things wrong?” is a risk question. It belongs to the same category as “what happens if our supplier slips a deadline” or “what happens if a junior makes a typo in a client report”. Risk-management thinking already exists in the business; this question slots in. “Who owns it internally?” is an accountability and operating-model question. Someone owns every other tool the business uses. AI should sit under whoever owns the function it most affects, and the cost of getting that wrong is recoverable.
None of those rooms need a technologist as the lead. Most of them already exist.
Why does the classification matter?
Classification matters because different question types need different rooms, different decision-makers, and different time horizons. A process audit takes a few hours with the right people in a room. A governance principle takes longer to settle but already has analogues in the business. A risk question can be handled with the existing risk register approach, with one extra row added.
When founders try to answer governance questions through a tool-evaluation lens, they get stuck. The right tool, on its own, doesn’t define quality, doesn’t assign accountability, and doesn’t pick which function owns the work. Those decisions belong to the business, and they look identical to decisions the business has already made about other tools, processes, and people.
This is also where the most common stall pattern shows up. Founders who can name the questions on their list are, almost by definition, ready to act. What keeps them from acting is the misclassification of the questions as technical. That misclassification sends them in search of a technologist when what they actually need is two hours with their own ops lead and a willingness to treat AI as a new kind of contributor.
What does the reclassification look like in practice?
The most useful first move is to print the list and write next to each question what type it actually is. Process audit. Governance. Risk. Accountability. Operating model. The handwriting matters; the act of writing it down forces the reclassification. Once each question has a type, the next move is to name the person in the business who is its natural owner. In most owner-managed firms, that person already exists.
The second move is to drop the assumption that an AI consultant is the first call. The first call is internal. An AI consultant is helpful for the technology choices once the governance questions are answered. Most stalled AI work is stalled at the governance layer, where a consultant cannot do the founder’s thinking for her. A useful consultant tells the founder that, then waits while the internal work is done. The work that follows is much smaller than it looked from outside.
What happens once each question is reclassified is that the questions get smaller. “How do we control quality?” is hard. “How do we control quality on the four documents the AI assistant will produce per week, given the senior who already owns quality on those documents?” is much easier. The shift is from a technical-feeling question to a business-shaped question, and it is what the entire stalled cohort of owner-managed AI buyers is missing.
What changes when you stop trying to be the engineer?
The first thing is relief. A founder who has been carrying the list as evidence of her own technical inadequacy can put that frame down. Her stuckness is, on closer reading, evidence of having been asked the wrong type of question. The AI conversation she has been part of is mostly run by people who think the questions are technical because they themselves are technical.
The second thing is forward motion. With the questions reclassified, the next steps look like familiar work. Process audits, governance design, risk assessment, owner assignment. The founder has done all of this before in different domains. The novelty is that the contributor is AI rather than a person; the methods are the methods she already knows.
The third thing, sometimes, is a fresh, smaller technical question that does need a technologist. “Given the governance and the workflow we have just designed, which class of model is appropriate, and what is the configuration we need?” That is a genuinely technical question. It is also a much smaller and more pleasant question to answer than the original list, because by the time it is asked, the business has already answered the more important questions itself.
If you would like to talk through what each of those questions looks like in your firm specifically, book a conversation.