Your questions about AI aren't technical questions

TL;DR

The questions a founder sits with about AI ('what happens when it gets things wrong, who owns it internally, how do we control quality') read as technical to her, but every one is a governance, operations, or risk question. Naming the type changes who needs to be in the room and what answering the question involves.

Key takeaways

- The list of unanswered AI questions most founders carry ("which parts of our business would AI touch, what happens when it gets things wrong, who owns it internally") looks technical and is, on closer reading, a list of business questions.
- Each question sits in a familiar business category: process audit, governance, risk, accountability, operating model. The principles already exist in the business; the novelty is applying them to a non-human contributor.
- When founders try to answer governance questions through a tool-evaluation lens, they stall. The right tool, on its own, doesn't define quality, doesn't assign accountability, and doesn't pick which function owns the work.
- The most useful first move is to print the list and write next to each question what type it actually is. Once each question has a type, the natural owner already exists in most owner-managed firms.
- The first call when AI work is stalled is internal, not external. An AI consultant is useful for the technology choices once the governance questions are answered, and a useful consultant says so.

A founder who has been actively considering AI for her firm for the past quarter has, in her notes app, a list of questions she has not been able to answer. The list looks something like this: Which parts of our business would AI actually touch? What processes would it change? How do we control quality and accountability? What happens when it gets things wrong? Who owns it internally?

She thinks she needs more technical knowledge to answer these. She has watched several explainer videos. She has read two long articles about how large language models work under the hood. She has tried ChatGPT three times. None of it has helped her answer the list. She is starting to suspect AI is harder to assess than she thought, and she is privately a little embarrassed to be this stuck.

The reason has nothing to do with technical understanding. Her list only looks technical because it involves a technology. On closer reading, every question on it is a business question: process ownership, accountability, quality control, fallback handling, governance. Naming the question type changes who needs to be in the room, and what answering the question actually involves.

Where do these questions actually come from?

That list is not invented. It comes verbatim from a 2025 LinkedIn piece, "AI Isn't the Problem for SMEs. Not Knowing Where to Start Is.", which captured what owner-managed firms were actually asking themselves about AI. The same questions show up again and again in qualitative research, because they reflect what an owner is genuinely trying to assess. It is worth reading the original list slowly, in the founder's own voice.

If a founder takes these questions and tries to answer them as technical questions, she gets the wrong answers. The answer to "what happens when AI gets things wrong" is not a model-card spec sheet. The answer to "who owns it internally" is not a feature. The answer to "how do we control quality and accountability" is not a settings panel. The answers are operational and organisational, and they involve people, processes, and oversight that already exist in the business.

What founders are missing, in the version of this conversation I find most useful, is permission to treat the list as a list of business questions. Once that move is made, the questions stop looking unanswerable.

What kind of question is each one, actually?

Take the five and re-classify them. “Which parts of our business would AI actually touch?” is a process audit question. The right room is the owner plus whoever knows the workflows in detail, often the operations lead. “What processes would it change?” is the same audit, going one level deeper into how the work currently flows. Neither needs an AI engineer present.

“How do we control quality and accountability?” is a governance question. It belongs to the same category as “how do we control quality and accountability for client work that gets delegated to a junior”. The founder has answered that question many times before, just not for a non-human contributor. The principles transfer. The right room is the owner plus the senior leader who currently owns quality.

“What happens when it gets things wrong?” is a risk question. It belongs to the same category as “what happens if our supplier slips a deadline” or “what happens if a junior makes a typo in a client report”. Risk-management thinking already exists in the business; this question slots in. “Who owns it internally?” is an accountability and operating-model question. Someone owns every other tool the business uses. AI should sit under whoever owns the function it most affects, and the cost of getting that wrong is recoverable.

None of those rooms need a technologist as the lead. Most of them already exist.

Why does the classification matter?

Classification matters because different question types need different rooms, different decision-makers, and different time horizons. A process audit takes a few hours with the right people in a room. A governance principle takes longer to settle but already has analogues in the business. A risk question can be handled with the existing risk register approach, with one extra row added.

When founders try to answer governance questions through a tool-evaluation lens, they get stuck. The right tool, on its own, doesn’t define quality, doesn’t assign accountability, and doesn’t pick which function owns the work. Those decisions belong to the business, and they look identical to decisions the business has already made about other tools, processes, and people.

This is also where the most common stall pattern shows up. Founders who can name the questions on their list are, almost by definition, ready to act. What keeps them from acting is the misclassification of the questions as technical. That misclassification sends them in search of a technologist when what they actually need is two hours with their own ops lead and a willingness to treat AI as a new kind of contributor.

What does the reclassification look like in practice?

The most useful first move is to print the list and write next to each question what type it actually is: process audit, governance, risk, accountability, operating model. The handwriting matters; the act of writing it down forces the reclassification. Once each question has a type, the next move is to name the person in the business who is the natural owner of that type. In most owner-managed firms, that person already exists.

The second move is to drop the assumption that an AI consultant is the first call. The first call is internal. An AI consultant is helpful for the technology choices once the governance questions are answered. Most stalled AI work is stalled at the governance layer, where a consultant cannot do the founder’s thinking for her. A useful consultant tells the founder that, then waits while the internal work is done. The work that follows is much smaller than it looked from outside.

What happens once each question is reclassified is that the questions get smaller. “How do we control quality?” is hard. “How do we control quality on the four documents the AI assistant will produce per week, given the senior who already owns quality on those documents?” is much easier. The shift is from a technical-feeling question to a business-shaped question, and it is what the entire stalled cohort of owner-managed AI buyers is missing.

What changes when you stop trying to be the engineer?

The first thing is relief. A founder who has been carrying the list as evidence of her own technical inadequacy can put that frame down. Her stuckness is, on closer reading, evidence of having been asked the wrong type of question. The AI conversation she has been part of is mostly run by people who think the questions are technical because they themselves are technical.

The second thing is forward motion. With the questions reclassified, the next steps look like familiar work. Process audits, governance design, risk assessment, owner assignment. The founder has done all of this before in different domains. The novelty is that the contributor is AI rather than a person; the methods are the methods she already knows.

The third thing, sometimes, is a fresh, smaller technical question that does need a technologist. “Given the governance and the workflow we have just designed, which class of model is appropriate, and what is the configuration we need?” That is a genuinely technical question. It is also a much smaller and more pleasant question to answer than the original list, because by the time it is asked, the business has already answered the more important questions itself.

If you would like to talk through what each of those questions looks like in your firm specifically, book a conversation.

Sources

  • LinkedIn (2025). "AI Isn't the Problem for SMEs. Not Knowing Where to Start Is." Verbatim list of unanswered AI questions from owner-managed firms ("Which parts of our business would AI actually touch? What processes would it change? How do we control quality and accountability? What happens when it gets things wrong? Who owns it internally?").
  • Effica/Novusbroker (2025). Practitioner observation on tool-first behaviour and the failed-rollout pattern ("It's like buying a power tool at the hardware store because it looked impressive, bringing it home, and then wandering around looking for something to use it on").
  • JPost (2025). Practitioner observation that "successful businesses start with their actual problems and then determine whether AI offers the best solution. That sequence matters enormously."
  • McKinsey & Company (2025). The State of AI Global Survey. 88 per cent of organisations use AI but only 39 per cent report enterprise EBIT impact: the strategic-application gap.
  • Goldratt, E. M. (1984). The Goal: A Process of Ongoing Improvement. The foundational Theory of Constraints text on diagnosing the real bottleneck before applying technology.
  • Boston Consulting Group (2025). Are You Generating Value from AI? The Widening Gap. Five per cent of future-built firms capture disproportionate value through systematic application.
  • MIT CISR (Woerner, Sebastian, Weill and Kaganer, 2025). Grow Enterprise AI Maturity for Bottom-Line Impact. Maturity-staged framework for AI strategy progression.

Frequently asked questions

Why do my questions about AI feel technical when they aren't?

Because they involve a technology. The wrapping is technical; the substance is business. 'How do we control quality and accountability?' is the same question you have answered many times for human contributors. The principles transfer. Once you spot that, the question stops looking unanswerable.

Which questions on my list are technical and which aren't?

Almost none of the questions a founder sits with at the start are technical. 'Which parts of our business would AI touch' is a process audit. 'How do we control quality' is governance. 'What happens when it gets things wrong' is risk. 'Who owns it internally' is operating model. The genuinely technical question ('which class of model and what configuration') only comes up after the business questions are answered.

Do I need an AI consultant to answer these questions?

Not for the first round. The first round is internal: process audit, governance principles, risk thinking, owner assignment. A useful AI consultant tells you that and waits while the internal work happens. They are most useful once the governance is in place and the technology choices are the next step.

What's a practical first move with the list?

Print the list of questions. Next to each one, write the type: process audit, governance, risk, accountability, operating model. Then write the name of the person in your business who is the natural owner of that type. In most owner-managed firms, each of those owners already exists. That single hour of writing reframes the rest of the work.

This post is general information and education only, not legal, regulatory, financial, or other professional advice. Regulations evolve, fee benchmarks shift, and every situation is different, so please take qualified professional advice before acting on anything you read here. See the Terms of Use for the full position.

Ready to talk it through?

Book a free 30-minute conversation. No pitch, no pressure, just a useful chat about where AI fits in your business.

Book a conversation
