The dashboard that replaces your gut, not the one nobody reads

[Image: A whiteboard with a hand-drawn weekly scorecard showing twelve metrics with coloured status dots, a printed copy on the table below, an operational lead leaning in to update a number]
TL;DR

The point of a scorecard is not founder visibility. The founder already has visibility through their gut, accumulated over years of operating. The point of a scorecard is to make that gut explicit so others can apply it. The walk-in test is the diagnostic: a new manager from outside the firm reads the scorecard cold and identifies what is working and what needs attention within ten minutes. The four-dashboard pattern (financial, operational, commercial, people) gives the structure. Leading not lagging metrics, five to fifteen of them, one owner per metric, weekly visibility, colour coded. If only the founder can interpret the numbers, the scorecard has failed and the founder is still the dashboard.

Key takeaways

- The replace-the-gut criterion: a manager other than the founder can read the scorecard cold and reach the same operational call the founder would. If only the founder can interpret it, the scorecard has failed.
- Leading not lagging metrics. Lagging metrics measure outcomes after the fact (revenue, profit, customer satisfaction post-project). Leading metrics measure inputs that predict outcomes (proposals sent, utilisation, invoice turn-around). Leading metrics give time to course-correct.
- Five to fifteen weekly metrics maximum, each with one owner, each with a target, each colour coded. More than fifteen and attention dilutes; fewer than five and drift goes undetected.
- The walk-in test: a new manager from outside reads the scorecard cold and orients themselves within ten minutes. Dense, abbreviation-heavy dashboards fail it.
- The four-dashboard pattern. Financial: P&L, cash, AR/AP. Operational: delivery quality, capacity, timeline. Commercial: pipeline, conversion, customer concentration, revenue trends. People: retention, engagement, vacancy, time-to-hire.
- Tooling stays proportionate. A shared spreadsheet works for 12 to 30-person firms. Klipfolio, Geckoboard, Databox, and Notion-as-dashboard automate the update step once the underlying systems hold accurate data.

A founder of a 26-person services firm pays £680 a month for a beautifully formatted Tableau dashboard nobody reads. The CFO sends it weekly. It contains 38 metrics. The founder skims it on the train and forms an impression based on three numbers he can recognise without context. He makes operational calls based on that impression.

The leadership team does not refer to the dashboard in any meeting. None of them could explain what 70 percent of the metrics mean. The dashboard functions as theatre that costs more than £8,000 a year, not as the data layer of the business.

Why does most dashboard work fail?

Dashboards fail in five recognisable ways. Too many metrics. Lagging-only metrics. No owner per metric. No target or threshold. Beautifully formatted dashboards that live in Tableau and are referred to in zero meetings. Each failure mode is independently fatal. A dashboard with thirty metrics dilutes attention regardless of how good the design is. A dashboard of lagging-only metrics arrives too late to act on. A dashboard with no owner per metric becomes nobody’s problem.

The deepest failure mode is the dashboard that the founder can read but the team cannot. The founder looks at customer satisfaction dropping from +35 to +15 and thinks “pause new sales until we fix delivery.” The operations director without founder-level experience reads the same number and does not know whether to act. The dashboard has not replaced the founder’s gut; it has been built around the founder’s gut while leaving the team unable to apply the same logic.

The replace-the-gut criterion

The ultimate test of a scorecard is whether it allows someone other than the founder to make the same operational decision without asking for founder guidance. Many founders carry heuristics in their heads. When utilisation drops below 65 percent, hire. When proposal win rate falls below 30 percent, there is a sales process problem. When customer satisfaction drops more than five points, there is a delivery problem.

These heuristics are based on accumulated experience and are usually not written down. The scorecard makes these heuristics explicit. Not just “track utilisation” but “utilisation target 70 percent, amber 65 percent, red 60 percent, action when red is to discuss capacity in next L10.” The threshold and the action are the heuristic, written down so the team can apply it. Once the heuristic is on the dashboard, any manager can read the data and reach the same call the founder would have reached. The gut has been transferred from founder memory to firm artefact.
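One way to see what "written down" means in practice is to express the heuristic as data rather than prose. The sketch below is illustrative, not EOS-prescribed tooling; the field names are invented, and for simplicity the amber band is everything between the red threshold and the target, rather than the separate 65 percent trigger in the example above.

```python
# A minimal sketch: a founder heuristic written down as data.
# Thresholds mirror the utilisation example in the text.
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    owner: str
    target: float           # green at or above this
    red_threshold: float    # red at or below this; amber in between
    action_when_red: str

    def status(self, value: float) -> str:
        if value >= self.target:
            return "green"
        if value > self.red_threshold:
            return "amber"
        return "red"

utilisation = Metric(
    name="Staff utilisation (%)",
    owner="Operations lead",
    target=70,
    red_threshold=60,
    action_when_red="Discuss capacity in next L10",
)

print(utilisation.status(72))  # green
print(utilisation.status(64))  # amber
print(utilisation.status(58))  # red
```

The point of the structure is that the threshold and the action travel together: any manager reading a red can see both the number and what the firm has agreed to do about it.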

Leading not lagging indicators

Lagging indicators measure outcomes after the fact: revenue closed in the month, profit for the quarter, customer satisfaction assessed after a project completes. These matter, but they arrive too late for course correction. Leading indicators measure inputs and activities that predict the eventual outcome.

Examples: proposals sent (predicts future revenue), proposal win rate (predicts conversion efficiency), staff utilisation (predicts cash generation), invoice aging (predicts cash flow), customer satisfaction trend (predicts retention and referrals), staff turnover (predicts delivery risk). If the leading indicator is healthy, the lagging indicator usually follows. If the leading indicator shows a problem, the team has time to intervene before the monthly results are published and the damage is visible. A scorecard that is mostly lagging metrics is useful for board reporting. It is not useful for operational decision-making at weekly cadence. The L10 needs leading metrics to drive its IDS section; lagging metrics belong in the monthly review.

The five-to-fifteen rule

Each metric has a target and a current value, colour coded: green (on-track, above target), amber (caution, between target and a lower threshold), red (off-track, below threshold). The metric is updated weekly, and the owner of the metric is listed. During the scorecard review in the L10, each metric is reported as on-track or off-track.

If red, it goes to the issues list and is discussed in IDS. If green or amber, it is noted and the meeting moves on. The discipline is five to fifteen weekly metrics maximum, each chosen because it truly predicts an outcome important to the firm. Anything beyond fifteen dilutes attention; the team starts skipping over metrics they do not own. Anything below five is too thin to detect drift. Most firms find their scorecard settles at eight to twelve metrics after two or three quarters of refinement, with metrics added or removed based on whether the team actually used them in the IDS.
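The routing rule described above is simple enough to state as code. This sketch assumes the colour statuses have already been computed; the metric names, owners, and statuses are invented for illustration.

```python
# Weekly review routing: reds go to the issues list for IDS;
# green and amber are noted and the meeting moves on.
scorecard = [
    {"metric": "Proposals sent", "owner": "Sales lead", "status": "green"},
    {"metric": "Utilisation %", "owner": "Operations lead", "status": "red"},
    {"metric": "Invoice turn-around (days)", "owner": "CFO", "status": "amber"},
    {"metric": "Voluntary turnover %", "owner": "People lead", "status": "red"},
]

# Only reds earn discussion time; everything else is a one-line report.
ids_issues = [row for row in scorecard if row["status"] == "red"]

for issue in ids_issues:
    print(f"To IDS: {issue['metric']} (owner: {issue['owner']})")
```

The design choice matters more than the code: the scorecard review itself stays fast because discussion is deferred to IDS, and only for metrics that have actually gone red.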

The walk-in test as a diagnostic

A new manager from outside the firm walks in cold, reads the scorecard, and should be able to identify what is working and what needs attention within ten minutes. If the scorecard is dense, confusing, uses unexplained abbreviations, or mixes leading and lagging metrics with no structure, the new manager will be lost.

If the scorecard is clear, metrics are named in plain language (not “EBITDA adj. vs. guidance” but “Monthly profit as percentage of target”), targets are visible, and current performance is colour coded, the new manager can orient themselves immediately. The walk-in test is the cheapest audit available. Find a peer founder or a senior operator who does not know the firm and ask them to read the scorecard for ten minutes. The questions they ask are the diagnosis. “What does this abbreviation mean?” “What is the target on this one?” “Who owns this metric?” Each unanswered question is a place where the dashboard works for the founder but not for anyone else.

The four-dashboard pattern

The four-dashboard pattern structures the scorecard system. Financial: profit and loss, cash position, accounts receivable aging, accounts payable aging. The metrics that ensure the firm does not run out of cash or lose money. Operational: delivery quality (rework rate, project schedule adherence, customer satisfaction), capacity (utilisation, billable hours per person, project pipeline), timeline (invoice turn-around, proposal turn-around).

Commercial: sales pipeline (new opportunities created, pipeline value by stage, win rate, sales cycle length), customer concentration (top-twenty risk, new customer wins), revenue trends. People: retention (voluntary turnover, vacancy rate by function), engagement (eNPS or engagement survey scores), hiring effectiveness (time-to-hire, quality of hire as measured by retention or performance). Each dashboard has an owner. The CFO owns financial, the operations leader owns operational, the sales leader owns commercial, the people lead owns people. Each updates their dashboard weekly.
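The four-dashboard pattern can be captured as plain data, which makes the one-owner-per-dashboard rule checkable. The owners and metric names below follow the examples in the text; a real firm would adapt both.

```python
# A sketch of the four-dashboard pattern: one accountable owner per dashboard.
dashboards = {
    "financial": {
        "owner": "CFO",
        "metrics": ["P&L", "cash position", "AR aging", "AP aging"],
    },
    "operational": {
        "owner": "Operations lead",
        "metrics": ["rework rate", "schedule adherence", "utilisation",
                    "invoice turn-around"],
    },
    "commercial": {
        "owner": "Sales lead",
        "metrics": ["pipeline value by stage", "win rate",
                    "customer concentration", "revenue trend"],
    },
    "people": {
        "owner": "People lead",
        "metrics": ["voluntary turnover", "eNPS", "time-to-hire"],
    },
}

# The ownership check: every dashboard names exactly one accountable person.
owners = [board["owner"] for board in dashboards.values()]
print(owners)
```

Whether the structure lives in code, a spreadsheet tab per dashboard, or a Notion page, the invariant is the same: four dashboards, four owners, weekly updates.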

Tooling stays proportionate

For a firm of 12 to 30 team members, a shared Google Sheet or Excel file updated each week is sufficient if the discipline around weekly updates is in place. The benefit of more sophisticated tools (Klipfolio, Geckoboard, Databox, Notion-as-dashboard) is that they automatically pull data from accounting software, CRM systems, and project management tools, removing the manual update step.

The drawback is that they require the underlying data in Xero, Salesforce, or your project management tool to be accurate, which is often not the case in founder-led firms where data-entry discipline lags. The pattern that works: start with a shared spreadsheet to test which metrics actually drive decisions. After two quarters, the metrics that earn their place are obvious. Then move those specific metrics to an automated dashboard if the underlying data is reliable. The wrong sequence is to buy Klipfolio first and then design the metrics; the dashboard then becomes another input to ignore.

What to do this week

Pull your existing dashboard and run the walk-in test. Find a peer founder or senior operator who does not know your business. Send them the dashboard and ask three questions. Within ten minutes, can you tell what is going well? What needs attention? Could you make a call on what to do next? Their answers tell you whether the dashboard works for anyone other than you.

Then count the metrics. If there are more than fifteen, identify the five you actually act on weekly. The other ten are noise. Build a stripped-down version of the scorecard with those five plus three or four leading metrics that predict the outcomes you care about. Put it in front of the L10 next week.

If you would like a second pair of eyes on which metrics belong on the scorecard, book a conversation.

Sources

  • Five-to-fifteen scorecard format from EOS.
  • Leading versus lagging indicators. https://bscdesigner.com/leading-vs-lagging.htm
  • Sales pipeline metrics.
  • Effects of owner dependence on business valuation.
  • Klipfolio dashboard category.
  • Wickman, G. (2007). Traction: Get a Grip on Your Business. The Entrepreneurial Operating System (EOS) covers vision, people, data, issues, processes, and traction across 250,000+ implementing businesses.
  • Harnish, V. Scaling Up. The four-domain framework (people, strategy, execution, cash) for scaling owner-led businesses past the founder-dependent stage.
  • Kaplan, R. and Norton, D. (1992). “The Balanced Scorecard: Measures That Drive Performance”, Harvard Business Review. The foundational article on multi-dimensional performance measurement.
  • ICAEW. Business Performance Management, technical guidance. UK SME-relevant reference on KPI selection, performance dashboards, and review cadence in owner-led firms.
  • McKinsey & Company. How Effective Boards Approach Technology Governance. Four engagement models calibrated to risk and value impact, the structural backdrop for operating-rhythm design.

Frequently asked questions

What is a leading versus a lagging indicator?

A lagging indicator measures an outcome after the fact (revenue closed, profit for the quarter, customer satisfaction assessed after a project completes). A leading indicator measures an input or activity that predicts the outcome (proposals sent, customer onboarding completion rate, staff utilisation, invoice turn-around). Lagging metrics matter, but they arrive too late for course correction. Leading metrics give the team time to act before the lagging arrives.

How many metrics should a weekly scorecard have?

Five to fifteen, with each having a clear owner, a target, and weekly visibility. More than fifteen dilutes attention; the team starts skipping over metrics. Fewer than five is too thin to detect drift. Each metric should be one the team can act on if it goes red, not one that is interesting to track for its own sake.

What is the walk-in test?

A new manager from outside the firm walks in cold, reads the scorecard, and should be able to identify what is working and what needs attention within ten minutes. If the scorecard is dense, mixes leading and lagging metrics with no structure, uses unexplained abbreviations, or has metrics with no targets, the new manager will be lost. The test surfaces this within minutes; the founder rarely sees it, because they wrote the metrics themselves.

What goes in each of the four dashboards?

Financial: profit and loss, cash position, accounts receivable aging, accounts payable aging. Operational: delivery quality (rework rate, project schedule adherence, customer satisfaction), capacity (utilisation, billable hours per person, project pipeline), timeline (invoice turn-around, proposal turn-around). Commercial: sales pipeline (new opportunities, pipeline value by stage, win rate, sales cycle length), customer concentration, revenue trends. People: retention (turnover rate, vacancy rate by function), engagement, hiring effectiveness (time-to-hire, quality of hire).

This post is general information and education only, not legal, regulatory, financial, or other professional advice. Regulations evolve, fee benchmarks shift, and every situation is different, so please take qualified professional advice before acting on anything you read here. See the Terms of Use for the full position.

Ready to talk it through?

Book a free 30 minute conversation. No pitch, no pressure, just a useful chat about where AI fits in your business.

Book a conversation
