A founder of a 26-person services firm pays £680 a month for a beautifully formatted Tableau dashboard nobody reads. The CFO sends it weekly. It contains 38 metrics. The founder skims it on the train and forms an impression based on three numbers he can recognise without context. He makes operational calls based on that impression.
The leadership team does not refer to the dashboard in any meeting. None of them could explain what 70 percent of the metrics mean. The dashboard functions as theatre that costs more than £8,000 a year, not as the data layer of the business.
Why does most dashboard work fail?
Dashboards fail in five recognisable ways. Too many metrics. Lagging-only metrics. No owner per metric. No target or threshold. Beautifully formatted dashboards that live in Tableau and are referred to in zero meetings. Each failure mode is independently fatal. A dashboard with thirty metrics dilutes attention regardless of how good the design is. A dashboard of lagging-only metrics arrives too late to act on. A dashboard with no owner per metric becomes nobody’s problem.
The deepest failure mode is the dashboard that the founder can read but the team cannot. The founder looks at customer satisfaction dropping from +35 to +15 and thinks “pause new sales until we fix delivery.” The operations director without founder-level experience reads the same number and does not know whether to act. The dashboard has not replaced the founder’s gut; it has been built around the founder’s gut while leaving the team unable to apply the same logic.
The replace-the-gut criterion
The ultimate test of a scorecard is whether it allows someone other than the founder to make the same operational decision without asking for founder guidance. Many founders carry heuristics in their heads. When utilisation drops below 65 percent, hire. When proposal win rate falls below 30 percent, there is a sales process problem. When customer satisfaction drops more than five points, there is a delivery problem.
These heuristics are based on accumulated experience and are usually not written down. The scorecard makes these heuristics explicit. Not just “track utilisation” but “utilisation target 70 percent, amber 65 percent, red 60 percent, action when red is to discuss capacity in the next L10.” The threshold and the action are the heuristic, written down so the team can apply it. Once the heuristic is on the dashboard, any manager can read the data and reach the same call the founder would have reached. The gut has been transferred from founder memory to firm artefact.
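Written down, the heuristic is mechanical enough to express in a few lines. A minimal sketch in Python, using the two-threshold green/amber/red scheme described in the next section; the field names are illustrative, and where exactly the amber band ends is itself a judgement the firm should write down:

    from dataclasses import dataclass

    @dataclass
    class MetricRule:
        # A founder heuristic made explicit: the thresholds and the action
        # are the rule, readable by any manager without founder guidance.
        name: str
        target: float         # green at or above this
        red_below: float      # red below this; amber in between
        action_when_red: str  # written next to the metric on the scorecard

        def status(self, current: float) -> str:
            if current >= self.target:
                return "green"
            if current >= self.red_below:
                return "amber"
            return "red"

    # The utilisation heuristic from the text: target 70, red below 60
    # (the 65 percent amber marker sits inside the amber band).
    utilisation = MetricRule("Utilisation %", target=70.0, red_below=60.0,
                             action_when_red="Discuss capacity in the next L10")

    for value in (72.0, 64.0, 58.0):
        print(value, utilisation.status(value))  # green, amber, red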
Leading not lagging indicators
Lagging indicators measure outcomes after the fact: revenue closed in the month, profit for the quarter, customer satisfaction assessed after a project completes. These matter, but they arrive too late for course correction. Leading indicators measure inputs and activities that predict the eventual outcome.
Examples: proposals sent (predicts future revenue), proposal win rate (predicts conversion efficiency), staff utilisation (predicts cash generation), invoice aging (predicts cash flow), customer satisfaction trend (predicts retention and referrals), staff turnover (predicts delivery risk). If the leading indicators are healthy, the lagging results usually follow. If a leading indicator shows a problem, the team has time to intervene before the monthly results are published and the damage is visible. A scorecard that is mostly lagging metrics is useful for board reporting. It is not useful for operational decision-making at a weekly cadence. The weekly L10 meeting needs leading metrics to drive its IDS (identify, discuss, solve) section; lagging metrics belong in the monthly review.
The five-to-fifteen rule
Each metric has a target and a current value, colour coded: green (on-track, at or above target), amber (caution, between target and a lower threshold), red (off-track, below threshold). Each metric is updated weekly, and its owner is listed next to it. During the scorecard review in the L10, each metric is reported as on-track or off-track.
If red, it goes to the issues list and is discussed in IDS. If green or amber, it is noted and the meeting moves on. The discipline is five to fifteen weekly metrics maximum, each chosen because it truly predicts an outcome important to the firm. Anything beyond fifteen dilutes attention; the team starts skipping over metrics they do not own. Anything below five is too thin to detect drift. Most firms find their scorecard settles at eight to twelve metrics after two or three quarters of refinement, with metrics added or removed based on whether the team actually used them in the IDS.
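The weekly pass over the scorecard is equally mechanical. A sketch of one week's review in Python, with invented metrics, owners, and numbers; the point is that reds are routed to IDS by the written rule rather than by founder instinct:

    scorecard = [
        # (metric, owner, target, red_below, this week's value)
        ("Proposals sent",        "Sales lead", 6,    4,    5),
        ("Proposal win rate (%)", "Sales lead", 30.0, 25.0, 24.0),
        ("Utilisation (%)",       "Ops lead",   70.0, 60.0, 68.0),
    ]

    issues = []
    for name, owner, target, red_below, current in scorecard:
        if current >= target:
            status = "green"
        elif current >= red_below:
            status = "amber"
        else:
            status = "red"
            issues.append((name, owner))         # red -> issues list, discussed in IDS
        print(f"{name:24} {owner:11} {status}")  # green/amber noted; meeting moves on

    print("To IDS:", issues)                     # only reds get discussion time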
The walk-in test as a diagnostic
A new manager from outside the firm walks in cold and reads the scorecard; within ten minutes they should be able to identify what is working and what needs attention. If the scorecard is dense, confusing, uses unexplained abbreviations, or mixes leading and lagging metrics with no structure, the new manager will be lost.
If the scorecard is clear, metrics are named in plain language (not “EBITDA adj. vs. guidance” but “Monthly profit as percentage of target”), targets are visible, and current performance is colour coded, the new manager can orient themselves immediately. The walk-in test is the cheapest audit available. Find a peer founder or a senior operator who does not know the firm and ask them to read the scorecard for fifteen minutes. The questions they ask are the diagnosis. “What does this abbreviation mean?” “What is the target on this one?” “Who owns this metric?” Each unanswered question is a place where the dashboard works for the founder but not for anyone else.
The four-dashboard pattern
The four-dashboard pattern structures the scorecard system. Financial: profit and loss, cash position, accounts receivable aging, accounts payable aging. The metrics that ensure the firm does not run out of cash or lose money. Operational: delivery quality (rework rate, project schedule adherence, customer satisfaction), capacity (utilisation, billable hours per person, project pipeline), timeline (invoice turnaround, proposal turnaround).
Commercial: sales pipeline (new opportunities created, pipeline value by stage, win rate, sales cycle length), customer concentration (revenue share of the top twenty accounts, new customer wins), revenue trends. People: retention (voluntary turnover, vacancy rate by function), engagement (eNPS or engagement survey scores), hiring effectiveness (time-to-hire, quality of hire as measured by retention or performance). Each dashboard has an owner. The CFO owns financial, the operations leader owns operational, the sales leader owns commercial, the people lead owns people. Each updates their dashboard weekly.
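One way to see the pattern is as a plain mapping: one owner per dashboard, every metric under exactly one dashboard. A sketch with abbreviated, illustrative metric names:

    # One owner per dashboard; each owner updates their metrics weekly.
    DASHBOARDS = {
        "Financial":   ("CFO",               ["P&L", "Cash position", "AR aging", "AP aging"]),
        "Operational": ("Operations leader", ["Rework rate", "Schedule adherence",
                                              "Customer satisfaction", "Utilisation"]),
        "Commercial":  ("Sales leader",      ["New opportunities", "Pipeline by stage",
                                              "Win rate", "Sales cycle length"]),
        "People":      ("People lead",       ["Voluntary turnover", "eNPS", "Time-to-hire"]),
    }

    for name, (owner, metrics) in DASHBOARDS.items():
        print(f"{name}: {owner} updates {len(metrics)} metrics weekly")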
Tooling stays proportionate
For a firm of 12 to 30 team members, a shared Google Sheet or Excel file updated each week is sufficient if the discipline around weekly updates is in place. The benefit of more sophisticated tools (Klipfolio, Geckoboard, Databox, Notion-as-dashboard) is that they automatically pull data from accounting software, CRM systems, and project management tools, removing the manual update step.
The drawback is that they require the underlying data in Xero, Salesforce, or your project management tool to be accurate, which is often not the case in founder-led firms where data-entry discipline lags. The pattern that works: start with a shared spreadsheet to test which metrics actually drive decisions. After two quarters, the metrics that earn their place are obvious. Then move those specific metrics to an automated dashboard if the underlying data is reliable. The wrong sequence is to buy Klipfolio first and then design the metrics; the dashboard then becomes another input to ignore.
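The spreadsheet-first stage needs almost no tooling at all. A sketch in Python that reads the shared sheet saved as scorecard.csv; the column layout (metric, owner, target, red_below, current) is an assumption for this example, not a standard export:

    import csv

    # Spreadsheet-first, sketched: the shared sheet saved as scorecard.csv,
    # one row per metric. Column names are assumed for the example.
    with open("scorecard.csv", newline="") as f:
        for row in csv.DictReader(f):
            current = float(row["current"])
            if current >= float(row["target"]):
                status = "green"
            elif current >= float(row["red_below"]):
                status = "amber"
            else:
                status = "red"
            print(f'{row["metric"]:24} {row["owner"]:12} {status}')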
What to do this week
Pull your existing dashboard and run the walk-in test. Find a peer founder or senior operator who does not know your business. Send them the dashboard and ask three questions. Within ten minutes, can you tell what is going well? What needs attention? Could you make a call on what to do next? Their answers tell you whether the dashboard works for anyone other than you.
Then count the metrics. If there are more than fifteen, identify the five you actually act on weekly. The rest are noise. Build a stripped-down version of the scorecard with those five plus three or four leading metrics that predict the outcomes you care about. Put it in front of the L10 next week.
If you would like a second pair of eyes on which metrics belong on the scorecard, book a conversation.