She has reorganised her diary three times this year. Once in January with a colour-coded block plan she found on a Substack. Once in March after a coaching call where she promised to protect Tuesday mornings. Once in late April, after a week so fragmented she could not remember what she had actually done with it. Six weeks later, the diary looks exactly as it did before, and she is more tired than she was in January.
The diary was never the problem. The invisible work between the diary entries was. And until something can hold all sixty calendar entries in view at once, no human eye is going to catch the pattern, because no human eye can.
This is the post in the AI-for-your-own-work cluster that sits at the front of the Eliminate quadrant of the EAD-Do framework. It is the diagnostic move: run it before you cut anything.
What is an AI week-audit, in plain terms?
An AI week-audit is a twenty-minute diagnostic where you paste two weeks of calendar export and inbox metadata into a chat model, ask it to surface time-spend patterns and recurring friction, then sit with what it returns. It does not prescribe cuts. It produces a description of how the past two weeks actually went, with the patterns a single human reader cannot hold in working memory at once made visible.
Reclaim.ai’s calendar-audit work is the most direct named precedent for the move. Microsoft’s 2025 Work Trend Index report, “Breaking Down the Infinite Workday”, documents the underlying patterns (meeting fragmentation, after-hours leakage, the meeting-on-meeting drift) that an audit reliably surfaces in any owner’s two-week window.
Why does the diary itself never tell you the truth?
Because the diary is the plan, not the record. The 9am block labelled “client call” actually ran 8.55am to 10.05am, of which fifteen minutes was spillover from the previous meeting’s handover, twelve minutes was a Slack interruption from the operations team, and the call itself ended on action items that became three follow-ups by Wednesday. None of that is in the calendar. All of it is in the week.
Three things stop a human reader spotting the pattern. Recency bias means your felt sense of last week is dominated by Friday. The Hawthorne effect means the moment you start watching your own time, your time-use changes. And working-memory limits, George Miller’s canonical “magical number seven” finding, mean nobody holds sixty calendar entries plus their context in mind well enough to spot the cluster that crept in across two of them. A model can hold all sixty, and it has no recency bias because it has no Friday.
What does the two-week audit prompt actually look like?
The shape is concrete enough to copy. Export the last fourteen days of your calendar as iCal or paste from Google Calendar. Add a line of inbox metadata if you can: total emails sent and received per day, the three correspondents you exchanged the highest volume of messages with. Open Claude or ChatGPT, paste the lot, and prompt with something close to this:
Below is two weeks of my calendar plus inbox totals. I am the founder of a UK services firm. Read it as a time-spend audit. Surface three categories of pattern. First, meetings or task clusters that have grown across the two weeks. Second, recurring blocks of less than 30 minutes that show up more than three times. Third, people, projects, or topics that appear in more than five separate entries. Be specific, name the entries. Do not recommend changes. Just describe what is there.
The “do not recommend changes” line matters. It stops the model defaulting into a productivity-app pitch and keeps it on the diagnostic. What comes back is a structured read of what actually happened, anchored in your real entries, that you can argue with rather than nod through.
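If the raw export is messy, a few lines of scripting turn it into paste-ready lines. A minimal sketch, assuming a simple iCal export where each VEVENT carries DTSTART, DTEND, and SUMMARY on single lines with no recurrence expansion; the function names and output format are illustrative, not part of any tool:

```python
# Sketch: turn a basic two-week .ics export into one paste-ready line
# per entry. Assumes unfolded lines and naive local timestamps.
from datetime import datetime

def parse_ics_events(text):
    """Yield (start, end, title) for each VEVENT in a basic iCal export."""
    events, current = [], {}
    for line in text.splitlines():
        if line.startswith("BEGIN:VEVENT"):
            current = {}
        elif line.startswith("DTSTART"):
            value = line.split(":", 1)[1].strip().rstrip("Z")
            current["start"] = datetime.strptime(value, "%Y%m%dT%H%M%S")
        elif line.startswith("DTEND"):
            value = line.split(":", 1)[1].strip().rstrip("Z")
            current["end"] = datetime.strptime(value, "%Y%m%dT%H%M%S")
        elif line.startswith("SUMMARY"):
            current["title"] = line.split(":", 1)[1].strip()
        elif line.startswith("END:VEVENT") and {"start", "end", "title"} <= current.keys():
            events.append((current["start"], current["end"], current["title"]))
    return events

def audit_lines(events):
    """One line per entry: day, times, duration in minutes, title."""
    out = []
    for start, end, title in sorted(events):
        mins = int((end - start).total_seconds() // 60)
        out.append(f"{start:%a %d %b} {start:%H:%M}-{end:%H:%M} ({mins} min) {title}")
    return out
```

Paste the resulting lines above the prompt in place of the raw export; the model reads them the same way, and the explicit durations make the creeping clusters easier for it to cite back.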
What does the audit reliably find that you missed?
In a typical owner-operator week, three patterns recur. A meeting cluster that has crept: the pipeline review that started fortnightly and is now three forty-five-minute calls a week. A recurring half-hour interruption: the operations check-in that always overruns into the slot after. A conversation past its purpose: the project that was supposed to wrap in March and is now in fourteen calendar entries across April.
You felt all three. You could not name them because you were inside them. Once a model has put them on a page in front of you, with your own entries cited back at you, the question shifts from “do these matter” to “which of these matter to the business model and which are residue”. That is the judgement step, and it stays with you.
A useful audit also flags what it cannot read. A two-hour block labelled “deep work” is opaque to the model unless you tell it what was actually done in that block. A standing call labelled with a client codename means nothing to the model without context. The first audit usually surfaces five or six of these unreadable blocks, and the second audit, run two weeks later with better labelling, gets sharper. The labelling discipline is a side benefit. AI describes the past two weeks. You decide what is true going forward, which is the move the next post in this cluster, on killing the wrong meetings, unpacks in detail.
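The prompt’s second and third categories, recurring short blocks and over-represented topics, are mechanical enough to sanity-check by hand against the model’s read. A minimal sketch, assuming entries reduced to (title, duration-in-minutes) pairs and treating an exact title match as a “topic”, which a real read would loosen:

```python
# Sketch: count the two mechanical pattern categories from the audit
# prompt. Thresholds mirror the prompt: short blocks appearing more
# than three times, titles appearing in more than five entries.
from collections import Counter

def mechanical_patterns(entries, short_max=30, short_min=4, topic_min=6):
    """entries: list of (title, duration_minutes) pairs."""
    short = Counter(title for title, mins in entries if mins < short_max)
    topics = Counter(title for title, _ in entries)
    return (
        {t: c for t, c in short.items() if c >= short_min},
        {t: c for t, c in topics.items() if c >= topic_min},
    )
```

If the model names a recurring block this count misses, it has usually grouped entries under different labels, which is itself worth knowing: it is exactly the labelling gap the second audit tightens.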
When should you do this and how often?
Run it when you have noticed friction across two weeks but cannot name what is causing it, or quarterly as a standing diagnostic. Twenty minutes once a quarter is a low cost for a reliable read of what crept in across the last ninety days. More often than that and the audit catches natural noise rather than real drift; less often and the patterns compound into the next quarter.
After the audit, one cut per week with a four-week check-back beats a Sunday-night blitz. The blitz feels productive and resets to baseline by the second Tuesday. The one-per-week cadence holds because each cut has time to be missed (or not) before the next one is made. If a cut goes unmissed for four weeks, it was the right cut. If it bites in week two, you reverse it cheaply. The audit is the diagnostic. The cuts are the work, and the work is the founder’s, not the model’s.
Two practical add-ons make the quarterly cadence sturdier. Keep a one-line decisions log on each cut you make: the entry that came out, the date, and what you expected to feel different. Four weeks later the same model can read the log alongside your fresh calendar and tell you which cuts held, which crept back, and which need reversing on evidence rather than nerves. That feedback loop is what turns the audit from a one-off cleanse into a standing rhythm, and it is also where the founder’s judgement gets visibly better quarter over quarter.
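The check-back itself is simple enough to script once the log has a fixed shape. A sketch under assumed shapes, with the log as (date-cut, entry-title) pairs and the fresh calendar as (date, title) entries; exact-title matching is a deliberate simplification:

```python
# Sketch: four-week check-back over a decisions log. A cut "crept back"
# if its entry title reappears in the fresh calendar within the window.
from datetime import date, timedelta

def check_back(log, fresh_entries, window_days=28):
    """Classify each cut in the log as 'held' or 'crept back'."""
    results = {}
    for cut_date, title in log:
        window_end = cut_date + timedelta(days=window_days)
        reappeared = any(
            title.lower() == t.lower() and cut_date < d <= window_end
            for d, t in fresh_entries
        )
        results[title] = "crept back" if reappeared else "held"
    return results
```

Whether a cut that crept back should be reversed or cut harder is the judgement call the script cannot make; it only puts the evidence in front of you on schedule.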