What 155 sessions taught me about how I actually work

I loaded the breakdown of 155 Claude Code sessions into a spreadsheet on a quiet morning, and the number I wasn’t expecting was not in the execution column.

Execution was where I’d predicted it: writing tasks, automation work, publishing, inbox management. About 45% of the total. That’s the stuff the demos show. I could account for all of it.

Plan/Design came in at 28%.

That stopped me. I hadn’t thought of myself as someone who used Claude to plan. My mental model was clear: I’m the strategist, Claude is the executor. I think, it does. I judge, it produces.

Twenty-eight percent said that model was wrong.

When I went through the sessions tagged as planning, the pattern was obvious in hindsight. Three sessions on the architecture of a multi-agent publishing pipeline. Two sessions thinking through how to frame a difficult client conversation before getting on the call. A session working out the structure of an advisory engagement with a startup. A session pressure-testing a product decision — not asking Claude to make it, but asking it to push back on my reasoning before I committed.

One session where I talked myself out of a bad hire by walking through the logic with Claude before I’d said yes.

None of those were execution tasks. I wasn’t asking for output. I was asking Claude to think with me — to hold the other side, surface what I was missing, help me see the shape of a problem before I started moving.

The productivity story about AI is almost entirely told in output terms: you write faster, code faster, clear your inbox faster. That story is true. But it assumes execution is the bottleneck.

It often isn’t.

The decisions that cost me most in the past few years weren’t the ones I executed badly. They were the ones I committed to too quickly — before I’d thought through the consequences, before I’d examined the assumptions, before I’d sat with the problem long enough to see what it actually was. The execution was fine. The clarity wasn’t.

Plan/Design at 28% is me trying not to make that mistake as often.

A founder I advise described something similar. He told me the sessions he got the most value from weren’t the ones where he cleared his task list — they were the ones where he figured out what shouldn’t be on it. That reframes what the tool is. Not a fast typist. Not an executor. A thinking partner who has no ego about the answer.

The execution view of AI is seductive because it’s measurable. You can count what got done. The planning view is harder to quantify — it shows up as fewer wrong turns, better decisions, less expensive backtracking. Six months later it looks like good judgment. Or luck, if you weren’t paying attention.

Most productivity frameworks are built around throughput. How much shipped. How fast. That framing makes sense for manufacturing. For operators, it optimises the wrong thing. Moving fast in the wrong direction is just an efficient way to end up somewhere you didn’t want to be.

What I track is attention. Where mine goes, and what quality of thinking it produces when it gets there. The session categories are a mirror, not a performance review. They tell me what kind of problems I’m bringing to Claude, and by extension, what kind of operator I’m being in any given week.

Some weeks the planning sessions drop and execution spikes. Usually it means I’ve already done the thinking on something and I’m moving. Good. Sometimes it means I’ve stopped thinking and started just doing things. Less good.

The 28% is a feature. Thinking is work — not a warm-up to work, not the soft part before the real stuff starts, not the thing you do once and then move on. It’s the part that determines whether everything that follows is pointed in the right direction.

If your usage breakdown were sitting in front of you right now, what number would stop you?