About two years ago, our CEO sent me an analysis.
He had gone through every customer in our book of business and sorted them into tiers: who was growing, who was stuck, who was quietly shrinking without anyone making noise about it. He did it by hand, pulling data from four different sources, cross-referencing usage patterns against contract values, and noting which accounts had executive relationships versus just operational contacts.
It probably took him four hours. It was extremely good.
He sent it to me, I used it for about three weeks, and then I automated it.
The Manual Process Is the Spec
Here is what I have learned about smart people doing things by hand: they are almost always doing it right.
The analysis was good because the methodology was good. He knew which signals to weight, which data sources to trust, which combinations of factors actually predicted trajectory. He had the domain knowledge. He just applied it manually.
That is not a problem. That is a spec document disguised as a spreadsheet.
When someone with real expertise solves a problem manually, they are encoding the answer to: “what does good judgment look like on this problem?” That is exactly what you need before you automate anything. The hard part of automation is not the technical implementation. It is knowing what to build toward. A manual process from someone who knows what they are doing solves that problem for you.
I took his methodology, asked him to walk me through how he made each decision, documented the logic, and then replicated it.
What the Automation Does
Every week, the system runs his analysis automatically.
Same four data sources. Same signals: usage trajectory, contract value relative to feature adoption, executive contact presence, time since meaningful expansion conversation. Same tier logic, translated into code.
It produces a sorted list: growing accounts, stable accounts, accounts that look like they are shrinking without anyone saying so. The third category is the one I care about most, because those are the accounts that churn quietly while everyone assumes someone else is handling it.
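The tier logic can be sketched in a few lines. Everything here is illustrative: the signal names, thresholds, and account data are hypothetical stand-ins, not the actual methodology.

```python
from dataclasses import dataclass

@dataclass
class Account:
    name: str
    usage_trend: float            # week-over-week usage change, e.g. -0.08 = down 8%
    adoption_ratio: float         # feature adoption relative to contract value (0-1)
    has_exec_contact: bool
    weeks_since_expansion_talk: int

def tier(account: Account) -> str:
    """Sort an account into growing / stable / quietly shrinking."""
    if account.usage_trend > 0.05 and account.adoption_ratio > 0.5:
        return "growing"
    # the quiet-shrink pattern: usage drifting down, no executive
    # relationship, and no recent expansion conversation to surface it
    if (account.usage_trend < -0.03
            and not account.has_exec_contact
            and account.weeks_since_expansion_talk > 8):
        return "quietly shrinking"
    return "stable"

accounts = [
    Account("Acme", 0.10, 0.7, True, 2),
    Account("Globex", -0.06, 0.4, False, 12),
]
brief = {a.name: tier(a) for a in accounts}
```

The point is not these particular thresholds. The point is that once the decision rules are written down this explicitly, the four-hour manual session becomes a scheduled job.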
The output is a brief. I read it on Monday mornings.
What changed: instead of that analysis existing once, during one four-hour session, it exists every week. The methodology compounds across time. The accounts that look fine this week but were flagged last week get a different kind of attention than the ones that have been green for three months straight.
Trend is more useful than snapshot. The manual process could only produce snapshots. The automation produces trends.
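One way to sketch the trend logic: keep each account's weekly tier history and flag anything that was in the shrinking bucket recently, even if this week's snapshot is green. The account names and history data below are hypothetical.

```python
# tier_history[account] is a list of weekly tiers, oldest first (hypothetical data)
tier_history = {
    "Acme":   ["stable", "stable", "stable"],
    "Globex": ["stable", "quietly shrinking", "stable"],
}

def needs_attention(tiers: list[str]) -> bool:
    """Flag any account that was 'quietly shrinking' in the last few weeks,
    even if the latest snapshot looks fine."""
    return "quietly shrinking" in tiers[-3:]

watchlist = [name for name, tiers in tier_history.items() if needs_attention(tiers)]
# Globex is green this week, but last week's flag keeps it on the watchlist
```

A single snapshot cannot make that distinction; only the history can.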
The Thing I Ask Before Building Anything
Before I automate anything now, I ask: has someone already solved this manually and well?
Not every problem has been. Some things need original methodology and that is a harder starting point. But a surprising number of the most valuable automations I have built started with someone else’s smart manual process: a framework someone used for a quarterly review, a checklist a senior colleague built from experience, a decision tree someone had in their head and had never written down.
When you find one of those, the conversation to have is: “walk me through how you think about this.” Not so you can understand it. So you can replicate it at scale.
The CEO analysis runs while I am doing other things. It runs every week regardless of whether I have four hours free. It catches the accounts that would have drifted past the manual-review threshold and never been flagged.
That is not me being smarter than the original methodology. That is just the original methodology being applied more often than any human could sustain.
What to Watch For
The failure mode with this approach is automating a flawed methodology.
If the manual process is smart, you preserve and amplify the intelligence. If the manual process has a bias or a blind spot, you systematize that too. Running it every week at scale makes the blind spot harder to see, because you have stopped looking at the thing the automation handles.
The check I do: before automating a methodology, find the cases where the manual version got it wrong. Every process has them. If I cannot identify when the original analysis failed, I do not know what to watch for when the automated version fails.
In this case: accounts that had been flagged as declining but then successfully expanded after a product change. The methodology was overweighting trailing usage data and not adjusting fast enough for recent signals. I built in a 30-day recency weight. The automated version now catches that better than the original did.
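A recency weight like this can be sketched as a weighted average where days inside the window count fully and older days decay. This is a minimal illustration of the idea, not the production logic; the window size and decay shape are assumptions.

```python
from datetime import date

def weighted_usage_trend(daily_deltas: list[tuple[date, float]],
                         today: date,
                         recency_window: int = 30) -> float:
    """Average daily usage changes so the last 30 days dominate,
    instead of letting months of trailing data swamp a recent turnaround."""
    weighted, total_weight = 0.0, 0.0
    for day, delta in daily_deltas:
        age = (today - day).days
        # full weight inside the window, decaying weight beyond it
        weight = 1.0 if age <= recency_window else recency_window / age
        weighted += weight * delta
        total_weight += weight
    return weighted / total_weight if total_weight else 0.0
```

With this weighting, an account whose usage turned around in the last month can pull its trend positive even against a long negative tail, which is exactly the case the original analysis kept missing.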
You do not have to build the automation perfectly. You build it close, then you keep the human review at the places where it still gets it wrong.
Blake Bailey runs Bailey Business Ventures, an AI transformation consulting practice. He has never met a smart manual process he did not want to run on a schedule.