There’s a version of the AI conversation that goes: automate everything you possibly can, as fast as you can, and figure out the rest later. I understand the appeal. I also see the results when it goes wrong.

The honest version of this conversation includes the word no. Not every workflow should be automated. Not every process benefits from it. And some automations that look good on paper actively make things worse once they’re live.

Here’s where I advise clients to hold back.

When the process itself is broken

Automating a bad process doesn’t fix it — it makes it faster. I’ve seen businesses spend significant budget building automated workflows for processes that shouldn’t exist in their current form. The automation works perfectly. The outcome is still wrong because the underlying logic was wrong.

Before you automate anything, you need to be able to describe what good looks like. Not roughly — specifically. If you can’t define the correct output clearly, you’re not ready to automate.

When the volume doesn’t justify it

This one is straightforward but often ignored. Every automation has a cost: design, implementation, testing, documentation, and ongoing maintenance. If the task happens six times a year, the maths usually doesn’t work.

The threshold varies by complexity. Simple automations (a triggered email, a data formatting rule) can justify themselves at low volumes. Complex, integrated workflows need meaningful volume to pay back the investment.
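The maths here is simple enough to sketch. The figures below are purely illustrative assumptions (build cost, hourly rate, minutes saved), not benchmarks; the point is the shape of the calculation, not the numbers.

```python
# Back-of-envelope automation payback: a sketch with made-up numbers,
# not a financial model. Every figure here is an illustrative assumption.

def payback_years(build_cost, maintenance_per_year, minutes_saved_per_run,
                  hourly_rate, runs_per_year):
    """Years until the build cost is recovered, or None if it never is."""
    saving_per_year = runs_per_year * (minutes_saved_per_run / 60) * hourly_rate
    net_per_year = saving_per_year - maintenance_per_year
    if net_per_year <= 0:
        return None  # running costs eat the savings: don't build it
    return build_cost / net_per_year

# A task done six times a year, saving 30 minutes each time:
print(payback_years(build_cost=5000, maintenance_per_year=300,
                    minutes_saved_per_run=30, hourly_rate=40,
                    runs_per_year=6))        # → None (never pays back)

# The same automation at 1,000 runs a year:
print(payback_years(build_cost=5000, maintenance_per_year=300,
                    minutes_saved_per_run=30, hourly_rate=40,
                    runs_per_year=1000))     # → roughly a quarter of a year
```

The low-volume case doesn’t even cover its own maintenance; the high-volume case pays back in months. Same automation, same build, entirely different answer.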

When I do an AI clarity audit, one of the outputs is an honest ROI estimate for each proposed automation. Surprisingly often, the most-discussed idea in the room doesn’t make the final shortlist, and something nobody mentioned does.

When the stakes of an error are too high

A rule of thumb: the higher the consequence of a mistake, the more human oversight you need. This isn’t an argument against automation in high-stakes environments — it’s an argument for designing automation with appropriate checks built in.

Fully automated invoice payment is different from automated invoice drafting that a human reviews before payment. Automated customer comms are different from automated contract generation. The technology may be the same. The appropriate level of human involvement is not.
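The invoice example can be made concrete. A minimal sketch of the human-in-the-loop pattern, with hypothetical names (draft_invoice, pay — not any real accounting API): the automation produces the draft, but the irreversible step refuses to run without explicit sign-off.

```python
# Human-in-the-loop gate, sketched with illustrative names.
# The system drafts; a person approves; only then does payment run.

from dataclasses import dataclass

@dataclass
class Draft:
    payee: str
    amount: float
    approved: bool = False  # flipped only by a human reviewer

def draft_invoice(payee: str, amount: float) -> Draft:
    """Automation handles the repetitive part: producing the draft."""
    return Draft(payee=payee, amount=amount)

def pay(draft: Draft) -> str:
    """The irreversible step is gated on explicit human approval."""
    if not draft.approved:
        raise PermissionError("Human approval required before payment")
    return f"Paid {draft.amount:.2f} to {draft.payee}"

invoice = draft_invoice("Acme Ltd", 1200.0)
# pay(invoice) would raise here: the gate holds until a reviewer signs off.
invoice.approved = True
print(pay(invoice))  # → Paid 1200.00 to Acme Ltd
```

The design choice is where the gate sits: before the consequential action, not after it. The technology drafting the invoice is identical either way.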

If an automated error would cost you a customer, a fine, or a legal liability, design for that before you go live — not after.

When the edge cases are the whole job

Good automation handles the common case reliably. But some jobs exist specifically to handle the uncommon case. Customer complaints. Complex technical support. High-value account management. These involve judgment, empathy, and context that varies enormously from case to case.

Routing and initial triage can often be automated usefully here. Full automation of the interaction itself usually can’t — and the attempts I’ve seen have damaged relationships rather than improved efficiency.

Know the difference between the admin around the work and the work itself.

When you don’t have the data

Machine learning and AI models need data to work from. If you want to automate a classification task, you need examples of things classified correctly. If you want to predict something, you need historical data to learn from.

A surprising number of automation projects stall here. The idea is sound, the technology exists, but the data to train or configure it doesn’t. Fix the data problem first. The automation can wait.


Knowing what not to do is half the strategy. If you want a clear view of where AI genuinely adds value in your business — and where it doesn’t — a strategy call is the right place to start.