Why your automation is breaking trust (and how to fix it)


Automation Without Visibility Is Dangerous

Welcome back to Founder Mode!

I keep seeing the same mistake.

A team finds a task they hate doing by hand. It is slow. It is repetitive. It feels like the perfect place to automate. So they set up the workflow. They turn on the texts. They schedule the follow-ups. They build the sequence. Everyone feels productive.

Then the system breaks trust.

Not because the automation failed.

Because the team automated something they did not understand.

That is the real problem.

At Pretty Good AI, I spend a lot of time thinking about where automation helps and where it hurts. The answer is not to avoid automation. I love automation. I want more of it. But I only want it after we can see the system clearly.

If you automate before you measure, you scale confusion.

That is the through line.

Why this happens

Automation feels like progress.

Measurement feels slow.

That tension tricks teams all the time.

If you are under pressure, it feels better to launch a new workflow than to slow down and map the real funnel. It feels better to say, “We automated follow-up,” than to say, “We spent two weeks figuring out where records get stuck.”

One sounds fast.

The other sounds boring.

But the boring work is what makes the fast work real.

Without that visibility, automation just amplifies whatever already exists. If the system is healthy, automation helps. If the system is broken, automation makes the break louder.

The story I keep coming back to

One example stuck with me.

A team wanted to improve patient engagement. On the surface, the idea looked smart. They built an SMS campaign to keep patients informed and excited. It was meant to reduce drop-off and give people confidence that the process was moving.

It sounded like a win.

The problem was that they had not actually measured how long records took to get tracked and processed.

So the texts went out early. Patients got excited. They thought things were moving. Then nothing happened for weeks.

That promise followed by silence did more damage than saying nothing at all.

Now the patient is not just waiting. The patient is confused. They are frustrated. They feel misled. The team did not improve the experience. They made it worse.

The automation did exactly what it was told to do.

The system underneath it was the problem.

That is why we stopped the automation and went back to measurement first.

Not because automation was bad.

Because automation without visibility is dangerous.

Visibility comes before control

I think a lot about this sequence:

Visibility leads to control.
Control leads to acceleration.

Most teams want to skip the first two steps and jump straight to acceleration.

That almost never works.

If you cannot see how long each step takes, where delays happen, what percentage of cases stall, and what causes the stall, then you do not have control. And if you do not have control, speed is fake.

You are not going faster.

You are just moving noise around.

In healthcare, that gets dangerous quickly because people are not interacting with a shopping cart or a newsletter signup. They are waiting for care. They are anxious. They want answers. Bad automation does not just create inefficiency. It creates distrust.

That is why I push teams to get very honest about the funnel.

Not the feature.

The funnel.

Funnels matter more than features

A lot of teams fall in love with features because features are visible. You can demo them. You can show them to leadership. You can put them in a roadmap. You can point to them and say, “Look, we shipped something.”

Funnels are less glamorous.

Funnels ask harder questions.

  • Where exactly does the handoff break?
  • How long does each step take?
  • What percent of people make it through?
  • What percent disappear in the middle?
  • What part is manual but undocumented?
  • What part depends on one person remembering to do the next thing?
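Those questions become concrete once you log a timestamp for every step of every case. Here is a minimal sketch of a funnel report built from such a log. The step names, fields, and data are hypothetical; the point is that conversion and wait time per handoff fall straight out of the timestamps.

```python
from datetime import datetime

# Hypothetical event log: one row per (case, step) with a timestamp.
events = [
    {"case": "A", "step": "intake",    "at": "2024-03-01"},
    {"case": "A", "step": "review",    "at": "2024-03-04"},
    {"case": "A", "step": "scheduled", "at": "2024-03-20"},
    {"case": "B", "step": "intake",    "at": "2024-03-02"},
    {"case": "B", "step": "review",    "at": "2024-03-15"},
    # Case B never reached "scheduled" -- it disappeared in the middle.
]

STEPS = ["intake", "review", "scheduled"]

def funnel_report(events):
    """For each handoff, report what percent of cases made it and the average wait."""
    by_case = {}
    for e in events:
        by_case.setdefault(e["case"], {})[e["step"]] = datetime.fromisoformat(e["at"])
    total = len(by_case)
    rows = []
    for prev, step in zip(STEPS, STEPS[1:]):
        reached = [c for c in by_case.values() if step in c]
        waits = [(c[step] - c[prev]).days for c in reached if prev in c]
        pct = 100 * len(reached) / total
        avg = sum(waits) / len(waits) if waits else 0.0
        rows.append((prev, step, pct, avg))
    return rows

for prev, step, pct, avg in funnel_report(events):
    print(f"{prev} -> {step}: {pct:.0f}% of cases, avg wait {avg:.1f} days")
```

Twenty lines of this tells you more about where trust will break than any feature demo.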

Those questions are not as fun.

But they are the real work.

If the funnel is weak, no amount of messaging polish will save it.

This is one reason I care so much about Pretty Good AI being practical. I do not want AI to become another layer of activity that looks impressive and hides the truth. I want it to help teams see the truth faster.

That means using AI to summarize process data, flag bottlenecks, identify drop-off patterns, and expose where humans think a workflow is working but patients are having a very different experience.

That is a much better use of AI than slapping a text bot on top of a system no one has measured.

What measurement actually gives you

People hear “measure first” and think it means slowing down.

I think it is the opposite.

Measurement is what earns you the right to move fast.

When you measure first, you learn:

  • How long the process really takes
  • Where expectations break
  • What should be automated
  • What should stay human
  • What message can go out confidently
  • What message should wait until a real milestone happens

That last one matters a lot.

A good automation is not just timely. It is truthful.

If the system cannot support the promise in the message, do not send the message.

That sounds simple, but it gets ignored all the time.
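One way to make it hard to ignore is to gate each automated message behind the milestone it claims, instead of behind a timer. A minimal sketch, with hypothetical record fields and message text:

```python
def send_update(record, notify):
    """Only text the patient once the milestone the message claims is real."""
    # Timer-based version (the trust-breaking one) would fire unconditionally:
    #   notify(record["phone"], "Good news -- your records are being processed!")
    #
    # Milestone-based version: the message goes out only when it is true.
    if record.get("records_processed_at") is not None:
        notify(record["phone"], "Your records have been processed.")
        return True
    return False  # Nothing truthful to say yet; stay silent.

sent = []
send_update({"phone": "555-0100", "records_processed_at": "2024-03-20"},
            lambda phone, msg: sent.append((phone, msg)))
send_update({"phone": "555-0101", "records_processed_at": None},
            lambda phone, msg: sent.append((phone, msg)))
# Only the first patient gets a text; the second waits for a real milestone.
```

The design choice is the whole point: the condition encodes what the message promises, so the automation cannot say something the system has not done.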

Automation amplifies what exists

This might be the simplest way to say it:

Automation is an amplifier.

  • If your process is clear, automation improves it.
  • If your process is messy, automation spreads the mess.
  • If your timing is right, automation builds trust.
  • If your timing is wrong, automation breaks trust at scale.

That is why I do not think the first question should be, “What can we automate?”

I think the first question should be, “What do we understand well enough to automate?”

That small change matters.

It forces you to earn the automation.

Where Pretty Good AI fits in

The reason I keep coming back to this at Pretty Good AI is that I think AI should make systems clearer before it makes them faster.

That is the right order.

Use AI to:

  • Map the workflow
  • Summarize real delays
  • Spot hidden bottlenecks
  • Compare expected timing to actual timing
  • Surface where communication and operations are out of sync
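The expected-versus-actual comparison in particular is easy to automate. A minimal sketch, with hypothetical step names and thresholds:

```python
# Expected timing is what your communication implies; actual timing is
# what measurement shows. Numbers here are hypothetical, in days.
EXPECTED_DAYS = {"intake -> review": 2, "review -> scheduled": 5}
ACTUAL_DAYS   = {"intake -> review": 8, "review -> scheduled": 16}

def out_of_sync(expected, actual, tolerance=1.5):
    """Flag handoffs that take much longer than the promise implies."""
    return [step for step, exp in expected.items()
            if actual.get(step, 0) > exp * tolerance]

print(out_of_sync(EXPECTED_DAYS, ACTUAL_DAYS))
```

Every flagged step is a place where an automated message would be writing a check the operation cannot cash.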

Then use automation once you know what is true.

That is how you get the upside without the damage.

AI should not just create more output. It should improve judgment.

For me, that is the whole game.

What I would tell a team right now

If your team is about to automate confirmations, follow-ups, reminders, or SMS updates, I would pause and ask a few questions first.

Do we know how long each stage actually takes?

Do we know where delays usually happen?

Do we know which milestones are real and which are just internal guesses?

Do we know what the patient is experiencing between each step?

Do we know that the message we want to automate is true often enough to trust?

If the answer is no, then I would not automate yet.

I would measure first.

That does not mean wait forever. It means doing the work in the right order.

5 key takeaways

1. If you automate before you measure, you scale confusion.
Automation does not fix uncertainty. It multiplies it.

2. Visibility comes before control.
You cannot speed up what you do not understand.

3. Funnels matter more than features.
A polished message cannot save a broken workflow.

4. Automation amplifies whatever already exists.
It makes good systems better and bad systems worse.

5. Measure first, then automate.
That order protects trust and makes the automation actually useful.

Final thoughts

I love speed.

I love building systems that remove manual work.

I love automation when it is done right.

But I trust measurement more.

Because measurement tells you what is real.

And in any business, especially in healthcare, reality matters more than momentum theater.

The best teams are not the ones that automate first.

They are the ones who see clearly first.

Then they automate with confidence.

That is how you build trust.
That is how you move faster.
And that is how you use AI in a way that actually helps people.

Measure first. Then automate. Always in that order.

See you next week!

-kevin



Founder Mode

Founder Mode is a weekly newsletter for builders—whether it’s startups, systems, or personal growth. It’s about finding your flow, balancing health, wealth, and productivity, and tackling challenges with focus and curiosity. Each week, you’ll gain actionable insights and fresh perspectives to help you think like a founder and build what matters most.
