Rules Nobody Questions
Welcome back to Founder Mode.
There is something I have started to notice more and more as we build Pretty Good AI.
Most businesses run on rules that nobody questions.
Not because they are right. But because they have been there for a long time.
They live in conversations, habits, and assumptions. They get passed down from one person to another. Over time, they harden into policy.
Then AI shows up.
And suddenly, those rules have to be written down.
That is when things get interesting.
The Moment Everything Has to Be Defined
When you build AI logic, you cannot rely on instinct.
You have to define what a new patient is.
You have to define routing rules.
You have to define how scheduling works.
You have to define edge cases.
At Pretty Good AI, this is where we see the biggest shift happen.
Teams start writing down their workflows for the first time.
And almost immediately, contradictions appear.
The Three-Year Rule
One of the most common examples we see is the three-year rule.
On paper, it sounds simple. If a patient has not been seen in three years, they are considered new.
But when we ask different people on the same team, we get different answers.
Some follow billing definitions.
Some follow internal scheduling habits.
Some just do what feels right in the moment.
The AI cannot “feel it out.” It needs one rule.
So we ask the team to pick one.
That is when the tension shows up.
The rule was never clear to begin with.
Five Workflows for One Task
In another deployment, we mapped out a basic intake process.
We expected one workflow.
We found five.
Each staff member had their own version of how things should be done.
All of them believed they were correct.
All of them were slightly different.
When everything lives in people’s heads, that works. Humans adapt. They fill in gaps. They correct each other.
AI does not do that.
It exposes the inconsistency.
The Ideal vs. the Actual
There is always a gap between how a team thinks the process works and how it actually works.
When we sit in meetings, we hear the ideal version.
When we look at transcripts, we see the real version.
Calls are routed differently than expected.
Appointments are booked outside the intended flow.
Exceptions are handled in ways nobody documented.
At Pretty Good AI, we trust the transcript.
The transcript is the truth.
It shows what actually happened, not what we think should have happened.
Assumptions Become Policy
Over time, small assumptions become hard rules.
Someone makes a decision once.
It works.
It gets repeated.
No one revisits it.
Years later, that assumption is still driving behavior.
But the context has changed.
The business has grown.
The team has changed.
The tools are different.
The rule stays the same.
AI forces you to look at those rules again.
And sometimes, you realize they no longer make sense.
The Honest System
This is where AI becomes powerful in a different way.
It is not just automation.
It is honesty.
AI does not hide inconsistency.
It does not smooth over unclear logic.
It does not adapt to unwritten rules.
It follows exactly what you tell it.
That makes it a mirror.
If the system is clear, the output is clear.
If the system is messy, the output is messy.
The AI is not wrong.
It is just honest.
Why This Matters
Most teams want AI to improve performance.
What it often does first is expose reality.
That can feel uncomfortable.
It challenges how things have always been done.
It forces conversations that were avoided.
It requires decisions that were delayed.
But this is where real progress happens.
At Pretty Good AI, we have learned to lean into this moment.
When contradictions show up, that is not a problem.
That is the opportunity.
5 Key Takeaways
- Most workflows run on assumptions, not clear rules.
- AI forces you to define what was previously implicit.
- Documentation exposes contradictions quickly.
- The transcript shows reality better than meetings do.
- The AI is not wrong. It is just reflecting the system.
Final Thoughts
Building Pretty Good AI has changed how I think about systems.
I used to believe that better technology would fix problems.
Now I believe clearer thinking does.
AI is just the tool that forces that clarity.
If your system feels shaky after introducing AI, that does not mean the technology is failing.
It means you are finally seeing the truth.
And once you see it, you have a choice.
Ignore it, or fix it.
The teams that choose to fix it move faster than everyone else.
See you on Friday!
-kevin