Why Your AI Is Breaking and How to Fix It


AI Doesn’t Fail. Your Process Does.

Welcome back to Founder Mode!

Over the past month, I have had the same conversation again and again.

“The AI messed up.”

Every time I hear that, I pause. Because in most cases, the AI did not mess up. It followed the rules exactly as they were written. The real problem was buried deeper.

At Pretty Good AI, we have learned this the hard way. Most AI failures happen slowly. They are not dramatic crashes. They show up as wrong bookings, strange routing decisions, or small errors that make teams uneasy.

And almost every time, when we trace it back, the issue is not intelligence. It is process.

AI does not break clean systems. It exposes broken ones.

The Location Routing Disaster

One of our early deployments looked perfect on paper.

Multi-location clinic. High call volume. Overwhelmed front desk. Clear opportunity for automation.

We launched voice AI to route calls and schedule appointments.

Within days, patients started getting booked at the wrong locations.

The team immediately blamed the AI.

So we pulled the logs and looked at the inputs.

Here is what we found. Several doctors worked across multiple clinics. Their schedules shifted during the week. None of this was documented in a structured way. The front desk staff knew it from memory.

The AI had no memory. It only had the calendar data provided.

The system was incomplete. The AI followed it perfectly.

Once we mapped the real location-aware scheduling rules, the problem disappeared.
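To make "mapped the real rules" concrete, here is a minimal sketch of what a location-aware scheduling rule can look like once it is actually written down. The doctor names, clinics, and schedule structure are illustrative, not the clinic's real data:

```python
from datetime import date
from typing import Optional

# Illustrative only: which clinic each doctor works at on each weekday
# (Monday = 0). Before the fix, this table lived in the front desk
# staff's heads, not in any system the AI could see.
DOCTOR_LOCATIONS = {
    "Dr. Alvarez": {0: "Downtown", 1: "Downtown", 2: "Northside", 3: "Northside", 4: "Downtown"},
    "Dr. Chen":    {0: "Northside", 1: "Westgate", 2: "Westgate", 3: "Downtown", 4: "Northside"},
}

def location_for(doctor: str, day: date) -> Optional[str]:
    """Return the clinic a doctor works at on a given date, or None if off."""
    return DOCTOR_LOCATIONS.get(doctor, {}).get(day.weekday())

def can_book(doctor: str, day: date, requested_location: str) -> bool:
    """A booking is valid only if the doctor is at the requested clinic that day."""
    return location_for(doctor, day) == requested_location
```

Nothing in this sketch is clever. That is the point: once the rule exists in structured form, the AI routes correctly without getting any "smarter."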

The AI was not confused. The workflow was.

Booking Perfectly From Bad Data

Another case taught us the same lesson in a different way.

The AI was booking patients into time slots exactly according to the defined rules.

The issue was that the appointment types had been mislabeled years ago. Humans had quietly compensated for it over time. They knew which slots were actually flexible.

The AI did not compensate. It followed the labels.

The team asked us to make the AI smarter.

But smarter would not fix bad labels. Clean data would.

We spent two weeks cleaning appointment definitions and slot rules. After that, the system worked beautifully.
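Much of that two weeks was simple audit work: writing down the rules the staff were compensating for, then flagging every slot definition that violated them. A toy sketch of that kind of audit, with hypothetical field names rather than the clinic's real schema:

```python
# Illustrative data-audit sketch: flag appointment slots whose label
# contradicts how the slot is actually configured. Field names and
# values are hypothetical.
slots = [
    {"id": 101, "label": "new_patient", "duration_min": 15, "flexible": True},
    {"id": 102, "label": "follow_up",   "duration_min": 30, "flexible": False},
    {"id": 103, "label": "follow_up",   "duration_min": 15, "flexible": True},
]

def audit(slots):
    """Apply the rules humans knew but never wrote down: new-patient
    visits need at least 30 minutes and should not be marked flexible."""
    issues = []
    for s in slots:
        if s["label"] == "new_patient" and s["duration_min"] < 30:
            issues.append((s["id"], "new_patient slot shorter than 30 min"))
        if s["label"] == "new_patient" and s["flexible"]:
            issues.append((s["id"], "new_patient slot marked flexible"))
    return issues

for slot_id, problem in audit(slots):
    print(slot_id, problem)
```

The hard part was not the code. It was getting the team to state the unwritten rules out loud so they could be checked at all.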

Again, the AI was doing its job. The database was not.

“Make the AI Smarter”

This is the most common request we hear.

“Can we just make the AI smarter?”

Sometimes the answer is yes. But most of the time, the real issue is clarity.

AI cannot fix what is not defined. It cannot guess unwritten rules. It cannot repair decades of undocumented workarounds.

Eighty percent of a successful AI deployment is configuration.

Mapping workflows.
Clarifying ownership.
Documenting rules.
Cleaning inputs.
Aligning teams.

The model is often the easiest part.

This surprises people. They expect the technical layer to be the hard part. In reality, the operational layer is where the real work lives.

AI Is a Mirror

Here is something I tell our clients now.

AI reflects the system it is plugged into.

If the system is clear, AI looks brilliant.
If the system is messy, AI looks broken.

Teams often want magic. They want the AI to smooth over chaos and compensate for poor documentation.

That is not how this works.

Clean inputs beat clever prompts every time.

You can spend hours refining your prompt engineering. But if your scheduling logic is wrong or your data is inconsistent, the output will still be flawed.

At Pretty Good AI, we have shifted our mindset. When something breaks, we first audit the process before touching the model.

That change alone has saved us weeks of unnecessary debugging.

The Human Part of This

There is something deeper here.

Blaming the AI is easier than admitting the workflow is unclear.

It feels less personal to say the model failed than to say our internal process is broken.

But the teams that succeed are the ones willing to look at themselves honestly.

We have had moments at PGA where we thought the system was perfect. Then production exposed a blind spot.

It is humbling. But it is also freeing.

Once you accept that the AI is not the problem, you gain control. You can fix the documentation. You can rewrite rules. You can clean data.

You cannot argue with reality. But you can improve it.

You Can’t Automate Chaos

This is the core lesson.

You cannot automate chaos.

If your workflows depend on tribal knowledge, the AI will expose it.
If your database is inconsistent, the AI will scale the inconsistency.
If your scheduling rules are vague, the AI will execute that vagueness perfectly.

AI is not magic. It is leverage.

And leverage amplifies whatever you give it.

5 Key Takeaways

  1. AI reflects your process. It does not repair it.
  2. Location-aware scheduling and clean data are basic requirements, not advanced features.
  3. Eighty percent of AI success comes from configuration and clarity.
  4. Clean inputs matter more than clever prompts.
  5. When AI fails, audit the workflow before blaming the model.

Final Thoughts

Building Pretty Good AI has made me rethink what “technical excellence” really means.

It is not just about model performance. It is about operational clarity. It is about making sure the system underneath the AI is strong enough to support it.

The teams that win with AI are not the ones chasing the most advanced models. They are the ones willing to document, align, and clean up the messy middle.

If your AI rollout feels shaky, do not panic. Do not assume the technology is broken.

Ask a harder question first.

Where is our process unclear?

Fix that, and the AI will usually fix itself.

See you next week!

-kevin
