When AI Agents Go Rogue
Welcome back to Founder Mode!
In this episode, Jason and I did something a little different. No guest. No big prep doc. Just two founders talking through what we are seeing right now with AI agents, chief-of-staff tools, and the shift from chatbots to systems that actually go do work.
We talked about why AI feels more broken when it takes action, why the right mental model is a junior employee and not magic software, and why communication may matter more than ever.
This one felt raw, current, and very real.
Let’s get into it.
1. AI Feels More Broken When It Acts
One of the biggest changes right now is that AI is moving from answering questions to doing tasks.
That sounds great until it gets something wrong.
When a chatbot gives a bad answer, you can catch it in real time. You are still in the loop. You can correct it, re-prompt it, or ask a follow-up.
But when an agent goes off and does the task on its own, the mistake only shows up later.
That is what makes it feel worse.
As I said in the episode:
“When you're asking a question, you're able to kind of like correct it in real time. You don't notice the wrongness in the same way.”
That is the heart of it.
Wrong answers are annoying.
Wrong actions are expensive.
2. The Right Mental Model Is a Junior Employee
I think a lot of people are expecting too much from these systems too fast.
They treat the AI like it should already be perfect. But that is not how this works.
The better way to think about it is like a new junior employee or intern.
You give them a task. They try. You review the work. You coach them. You tighten the process. You add guardrails. Then you try again.
That is a much healthier expectation.
It also explains why so many people feel disappointed right now. They hired the AI and expected a senior operator. What they actually got was a fast, eager junior teammate that still needs direction.
That does not mean the tools are bad.
It means the management model has changed.
3. People Going Deep on AI Are Working More, Not Less
This part is funny, but it is also true.
A lot of us got into AI because we thought it would save time. In some cases, it does. But if you are really going deep on it, it often leads to more work, not less.
You prompt again. You tweak the system. You test a different model. You add another workflow. You fix the memory. You refine the output. You chase the last 10 percent.
There is a dopamine loop here that is very real.
It starts to feel like a game where every little improvement unlocks another level.
That can be exciting.
It can also eat up your whole evening.
If you are not careful, AI becomes less like an assistant and more like a machine that keeps pulling you back in for one more round.
4. Communication Is Becoming a Bigger Advantage
This may be the biggest non-technical takeaway from the episode.
As AI gets better, communication gets more valuable.
You still need to explain what you want clearly. You still need to review what came back. You still need to tell a human teammate, a customer, or a hiring manager what happened and why.
That matters a lot.
We talked about reviewing candidates and seeing people struggle to explain the work they had built. That is a problem.
Because in this new world, it is not enough to say you used AI.
You have to explain:
- What you were trying to do
- What the tool did
- Where it failed
- What you changed
- Why the result is good
If you cannot explain the work, you probably do not understand it well enough.
That is true whether you are coding, writing, designing, or running ops.
5. The Founder’s Job Is Shifting Again
This episode reminded me that founder work keeps evolving.
A few years ago, the focus was getting the team, the tools, and the systems in place.
Now the job is becoming more about deciding:
- Where to trust AI
- Where to keep the human in the loop
- How to set the right expectations
- How to communicate what happened
- How to keep quality high while speed goes up
The winners are not going to be the founders who use the most AI.
They are going to be the founders who know how to use it well.
That means taste still matters. Judgment still matters. Communication still matters.
And maybe more than anything, restraint matters.
Just because the agent can do something does not mean it should.
“The right mental model is really like, think of this as a new junior employee.”
That framing changes everything.
It lowers the frustration.
It raises the right questions.
And it gives you a much more useful way to build with these tools.
5 Key Takeaways
- AI agents feel more broken than chatbots because the mistake shows up after the task is already done.
- The best mental model today is a junior employee, not a flawless machine.
- Founders who go deep on AI often end up working more, not less, because of the constant tuning loop.
- Communication is becoming a bigger differentiator than ever because you still have to explain the work.
- The future is not just about using AI. It is about knowing when to trust it, when to review it, and how to work with it well.
Final Thoughts
This episode felt close to home because it captured what a lot of us are experiencing in real time.
The tools are powerful.
They are also messy.
They save time in one place and create new work in another.
They can make you faster, but only if you know how to guide them.
That is why I keep coming back to the same idea.
This is not magic.
This is management.
You are not just prompting anymore. You are assigning work, reviewing outcomes, correcting mistakes, and building systems around a new kind of teammate.
That is the shift.
And the founders who learn that fastest are going to have a real edge.
If this changed how you think about AI agents, work, or the future of building, share it with someone who is trying to figure it out too.
🎧 Listen to Episode 50 here:
This podcast builds on the Founder Mode newsletter.
Let’s build.
-kevin