
The AI Intern Problem
So I was on a call with Jill yesterday, walking through her Poppy setup, and she said something that stopped me in my tracks: "I keep expecting AI to be this genius assistant, but it feels more like... an overeager intern who just wants to please me."
Holy shit. Yes.
I've been thinking about this for weeks now, watching founders get frustrated with AI, and Jill just crystallized what I've been seeing everywhere. We've been sold this myth that AI is supposed to be our brilliant right hand, our strategic partner, our creative genius. But here's what's actually happening: AI isn't your genius assistant. It's the intern that fills gaps to please, not to flag what's missing.
And once you understand this distinction, everything changes about how you work with AI. Everything.
The Discovery: Why AI Feels "Dumb" When You're Smart
Here's what I've been noticing in my work with ambitious founders. You know you're dealing with complex, nuanced problems. You have sophisticated thinking, deep expertise, years of pattern recognition. So when you prompt AI and get back something that feels... generic, obvious, or just plain wrong, you assume the AI is broken.
But what if the problem isn't the AI's intelligence? What if it's that we're asking an intern to do a CEO's job?
I've been learning to bear with the process, and instead of trying to get the AI to understand, I'm using the chat to get myself to understand first. This shift has been everything. Instead of getting frustrated when AI doesn't "get" my vision immediately, I started treating it like what it actually is: a really capable intern who needs clear direction, context, and feedback loops.
The breakthrough came when I realized that most AI frustration stems from what I call "entropy creep." You know that feeling when you're trying to explain something complex, and with each back-and-forth, the conversation gets muddier instead of clearer? That's entropy. And it happens because we're expecting AI to fill in gaps it can't see.
Take Jill's situation. She's building sophisticated marketing systems, but every time she tried to get AI to help with copy, she'd end up in these endless correction loops. The AI would generate something, she'd give feedback, it would overcorrect, she'd clarify, it would miss the point entirely. Sound familiar?
The problem wasn't the AI's capability. It was that she was mixing copy clarity with design automation. Two completely different processes that need to be separated if you want to cut those endless correction loops.
When I showed her how to separate these workflows in Poppy, something clicked. Instead of trying to get one AI conversation to handle strategy, copy, design feedback, and execution all at once, we created distinct spaces for each phase. The AI could be brilliant at what it's actually good at, instead of failing at what we wished it could do.
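If it helps to see that separation concretely, here's a minimal sketch of the idea in code. I'm using the OpenAI Python SDK as a stand-in, and the phase names and prompts are made up for illustration; this is my own toy version of the pattern, not how Poppy is built under the hood.

```python
# A minimal sketch of "separate workflows": one isolated conversation
# per phase, instead of one tangled thread trying to do everything.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Each phase gets its own system prompt and its own message history,
# so feedback about copy never bleeds into the design conversation.
PHASES = {
    "strategy": "You advise on positioning and messaging strategy only.",
    "copy": "You write and revise marketing copy. Do not discuss layout.",
    "design_feedback": "You critique visual layout. Do not rewrite copy.",
}

histories = {name: [{"role": "system", "content": prompt}]
             for name, prompt in PHASES.items()}

def ask(phase: str, user_message: str) -> str:
    """Send a message inside a single phase's isolated context."""
    history = histories[phase]
    history.append({"role": "user", "content": user_message})
    reply = client.chat.completions.create(
        model="gpt-4o",  # any chat model works for this sketch
        messages=history,
    )
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer
```

The library doesn't matter. What matters is that a correction in the copy thread never muddies the design thread, which is the entropy creep fix in miniature.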

The Transformation: From Frustrated User to AI Architect
Here's where it gets interesting. Once Jill stopped expecting AI to read her mind and started treating it like a sophisticated intern, her whole relationship with the technology shifted. She went from feeling like AI was making her work harder to building systems that actually amplified her expertise.
But this isn't just about mindset. There are specific, tactical things that change when you make this shift.
First, you start building knowledge bases instead of hoping AI will magically understand your context. In Poppy, this means your knowledge base becomes the AI's knowledge base. Instead of re-explaining your brand voice, your client history, your strategic frameworks every single time, you train the AI visually through groups, examples, and connected chats.
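In code terms, the idea looks something like this. Everything below is a hedged sketch with made-up file names and fields, not Poppy's actual format; it just shows context being stored once and folded into every prompt automatically.

```python
# A rough sketch of "your knowledge base becomes the AI's knowledge base".
# The file name and fields are illustrative, not Poppy's real structure.
import json
from pathlib import Path

# Write a tiny example knowledge base so the sketch runs end to end.
Path("knowledge_base.json").write_text(json.dumps({
    "brand_voice": "direct, warm, no jargon",
    "priorities": "founder-led growth, systems over hacks",
    "voice_examples": ["Stop fighting the tool. Build with it."],
}))

def load_knowledge_base(path: str = "knowledge_base.json") -> dict:
    """Read brand voice, priorities, and writing samples stored once on disk."""
    return json.loads(Path(path).read_text())

def build_system_prompt(kb: dict) -> str:
    """Fold the stored context into a system prompt, so nothing gets re-explained."""
    samples = "\n".join(f"- {s}" for s in kb["voice_examples"])
    return (
        f"Brand voice: {kb['brand_voice']}\n"
        f"Strategic priorities: {kb['priorities']}\n"
        f"Writing samples that sound like us:\n{samples}"
    )

system_prompt = build_system_prompt(load_knowledge_base())
```

Once that prompt exists, every new chat starts already knowing your voice. It's the code equivalent of the intern's onboarding doc.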
I'm seeing this transformation with founders across different industries now. The shift from fighting AI to building with it is creating these breakthrough moments where suddenly everything clicks. Not generic AI outputs, but authentic content that sounds like them because the AI has been trained on their actual stories, their language patterns, their strategic priorities.
The difference is profound. Instead of fighting with AI to understand what you mean, you're building systems that already know. Instead of getting generic outputs that need heavy editing, you're getting first drafts that capture your actual thinking.
And here's something most people don't realize: this approach works because it mirrors how you'd actually train a human intern. You wouldn't expect someone new to immediately understand your business, your clients, your way of thinking. You'd give them context, examples, feedback loops. You'd build their understanding over time.
That's exactly what training in Poppy does. But instead of training one person, you're building systems that can scale your expertise across multiple projects, clients, and use cases.
The compound effect is wild. Jill isn't just getting better AI outputs now. She's thinking more systematically about her own processes. She's clearer about what she actually wants because she's had to articulate it for the AI. She's building reusable frameworks instead of starting from scratch every time.
The Invitation: What This Means for How You Work
So here's what I'm curious about: What would change in your work if you stopped expecting AI to be psychic and started treating it like the sophisticated intern it actually is?
Because here's what I'm seeing with founders who make this shift. They stop getting frustrated with AI and start getting excited about what becomes possible. They stop fighting the technology and start building with it. They stop trying to hack their way to better outputs and start creating systems that amplify their actual expertise.
The secret isn't better prompts. It's better architecture. And that's something most people don't know about yet. I'm kind of gatekeeping it right now, honestly, because the founders who understand this distinction are building such sophisticated systems that it feels almost unfair.
But maybe that's the point. Maybe the future belongs to the people who understand that AI isn't magic. It's a tool that becomes powerful when you know how to build with it properly.
If you're curious about what this looks like in practice, I'd love to show you how visual training works in Poppy. Not because I want to sell you something, but because I think you'd find it fascinating how different the experience becomes when you're working with AI that actually understands your context.
What do you think? Does the intern metaphor resonate with your AI experience? I'm genuinely curious about what you've been noticing in your own work with these tools.