
When NOT to Use AI (The Judgment Call)

Felix Lenhard

I’m about to commit heresy for someone who writes extensively about AI in business. Here it is: I don’t use AI for roughly 30% of my work. Deliberately. And that 30% is what makes the other 70% actually valuable.

The AI conversation has become so one-sided—use AI for everything, AI makes everything better, if you’re not using AI you’re falling behind—that the nuance has been lost entirely. The nuance is this: AI is a tool with specific strengths and specific weaknesses, and knowing when NOT to use it is just as important as knowing when to use it.

This isn’t a contrarian take for attention. It’s a practical one based on two years of building AI-native operations and discovering, sometimes painfully, where AI helps and where it hurts.

The Situations Where I Choose No AI

Let me be concrete. Here are the specific categories of work where I consistently choose to work without AI assistance:

First-meeting client conversations. When I meet a new consulting client for the first time, I don’t prepare AI-generated talking points, I don’t have an AI summary of their company, and I don’t bring AI-drafted questions. I go in with basic research (done manually) and genuine curiosity. Why? Because first meetings are about reading the person—their energy, their unstated concerns, the things they emphasize and the things they avoid. An AI-prepared script would make me perform instead of listen.

Relationship-critical communications. When a client is upset, when a community member feels unheard, when a partner is reconsidering a deal—these moments require my full, unmediated attention. AI-drafted responses in emotionally charged situations have a distinctive quality: they’re technically appropriate but emotionally hollow. People sense it. I’ve learned to write these responses myself, even when it takes longer, because authenticity in difficult moments builds trust that efficiency never will.

Original framework development. The core ideas behind Subtract to Ship—the methodology, the frameworks, the diagnostic tools—were developed without AI assistance. Not because AI couldn’t help with the production of these ideas, but because the thinking process itself was the point. Using AI to shortcut the thinking would have produced derivative frameworks that synthesized existing ideas rather than generating new ones.

Personal creative work. My Late to the Table books on magic performance involved creative decisions—what to include, what voice to use, how to structure the material—that needed to come from my own aesthetic judgment. AI could have helped me produce more content faster, but the books wouldn’t have been mine in the way that matters.

Strategic decisions about my own business. Where to invest, which clients to pursue, when to pivot, what to build next—these decisions integrate so much tacit knowledge about my situation, my values, my risk tolerance, and my ambitions that AI input would be noise at best and misleading at worst.

I wrote about this indirectly in my piece on the AI productivity trap. The trap isn’t just about volume—it’s about applying AI to tasks where it dilutes quality rather than enhancing it.

The Three-Question Filter

When deciding whether to use AI for a specific task, I run it through three questions:

Question 1: Does this task primarily require production or judgment?

Production tasks—writing drafts, processing data, formatting documents, researching facts—are AI territory. Judgment tasks—deciding strategy, evaluating quality, navigating relationships, making ethical calls—are human territory.

Most tasks contain both elements. The key is identifying the dominant one. Drafting a blog post is primarily production (the ideas are already decided). Deciding what to write about is primarily judgment (requires understanding my audience, my goals, and my authentic perspective).

Question 2: What’s the cost of a subtle error?

AI makes subtle errors. Not obvious ones—those are easy to catch. Subtle ones: a slightly wrong tone, a technically correct but misleading statistic, an example that’s accurate but inappropriate for the context, a recommendation that’s sound in general but wrong for this specific situation.

If the cost of such an error is low—a blog post that needs a correction, an internal document that can be revised—use AI confidently. If the cost is high—a client relationship damaged, a legal document with implications, a public statement that can’t be retracted—either skip AI or build in multiple layers of human review.

Question 3: Is the process itself valuable, or only the output?

Sometimes the thinking process generates insights that the output alone doesn’t capture. When I develop strategy for my business, the act of thinking through the options—sitting with uncertainty, weighing tradeoffs, noticing my own reactions to different paths—produces understanding that a strategy document doesn’t contain.

If the process itself has value (learning, relationship-building, creative development), doing it yourself is better than doing it faster with AI. If only the output matters (formatting, data processing, routine communication), AI is the obvious choice.

This filter has become instinctive for me, but when I first developed it, I literally asked these three questions before starting any task. It sounds tedious, but it takes only about ten seconds and prevents the far more costly mistake of using AI where it doesn’t belong.
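To make the filter concrete, here is a minimal sketch of the three questions as a decision function. This is my own illustration, not a tool from the article—the field names, string values, and ordering are assumptions; in practice each answer is a judgment call, not a boolean.

```python
from dataclasses import dataclass

@dataclass
class Task:
    """Illustrative task descriptor for the three-question filter."""
    dominant_mode: str      # Q1: "production" or "judgment"
    subtle_error_cost: str  # Q2: "low" or "high"
    process_valuable: bool  # Q3: does doing it yourself build skill or insight?

def ai_recommendation(task: Task) -> str:
    """Apply the three questions in order; any 'human' answer wins."""
    if task.dominant_mode == "judgment":
        return "human-only"                    # Q1: judgment tasks stay human
    if task.process_valuable:
        return "human-only"                    # Q3: the thinking is the point
    if task.subtle_error_cost == "high":
        return "AI with layered human review"  # Q2: hedge high-stakes output
    return "AI-assisted"                       # routine production work

blog_draft = Task("production", "low", False)
client_apology = Task("judgment", "high", True)
print(ai_recommendation(blog_draft))      # AI-assisted
print(ai_recommendation(client_apology))  # human-only
```

The ordering encodes the article’s priorities: judgment and process value rule out AI entirely, while high error cost only adds review rather than removing AI.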

Where AI Makes Things Worse

Beyond the situations where I choose to skip AI, there are tasks where AI actively degrades quality. These are worth flagging specifically:

Humor and personality. AI-generated humor is recognizably AI-generated humor. It’s technically structured like a joke but lacks the timing, specificity, and personality that makes humor work. When I need humor in my content or presentations, I write it myself. Every time.

Cultural nuance in the DACH market. Operating in Austria, I deal with cultural contexts that AI handles poorly. The difference between how you pitch to a Viennese investor versus a Grazer one. The unwritten rules of Austrian business networking. The specific way Austrians communicate indirectly about concerns. AI either misses these entirely or applies stereotypes that are worse than missing them.

I’ve covered some of these cultural dynamics in my piece about starting a business in Austria, and they’re exactly the kind of tacit knowledge that resists AI assistance.

Ethical gray areas. When a business decision involves competing values—profit versus principle, speed versus thoroughness, growth versus sustainability—AI tends to split the difference in a way that satisfies nobody. These decisions require personal values, lived experience, and the willingness to own the consequences. Outsourcing them to AI is not delegation; it’s abdication.

Creative work at the edge of your ability. The work that stretches you—the writing that’s harder than what you’ve done before, the strategy that pushes into unfamiliar territory, the presentation that demands more than your usual level—this is where growth happens. Using AI to smooth over the difficulty also eliminates the growth. I’d rather produce something imperfect that stretched me than something polished that didn’t.

This connects to what I’ve explored in the context of magic and performance. Performers who rely on technical aids instead of developing their own skills plateau fast. The same applies to knowledge workers who over-rely on AI.

The False Efficiency Trap

The most insidious argument for universal AI use is efficiency. “Why would you spend two hours on something AI can do in ten minutes?” Here’s why:

Because some two-hour processes are investments, not costs. When I spend two hours thinking through a strategic decision without AI assistance, I’m not being inefficient. I’m developing the strategic judgment that makes all my other decisions better. The two hours invested in thinking pay dividends across every decision I make for the next month.

Because signal quality matters. AI produces output at a consistent medium quality. For most tasks, medium quality is fine—better than fine, actually, because the alternative was often lower quality done manually under time pressure. But for your highest-stakes work, medium quality isn’t enough. The extra time to produce something excellent—manually, with full human attention—is worth it for the pieces that represent you most directly.

Because relationships run on authenticity. I could use AI to maintain five times as many professional relationships—more emails, more follow-ups, more personalized check-ins. But the people who matter most to my business would notice the difference. They’d get more communication from me and feel less connected. Volume isn’t intimacy. Authenticity at lower volume beats efficiency at higher volume every time.

Because your audience knows. Readers, clients, and community members are getting increasingly good at detecting AI-mediated communication. Not the fact that AI was involved—they don’t care about that. But the absence of genuine human presence in the final product. Content that’s been AI-produced and lightly edited has a sameness to it. Deep practice in writing means developing your own distinct voice, which only happens when you write some things entirely yourself.

The Practical Framework

Here’s how I allocate my work between AI-assisted and AI-free:

Always AI-assisted: Data processing, research synthesis, first drafts of routine content, administrative tasks, formatting, translation, scheduling, financial modeling, competitive analysis.

Sometimes AI-assisted (depending on stakes): Client deliverables, public content, presentations, proposals, community communications.

Never AI-assisted: First client meetings, strategic decisions, original creative work, relationship-critical communications, ethical decisions, humor and personality.

This allocation means roughly 70% AI-assisted, 30% human-only. The 70% gets done faster and often better than I could do manually. The 30% gets done with the full weight of my experience, personality, and judgment—qualities that AI can’t replicate and that my audience values most.

The ratio isn’t fixed. It shifts based on what I’m working on. During a book-writing phase, the human-only portion grows to 40-50% because the creative work demands it. During an operations-heavy period, the AI-assisted portion grows to 80%. The framework is a guide, not a rule.
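If you want to track your own allocation, the framework above could be encoded as a simple lookup. The categories are from the article; the dictionary structure, task strings, and default behavior are my own sketch.

```python
# Illustrative encoding of the AI-assisted / human-only allocation.
ALLOCATION = {
    "always_ai": [
        "data processing", "research synthesis", "routine first drafts",
        "administrative tasks", "formatting", "translation",
        "scheduling", "financial modeling", "competitive analysis",
    ],
    "stakes_dependent": [
        "client deliverables", "public content", "presentations",
        "proposals", "community communications",
    ],
    "never_ai": [
        "first client meetings", "strategic decisions",
        "original creative work", "relationship-critical communications",
        "ethical decisions", "humor and personality",
    ],
}

def bucket(task: str) -> str:
    """Return which allocation bucket a task falls into."""
    for name, tasks in ALLOCATION.items():
        if task in tasks:
            return name
    # Unlisted tasks default to case-by-case judgment, matching the
    # article's point that the framework is a guide, not a rule.
    return "stakes_dependent"
```

The useful part of the exercise is not the lookup itself but being forced to name a bucket for each recurring task, which surfaces the ones you have never consciously decided about.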

How to Develop Your Own Judgment

If you’re building an AI-integrated practice and wondering where to draw the line, here’s my suggestion: err on the side of using AI for three months. Use it for everything. Pay attention to where the results feel wrong—where the output is technically fine but something’s off. Those are your judgment zones.

Then deliberately pull AI out of those zones and compare. Work without AI for the tasks that felt off. If the results improve—in quality, in authenticity, in your own satisfaction—you’ve found a boundary worth maintaining.

This is an ongoing calibration, not a one-time decision. As AI improves, some of my current human-only zones may become AI-assisted zones. And as my work evolves, new human-only zones will emerge. The important thing isn’t getting the boundary exactly right. It’s having the awareness that a boundary exists and the discipline to maintain it.

Takeaways

  1. Knowing when NOT to use AI is as important as knowing when to use it—roughly 30% of high-value work benefits from human-only execution.
  2. Use the three-question filter: Is this primarily production or judgment? What’s the cost of a subtle error? Is the process itself valuable, or only the output?
  3. AI actively degrades quality in humor, cultural nuance, ethical gray areas, and creative work at the edge of your ability.
  4. Some “inefficient” human processes are investments in judgment, relationships, and growth—not costs to be optimized away.
  5. Start by using AI for everything, then pay attention to where results feel wrong—those instincts reveal your personal boundaries between AI-assisted and human-only work.
