Every January, people publish “best AI tools” lists that are obsolete by March. I’m not going to do that. Instead, I’ll share the specific stack I run my entire business on as of April 2026, explain why each piece is there, and—more importantly—describe the principles behind the choices so you can build your own stack even as specific tools change.
My tech stack has been stable for about eight months now. That stability is the point. I’m not chasing new tools. I’m running operations. The difference between a productive AI setup and a distracted one is whether you’re using tools or collecting them.
The Stack Overview
Here’s what runs my business today, by function:
Core AI Models: Anthropic's Claude models, across three tiers.
Opus 4.6 for complex analysis, strategic thinking, and long-form content — when the task requires genuine depth, I want the model that reasons most thoroughly.
Sonnet 4.6 for the bulk of daily work — email drafts, data processing, routine content, client communications. It is fast, reliable, and cost-effective.
Haiku 4.5 for high-volume automated tasks where speed and cost matter more than depth — processing hundreds of data entries, powering rapid classification steps in workflows, handling the repetitive grunt work inside my automation pipelines.
The reason for three tiers instead of one: each model represents a different trade-off between capability, speed, and cost. Using Opus for every task would be like driving a Ferrari to the supermarket; using Haiku for strategic analysis would be like riding a bicycle onto the highway. Matching the model to the task is the single most impactful cost optimization I have made.
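In code, this tier-matching reduces to a small routing table. A minimal sketch — the model ID strings and task categories are my illustrative assumptions, not official identifiers; check Anthropic's documentation for the current IDs:

```python
# Route each task category to the cheapest model tier that can handle it.
# Model ID strings below are assumed placeholders, not verified API IDs.
ROUTING = {
    "strategy": "claude-opus-4-6",    # deep analysis, long-form content
    "daily":    "claude-sonnet-4-6",  # drafts, routine processing
    "bulk":     "claude-haiku-4-5",   # high-volume classification
}

def pick_model(task_type: str) -> str:
    """Return the model tier for a task, defaulting to the mid tier."""
    return ROUTING.get(task_type, ROUTING["daily"])
```

The default-to-mid-tier fallback matters: an unclassified task should land on the workhorse model, not the most expensive one.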
Development: Claude Code for all software development and technical building. This is the biggest change from my stack a year ago. Claude Code is not an autocomplete tool — it is an agentic development environment that reads your codebase, understands the full context, writes code, runs it, debugs errors, and iterates. Multi-agent orchestration means it can work on frontend and backend changes in parallel. I build websites, data pipelines, custom business tools, and automation infrastructure with it. I am not a developer by training. Claude Code makes that distinction less relevant every month.
Workflow Orchestration: n8n, self-hosted on a European server. This connects my AI models to my business processes — receiving inputs, routing to the right model tier with the right context, managing output, and triggering the next step. This is the glue that turns individual AI capabilities into operational workflows. The n8n nodes call the Anthropic API with tool use, enabling agentic workflows where the AI decides what tools to call, processes the results, and takes the next action autonomously.
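The agentic loop the orchestrator drives is simple at its core: the model either requests a tool call or returns a final answer. A sketch with a stubbed model standing in for the real API call (the tool name, the stub's message shapes, and the client-profile tool are my assumptions for illustration; a production version would call the Anthropic Messages API with a `tools` parameter):

```python
# Minimal agentic loop: call model, run any requested tool, feed the
# result back, repeat until the model produces a final answer.

def lookup_client_profile(name: str) -> str:
    """Hypothetical tool: fetch a client profile from the knowledge base."""
    return f"profile for {name}: prefers concise weekly reports"

TOOLS = {"lookup_client_profile": lookup_client_profile}

def fake_model(messages):
    """Stub for the real API call: asks for a tool once, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"type": "tool_use", "name": "lookup_client_profile",
                "input": {"name": "Acme"}}
    return {"type": "final", "text": "Drafted concise weekly report for Acme."}

def run_agent(prompt: str) -> str:
    messages = [{"role": "user", "content": prompt}]
    while True:
        reply = fake_model(messages)
        if reply["type"] == "final":
            return reply["text"]
        # Model requested a tool: execute it and loop with the result.
        result = TOOLS[reply["name"]](**reply["input"])
        messages.append({"role": "tool", "content": result})
```

The loop is the whole trick: the AI decides which tool to call, the orchestrator executes it, and the result becomes context for the next decision.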
Knowledge Management: Obsidian with organized markdown files, connected to my AI tools through MCP (Model Context Protocol). MCP is the standard for connecting AI agents to external tools and data sources. Instead of manually copying context into each AI session, MCP lets the AI agent query my file system directly — pulling the right client profile, referencing the brand voice guide, checking the process documentation. The architectural reason this matters: it eliminates the context preparation overhead that used to eat fifteen to twenty minutes at the start of every complex session. With 1M token context windows, I can also load entire project folders directly when MCP access is not configured for a specific tool.
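As a concrete example, MCP-capable clients typically read their server list from a JSON config; pointing the reference filesystem server at the vault gives the agent read access to the markdown files. The server name and path below are placeholders — only `@modelcontextprotocol/server-filesystem` is the actual reference package:

```json
{
  "mcpServers": {
    "obsidian-vault": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/vault"]
    }
  }
}
```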
Output Processing: Tools for formatting, scheduling, and distributing the work my AI workflows produce. Blog publishing, email campaigns, client reports, social media — each channel has its formatting requirements. Structured outputs from the Anthropic API ensure that AI-generated content arrives in the exact format each channel needs, eliminating the manual reformatting step.
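The per-channel format requirement can be expressed as a small schema check. A sketch — in production the schema would be handed to the API as a structured-output definition, and the field names here are illustrative assumptions:

```python
# Each channel declares the fields it requires; output is validated
# before distribution. Field names are illustrative, not a real spec.
CHANNEL_SCHEMAS = {
    "blog":  {"title", "body_markdown", "tags"},
    "email": {"subject", "preheader", "body_html"},
}

def missing_fields(channel: str, payload: dict) -> list:
    """Return the required fields absent from the payload (empty = valid)."""
    return sorted(CHANNEL_SCHEMAS[channel] - payload.keys())
```

When the model is constrained to emit exactly these fields, the downstream publishing step becomes a pure pipe with no manual reformatting.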
Quality Control: Automated checking that runs on every piece of AI output before it reaches my review queue. Readability scoring, format compliance, banned-word filtering, link validation. These checks are themselves AI-powered — a lightweight Haiku pass that costs fractions of a cent per check and catches issues before I spend time on manual review.
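The deterministic half of these checks needs no model at all. A sketch of the pre-review pass — the banned-word list and the sentence-length threshold are illustrative assumptions, and the readability metric here is a crude average-sentence-length proxy, not a formal score:

```python
import re

# Illustrative banned-word list; swap in your own style guide's terms.
BANNED = {"synergy", "leverage", "game-changer"}

def qc_report(text: str) -> dict:
    """Run cheap deterministic checks before any human or model review."""
    words = re.findall(r"[A-Za-z'-]+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    avg_len = len(words) / max(len(sentences), 1)
    links = re.findall(r"https?://\S+", text)
    return {
        "banned_hits": sorted(BANNED & set(words)),
        "avg_sentence_words": round(avg_len, 1),  # crude readability proxy
        "readable": avg_len <= 25,                # flag long-winded drafts
        "link_count": len(links),                 # candidates for validation
    }
```

Anything this pass flags never reaches the review queue, which is exactly where the fractions-of-a-cent economics come from.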
Communication: Standard email, calendar, and video tools. Nothing AI-specific here — the point is integration with the rest of the stack, not AI features in the communication tools themselves.
This stack costs me roughly €270-€470 per month total (itemized below). That's the entire operational technology cost for a business that produces the output of a team.
Principle 1: Fewer Tools, Deeper Mastery
The biggest mistake I see founders make with AI tools is breadth without depth. They have subscriptions to 12 AI services, use each one occasionally, and are proficient in none. In 2026, this problem has a specific name: the AI wrapper trap. Most of those 12 services are thin interfaces on top of the same foundation models. You are paying twelve subscriptions for what amounts to one model’s capability with twelve different UIs.
My stack has seven functional layers. Each layer has one primary tool. I know that tool inside out — its strengths, weaknesses, quirks, and optimal configurations. When a new tool launches that promises to be “10x better” at one of these functions, I evaluate it against my existing tool’s proven performance. Nine times out of ten, the switching cost exceeds the marginal improvement.
This isn’t about being resistant to change. It’s about recognizing that proficiency with a good tool beats superficial use of a great tool. I talked about this in the context of deep practice versus mindless repetition — the same principle applies to technology adoption.
The exception: when a new tool offers a genuinely different capability (not just a better version of existing capability), evaluation makes sense. The shift from basic language models to agentic AI systems with tool use and multi-agent orchestration was that kind of inflection — it changed what was architecturally possible. A marginal improvement in writing quality is not.
Principle 2: Integration Beats Features
A mediocre tool that integrates smoothly with your workflow beats an excellent tool that sits in isolation. I’ve repeatedly chosen less powerful options because they connected natively with the rest of my stack.
My workflow orchestration layer is the most important piece of the stack precisely because it’s the integration layer. It connects my AI models to my knowledge base, routes outputs to the right processing tools, and feeds results back into the system. Without it, I’d have a collection of tools. With it, I have an operation.
MCP is the integration standard that made this dramatically easier in 2026. Before MCP, connecting each AI tool to each data source required custom API work — a different integration for every combination. MCP standardizes the protocol: any AI tool that supports MCP can connect to any data source that exposes an MCP interface. One standard, universal connectivity. The practical impact: I added a new data source to my stack last month. Total integration time: twenty minutes. Before MCP, the same integration would have taken a day of API work.
When evaluating any tool, I ask: “How does this connect to what I already run?” If the answer is “manual copy-paste” or “custom API work with no MCP support,” the tool needs to be substantially better than alternatives to justify the friction.
This is especially relevant for DACH-market founders. Many European business tools — banking, invoicing, legal document management — have limited integration options compared to US equivalents. Building a stack that actually connects requires deliberate choices and sometimes accepting that the globally-popular option won’t work as well as a European-focused alternative.
Principle 3: Data Architecture Matters More Than Model Choice
Here’s something the AI tool discourse consistently misses: which model you use matters far less than how you organize the data you feed it. I have said this in interviews: if you have no skills and AI, you get 10x better. If you have some skills and AI, you get 100x better. If you’re an expert with AI, you’re basically unbeatable. The same applies to data — bad data in, bad output out, regardless of how sophisticated the model is.
My Knowledge Management layer is where I’ve invested the most time and thought. It contains:
Client configurations: Detailed profiles for each consulting client and editorial client, including their voice, preferences, history, and strategic context.
Content library: Every piece I’ve published, indexed by topic, angle, and performance. This prevents duplication and enables intelligent cross-referencing.
Research repository: Organized collections of sources, findings, and analyses by subject area. When a new project requires research, relevant existing work is surfaced automatically.
Process documentation: Every workflow, every template, every checklist. My AI agents reference these to maintain consistency across operations.
Quality benchmarks: Examples of excellent output for each content type and function. Agents compare their output against these benchmarks as a quality check.
This knowledge architecture is what makes a generic AI model perform like a custom-built solution. The model is general; the data makes it specific. Founders who switch models frequently while neglecting their data architecture are optimizing the wrong variable.
I’ve organized this around the same principles I use when I help clients build knowledge systems — the same approach I detailed in my writing about building AI workflows that replace departments.
Principle 4: Separate Experimentation from Production
I maintain two distinct environments: a production stack (stable, tested, reliable) and an experimentation sandbox (where I try new tools, test new configurations, and break things).
The production stack doesn’t change without a formal evaluation:
- New tool or configuration is tested in the sandbox for at least two weeks
- Output quality is compared against the current production tool
- Integration requirements are assessed
- Migration plan is documented
- Only then does it enter production
The sandbox changes constantly. I try new models the week they launch. I test new workflow tools. I experiment with different agent configurations and multi-agent architectures. But none of this affects my client work or my publishing operation until it’s proven.
This separation is what lets me stay current without destabilizing my business. Founders who experiment in production — switching AI providers mid-project, trying new tools on client deliverables — create unnecessary risk and inconsistency.
The Specific Budget Breakdown
Since transparency matters, here’s what I spend monthly:
- Claude Pro subscription (includes Opus 4.6, Sonnet 4.6, Haiku 4.5, Claude Code): €20
- Anthropic API (automated workflows, agentic pipelines, batch processing): €100-€200
- n8n self-hosted (European server): €20-€30
- Knowledge management (Obsidian is free, server storage): €10-€20
- Output processing and publishing tools: €50-€80
- QC automation tools: €20-€40
- Communication tools: €50-€80
Total: €270-€470/month
For context, a single part-time virtual assistant would cost more than my entire tech stack. And the stack operates 24/7 without vacation, sick days, or management overhead.
That said, this isn’t the minimum viable stack. You could start with a single Claude Pro subscription (€20/month) and manual workflow management. I built up to this stack over 18 months, adding layers as the business grew and the ROI justified each addition.
What’s Missing (And Why)
A few categories I deliberately don’t include:
AI meeting transcription/summarization. I’ve tested several and found that my meetings are more productive when I take notes manually. The act of deciding what to write down forces me to identify what matters in real time. Transcription gives me everything, which means I still need to do the judgment work of identifying what matters — but now after the meeting instead of during it.
AI-generated images/video. My content is text-based. When I need visuals, I use simple charts or screenshots. AI-generated imagery doesn’t add value to my work and introduces the risk of generic visual content that could have come from anyone. This may change as my content evolves, but right now it’s not in the stack.
Multiple AI providers. I consolidated to a single provider (Anthropic) after spending months paying for both Claude and GPT subscriptions. For my use cases, the quality differential between Claude and alternatives does not justify the cost and context-switching of maintaining multiple provider relationships. If your use case requires specific capabilities only available from another provider, add it. But audit whether you actually need it or just like having options.
Social media AI tools. I manage social media manually with basic scheduling tools. The AI-powered social media tools I’ve tested optimize for engagement metrics rather than audience value, which misaligns with my content strategy. Simple scheduling plus AI-drafted content through my primary model produces fewer but better social posts.
The point isn’t that these tools are bad. It’s that they don’t serve my specific operation. Every tool in your stack should earn its place by directly contributing to your workflows. Anything that doesn’t contribute is a distraction with a subscription fee.
Building Your Own Stack
If you’re starting fresh, here’s the sequence I’d recommend:
Month 1: Claude Pro subscription. Use it for everything — writing, analysis, brainstorming, coding with Claude Code. Learn its strengths and weaknesses through daily use. No additional tools — just you and the model.
Month 2-3: Add structured knowledge management. Organize your business context — voice guides, process documents, client information — so you can feed it to the AI consistently. Set up MCP connections where possible. In my experience, this step roughly doubles the quality of your outputs.
Month 4-5: Add workflow orchestration. Connect your AI model to your business processes through n8n or similar, using the Anthropic API with tool use for agentic workflows. Work flows automatically from input to output with your review at defined checkpoints.
Month 6+: Add specialized tools as needed — output processing, QC automation, additional model tiers for specific tasks. Each addition should address a demonstrated bottleneck, not an anticipated one.
Resist the urge to build the full stack from day one. Each layer should be proven before the next is added. The foundations — the subtraction audit applied to your tech stack — should be solid before you build higher.
Takeaways
- A complete AI business stack costs €270-€470/month and replaces operational capacity that would cost multiples of that in staffing.
- Fewer tools with deeper mastery beats many tools used superficially — watch especially for AI wrappers that duplicate capabilities you already have.
- Data architecture (how you organize the information your AI works with) matters more than model choice for output quality. MCP makes your data accessible to AI agents without manual context loading.
- Separate your experimentation sandbox from your production stack — test everything in isolation before it touches client work.
- Build your stack sequentially over 6+ months, with each layer addressing a demonstrated need rather than an anticipated one.