
TechTruth · 7 min read

TechTruth · Part 2 of 2

The Deck Says Agent. The Code Says Wrapper.

The wrapper epidemic didn't end — it just learned a new word.

  • TechTruth
  • AI
  • due diligence
  • agentic AI
  • agents
  • wrappers
  • VC
  • investing
  • ghost scan

In Part 1: The AI Wrapper Epidemic — What We Learned, I broke down the four patterns that never die in AI pitch decks — and why Europe makes governance and diligence sharper, not softer. The wrapper story didn’t end with better headlines; it moved.

But here’s what most of the “death of wrappers” commentary misses: the pattern isn’t going away. It’s migrating.

Same pattern, new slide

The wrapper epidemic of 2023–2025 was about chatbots and content generators — thin UIs on top of OpenAI or Anthropic APIs, sold as proprietary AI platforms. That wave is cresting. Founders, investors, and users have collectively wised up. The dot-com comparison isn’t subtle anymore: anyone can see that a pretty interface on top of GPT is not a company, just as a storefront on existing payment rails was not an e-commerce revolution.

But the next wave of wrappers is already here, and it’s dressed in new language: agentic AI.

Gartner projects that 40% of enterprise applications will embed AI agents by mid-2026, up from less than 5% in early 2025. That’s an enormous market opening. And into that opening are pouring hundreds of startups claiming to build “autonomous agent platforms,” “multi-agent orchestration,” and “agentic workflows” — many of which are, architecturally, the same thing we just saw: a thin layer on top of someone else’s intelligence, repackaged with buzzwords.

The tells are familiar. The agent “platform” that’s really just chained API calls with a scheduler. The multi-agent system that has no error recovery, no state management, no audit trail. The autonomous workflow that works beautifully in a demo and catastrophically in production. Gartner itself predicts that over 40% of agentic AI projects will be cancelled by 2027 due to escalating costs and unclear business value. RAND Corporation research shows AI projects fail at twice the rate of traditional IT — and agents, with their compounding error chains and opaque decision paths, are harder to get right, not easier.

What the decks say vs. what the code shows

This isn’t hypothetical. The agentic wrapper is already in the pitch decks landing in my inbox — and the tells are as recognizable as the ones we saw two years ago, just dressed in newer language.

“Autonomous agent platform” that’s really chained API calls with a scheduler. I’ve reviewed decks this quarter where the entire “agentic” capability is a sequence of LLM prompts fired in order, with a basic if/else router deciding which prompt goes next. No state management. No error recovery. No memory between sessions. Ask the founder what happens when step three fails and they stare at you. That’s not an agent — it’s a script with a language model in it.
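The anti-pattern above can be sketched in a few lines. This is a hypothetical illustration, not any specific startup's code; `call_llm` stands in for any hosted LLM API.

```python
# The "agent" that is really a script: prompts fired in order, an
# if/else "router" choosing the next prompt, and nothing else.

def call_llm(prompt: str) -> str:
    # Placeholder for a hosted LLM API call.
    return f"response to: {prompt}"

def run_workflow(task: str) -> str:
    research = call_llm(f"Research: {task}")
    # The entire "routing" logic: a plain conditional.
    if "response" in research:
        plan = call_llm(f"Plan based on: {research}")
    else:
        plan = call_llm(f"Retry research: {task}")
    # Step three. If this call fails, the whole run dies: no saved
    # state, no memory between sessions, nothing to resume from.
    return call_llm(f"Execute: {plan}")
```

Nothing here maintains state, recovers from errors, or remembers anything between runs — which is exactly the point.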

The multi-agent architecture slide with no governance layer. This is the new version of the ML-everywhere diagram. Six or seven agents drawn in nice boxes, arrows flowing between them, labels like “Research Agent,” “Planning Agent,” “Execution Agent.” But no audit trail. No escalation path. No explanation of who intervenes when Agent 4 contradicts Agent 2. No cost controls for compounding token usage. In production, multi-agent systems without governance don’t just fail — they fail expensively. Gartner calls this “agent washing”: rebranding existing chatbots or RPA tools as agents without delivering real agentic capability. Their data shows 62% of enterprises are experimenting with agentic AI, but only 14% have anything close to production-ready. That gap is a breeding ground for wrappers.
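For contrast, here is a minimal sketch of the governance layer those slides omit — names and structure are hypothetical, but the three missing pieces from above are all present: an append-only audit trail, a hard cost control on compounding token usage, and an escalation path to a human.

```python
import time

class GovernedOrchestrator:
    """Illustrative governance wrapper around a multi-agent run."""

    def __init__(self, token_budget: int):
        self.token_budget = token_budget
        self.tokens_used = 0
        self.audit_log = []  # append-only trail of every agent step

    def record(self, agent: str, action: str, tokens: int) -> None:
        # Audit trail: every step is logged before anything else happens.
        self.tokens_used += tokens
        self.audit_log.append({"ts": time.time(), "agent": agent,
                               "action": action, "tokens": tokens})
        if self.tokens_used > self.token_budget:
            # Cost control: halt the run instead of letting agents
            # compound token spend in a retry loop.
            raise RuntimeError("token budget exceeded; halting run")

    def escalate(self, reason: str) -> None:
        # Escalation path: when agents contradict each other,
        # a human decides — it is recorded, not papered over.
        self.audit_log.append({"ts": time.time(), "agent": "human",
                               "action": f"escalated: {reason}", "tokens": 0})
```

A pitch deck that draws seven agent boxes but cannot point to the equivalent of these thirty lines is drawing a diagram, not an architecture.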

The “fully autonomous” claim that falls apart in the logs. I’ve started asking for execution logs during technical DDs — not demo recordings, actual logs. In more cases than I’d like, the “autonomous” agent requires a human to intervene every three to four tasks. That’s not scaling software. That’s managing a remote workforce with extra steps. A healthy agentic system at Series A should show a human-in-the-loop rate below 2% for core workflows. Most of what I see is ten times that.
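The check itself is trivial once you have real logs. A sketch, with a hypothetical log format — the point is the metric, not the schema:

```python
# Compute the human-in-the-loop rate from execution logs and compare
# it to the ~2% bar for core workflows mentioned above.

def hitl_rate(log_entries: list[dict]) -> float:
    """Fraction of tasks that required a human intervention."""
    tasks = sum(1 for e in log_entries if e["type"] == "task")
    interventions = sum(1 for e in log_entries
                        if e["type"] == "human_intervention")
    return interventions / max(tasks, 1)

# An "autonomous" agent needing a human every three to four tasks:
logs = [{"type": "task"}] * 7 + [{"type": "human_intervention"}] * 2
print(f"HITL rate: {hitl_rate(logs):.0%}")  # far above the 2% bar
```

Demo recordings can't be run through this function. Logs can — which is why I ask for logs.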

The agent demo that costs more than the process it replaces. This one’s new and specific to agentic systems. Every autonomous task triggers multiple reasoning steps, tool calls, retries, and validations. Context windows balloon. Token costs compound. I’ve seen founders proudly demo an agent that automates a $50 task — and when you calculate the inference cost per run, it’s $60. The unit economics don’t just not work; they actively destroy value. And most teams aren’t even tracking this yet.
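The back-of-envelope math is worth making explicit. All numbers below are illustrative assumptions, not benchmarks, but they show how reasoning steps, ballooning context, and retries compound into a per-run cost that exceeds the $50 task:

```python
# Illustrative per-run inference cost for an agentic workflow.

def cost_per_run(steps: int, tokens_per_step: int,
                 retry_rate: float, price_per_1k_tokens: float) -> float:
    effective_steps = steps * (1 + retry_rate)   # retries compound steps
    total_tokens = effective_steps * tokens_per_step
    return total_tokens / 1000 * price_per_1k_tokens

task_value = 50.0  # the $50 task the agent automates
run_cost = cost_per_run(steps=40,            # reasoning + tool calls
                        tokens_per_step=30_000,  # ballooning context
                        retry_rate=0.25,
                        price_per_1k_tokens=0.04)
print(f"cost per run: ${run_cost:.2f}")  # prints: cost per run: $60.00
```

Sixty dollars of inference to automate fifty dollars of work. Few teams compute this before the demo; fewer still put it on a slide.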

The “we use MCP/A2A” slide that substitutes protocol adoption for product depth. Model Context Protocol and Agent-to-Agent are real, important standards — open protocols that let agents use tools and talk to each other. But listing them on a slide is not a moat, just as listing REST APIs on a slide in 2015 wasn’t a moat. The protocol is infrastructure. What you build on top of it, how you govern it, and what proprietary data flows through it — that’s where the value lives. Several of the decks I’ve reviewed this year use protocol adoption as a proxy for technical depth. It isn’t.

These aren’t edge cases. They’re the majority of what I see. And the pattern is accelerating — InnMind reports reviewing fifty decks a week where 90% look identical, and more than half of the latest Y Combinator batch involves AI agents in some form. The volume is high. The differentiation is low. The correction is predictable.

This is exactly why we’re building the next version of TechTruth — not just to verify AI claims in pitch decks, but to go deeper. For agentic claims specifically, we’re developing what we call a ghost scan: a non-invasive pass through the codebase and infrastructure that reveals how the system actually works under the hood. Not what the architecture slide says. Not what the demo shows. What the code does. Does the “agent” actually maintain state, or does it start from zero every run? Is there a real orchestration layer, or is it a linear prompt chain dressed up as multi-agent? Are there governance hooks, logging, escalation paths — or is it raw LLM calls with a retry loop? The answers to these questions are in the repo and the infra, not in the deck. We’re turning twenty-five years of technical DD pattern recognition into a tool that gives investors those answers systematically — before they write the check. Coming soon.

What actually survives

Two years ago, the question was: is this real AI or a wrapper? Today the question is: is this a real agent or a wrapper that learned a new word? The answer lives in the same place it always has — not in the deck, but in the system behind it.

I’ve been building AI systems since 2003, and the same lesson keeps repeating: what survives isn’t the model, the wrapper, or the agent. It’s the system around it.

The companies that make it through every hype cycle share the same properties. They own their data — not scraped from the public web, but proprietary, high-value, and hard to replicate. They operate deep in a specific vertical, with structural domain knowledge that generic tools lack. They integrate into customer workflows at the operational level, not the demo level, which creates stickiness that no chatbot or agent interface can match. And they build genuine infrastructure — monitoring, governance, feedback loops, human oversight — because they understand that AI in production is fundamentally different from AI in a notebook.

The ask

I’m not trying to dunk on founders — most are sincere, many are talented, and building real AI is genuinely hard. But I am asking us, as investors, to stop reading the deck and start reading the code. Not just for wrappers, but for whatever comes dressed as the next paradigm.

The wrapper epidemic taught us that “AI-powered” on a slide doesn’t mean anything without evidence. The agentic wave will teach us the same lesson, at higher stakes and faster speed. The companies worth backing are the ones where the founder can explain the data, the architecture holds up under scrutiny, humans stay meaningfully in the loop, and the technology doesn’t evaporate when you look behind the curtain.

We can do better than applauding wrappers — even when the deck says agent. And we should.


Bastiaan van de Rakt is a builder, investor, and AI auditor based in Amsterdam. He co-founded Deeploy (runtime AI governance) and Enjins (an AI and AI-infrastructure agency), invests through Why Commit Capital, and serves as Operating Partner at Volve Capital and Venture Partner at Aenu. He is now building TechTruth — an AI due diligence tool that turns twenty-five years of technical DD into a systematic first pass for investors, including ghost scans of the codebases and infrastructure behind agentic AI claims (in collaboration with Enjins). Coming soon at techtruth.ai. More at whycommit.com.