
Explainability: control vs complexity (Deeploy, 2021)

Deeploy on explainable AI: why accountability and transparency matter as ML grows more complex — from public-sector oversight and SyRI to insurers’ ethical-AI frameworks and the need for practical tooling.

  • news
  • Deeploy
  • XAI
  • AI governance
  • compliance
  • explainability

Published 10 March 2021 on Deeploy, this long-form piece frames explainability as a simple tension between control and complexity. It uses Penrose and Escher as a metaphor for human-readable structure, then grounds the argument in Dutch public-sector scrutiny (e.g. the Algemene Rekenkamer and the SyRI case), insurers' ethical-AI frameworks, and the day-to-day need for tools that keep models monitorable, explainable, and accountable in production.

The through-line matches what we still see in the market: regulation helps, but it is often operators and business owners who drive the strongest demand for governance that works in the real world.

This is why Commit Capital backs Deeploy: the team turns that demand into runtime AI governance, not slide decks.