Deeploy · 1 min read

5Q’s with Maarten Stolk, CEO of Deeploy

Deeploy — Jan 2024. Five questions on runtime AI governance, production ML, and why “human in the loop” still matters.


  • news
  • Deeploy
  • AI
  • interview

Short interview with Maarten Stolk, CEO of Deeploy, on what it takes to run machine learning responsibly in production — not only to ship models, but to keep them fair, explainable, and aligned with human oversight.

1. What problem is Deeploy solving first for customers?

Teams rarely fail on model accuracy in a notebook; they fail when governance, monitoring, and human review must work in real time. Deeploy focuses on that runtime layer so organisations can trust AI where it matters.

2. How do you think about “human in the loop” in 2024?

It is not a slogan — it is a design constraint. You need clear roles, audit trails, and interfaces for experts to intervene when the model drifts or when regulation requires an explanation, not only a score.

3. What is different about climate and regulated domains?

The cost of a wrong answer is asymmetric. You need stronger bias checks, documentation, and often slower release cycles — paired with faster detection when something breaks in production.

4. What should founders prioritise before scaling inference?

Observability first: logging, explainability hooks, and a path to rollback. If you cannot answer "why did this decision happen?", you are not ready to scale.
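As a minimal sketch of what "observability first" can mean in practice, the wrapper below logs each prediction with a request ID, model version, and an optional explanation, so a specific decision can be traced and the model rolled back. All names here (`predict_with_audit`, the `explain` hook) are illustrative assumptions, not Deeploy's actual API.

```python
import json
import time
import uuid

def predict_with_audit(model, features, model_version, log):
    """Run a prediction and record the who/what/why for later review.

    `model` is any callable; `log` is an append-only sink (here, a list).
    """
    record = {
        "request_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,  # enables rollback to a prior version
        "features": features,
    }
    score = model(features)
    record["score"] = score
    # Explainability hook: store per-feature contributions if the model
    # exposes an `explain` method (hypothetical interface).
    explain = getattr(model, "explain", None)
    record["explanation"] = explain(features) if callable(explain) else None
    log.append(json.dumps(record))
    return score
```

In a real deployment the log sink would be a durable store rather than a list, but the point stands: every decision leaves behind enough context to answer "why did this happen?" after the fact.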

5. What are you excited about next at Deeploy?

Deeper product integrations with MLOps stacks and teams that want governance as a default — not a spreadsheet after the fact.


Commit Capital is an investor in Deeploy. This piece reflects the original "5Q's" format on the site.