[RADAR AI x Human] Dismantling Org Design: Who Should Own AI in the Enterprise?
April 2026

Speakers


Børge Obel
Chair of the Board at Zeal

Veronika Durgin
VP of Data at Saks
Summary
A practical discussion for data and business leaders deciding how to organize, govern, and scale AI without losing accountability or speed.
AI transformation increasingly looks less like “adding a model” and more like dismantling familiar workflows—sometimes even reporting lines—and rebuilding around new capabilities. The conversation spans success and “productive failure” stories, from Japan’s disciplined deployment of AI inside business processes to McDonald’s and Walmart rolling back flawed AI pilots and returning to iterate. Panelists debate the operating model question at the heart of enterprise adoption: if “anyone can build AI” now, how do you prevent duplication, security gaps, and uneven quality without creating a paralyzing governance committee?

A recurring answer is an AI ownership model where accountability sits with the business outcome owner, supported by a federated build approach close to the work, paired with centralized AI governance for standards on risk, ethics, security, and performance monitoring. The panel also explores AI as an organizational “agent” that reshapes teams and roles, and why human accountability still matters most in high-stakes settings like healthcare. Change management emerges as the unglamorous differentiator: showing concrete prototypes, integrating AI into existing tools, and measuring adoption at the user level, not just as a program metric.
Key Takeaways:
- Ownership of AI should follow ownership of outcomes: “whoever owns the business outcome… owns AI.”
- A federated build model is inevitable; centralized AI governance is still necessary for security, risk, and ethical standards.
- AI is increasingly treated as an “agent” in the division of labor—sometimes a coworker, sometimes an assistant, and potentially a manager in limited contexts.
- Human-in-the-loop governance is about accountability, not box-checking; high-stakes decisions require humans who can challenge AI outputs.
- Adoption improves when AI arrives inside existing workflows: “Don’t ask people to come to AI. AI should come to them.”
Detailed Sections
1) Who Owns AI? Outcome Ownership, Federated Execution, Centralized Guardrails
The session’s first fault line is the ownership question itself. The panel’s recurring answer ties ownership of AI to ownership of outcomes: “whoever owns the business outcome… owns AI.” On that view, building happens close to the work, in a federated model, while a named outcome owner stays accountable for what the AI actually delivers.
Yet decentralization brings its own hazards. When “literally, anyone can build AI,” the enterprise risks a proliferation of overlapping agents, inconsistent quality, and quiet security failures—what Veronika Durgin warns can quickly become “Shadow AI.” Her argument is not for a return to centralized control—she calls that “a futile attempt”—but for a federated or hub-and-spoke operating model with a light but serious coordinating layer: a coaching staff that helps teams run in the same direction without dictating every play. That “coach” metaphor is telling. The goal is to speed learning and prevent redundancy (“not solving the same problem in multiple different ways”) while avoiding the classic governance committee that becomes “a massive bottleneck.”
Kotu adds the non-negotiable counterweight: governance itself often must be centralized, precisely because scale makes risk harder to see. In a large enterprise, he notes, “hundreds and thousands of agents” may be deployed; someone must ensure they meet standards, perform reliably, and align with ethics and security requirements. The nuance is that centralization should apply to guardrails (risk, compliance, monitoring), not to ideation or day-to-day experimentation. The discussion is most compelling when it acknowledges the real-world messiness: a federated model is not a feel-good preference—it is emerging as the only workable structure once AI creation becomes broadly accessible.
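To make the “centralize the guardrails, federate the builds” split concrete, here is a minimal, entirely hypothetical sketch: any team can register any agent, and the central layer checks only cross-cutting standards. Every field and check name here (owner, risk tier, security review, monitoring) is an illustrative assumption chosen to mirror the panel’s categories, not a mechanism anyone on the panel described.

```python
from dataclasses import dataclass

# Hypothetical sketch: teams build whatever agents they need (federated),
# while a central registry enforces only cross-cutting standards
# (guardrails). All field and check names are illustrative assumptions.

@dataclass
class AgentRecord:
    name: str
    owner: str              # the business-outcome owner accountable for it
    risk_tier: str          # e.g. "low", "medium", "high"
    security_reviewed: bool
    monitored: bool         # reports performance metrics centrally

class AgentRegistry:
    def __init__(self) -> None:
        self._agents: dict[str, AgentRecord] = {}

    def register(self, agent: AgentRecord) -> None:
        # The registry never asks *what* the agent does, only whether
        # the guardrail standards are met.
        problems = []
        if not agent.owner:
            problems.append("no accountable owner named")
        if agent.risk_tier == "high" and not agent.security_reviewed:
            problems.append("high-risk agent lacks a security review")
        if not agent.monitored:
            problems.append("no performance monitoring attached")
        if problems:
            raise ValueError(f"{agent.name} rejected: {'; '.join(problems)}")
        self._agents[agent.name] = agent

# Example: a team registers its own agent; only the guardrails are checked.
registry = AgentRegistry()
registry.register(AgentRecord(
    name="invoice-triage",
    owner="accounts-payable process owner",
    risk_tier="low",
    security_reviewed=True,
    monitored=True,
))
```

The design choice worth noting is what the registry does not do: it never reviews an agent’s idea or implementation, which keeps the central layer a coach rather than the bottleneck committee the panel warns against.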
To hear how each panelist defines the boundary between “coaching” and “control,” the full session’s exchanges around governance versus innovation are worth the time.
2) AI as an Organizational Actor: Redesigning Teams Around a New “Agent”
Børge Obel approaches AI less as software and more as a participant in organizational design. His key move is conceptual: “now see AI as an agent… part of the division of labor.” That sounds abstract until you consider what it implies for team structure. If AI is an actor, then the organization isn’t merely automating tasks; it is changing how information flows, how decisions are escalated, and how coordination happens across units. Obel suggests AI can function as “a coworker… an assistant… [or] actually a boss,” referencing research on what happens when a superior is an AI system. The provocative framing isn’t meant as science fiction; it’s a way to force leaders to confront how authority and accountability might shift when AI shapes priorities, assigns work, or evaluates outcomes.
Still, the session is careful about contingency: not every AI fits every context, and not every human role should be redesigned the same way. Obel stresses fit—between “the type of AI system” and “the particular task and context.” That makes organizational design a matching problem rather than a blanket transformation mandate. Different industries and functions will land on different equilibria, and even within a company, some areas can tolerate automation and error more than others.
Veronika Durgin grounds the idea of “team redesign” back in human realities. For her, AI hasn’t changed the core of good data work: “writing code was always easy. What was really hard is connecting with people.” If AI makes implementation faster, the bottleneck becomes more social and strategic—translating messy business needs into clear questions, aligning stakeholders on definitions, and interrogating outputs. In other words, team design cannot just be about AI capability; it must preserve (and often elevate) the roles that create shared understanding across the business.
The most intriguing implication is that AI may flatten some hierarchies while increasing the need for boundary-spanning roles—people who can coordinate, validate, and translate. The full session digs into these trade-offs in a way that’s hard to capture in a single model, and that complexity is exactly why it rewards a full watch.
3) Human-in-the-Loop Means Accountability: Where Humans Must Stay Responsible
The panel converges on a definition of “human-in-the-loop” that is stricter than many enterprise implementations. The aim is not to sprinkle humans into a workflow as passive approvers, but to ensure someone can be held responsible when the model is wrong. Durgin recounts a conversation with an MD-PhD researcher that crystallizes the point: when asked whether he’d choose a human doctor or an AI doctor, his answer was a doctor who uses AI—because “who is responsible when AI makes the wrong call?” The question is both ethical and practical; as Obel notes, in medicine it is also legal. Responsibility does not vanish because an algorithm produced the recommendation.
Kotu extends the idea into enterprise work more broadly: “Ultimately, who takes the accountability… is the most important question.” In lower-stakes settings—say, automating certain IT tickets—organizations can allow more autonomy. But as stakes rise, the governance model must become more explicit about what humans are expected to review, what they must understand, and what they are empowered to override. The session implicitly critiques the common pattern where a “human in the loop” simply rubber-stamps outputs they don’t truly grasp. That approach preserves liability while increasing risk.
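The panel offers the principle rather than an implementation, but its shape is easy to sketch: autonomy scales inversely with stakes, and high-stakes outputs always route to a named, informed human. In the Python sketch below, the stakes tiers, the confidence threshold, and the routing rules are all illustrative assumptions, not anything the panel prescribed.

```python
from enum import Enum

# Hypothetical sketch of "autonomy scales inversely with stakes."
# The tiers, threshold, and routing rules are illustrative assumptions;
# the panel argues the principle, not this particular policy.

class Stakes(Enum):
    LOW = 1      # e.g. routine IT ticket triage
    MEDIUM = 2   # e.g. internal resource allocation
    HIGH = 3     # e.g. clinical or credit decisions

def route_decision(stakes: Stakes, model_confidence: float) -> str:
    """Decide who acts on a model output: the system, or a named human."""
    if stakes is Stakes.LOW and model_confidence >= 0.9:
        return "execute automatically; log for audit"
    if stakes is Stakes.HIGH:
        # High stakes always get informed human scrutiny, regardless of
        # confidence -- the reviewer must be able to challenge the output.
        return "send to accountable owner with model rationale attached"
    return "queue for human review before execution"
```

Even this toy policy makes the session’s critique visible: the high-stakes branch attaches the model’s rationale precisely so the human can scrutinize rather than rubber-stamp.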
There is also a subtle organizational design insight here: accountability shapes architecture. If you cannot clearly name the accountable owner, you will struggle to set thresholds for automation, determine escalation paths, or decide which decisions require audit trails. This ties back to ownership: if responsibility is shared vaguely across committees, it is often effectively owned by no one.
The discussion is particularly timely in domains experimenting with AI-assisted judgment—radiology reads, banking decisions, managerial tasks—where AI is moving beyond “transaction” work into recommendations that can materially affect lives and livelihoods. The panel doesn’t offer a single universal boundary for where humans must remain, but it does offer a useful rule: when consequences are real and error tolerance is low, the human role must include informed scrutiny, not ceremonial oversight.
4) Change Management That Actually Works: Prototypes, Embedded Workflows, and User-Level Adoption
If the session has a quiet throughline, it is that AI transformation fails less from algorithms and more from adoption. Obel argues that introducing AI isn’t fundamentally different from other organizational change: people must understand what’s happening, develop trust in the system, and see it working in context. His example from healthcare is pragmatic—run AI in parallel with existing processes (such as evaluating X-rays) so clinicians can compare outcomes, build confidence, and identify failure modes without putting real patients at risk. This mirrors Durgin’s preference for “safe” experimentation, and aligns with her earlier praise for companies that roll back deployments rather than forcing them into production. McDonald’s, she notes, piloted AI ordering, found it “failed miserably,” and rolled it back to iterate—a story she treats as a success in disciplined learning, not a public embarrassment.
Durgin adds an important nuance about generative AI: many employees still “don’t actually get it.” Broad mandates—“go use AI”—often produce confusion or superficial usage. What changes behavior is “show and tell”: quick prototypes and concrete examples that help teams reimagine what is possible. Once people see even a rough demonstration, the question often shifts from “why would we use this?” to “why are we doing this manually?”
Kotu’s advice is even more operational: reduce friction by embedding AI into existing tools. “Don’t ask people to come to AI. AI should come to them.” This is not just convenience; it’s organizational design. When AI is integrated into familiar interfaces, adoption becomes a workflow upgrade rather than a new initiative competing for attention. He also recommends measuring adoption “at user level,” diagnosing why specific roles or individuals aren’t using capabilities, and then scaling insights back to the program level.
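What “measuring adoption at user level” might look like in practice can be sketched in a few lines; the event schema and the user-to-role roster below are assumptions for illustration. The point is the output: per-role adoption rates that reveal which workflows are being bypassed, not just a headline program number.

```python
from collections import defaultdict

# Hypothetical sketch of user-level adoption measurement.
# The event schema ({"user": ..., "feature": ...}) and the roster
# mapping (user -> role) are illustrative assumptions.

def adoption_by_role(events: list[dict], roster: dict[str, str]) -> dict:
    """Return, per role, the share of users who used any AI feature."""
    active_users = {e["user"] for e in events}
    totals: dict[str, int] = defaultdict(int)
    adopters: dict[str, int] = defaultdict(int)
    for user, role in roster.items():
        totals[role] += 1
        if user in active_users:
            adopters[role] += 1
    return {role: adopters[role] / totals[role] for role in totals}

# A role with low adoption flags a workflow problem to diagnose,
# not a user problem to mandate away.
```

Read this way, a low rate for one role is a diagnostic prompt, as Kotu suggests: find out why those users aren’t adopting, then scale the insight back to the program level.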
For leaders looking for a playbook that goes beyond slogans—pilot design, rollout reversals, workflow embedding, and adoption metrics—the full session offers concrete, experience-driven guidance that’s easy to miss in more polished AI success stories.
Related

webinar
Radar Data & AI Literacy Edition: Adapting Organizational Culture to AI
Join Glenn Hofmann, Former Chief Analytics Officer at New York Life Insurance, as he shares the ins and outs of building a data & AI-first culture.

webinar
Building Trust in AI: Scaling Responsible AI Within Your Organization
Explore actionable strategies for embedding responsible AI principles across your organization's AI initiatives.

webinar
The AI Agent Strategic Playbook
AI leaders and enterprise practitioners will share how organizations are deploying AI agents today—and what executives need to know to lead those efforts effectively.

webinar
Building Trustworthy AI Products
Experts discuss how to design, build, and operate AI products that users can rely on.