Speakers

Osama Elkady
CEO & Co-founder at Incorta

Edward Calvesbert
VP of Product Management for watsonx at IBM


[RADAR AI x Human] Building AI-Ready Teams: Skills, Mindset, and Structures
April 2026
Summary
A practical briefing for executives, team leads, and working professionals who want to move AI from small experiments to repeatable business results.
Across the panel, “AI-ready” was defined in practical terms: humans and AI working together in real workflows, with clear decision rights, strong data foundations, and an operating model that makes accountability obvious. Panelists described readiness as uneven across departments—software teams often move fast with code generation and automated testing, while sales and other functions need different enablement, controls, and ways to measure impact. The conversation stayed close to execution: hybrid teams of people and agents, hiring that rewards problem-solving and AI fluency, and stack choices that assume a multi-model future rather than betting on one vendor or one model.
The most consistent theme was disciplined measurement. Productivity matters, but leaders also need to track risk reduction, customer outcomes, and the ability to move faster without breaking trust. Culture—psychological safety, structured experimentation, and continuous adoption—was framed as the multiplier that decides whether AI gets embedded into how work is done or stays a side project. The session leaves a clear takeaway: the organizations that win will redesign processes and decision rights, not simply roll out a chatbot.
Key Takeaways:
- AI-ready teams treat AI as a teammate embedded in workflows (with clear handoffs), while keeping humans accountable for outcomes and approvals.
- Measure readiness department by department: skills, tooling fit, risk tolerance, and success metrics differ across engineering, marketing, sales, finance, and support.
- Hybrid teams (humans + agents) change how management works: leaders design systems of work, not only task lists, and they define where the human is “in the lead.”
- Hiring is shifting toward AI fluency and problem-solving; prompt libraries, evaluation habits, and the ability to run agent workflows are becoming key signals.
- Enterprise AI stacks are trending toward model-agnostic, multi-model designs, with strong software practices, governance, and high-quality data (lineage, observability, freshness).
Deep Dives
1) Defining “AI-Ready”: From Tools to Teammates
This becomes especially important as AI moves from isolated use cases into core operating routines. Sai described a shift “from AI being just a tool to be teammates,” capturing both the opportunity and the need for role clarity. Teammates need supervision. The panel kept returning to where the human sits: not simply “in the loop” as a rubber stamp, but “in the lead” as the accountable owner of results. That matters most in regulated settings, client-facing work, and any domain where trust, privacy, and compliance are part of what you sell.
Osama Elkady added an execution-focused point: AI readiness is not uniform across the enterprise. Development teams may adopt earlier because the feedback loop is short—AI can generate, test, and refactor quickly, and engineers can validate outputs with tooling. Marketing may follow with content and campaign acceleration. Sales can lag when tools do not match existing workflows, when data is scattered, or when success is harder to attribute. His advice was to measure readiness “department by department,” because “the success of AI is different from a team to team.”
That measurement should go beyond simple usage. Elkady emphasized outcomes, even offering a blunt threshold: “When you see… at least four x productivity, then you know that the team is actually very [successful] as a whole team.” The broader message is a practical AI readiness framework: define the outcome (cycle time, quality, customer impact, fewer incidents), define who is accountable (human-in-the-lead), define the guardrails (policy, approvals, auditability), and make sure data and tooling support the workflow. The session is strongest here—grounding “AI-ready” in operating design rather than hype.
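As a rough illustration of that framing, the sketch below records readiness as a simple per-department checklist. The field names, departments, and example values are illustrative assumptions, not anything prescribed in the session.

```python
from dataclasses import dataclass, field

@dataclass
class AIReadinessCheck:
    """One department's readiness record: outcome, accountable owner, guardrails, data."""
    department: str
    outcome_metric: str          # e.g. cycle time, defect rate, customer impact
    accountable_owner: str       # the human "in the lead" for results
    guardrails: list[str] = field(default_factory=list)  # policy, approvals, auditability
    data_ready: bool = False     # data and tooling actually support the workflow

    def is_ready(self) -> bool:
        # "Ready" only when every element of the framework is in place.
        return bool(self.outcome_metric and self.accountable_owner
                    and self.guardrails and self.data_ready)

# Example: engineering moves first; sales still lacks a metric and usable data.
engineering = AIReadinessCheck("engineering", "PR cycle time", "eng_manager",
                               ["code review approval", "CI tests"], data_ready=True)
sales = AIReadinessCheck("sales", "", "sales_ops", [], data_ready=False)
print(engineering.is_ready(), sales.is_ready())  # True False
```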
2) Managing Hybrid Teams—and Measuring Success Beyond Cost Savings
Thomas Bodenski offered the clearest description of how AI changes daily management: “We stopped managing humans only in 2023.” From that point, he said, his organization operated as a hybrid team of people and agents working together. The implication is practical: managers are no longer assigning work only across employees; they are designing workflows where agents handle repeatable tasks and humans focus on supervision, judgment, exception handling, and problem solving.
Bodenski anchored the conversation in metrics that go beyond “we rolled out a tool.” He dismissed vanity milestones—installations, licenses, headcount coverage—as “meaningless unless there is real outcome.” His preferred evidence is change that you cannot get by simply adding staff: faster delivery, better coverage, fewer errors, and lower operational risk. That framing helps leaders measure ROI in a way that maps to business performance, not software adoption.
His “boring use case” story is a strong template for teams looking for their first enterprise AI workflow. The company received roughly 100,000 vendor emails each year announcing data changes—messages that had to be read and triaged by expensive, finance-trained specialists. The work was monotonous and error-prone; missed notifications could trigger production outages affecting hundreds of financial-institution clients. By automating the triage workflow end-to-end, the team eliminated about two and a half full-time equivalents of effort, reduced operating cost to roughly 3% of the prior approach, and—more importantly—stopped missing critical updates.
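The panel did not walk through the implementation, but a triage step in that spirit might look like the sketch below. The category names, confidence threshold, and the classify() stand-in are assumptions for illustration, not the team's actual pipeline.

```python
def classify(subject: str, body: str) -> tuple[str, float]:
    """Stand-in for the model call that labels a vendor notification."""
    text = (subject + " " + body).lower()
    if "outage" in text or "urgent" in text:
        return "outage_warning", 0.95
    if "schema" in text or "field" in text:
        return "schema_change", 0.80
    return "data_change_notice", 0.60

def triage(email: dict) -> dict:
    category, confidence = classify(email["subject"], email["body"])
    # Low-confidence or high-impact items stay with a human "in the lead";
    # routine notices route straight into the downstream change process.
    needs_human = confidence < 0.8 or category == "outage_warning"
    return {"id": email["id"], "category": category,
            "route": "human_review" if needs_human else "auto_apply"}

print(triage({"id": 1, "subject": "Schema change for FX feed", "body": "Field X renamed"}))
# {'id': 1, 'category': 'schema_change', 'route': 'auto_apply'}
```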
That ordering of benefits is the lesson. Effort reduction is helpful, but durable value shows up as reliability, fewer outages, better customer experience, and the ability to run 24/7. As Bodenski put it, the value becomes obvious when AI enables “10 times faster” execution, “24 by seven” coverage, and lower risk—outcomes that change what the organization can promise to clients. It’s a reminder that strong enterprise AI cases often look unglamorous: they remove failure points from real operations, with clear owners and measurable results.
3) Skills, Hiring, and the Retrain-vs-Replace Question
The most charged question—when to retrain employees versus replace them—was handled with a human-first stance. Edward Calvesbert argued that “human intelligence… is never, gonna be replaced,” while acknowledging that tasks will change. The leadership job is deciding where human judgment must remain “in the lead” so trust, compliance, and quality scale. In practice, that means designing roles where people own intent-setting, validation, and risk management, and where AI accelerates drafting, searching, summarizing, and routine execution.
Bodenski went further, saying explicitly: “I have never replaced anyone… because of AI.” In his view, AI expands capacity as complexity increases; it shifts roles rather than removing them. That will not match every company’s situation, but it describes what an AI-ready approach looks like when leaders treat productivity gains as fuel for growth, better service, and higher reliability—not as an automatic headcount lever.
Hiring criteria, however, are changing quickly. Bodenski described a shift away from recruiting for narrow technical virtuosity—“the best C++ developer… SQL developer… Python engineer”—toward problem solvers and domain experts who can direct AI systems and verify results. His most revealing interview question was concrete: “Show me your prompt library.” The prompt library acts like a portfolio: proof that a candidate can reuse patterns, build repeatable workflows, and get consistent outputs across tasks—not simply chat with a model.
Elkady echoed the need for balance: teams should avoid becoming “100% blind” to AI output. Generating code is easy; maintaining it is harder. He emphasized training and knowledge sharing so humans understand what’s being produced, can test it, and can maintain it later. He also described a practical litmus test for both retention and hiring: adaptability. “Do people… are able to adapt to change or not? And this is where you decide.”
The takeaway is that reskilling is less about one-time training and more about continuous learning built into the job: practice using AI in real workflows, shared prompt and workflow libraries, habits for validating outputs, and clear standards for quality. AI fluency becomes a baseline expectation, but judgment remains the differentiator. If you want the most actionable detail—the interview signals, the guardrails, and how leaders set expectations in hybrid teams—the full session is where those pieces come into focus.
4) Choosing the Right Stack (and Culture) for Enterprise AI
On technology strategy, Calvesbert made a case for designing systems that assume constant model change rather than chasing a single “best” model. Because models are improving rapidly and excelling at different tasks, he predicted a “multi-model world” in which no one model wins at everything. For leaders choosing an enterprise AI stack, the implication is straightforward: avoid tight coupling. Build toolchains that are model agnostic where possible, so you can swap models without rewriting core workflows or re-architecting the business.
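One common way to achieve that decoupling is a thin interface that core workflows depend on, with each vendor wrapped in an adapter. The sketch below illustrates the idea with placeholder providers; it is a generic pattern, not any panelist's actual architecture.

```python
from typing import Protocol

class ChatModel(Protocol):
    """The only surface core workflows are allowed to depend on."""
    def complete(self, prompt: str) -> str: ...

class ProviderA:
    # Adapter around one vendor's SDK; swap the internals without touching callers.
    def complete(self, prompt: str) -> str:
        return f"[provider-a] {prompt[:40]}..."

class ProviderB:
    def complete(self, prompt: str) -> str:
        return f"[provider-b] {prompt[:40]}..."

def summarize_ticket(model: ChatModel, ticket_text: str) -> str:
    # Business logic is written against the interface, so model choice becomes a
    # configuration decision rather than a rewrite of the workflow.
    return model.complete(f"Summarize this support ticket:\n{ticket_text}")

# Route by use case (or cost, latency, evaluation results) without re-architecting.
MODELS = {"drafting": ProviderA(), "analysis": ProviderB()}
print(summarize_ticket(MODELS["drafting"], "Customer cannot export Q3 report."))
```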
But models are only one layer. Calvesbert described everything around them as “scaffolding”: software lifecycle practices that become even more important when agents can take actions, write code, or touch data. CI/CD, separation of duties, collaboration workflows, access controls, testing, monitoring, and automation are not “nice to have”—they are part of enterprise AI governance in practice. Without these controls, many AI initiatives stay stuck in pilot mode because security, compliance, and business owners cannot trust the system.
Data remains a key divider. Calvesbert argued that what separates companies is “its data”—not just volume, but quality defined by governance, lineage, observability, and freshness. Stale or poorly governed data caps business impact and increases the risk of misleading outputs. This is a familiar enterprise lesson made urgent: generative systems will amplify the strengths and weaknesses of the data environment you give them.
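Freshness is the easiest of those properties to make mechanical. A minimal gate like the sketch below, with assumed table names and windows, keeps obviously stale data out of AI workflows.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Agreed refresh windows per dataset; names and durations are illustrative only.
FRESHNESS_SLA = {
    "vendor_master": timedelta(hours=24),
    "positions": timedelta(minutes=15),
}

def is_fresh(table: str, last_updated: datetime, now: Optional[datetime] = None) -> bool:
    now = now or datetime.now(timezone.utc)
    return (now - last_updated) <= FRESHNESS_SLA[table]

# A feed last refreshed 30 hours ago fails its 24-hour window.
stale = datetime.now(timezone.utc) - timedelta(hours=30)
print(is_fresh("vendor_master", stale))  # False
```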
Culture is where the stack either becomes useful or decorative. Sai described an AI-ready culture as one built on trust and learning, where experimentation is “the norm, but it’s not chaotic.” The formula is simple: safe spaces to test, clear guardrails, and accountability so people trust what they deploy. She also warned that adoption is no longer a one-time rollout; “the adoption is continuous.” That shifts responsibility from training teams alone to leadership: setting operating rhythms where teams revisit workflows, evaluate outcomes, update prompt and agent patterns, and adjust governance as the technology evolves.
If you’re deciding what to standardize (governance, data controls, evaluation, platform patterns) and what to keep flexible (department tools, specific workflows, model choice by use case), this is the segment that rewards a careful listen. It focuses less on brand names and more on architectural and operating principles that keep enterprise AI useful long after the first demo.