
Strategic AI Transformation

February 2026
Summary

A practical briefing for leaders, data teams, and transformation owners who need AI to change how work gets done—without betting the business on hype.

AI adoption has raced ahead at the individual level, but many institutions are still “driving” cautiously while employees quietly use tools in the background. The discussion traced this gap to familiar enterprise frictions—slow change management, unclear accountability, and brittle processes—made worse by how fast models and platforms evolve. Success, the speakers argued, depends less on declaring an “AI-first” identity and more on doing the basic work that turns pilots into an enterprise AI transformation program: mapping processes, writing down tribal knowledge, and building canonical knowledge systems that agents can use reliably.

Instead of chasing flashy use cases, the panel pushed for narrow, repeatable applications that shift low-value busywork to AI while preserving high-value judgment for people. They also challenged organizations to think beyond one-off automations: the real change comes when many small agentic AI improvements add up to better operations and faster learning across teams. Governance, values, and safety frameworks (EU AI Act, NIST, ISO 42001) were framed as accelerators, not brakes—guardrails that let teams test and scale with confidence. The throughline: an enterprise AI transformation strategy must serve business strategy, and the workforce must be supported with clear policies, reskilling paths, and a “person + AI” operating model with human-in-the-loop accountability.

Key Takeaways:

  • Individual AI use is already widespread; the harder problem is institutional adoption—process, governance, and accountability.
  • “Canonical” knowledge management is foundational; without it, agentic systems become expensive trial-and-error.
  • Start with narrow, repeatable use cases that offload low-value work and set realistic accuracy expectations.
  • Measure success in business outcomes, not novelty—reliable, “boring” AI is often the sign of real value.
  • Change management must be explicit: policies, guardrails, and a “person + AI” model that keeps humans accountable.

Deep Dives

1) Why organizations lag behind individuals in AI adoption

The session opened with a blunt reframing: many companies are not truly “choosing” whether AI is being adopted inside their walls—employees and consumers are already using it. Robb Wilson captured the mismatch with an image that stuck: “I almost think of it as, like, grandma on the freeway.” The organization believes it is controlling the speed and direction, while the traffic around it has already accelerated.

This gap is not unique to AI. Evan Schwartz noted that even conventional software rollouts—ERP, accounting platforms, CRM—have been slow for decades because institutions must coordinate training, process change, and risk management at scale. AI adds a twist: the technology itself shifts quickly, so the target keeps moving. A strategy built around a single vendor’s capabilities can feel outdated within quarters, not years. That volatility amplifies skepticism: leadership worries about “doing something stupid and making things worse,” while employees quietly adopt “shadow tools” to get work done.

Louisa Loran argued the deeper obstacle is existential, not technical. Many organizations have not confronted what they will be in an AI-era market—what services become unnecessary, what roles shrink, and what new propositions emerge. The danger of avoiding that conversation is strategic drift: employees define what works for them, but not necessarily what works for the business. The result is fragmented experimentation rather than a clear enterprise AI transformation roadmap.

What makes this portion of the conversation worth watching in full is how the panel differentiates between “AI in the workflow” and “AI in the institution.” The former can happen via personal productivity hacks; the latter requires decisions about data ownership, governance, incentives, and accountability. That’s why, three years after ChatGPT’s breakout, “AI-first” is still more slogan than operating model in many enterprises—despite the reality that the workforce has already moved on.

2) Canonical knowledge management: the unglamorous foundation of agentic AI

If there was one point the panel returned to repeatedly, it was that most agentic ambitions collapse into a knowledge problem. Wilson was especially direct: guardrails, prompt engineering, and agent tooling are often “code for get your knowledge figured out.” His emphasis wasn’t on dumping documents into a repository or standing up yet another RAG index; it was on canonical knowledge—information that is verified, structured, maintained, and usable by systems that must act with repeatable reliability.

That’s also why “success” can feel anticlimactic. Wilson offered a counterintuitive benchmark that cuts against the demo-driven culture around AI: “When AI is boring, you’re being successful. When AI is fun and exciting, you’re probably failing.” The line isn’t anti-innovation; it’s a warning about volatility. If an agent’s performance swings between magical and disastrous, it’s not ready to sit inside systemic processes. The most valuable AI work is often the work no one wants to fund—knowledge capture, taxonomy, data quality, and operational stewardship—because it doesn’t look like a breakthrough on a slide.

Loran extended the idea from “AI knowledge” to “company knowledge.” In legacy industries especially, knowledge is frequently embedded in hierarchy: what gets escalated, who is trusted, which metrics dominate. AI pressures that model by making information easier to find—sometimes uncomfortably so. One of her examples described organizations using AI search and discoverability not as a reason to argue about “right or wrong” outputs, but as a trigger for cultural change: the information existed; the enterprise simply hadn’t been able to find and use it.

The practical implication is clear: before expecting agents to execute complex tasks, organizations need a disciplined way to turn tribal knowledge into institutional knowledge. Watch the full session for the nuance here—the speakers are not romanticizing “data cleanup.” They are arguing that canonical knowledge is what turns agentic AI from a fragile experiment into infrastructure that can scale across the enterprise.

3) Designing use cases that scale: narrow agents, orchestration, and realistic success metrics

The panel’s guidance on process redesign was deliberately pragmatic: start narrow, prove repeatability, then scale through orchestration. Schwartz warned that many failures come from aiming AI at the wrong slice of work—often the rare, complex “1%” cases that look expensive but don’t provide enough economic return to justify ongoing model and tooling costs. Instead, he advocated shifting low-value, necessary tasks to AI while reserving human attention for the parts of the job where judgment, empathy, and relationships matter.

His customer-success example illustrated this well. The high-value work is the customer conversation itself; the low-value work is the 20–30 minutes after the call spent summarizing, extracting actions, and drafting follow-up emails. An agent that learns the employee’s tone and reliably generates a high-quality draft can deliver immediate leverage—more customer touches, lower churn risk, and better consistency—without pretending that AI should own the whole relationship.
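The shape of that post-call agent is simple enough to sketch. The version below is hypothetical (the function name, prompt wording, and `llm` callable are assumptions, not from the webinar): it learns tone from the employee’s past emails, and the output is a draft for human review, not something sent automatically.

```python
from typing import Callable


def draft_followup(transcript: str, tone_examples: list[str],
                   llm: Callable[[str], str]) -> str:
    """Draft a follow-up email from a call transcript.

    `tone_examples` are past emails written by the employee, so the
    model can match their voice. `llm` is any text-completion callable,
    which keeps the sketch independent of a specific vendor.
    The human still reviews and sends the draft.
    """
    prompt = (
        "Write a follow-up email matching the style of these examples:\n"
        + "\n---\n".join(tone_examples)
        + f"\n\nCall transcript:\n{transcript}\n\nDraft:"
    )
    return llm(prompt)
```

Swapping the `llm` parameter for a stub also makes the workflow testable offline, which matters when the goal is the repeatability Schwartz describes rather than a one-off demo.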

Just as important was how the panel defined “success.” AI is not an ERP ledger: expecting perfect precision every time is the wrong bar. Schwartz argued that the right standard is whether AI can match the performance level of the human previously doing the work—and do so consistently enough to scale. If repeatability breaks, the implementation becomes “unscalable,” no matter how impressive a demo looks.

Loran added a structural critique: narrow use cases become far more powerful when they feed organizational learning. Too often, companies automate a task and then reimpose old management patterns—KPIs and oversight that evaluate the transaction rather than using new data signals to improve customer outcomes, predict churn, and adapt the broader system. In other words, “agentic” value is not just speed; it is feedback loops.

The session is especially useful for viewers trying to move from chatbots to integrated agents. The speakers explain why the agent layer is increasingly commoditized—and why the real work is process clarity, data access, and orchestration across multiple narrow agents (often connected to business tools through APIs and emerging standards like MCP endpoints) to achieve complex outcomes.
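Orchestration across narrow agents can be pictured as a registry that routes each well-defined task to the one agent that owns it, rather than asking a monolithic model to do everything. This is a bare sketch under that assumption; in practice the agents would sit behind APIs or MCP-style tool endpoints, which the stub `Agent` callable stands in for:

```python
from typing import Callable

# A narrow agent: one repeatable task, text in, text out.
Agent = Callable[[str], str]


class Orchestrator:
    """Routes each task to the single narrow agent that owns it."""

    def __init__(self) -> None:
        self._agents: dict[str, Agent] = {}

    def register(self, task: str, agent: Agent) -> None:
        self._agents[task] = agent

    def run(self, task: str, payload: str) -> str:
        if task not in self._agents:
            # Unknown work is rejected, not improvised by some
            # general-purpose fallback.
            raise KeyError(f"no agent registered for task {task!r}")
        return self._agents[task](payload)
```

The design choice worth noticing is the hard failure on unregistered tasks: complex outcomes come from composing agents whose individual scope is explicit, not from letting one agent stretch beyond what it was proven to do.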

4) People, governance, and the changing org chart: making “person + AI” real

AI transformation is often framed as a technical rollout; the panel treated it as a people-and-accountability redesign. Schwartz was explicit that organizations should avoid treating AI as a headcount reduction program, especially early. His recommendation was to institutionalize a “person plus AI strategy” in which humans remain accountable, and agents are delegated well-defined work under guardrails. That framing is not sentimental—it is operational. Someone must have “skin in the game” for outcomes, and today’s systems still require human intervention when things break.

Governance, in this view, is not bureaucracy; it’s what allows safe experimentation at scale. Schwartz pointed to established external frameworks (EU AI Act, NIST, ISO 42001) as starting points for policies, transparency, monitoring, and “kill switches.” He also described a practical internal mechanism: training plus controlled hackathons that let teams build EBITDA-relevant agents within a policy framework. The implicit lesson is that change management moves faster when employees are invited into creation rather than simply handed new tools.
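A “kill switch” in this sense can be as plain as a central flag that every agent action passes through, so an operator can halt all delegated work at once. The sketch below is an illustration of the concept only, not a pattern from the talk or any framework:

```python
from typing import Any, Callable


class KillSwitch:
    """Central flag an operator can flip to halt all agent actions."""

    def __init__(self) -> None:
        self._enabled = True

    def disable(self) -> None:
        """Operator-facing control: stop all agent activity."""
        self._enabled = False

    def guard(self, action: Callable[..., Any], *args: Any, **kwargs: Any) -> Any:
        """Run an agent action only while the switch is enabled."""
        if not self._enabled:
            raise RuntimeError("agent actions are disabled by operator")
        return action(*args, **kwargs)
```

Routing every consequential action through one guard is what makes the policy enforceable: experimentation stays cheap because shutting it down stays cheap.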

Wilson pushed the organizational implications further, predicting a squeeze on the “middle” of companies—roles that exist primarily to manage administrative friction rather than to serve customers or develop people. In his telling, AI pulls out that middle layer, increasing transparency at the top and pushing more roles toward customer-facing work. Whether one agrees with the direction, the challenge is immediate: companies can’t just “add agents” without rethinking who approves objectives, who owns risk, and how decisions flow.

Loran added a psychological warning: convenience can erode critical thinking. When AI is “right nine out of ten times,” people may stop asking what it would take to change their mind, or what assumptions they should unlearn. That curiosity—and the willingness to stay uncomfortable—is quickly becoming a core competency.

For teams anxious about jobs, this segment is the most relevant: it doesn’t offer platitudes. It lays out the governance and reskilling commitments required to keep people central, accountable, and capable in an environment where tools—and job descriptions—evolve on a much shorter cycle.


Related

webinar

Show Me the Money: Maximizing ROI from AI

Robb Wilson, CEO at OneReach.ai, Vin Vashishta, Founder & AI Advisor at V Squared, and Vijay Mehta, EVP of Global Solutions & Analytics at Experian, will share how to turn AI initiatives into bottom-line results.

webinar

Leading with AI: Leadership Insights on Driving Successful AI Transformation

C-level leaders from industry and government will explore how they're harnessing AI to propel their organizations forward.

webinar

AI In The Enterprise: AI Strategies That Create Value

Lexi Reese, CEO & Co-founder at Lanai and Krunal Patel, Chief Product Officer & Co-Founder at Bordo AI explore what makes an AI strategy successful.


webinar

Transforming AI Into Value: Driving Business Growth and ROI

Industry experts explore the strategies and frameworks needed to harness AI effectively. Discover how to drive adoption of AI, build clear alignment with business goals, and unlock the ROI of your AI investments.

webinar

Running an AI Transformation Initiative

Keri McCrensky, VP for Digital Transformation in Healthcare at EXL, and Grace Anderson, Global VP of Generative AI & Digital Adoption at Sodexo, will share how they’ve led enterprise-wide AI initiatives.