
Speakers: Amber Marcoux (Bristol Myers Squibb) and Amanda Myton (Snowflake)


[RADAR AI x Human] AI Upskilling with Purpose: Customer Success Stories

April 2026
Webinar Preview

Summary

A practical conversation for learning-and-development leaders and transformation partners tasked with turning AI curiosity into safe, repeatable capability across an entire organization.

AI upskilling is no longer a niche offering for technical teams; it is quickly becoming an operating requirement for everyone from finance and legal to executive assistants and frontline managers. Leaders from Bristol Myers Squibb and Snowflake describe how they built momentum without creating “haves and have-nots,” starting with the populations most likely to model behavior—and the ones most likely to pause because of governance, compliance, or reputational risk. The discussion moves from the realities of rollout (who goes first, what guardrails are non-negotiable, and how to partner with IT and security) to what “success” can reasonably mean while the technology changes weekly.

Instead of treating AI training as a single curriculum, both organizations emphasize a blended approach: lightweight, self-paced learning; internal storytelling; hands-on experimentation in controlled environments; and clear policies that make it obvious what’s allowed. Measurement, too, shifts from classic learning metrics to adoption signals—utilization, frequency, and practical workflow impact—paired with a willingness to sunset tools that don’t earn their place. Across the session, the core argument is consistent: the competitive advantage comes from pairing domain expertise with systems thinking and AI fluency, not from tooling alone.

Key Takeaways:

  • Prevent an AI “class system” by designing programs that bring G&A and other risk-averse groups along, not just technical early adopters.
  • Blend formal learning with internal demos, stories, and safe practice environments; most behavior change will come from exposure and experience, not slides.
  • Start with governance and guardrails so employees can experiment confidently—especially where sensitive data and compliance are involved.
  • Prioritize momentum: go where interest is highest, prove value quickly, then codify what works into scalable playbooks.
  • Measure success with adoption and workflow signals (frequency, tool usage, practical outcomes), then iterate—expanding what works and retiring what doesn’t.

Deep Dives

1) Designing AI Upskilling as Change Management (Not a Course Catalog)

Both speakers implicitly reject the idea that AI upskilling is mainly a content problem. It is, instead, a behavior-change problem—one that sits at the intersection of trust, policy, and practical workflow design. Bristol Myers Squibb’s approach is explicitly end-to-end, combining technical enablement with the human skills required to use AI responsibly: critical thinking, evaluation, judgment, and what Amber Marcoux described as a more metacognitive style of learning—helping employees clarify what they need, then learn iteratively with support. In practice, that means treating the program like a rollout: define approved tools and data rules first, start with a few role-based use cases, and then scale through repeatable templates (prompt patterns, review checklists, and “what good looks like” examples).

That human-centered framing matters because AI tools lower the barrier to entry while raising the stakes of misuse. In regulated environments, the risk is not abstract: data sensitivity, compliance obligations, and patient impact make “try it and see” an insufficient strategy. Marcoux emphasizes the “human in the loop” reality: employees remain accountable for output quality, bias, and appropriateness. This is less about teaching prompts and more about teaching discernment—how to interrogate results, recognize hallucinations, validate sources, and correct course before anything reaches a customer, a patient process, or an employee decision.

Snowflake’s Amanda Myton arrives at a similar place from a different angle: organizational credibility. Snowflake sells data and AI infrastructure; it cannot claim leadership in the market while leaving internal teams uncertain, divided, or afraid to use the tools. Myton’s point is cultural as much as operational:

“We can’t create something where we have sort of this dichotomy and the haves and the have nots.”

That ethos turns AI enablement into a company-wide literacy project with clear expectations: what employees can use, what they can’t, how to handle sensitive data, and where to go for help when a use case sits in a gray area.

One practical insight threaded through the session: early resistance is often rational, especially among functions that manage legal exposure, compensation decisions, immigration matters, and employee relations. Rather than dismissing skepticism, both leaders treat it as design input: build guardrails, show concrete use cases, and create low-stakes entry points that allow employees to learn without risking sensitive data or making high-impact decisions prematurely. The deeper logic is that adoption follows psychological safety—and psychological safety follows clarity: clear policies, clear escalation paths, and clear examples of “safe” work that delivers value.

To hear how they translate these principles into program architecture—who owns what, how IT partners are involved, and where “soft skills” show up in a technical curriculum—the full session provides the operational texture that’s hard to capture in summary.

2) Who Goes First: Sequencing Adoption Without Leaving People Behind

The session offers a candid look at rollout sequencing—less “launch to everyone” than a carefully staged diffusion strategy. Snowflake’s internal population spans cutting-edge AI builders and employees far removed from model development. Myton describes the challenge as connecting an internal spectrum: highly technical teams on one side, and business functions on the other that are often more risk-averse because they operate closest to sensitive information and policy constraints. For many companies, this is the make-or-break decision: if you start only with enthusiasts, you can create momentum but also resentment; if you start only with the most cautious groups, you can slow everything down unless governance is ready from day one.

Rather than waiting for perfect readiness, Snowflake leaned into a pragmatic two-pronged approach. Tools were made available to enable early adopters to explore and generate internal stories, while a formal program provided structure and reassurance to those who needed explicit permission and guidance. Myton recounts an early moment that revealed the gap: some colleagues were excited by custom GPT ideas for manager coaching, while others concluded the tools were unusable after a troubling experiment. The underlying lesson: in ambiguous spaces, a few negative anecdotes can freeze adoption unless leadership supplies governance, norms, and safe defaults (approved tools, do-not-use rules, and a clear review step for sensitive outputs).

Bristol Myers Squibb took a multi-audience approach that may surprise L&D teams accustomed to starting with a single cohort. Marcoux describes beginning with senior leadership for the original data program—because modeling matters—then broadening AI enablement to distinct communities, including executive assistants. The rationale is practical: executive admins “run the company,” are naturally curious, and act as connective tissue across teams, spreading patterns quickly when something helps. That choice reflects an adoption principle often overlooked: influence is not only hierarchical; it’s also network-based. It also points to a useful rollout pattern for many employers: leaders first (to set expectations), then highly connected roles (to spread everyday workflows), then G&A/risk teams (to remove blockers), and then deeper role-based tracks for specialists.

BMS also differentiates between “citizen learners” and specialist populations such as scientists, where the learning need isn’t basic comfort with data but rather understanding how AI can accelerate domain-specific outcomes. In other words, the “same tool” demands different onboarding depending on whether the user is a subject-matter expert, a business operator, or a process owner. This is where many programs stumble—by offering one-size-fits-all training that satisfies no one. Both speakers point toward role-based pathways: a short AI literacy core for everyone, then add-ons for managers, analysts, HR, legal, and scientific/technical communities.

One phrase from Myton captures the prioritization logic that keeps these programs moving even when demand is endless:

“Where can I go where the love is?”

The idea isn’t to ignore reluctant groups indefinitely; it’s to generate credible internal wins first, then use those wins to bring the middle along. The webinar’s Q&A (worth watching) adds more nuance on how to engage that “thawed middle” without letting the most enthusiastic pioneers dictate the entire roadmap—especially by pairing early adopters with a champions network and office hours so experimentation turns into repeatable practices.

3) Building the Right Learning Mix: Blended Programs, Guardrails, and Internal “AI That Knows Your Company”

Both organizations converge on a design pattern that is quickly becoming a best practice: blended learning anchored in real work. Snowflake’s “AI for everyone” program combined self-paced coursework, internal storytelling, and direct education from technical teams on what to try first—starting with time-saving use cases (summaries, first drafts, meeting notes, analysis support), then moving toward more transformative applications (analytics workflows, internal knowledge access, and role-specific copilots). The sequence matters: early wins create trust, which is necessary before employees will rely on AI for decisions that carry real consequence.

Myton also emphasizes a stance that feels more like security engineering than corporate training:

“It’s a trust and verify.”

Employees are encouraged to use tools, but adoption is monitored, policies are explicit, and the organization keeps a close eye on utilization and behavior. This is not surveillance as culture; it’s governance as scaffolding, especially when mistakes could expose sensitive data or create biased outcomes. For L&D and HR teams, that often translates into a practical prerequisite: before broad training, confirm you have an approved-tool list, data-handling rules, and a simple escalation channel (IT/security/legal) for “is this allowed?” questions.

A particularly instructive example is Snowflake’s internal tooling—described as a low-code/no-code experience (often referenced internally as Snowwork) that blends the familiarity of chat-based interaction with the safety of an enterprise governance layer. The key usability breakthrough is natural language access to internal data that consumer chatbots can’t provide. Instead of generic Q&A, employees can ask questions rooted in company reality—headcount changes, hiring velocity, RSVP patterns—and receive answers grounded in governed systems. This design closes a common adoption gap: people don’t just want “AI”; they want AI that understands their context, without leaking their context.

At BMS, the program ecosystem includes a broad set of tools (from standard copilots to purpose-built agents) and, importantly, a human network of champions. Marcoux describes tiers ranging from local “ask me anything” helpers to advanced builders who can create agents and solutions. This tiered support model is the operational antidote to the fear of asking “basic” questions publicly—employees get low-stakes, local help, which increases experimentation and reduces the stigma of being new. It also makes scaling easier: the champions network becomes a force multiplier for L&D, and a feedback loop for IT and security on where policies or tools are unclear.

Both speakers also highlight the importance of partnering tightly with IT, security, and enterprise platforms—because AI enablement can’t be bolted on after the fact. Tooling proliferates fast; the learning function’s role becomes providing the structure, pathways, and role-based framing that keep experimentation aligned with business goals and compliance obligations. The full session is especially useful on this point, because it surfaces the behind-the-scenes coordination required to make “safe experimentation” real rather than aspirational.

4) Measuring Success When Benchmarks Don’t Exist: Adoption Signals, Storytelling, and ROI Discipline

The measurement segment is refreshingly candid: traditional learning metrics can be directionally useful, but they are insufficient for AI because the target keeps moving. Myton notes the absence of stable benchmarks and argues that organizations should borrow measurement ideas from disciplines that routinely introduce new products to uncertain markets—especially product marketing. The resulting framework looks like a funnel: awareness (do people know what exists?), engagement (are they trying it?), and conversion (are they integrating it into workflows?). This approach acknowledges a central reality: many employees don’t yet know what they can do with AI, so assessment must start with exposure and behavior rather than mastery. In practical terms, that means measuring beyond completions: active users, repeat users, and the distribution of usage by function and role.
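
To make those adoption signals concrete, here is a minimal, hypothetical sketch (not taken from the webinar) of how a program team might roll raw tool-usage events into the engagement and conversion stages of that funnel. The column names (user_id, tool, event_date, department) and the "3+ distinct days" threshold are illustrative assumptions, not either company's actual schema or criteria.

    # Hypothetical sketch: summarizing AI-tool usage events into funnel-style
    # adoption signals. Schema and thresholds are illustrative assumptions.
    import pandas as pd

    def adoption_signals(events: pd.DataFrame, headcount: int) -> dict:
        """events: one row per usage event, with user_id, tool, event_date, department."""
        per_user_days = events.groupby("user_id")["event_date"].nunique()
        active_users = per_user_days.size                # tried a tool at least once
        repeat_users = int((per_user_days >= 3).sum())   # used tools on 3+ distinct days
        return {
            "engagement_pct": round(100 * active_users / headcount, 1),
            "conversion_pct": round(100 * repeat_users / headcount, 1),
            "users_by_department": events.groupby("department")["user_id"]
                                         .nunique().to_dict(),
        }

Paired with a simple awareness survey, a snapshot like this helps distinguish "doesn't know the tool exists" from "tried it once and stopped"—two problems that call for very different interventions.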

BMS, meanwhile, tracks familiar learning indicators—progression, proficiency, certification—then layers in tool utilization and frequency. Marcoux offers an analogy that’s easy to operationalize: adoption resembles training for a marathon. Frequency comes first; sophistication comes later. Consistent use is the precursor to effective use, and measuring frequency helps isolate whether a program has an awareness problem, a trust problem, or a workflow-fit problem. A useful measurement stack implied by the conversation is: baseline self-assessment (AI literacy and risk comfort) → short knowledge checks and role-specific practice → certification or sign-off for higher-risk use cases → reassessment after 60–90 days, paired with usage trends.

Both organizations describe a healthy discipline around tooling itself. Snowflake evaluates which tools are “leading,” where value is accruing, and which should be sunsetted—an important counterweight to the common corporate habit of accumulating overlapping AI products with no clear ownership. BMS similarly monitors use across a wide portfolio of AI options and connects adoption to licensing value: are employees getting “their money’s worth,” and where does deeper ROI analysis make sense? For executives and procurement partners, this shows up as simple scorecards: usage by tool, frequency bands (daily/weekly/monthly), team-by-team comparisons, and a short list of workflow outcomes (time saved, cycle-time reduction, fewer handoffs, faster analysis).
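
As a rough illustration of the frequency-band scorecard described above (again, an assumption-laden sketch rather than either company's actual reporting), the same kind of event log could be bucketed per tool into daily, weekly, and monthly-or-less users over a reporting window:

    # Hypothetical sketch: bucketing each user's tool usage over a reporting
    # window into daily / weekly / monthly-or-less frequency bands.
    import pandas as pd

    def frequency_scorecard(events: pd.DataFrame, window_days: int = 90) -> pd.DataFrame:
        """events: one row per usage event, with user_id, tool, event_date."""
        active_days = (events.groupby(["tool", "user_id"])["event_date"]
                             .nunique()
                             .rename("days")
                             .reset_index())

        def band(days: int) -> str:
            if days >= window_days * 0.6:   # most working days in the window
                return "daily"
            if days >= window_days / 7:     # roughly weekly or better
                return "weekly"
            return "monthly or less"

        active_days["band"] = active_days["days"].map(band)
        # Rows: tools; columns: frequency bands; values: number of users.
        return active_days.pivot_table(index="tool", columns="band",
                                       values="user_id", aggfunc="count",
                                       fill_value=0)

A table like this makes the sunsetting decision legible: a tool whose users are overwhelmingly in the "monthly or less" band after two quarters is a candidate for consolidation rather than renewal.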

What emerges is a pragmatic definition of success: not perfect capability scores, but evidence that employees are using AI in governed ways to do work that either took too long before or wasn’t feasible at all. Myton points to the advantage of employees who can combine domain knowledge with systems thinking and synthesis—signals that are already appearing in performance narratives. Those stories become a measurement tool in their own right, because they make value legible to skeptics and executives alike—and they help teams decide where to invest next (more training, better prompts, better data access, or different tools).

If you’re deciding what to measure—and what not to measure—during an AI rollout, the full webinar is worth watching for its concrete examples of adoption instrumentation, program iteration, and the organizational trade-offs hiding behind every metric choice.


Related

webinar

AI Upskilling: Lessons from the Frontlines

Experts discuss their experiences running AI upskilling programs at scale. You’ll learn how to define training personas, boost adoption and engagement, and measure the ROI of your efforts.

webinar

[RADAR AI x Human] The Future of Education. This Time It's Personal!

Everyone deserves a world-class personal tutor.

webinar

[RADAR AI x Human] Easy Wins: How Non-technical Teams Thrive with AI

AI for everybody else.

webinar

Architect a 90-Day AI Upskilling Program For Your Team

Nerupa Kidnapillai, a Senior Customer Success Manager at DataCamp, walks you through how to design and implement a focused AI upskilling initiative for your organization.

webinar

Building AI Skills with DataCamp

Discover how DataCamp can help you future-proof your career and business with new AI-focused courses.

webinar

Radar AI Edition 2024: Welcome to Radar!

DataCamp CEO and co-founder Jonathan Cornelissen welcomes you to Radar, highlighting the state of data & AI literacy today, customer success stories, and what the future holds for data & AI.