
Transform Your Learning Culture

February 2026
Webinar Preview


Summary

An applied primer for data and AI leaders, team managers, and working professionals who need their organizations to actually use AI—not simply pay for it.

AI programs stall for reasons that have little to do with models, platforms, or data pipelines. The bigger obstacle is human: uncertainty, habit, fatigue, and the quiet fear of getting it wrong. That “human firewall” shows up in predictable ways—from cautious observers who wait for proof, to researchers who consume content but hesitate to experiment, to efficiency hackers chasing speed, to early adopters who try everything. Against a backdrop of low global engagement and high burnout, the session makes a pointed argument: AI change management is now a core AI competency.

The most practical advice centers on how people learn and how habits form. Traditional “AI literacy” (long e-learning modules and slide-heavy workshops) tends to produce certificates, not behavior change. Instead, durable AI adoption comes from training designed for the brain—short, repeated bursts; real use cases that connect to daily work; and small rewards that reinforce effort. Social learning accelerates this further by making experimentation feel safer, more normal, and—most importantly—visible.

The closing playbook is habit-focused: identify the craving (time savings, confidence, reduced stress, recognition), then attach it to an obvious cue (a time, a place, or a preceding event). It’s a concrete framework worth seeing demonstrated live—especially if your AI roadmap is ready but your people aren’t.

Key Takeaways:

  • AI readiness is primarily a people challenge—mindset, habits, and behavior—not a technology problem.
  • Different “AI adoption personas” (observer, efficiency hacker, researcher, early adopter) require different change tactics.
  • Training that sticks looks like microlearning plus repetition, real workplace use cases, and small reward loops—not marathon courses.
  • Social learning (peer groups, forums, challenges) reduces fear, increases accountability, and spreads practical prompting patterns.
  • Build AI usage as a habit: define a craving, create a cue, lower friction, and make the reward feel immediate and relevant.

Deep Dives

1) The “Human Firewall”: Why AI Strategies Fail After the Tech Is Ready

A familiar moment plays out in many organizations: the platform is chosen, funding is secured, the roadmap is polished—and then someone asks whether employees feel ready to use any of it. The question lands because it reframes the whole initiative. “Readiness isn’t about technology. It comes down to mindset and behaviors,” Rosanne Werner says, describing the silence that often follows. In practice, AI transformation fails less from broken algorithms than from unchanged routines: people keep working the old way because it feels safer, faster, and socially reinforced.

The session ties that reality to two sobering signals. First, engagement is low. Werner cites Gallup data showing only 21% of the global workforce is engaged, with widespread burnout and disconnection. Second, AI transformation has a poor success rate: ambitious initiatives struggle to scale, many are abandoned, and ROI remains hard to prove. The throughline is not merely “change is hard,” but that organizations are layering AI onto teams that are already stretched thin. When employees are juggling competing priorities, “AI adoption” can read like one more demand rather than genuine help.

This is where the “human firewall” metaphor becomes useful. Firewalls aren’t evil; they’re protective. In the workplace, the protective impulse looks like hesitation (“I don’t know where to start”), self-doubt (“I’m not technical enough”), and suspicion (“I don’t trust how this works”). It also includes a subtler reaction: consumption without action. Many smart employees become “researchers,” fluent in AI headlines yet reluctant to try tools in ways that might expose gaps or create a mistake.

For AI leaders, the implication is straightforward and uncomfortable: adoption cannot be delegated to a training portal. It must be designed like any other change—clear expectations, visible leader role-modeling, practical guardrails, and time to practice on real work. That means meeting people where they are psychologically, reducing perceived risk, and deliberately shaping the conditions that make experimentation normal. If you want the full detail of this argument—especially the interactive prompts that surface resistance patterns in real time—the session is worth watching end-to-end.

2) Learning That Sticks: Neuroplasticity, Microlearning, and the End of “Slide-First” AI Literacy

One of the sharper critiques in the session is aimed at conventional corporate training. AI literacy programs are often built like compliance: long e-learning courses, dense workshops, a badge at the end—and then a return to familiar habits. Werner’s point is not that information is useless, but that information alone rarely changes behavior or daily workflows. She leans on a simple model of memory: people retain far more of what they experience (and especially what they teach) than what they read or hear. The gap between learning and doing becomes the hidden cause of stalled adoption.

Underneath that gap is biology. The brain prefers certainty and routines; change can trigger threat responses that feel like fear, frustration, or exhaustion. Werner walks through how habit mechanisms and “alarm” systems push employees back toward the known—particularly when AI tools are introduced without time to practice, permission to be imperfect, or clarity about where to start. The session’s brief physical exercises (switching hand motions, crossing arms the “wrong” way) serve a practical purpose: they make the friction of rewiring visceral, not theoretical.

From there, the guidance becomes actionable. “Our brain loves snacks, not banquets,” Werner says, arguing for bite-sized, frequent microlearning that uses repetition to strengthen retrieval pathways. Consistency matters more than intensity; one workshop does not create capability any more than one workout creates fitness. The second element is gamified challenges—quizzes, leaderboards, small wins—that add motivation by delivering tiny reward signals. This is not “fun for fun’s sake,” but a mechanism to keep people coming back long enough for new routines to form.

Finally, she emphasizes real use cases and stories. Abstract AI concepts tend to evaporate; concrete benefits—time saved, stress reduced, a clearer recommendation—stick because they connect to daily pain. If your team is still treating AI as an optional curiosity, the full segment offers a practical plan for redesigning an AI upskilling program so it looks like practice, not schooling.

3) Social Learning: Turning AI Adoption Into a Shared, Safer Practice

Even well-designed training can fail when employees feel alone with the risk of trying something new. Social learning, in Werner’s framing, is a way to change that emotional context. “Our brains are wired to learn better in a group,” she says, tying the idea to belonging and safety. When people feel safe, they absorb more, try more, and recover faster when something doesn’t work.

The mechanism is partly emotional and partly practical. Groups create “memory hooks” because people remember how an experience felt, and they remember it alongside the reactions of others. That web of shared experience makes new concepts easier to retrieve later. Social settings also increase exposure to diverse approaches: different teams will test different prompts, tools, and workflows. One person’s workaround becomes another person’s starting point, cutting the trial-and-error cycle that would otherwise happen in isolation.

Werner doesn’t romanticize group dynamics; she points out that social pressure is real—and can be useful. In a peer setting, people are motivated to keep pace, contribute, and avoid being left behind. That can sound uncomfortable, but in practice it often replaces vague “you should use AI” messaging with visible norms: colleagues demonstrating what “good” looks like, sharing mistakes without shame, and showing that experimentation is part of the job.

The session suggests several formats that organizations can adapt: always-on online forums for questions and prompt-sharing; peer-to-peer workshops that mix roles and seniority; and innovation challenges (a “Dragon’s Den” style pitch) where teams tackle real business problems with AI. These approaches matter because they make AI adoption less like an individual performance test and more like a collective capability build.

If you’re responsible for culture change, the details here—how to structure groups, why diversity matters, and how visibility drives momentum—are the kinds of operational insights that don’t translate well into a checklist. They’re best understood by watching the full discussion.

4) Building AI Habits That Last: Cravings, Cues, and a Practical Adoption Loop

The session’s most immediately portable framework is habit formation. Rather than asking employees to “be more innovative,” Werner offers a way to make AI usage routine. Habits, she argues, compound like interest: small behaviors repeated consistently create outsized change. The model is simple—cue, craving, response, reward—and its power is that it shifts adoption from aspiration to design.

Start with cravings: the emotional reasons someone would bother using AI in the first place. In the session, four show up repeatedly. Time savings is the most obvious: fewer hours lost to email triage, meeting notes, research summarization. Confidence and competence is quieter but potent—AI can help people draft persuasive messages, adapt tone for stakeholders, and sharpen recommendations. Reduced stress and cognitive load addresses the lived reality of fragmented attention, constant notifications, and task switching. Recognition is the social layer: help writing a performance review narrative, packaging results for influence, or being seen as a contributor.

But craving isn’t enough; you need a clear trigger. That’s where cues come in: time-based cues (every day at 8:30, ask AI to prioritize emails), location-based cues (in the meeting room, activate an AI notetaker), and preceding-event cues (after a client call, generate next steps). The goal is to remove the need for willpower. When the environment reliably triggers the behavior, the behavior becomes routine.

Werner also nudges leaders to think in two directions: how to make good AI habits easier (obvious cues, low friction, satisfying rewards) and how to make bad habits harder (hide cues, add barriers, reduce the “reward”). It’s a subtle but important point: culture change is not only motivation; it is architecture.

The live exercise—choosing one craving, one cue, and writing a single “when X, I will do Y” sentence—turns an abstract framework into a concrete plan. If you want to copy-paste something into your team’s next enablement session, this segment is the one to watch closely.


Related

webinar

Building a Learning Culture in the Age of Generative AI

Industry experts explore how organizations can foster continuous learning and adaptability in the age of AI.

webinar

Radar Data & AI Literacy Edition: Adapting Organizational Culture to AI

Join Glenn Hofmann, Former Chief Analytics Officer at New York Life Insurance, as he shares the ins and outs of building a data & AI-first culture.

webinar

RADAR: The Analytics Edition - Building a Learning Culture for Analytics Functions

In the session, Russell Johnson, Denisse Groenendaal-Lopez and Mark Stern address the importance of fostering a learning environment for driving success with analytics.

webinar

The Learning Leader's Guide to AI Literacy

Adel Nehme, VP of Media at DataCamp, walks you through how to foster organization-wide AI literacy.

webinar

AI Literacy in Action: Driving Workforce Transformation

Find out how to drive workforce transformation through AI literacy. You’ll learn how to increase AI adoption across the enterprise, align AI initiatives with strategic priorities, and redesign workflows to take full advantage of AI capabilities.