[RADAR AI x Human] The Future of Education. This Time It's Personal!
April 2026
Summary
A forward-looking briefing for learning leaders, data professionals, and builders trying to understand how AI agents are reshaping work—and what “personalized learning with AI” needs to look like to keep up.
AI agents have moved from hype to infrastructure, with tools that don’t just suggest code but operate in terminals, coordinate parallel reviews, and execute “agent swarm” workflows at scale. As the barrier to building drops, the more consequential shift is organizational: routine tasks get automated, new categories of work emerge, and role expectations tilt toward people who can pair domain expertise with practical AI fluency. “AI is everywhere now, and it works,” Jonathan Cornelissen argues, framing this moment as a “golden age of skills” in which learning velocity becomes a competitive advantage.
That promise runs into a familiar constraint: a widening skills gap. Drawing on DataCamp’s 2026 State of Data and AI Literacy report, Cornelissen describes leaders who broadly agree that baseline data and AI literacy matter—yet nearly 60% say their organizations are still falling short. The problem is no longer access to tools; it’s judgment, interpretation, and behavior change. Against that backdrop, the session positions AI-native education—especially conversational tutoring that adapts to role, level, and industry—as the next S-curve in online learning, moving past video libraries and even interactive exercises. The goal is specific: an AI tutor for professional skills that can adjust pace and depth while staying anchored to grounded learning objectives and human-curated content, and, for enterprises, show what’s actually ...
Key Takeaways:
- AI agents have crossed a threshold from “coming soon” to operational reality, changing how work is executed and coordinated.
- AI fluency is rapidly shifting from a differentiator to a baseline expectation; domain expertise plus AI judgment is the new durable edge.
- The primary barrier is no longer tooling—it’s skills, evaluation of AI outputs, and the behavior change required to apply them to real business problems.
- Leaders report a significant data/AI skills gap even as they raise expectations; many are willing to pay more for AI-literate talent.
- Personalized, AI-native tutoring signals a third wave in online education, promising adaptive pacing, deeper relevance, and better organizational insight.
Deep Dives
1) AI Agents: From Novelty to the New Operating Layer of Work
A central claim in the session is that AI agents have finally become “real” in the sense that matters to working teams: they can execute tasks end-to-end, in parallel, and inside the tools where work already happens. Cornelissen notes that the conversation in 2025 often positioned agents as the “next big tech revolution,” but with a lingering caveat—always arriving “soon.” The tone here is different: the arrival has happened, and the capabilities are concrete.
In software and technical workflows, he points to a shift “far beyond simple code completion,” highlighting tools that operate directly in the terminal and enable developers to “assemble autonomous agent teams that work in parallel to review and to fix complex code bases.” The important detail is not the novelty of automated assistance; it’s the reconfiguration of throughput. Parallelization changes the economics of iteration: more hypotheses tested, more refactors attempted, more issues surfaced and resolved—often faster than a single human can coordinate alone.
He broadens the lens beyond developer tools to a world where deploying always-on agents becomes a default capability—“twenty four seven AI agents accessible to deploy with just a few clicks.” The mention of “agent swarms,” coordinating “up to a 100 specialized sub agents,” suggests a model where the unit of work becomes a coordinated system rather than an individual contributor. That, in turn, forces a new question: if execution becomes abundant, what becomes scarce?
The session’s answer is judgment. As agents accelerate production, the differentiator shifts to problem framing, evaluation, and governance—deciding what to automate, what to verify, and where human accountability must remain explicit. Cornelissen connects this directly to changing job requirements: “The boring work gets automated, new things become possible, and the skills and requirements for almost every role have changed or are about to change.” If you want the fuller picture of what roles might look like on the other side of this transition—especially for non-technical functions—his preview of later discussions is a clear invitation to watch the rest of the program.
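The fan-out pattern behind the "agent swarm" idea can be made concrete with a small sketch. This is a hypothetical illustration, not DataCamp's or any vendor's implementation: `review_with_agent` is a stand-in for a call to an LLM-backed agent, stubbed here so the coordination logic is visible.

```python
from concurrent.futures import ThreadPoolExecutor

def review_with_agent(task: str) -> dict:
    # Stand-in for invoking an autonomous agent (e.g. an LLM with tool
    # access) on one unit of work, such as reviewing a source file.
    return {"task": task, "findings": f"reviewed {task}"}

def swarm_review(tasks: list[str], max_workers: int = 8) -> list[dict]:
    # Parallelization is what changes the economics of iteration:
    # N review tasks run concurrently instead of one after another,
    # and a coordinator collects the results.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(review_with_agent, tasks))

results = swarm_review(["auth.py", "billing.py", "api.py"])
```

The point of the sketch is the shape, not the stub: once execution fans out like this, the human's job concentrates in choosing the tasks and judging the collected findings.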
2) The Skills Gap Is Now a Judgment Gap
The most policy-relevant portion of the talk is Cornelissen’s insistence that the constraint on AI transformation has shifted. Organizations have largely solved for access—models, copilots, platforms, even agents are increasingly available. The bottleneck is the human layer: how people interpret outputs, decide when to trust them, and translate them into action without eroding quality or accountability.
He frames this through DataCamp’s 2026 State of Data and AI Literacy report, based on surveys of 500 leaders in the U.S. and U.K. The headline numbers are intentionally stark: 88% of leaders say basic data literacy is important or very important, and 72% say the same for basic AI literacy. Yet “nearly 60%” believe their organization suffers from a data and AI skills gap. The tension is not subtle—expectations are rising faster than capability.
What’s most revealing is how Cornelissen describes the missing skills. “It isn’t the access to tooling anymore that’s the issue,” he says. “It’s the lack of skills, the lack of judgments, the lack of behavior change.” In other words, the gap is not only about knowing which button to click. It’s about being able to question outputs, recognize when a model is confidently wrong, understand what data is appropriate to use, and apply results to a “real world business problem” responsibly.
He also makes a labor-market claim with practical implications for individuals deciding what to learn next: the gap is an opportunity. Employers are “desperate for people with these skills,” he argues, citing a finding that 69% of leaders are willing to pay a higher salary for candidates with strong AI literacy. The subtext is that “AI literacy” is becoming a proxy for adaptability—proof that a candidate can operate in uncertain environments where tools evolve monthly.
This is where the session becomes more than futurism. If the problem is judgment and behavior change, then training can’t be passive. It has to build decision-making practice and the habit of checking AI work, not just deliver information—a premise that sets up the talk’s argument for AI-native, personalized tutoring.
3) The Third Wave of Online Education: Personalization Without Losing the Plot
Cornelissen offers a simple historical arc for online learning—useful because it clarifies what “AI-native courses” actually mean. The “first wave,” he says, put video content online. The limitation is familiar to anyone who has abandoned a course halfway through: “most people don’t wanna sit through hours and hours of video content.” The “second wave” leaned into interactivity and “learning by doing,” a model DataCamp helped popularize through hands-on exercises.
The session’s pivot is the claim that a “third s curve” is now emerging, powered by AI tutors. The aspiration is not incremental improvement but a categorical shift: “We’re starting to build a technology that eventually will be better than the best human teacher.” That line is provocative for a reason—it frames AI not merely as content delivery, but as instruction that can respond, ask follow-up questions, and adapt in real time.
What does “personal” mean here in concrete terms? The tutor “adapts the entire experience to the learner”—to role, industry, and level. It can “slow down if you’re new to a topic” and “speed up if you already know a topic,” while allowing learners to “go deep on any topic that matters.” Compared with a typical online course, this is the practical difference in AI tutoring vs online courses: instead of everyone watching the same sequence, the tutor can change the explanation, the examples, and the practice questions based on what the learner gets wrong and what they need for their job. The key design challenge is avoiding a personalized experience that drifts away from what the course is meant to teach. Cornelissen addresses this directly: adaptation is bounded by “the learning objectives that we have set” and content “curated by our human experts.” In other words, the tutor is meant to teach within a defined curriculum, not invent one on the fly.
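The "bounded adaptation" idea—personalize the pace, but only within a curated curriculum—can be sketched in a few lines. This is a minimal hypothetical illustration, not DataCamp's tutor: the `curriculum` dict and the score thresholds are invented for the example, and the key property is that the tutor only ever selects from human-curated content.

```python
# Human-curated curriculum: adaptation picks from these pools,
# never invents content outside them.
curriculum = {
    "easy":   ["What does SELECT do?"],
    "medium": ["Write a JOIN across two tables."],
    "hard":   ["Optimize this query's execution plan."],
}

def next_difficulty(recent_scores: list[float]) -> str:
    # Speed up when the learner is consistently correct,
    # slow down when they are struggling. Thresholds are illustrative.
    if not recent_scores:
        return "medium"
    avg = sum(recent_scores) / len(recent_scores)
    if avg >= 0.8:
        return "hard"
    if avg <= 0.4:
        return "easy"
    return "medium"

def next_exercise(recent_scores: list[float]) -> str:
    return curriculum[next_difficulty(recent_scores)][0]
```

A real tutor would adapt explanations and examples too, but the design constraint is the same: the learning objectives bound the search space, and personalization happens inside it.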
For professionals, the promise is pragmatic: less time spent on what you already know, faster clarification when you’re stuck, and an adaptive learning platform experience that reflects the context you work in—not a generic syllabus. For education leaders, it hints at a new measurement layer: conversational learning generates richer signals than video completion rates or quiz scores, including how someone reasons through a problem and how they respond to feedback.
This portion of the session is also where the talk most clearly tees up a reason to watch in full: if AI is changing job requirements quickly, then the format of training—how people practice judgment under realistic constraints—becomes as important as the content itself.
4) Enterprise-Grade AI Tutoring: Customization, Governance, and Learning Analytics
The talk’s enterprise argument is straightforward: personalization becomes significantly more valuable when it reflects the realities of a specific organization. Cornelissen suggests that when a tutor “knows your industry, your company, your tech stack, your training goals, your data governance policies,” it can produce “far more effective and relevant upskilling.” The inclusion of governance policies is a notable detail—it acknowledges that enterprise learning is constrained not only by teaching style, but by risk, compliance, and rules about how data and AI can be used.
That’s also why conversational tutoring is positioned as more than a nicer user experience. It becomes an observability layer for skill development. Because the training happens through dialogue, organizations can get “very rich insights to admins,” especially compared with today’s limited dashboards. Cornelissen argues many companies “have no visibility on what’s holding back their data and AI transformation,” and suggests a tutor can surface patterns: where learners consistently misunderstand concepts, which teams struggle to apply techniques, and what kinds of prompts or workflows produce fragile reasoning.
This is a meaningful reframing of learning analytics. Traditional metrics—time spent, units completed—often measure compliance rather than capability. A conversational tutor can capture the shape of misunderstanding: not only that someone got an answer wrong, but why. That creates the possibility of targeted interventions: revised internal documentation, additional practice scenarios tied to company data and tools, or even organizational changes to how teams use AI tools day-to-day.
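Capturing the "shape of misunderstanding" implies tagging wrong answers with a reason, then aggregating. A hypothetical sketch of that analytics layer, with invented event data and misconception labels purely for illustration:

```python
from collections import Counter

# Each tutoring dialogue emits events: which team, which concept,
# and a label for *why* the answer was wrong (the misconception).
events = [
    {"team": "finance", "concept": "JOIN", "misconception": "cross-join-default"},
    {"team": "finance", "concept": "JOIN", "misconception": "cross-join-default"},
    {"team": "ops", "concept": "GROUP BY", "misconception": "aggregates-unfiltered"},
]

def top_misconceptions(events: list[dict], team: str) -> list[tuple[str, int]]:
    # Aggregate per team so admins see patterns, not isolated failures.
    counts = Counter(e["misconception"] for e in events if e["team"] == team)
    return counts.most_common()

print(top_misconceptions(events, "finance"))
# [('cross-join-default', 2)]
```

The output is what a completion-rate dashboard cannot show: a recurring reason for failure, which is exactly the kind of signal that supports targeted interventions.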
Cornelissen grounds the vision in rollout details: AI-native formats are expanding across existing and new courses, with “intro to SQL and introduction to AI” already taken by “more than a 100,000 learners.” He also flags a pathway for more advanced practitioners and developers: an “AI engineering with LangChain” track aimed at engineers and data scientists moving beyond baseline AI literacy into hands-on AI engineering skills—building LLM apps, using tool calling, working with retrieval, and shipping agent workflows with testing and evaluation.
The broader implication is that enterprises are drifting toward a new contract with learning: training is no longer episodic; it’s continuous, contextual, and measured against real adoption blockers. If you’re deciding whether this approach is genuinely different—or simply a new wrapper on old content—the rest of the session’s discussion (and the day’s later panels Cornelissen previews) is where you’ll want the added detail.