Speakers


Rami Krispin
Senior Manager of Data Science & Engineering

Jaya Gupta
Partner at Foundation Capital
[RADAR AI x Human] What's Next? Rethinking Analytics for the AI-Human Era
April 2026
Summary
A candid panel for analytics leaders, data practitioners, and enterprise builders figuring out how AI and humans will share the work of modern analytics.
Generative AI has already rewritten how teams work with text and code; the next frontier is structured, numeric enterprise data—where the stakes are higher and the shortcuts don’t hold. The conversation traced a shift from “nice email” automation to models that can forecast, predict, and recommend actions directly from raw relational data, without the brittle rituals of flattening tables and hand-built feature engineering. “Structured data is a missing modality in AI,” Stanford professor and Kumo cofounder Jure Leskovec argued, calling SQL a decades-old bottleneck that leaves the enterprise world “stuck in the past.”
As AI agents spread across systems of record—Salesforce, Workday, SAP—panelists emphasized a second bottleneck: context. The most valuable information often lives in Slack threads, incident calls, approvals, and half-documented exceptions. Capturing these “decision traces” as a context graph could turn short-lived reasoning into shared memory, improving both auditability and automation. Alongside this comes a redefinition of data work: less time writing code and more time designing systems, defining variables and metrics, ensuring data quality, and verifying outputs (human-in-the-loop validation). The punchline was pragmatic: faster insights matter only when they improve decisions. As Leskovec put it, “The goal is not insights. The goal is better decisions.”
Key Takeaways:
- AI in analytics is moving beyond chat and code generation toward predictive analytics over proprietary enterprise data—forecasting, churn prediction, fraud detection, and next-best action.
- Structured relational data remains the “missing modality” in AI; prompting an LLM with a database dump doesn’t solve deterministic, multi-table reasoning (LLMs vs structured data models).
- Self-service analytics with AI agents is within reach for many routine questions (natural language to SQL), but data preparation (not dashboards) is still the core bottleneck.
- “Decision traces” and a context graph can preserve the “why” behind actions—approvals, debates, exceptions—information that rarely makes it into warehouses.
- Data literacy becomes more important as AI spreads; humans increasingly act as auditors and verifiers of probabilistic outputs (AI analytics governance).
- Data roles shift “up the stack”: from writing functions to designing architectures, governance, quality, semantic definitions, and decision workflows.
Deep Dives
1) Structured Data as AI’s Missing Modality
For all the progress in language and vision models, structured enterprise data has largely been left out of the foundation-model revolution—and the panel treated that gap as the defining opportunity of the next few years.
A recurring misconception, he noted, was the early hope that you could “think of my database as a document,” put it in a prompt, and let an LLM do the rest. In practice, this fails because relational data requires deterministic handling—joins, time windows, leakage control, and careful aggregation—while LLMs are not designed to guarantee correctness over these operations. “LLMs are… probabilistic token guessers,” Leskovec said, and they “struggle with deterministic… data,” especially when the task involves multi-table logic and edge cases that matter in finance, supply chain, or compliance.
The alternative proposed was not better prompting, but a “ground up” rethink: purpose-built architectures and foundation models for structured data that can reason directly over raw, multi-table enterprise data without flattening or bespoke feature engineering. This is not just an implementation detail. If models can natively ingest relational structures, organizations can skip weeks of pipeline work and move straight to prediction and decision support. It also reframes what “analytics” means in the AI era: less retrospective slicing and more answering what happens next, who is at risk, and what action changes the outcome.
What makes this theme especially consequential is its proximity to real operating decisions. When predictive questions can be posed in natural language—churn risk, inventory shortfalls, fraud likelihood—AI stops being a reporting assistant and becomes an engine for action. The discussion left a clear provocation for viewers: if your analytics stack still treats structured data as an afterthought, you may be optimizing the wrong layer entirely.
2) From Dashboards to Decisions: Predictive Analytics at the Center
The panel repeatedly returned to a deceptively simple idea: analytics is valuable only when it changes what an organization does. That sounds obvious, yet many teams still measure success in dashboards shipped or insights delivered. Leskovec cut through that habit during Q&A: “The goal is not insights. The goal is better decisions.” In an AI-driven decision making environment, that distinction becomes more urgent, because generating “insights” is rapidly becoming cheap—sometimes dangerously so.
Why does predictive analytics sit at the center of this reorientation? Because decision-making is inherently about the future: acting now based on what is likely to happen next. As Leskovec described it, humans “make decisions based on predicting the outcome of our decisions in the future.” Historical reporting matters, but mainly as a stepping stone to what comes next: which customers will churn, which orders will be delayed, which product change will trigger support volume, which accounts should be escalated now rather than later.
This forward-looking posture also changes how organizations should evaluate AI tools. The most impressive demonstrations are not the ones that draft an answer fluently, but the ones that compress the full cycle from question to prediction to action—without months of model building and operationalization. Leskovec argued that if an agent is expected to act autonomously, it needs “intelligence,” not just the ability to execute steps. “An autonomous agent is useless if he doesn’t know what to do,” he said, emphasizing that agents require accurate, near-real-time predictive signals from proprietary data to choose the right action, at the right moment, with the right message.
From an operating perspective, this suggests a new north star: measure analytics by improved decision quality and downstream outcomes, not by volume of analysis. It also hints at a higher bar for explainability—leaders won’t accept a recommendation unless they can interrogate why it was made, and what evidence supports it. The discussion tees up a compelling reason to watch the full session: the panel doesn’t just predict that analytics will speed up; it argues that analytics trends through 2026 will be judged by whether systems can reliably steer high-stakes choices.
3) Self-Service Analytics Is Close—But Data Prep and Context Still Decide the Ceiling
The promise of self-service analytics has been a fixture of modern data work for years. What changed, the panel argued, is not the ambition but the interface: natural language and code-generating assistants are removing the “middle layer” that historically forced nontechnical teams to queue behind analysts. Rami Krispin described the shift as unlocking access for people who lived in Excel but needed help to query Snowflake or Postgres. With AI agents for analytics embedded in data infrastructure, routine questions can be answered far faster—and often by the stakeholder directly.
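The panel showed no code, but the self-service pattern Krispin describes can be sketched in miniature. In the sketch below, `generate_sql` is a hypothetical stand-in for whatever text-to-SQL backend a team uses (the panel named none), and the read-only guard illustrates one minimal guardrail before agent-generated SQL ever reaches Snowflake or Postgres:

```python
import re

def generate_sql(question: str) -> str:
    """Hypothetical stand-in for an LLM text-to-SQL backend.

    A real implementation would call a model with the schema and
    semantic-layer definitions as context; here a toy template lookup
    keeps the sketch self-contained.
    """
    templates = {
        "monthly revenue": (
            "SELECT date_trunc('month', created_at) AS month, "
            "SUM(amount) AS revenue FROM orders GROUP BY 1 ORDER BY 1"
        ),
    }
    for key, sql in templates.items():
        if key in question.lower():
            return sql
    raise ValueError("No SQL could be generated for this question")

# Statements an autonomous analytics agent should never be allowed to run.
FORBIDDEN = re.compile(r"\b(INSERT|UPDATE|DELETE|DROP|ALTER|TRUNCATE|GRANT)\b", re.I)

def safe_query(question: str) -> str:
    """Generate SQL for a plain-English question, refusing anything that writes."""
    sql = generate_sql(question)
    if FORBIDDEN.search(sql):
        raise PermissionError("Agent-generated SQL must be read-only")
    return sql  # in production: execute against the warehouse here
```

The design point is that the guard lives outside the model: a probabilistic generator is wrapped in a deterministic policy, which is exactly the division of labor the panel kept returning to.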
Still, “close” is not the same as “solved.” Krispin emphasized practical limits: as complexity rises, systems will fail more often, and teams will still need experts for edge cases and high-risk questions. Leskovec offered a sharper diagnosis of why self-service has historically stalled: “The bottleneck isn’t building the dashboards. It’s the data preparation.” In other words, even if an AI can generate a chart or a SQL query, the hardest work remains ensuring the underlying dataset is correctly shaped, governed, and safe to use.
Data prep is not merely tedious; it is where subtle errors become business failures. Leskovec pointed to common pitfalls—information leakage, incorrect aggregation, missing joins—that can quietly invalidate a model or analysis. Coding agents can help, but only when organizations provide “the right level of abstraction and the right infrastructure” so the system operates within guardrails. This shifts investment away from superficial UX improvements toward foundational work: schema discipline, semantic layer definitions (metrics, dimensions, business logic), quality checks, and reusable transformations.
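The kind of leakage Leskovec warns about often enters through an aggregation that quietly peeks past the prediction date. A minimal pandas sketch (table and column names are illustrative, not from the session) shows how the same groupby produces a leaky or a safe feature depending on one time filter:

```python
import pandas as pd

# Illustrative data: customer orders, with labels defined as of a cutoff date.
orders = pd.DataFrame({
    "customer_id": [1, 1, 2, 2],
    "amount": [100.0, 50.0, 200.0, 75.0],
    "order_date": pd.to_datetime(
        ["2025-01-05", "2025-03-20", "2025-01-10", "2025-02-28"]),
})
cutoff = pd.Timestamp("2025-03-01")  # churn labels are snapshotted here

# LEAKY: the aggregate includes orders placed after the label cutoff,
# so the feature "knows" part of the outcome it is meant to predict.
leaky = orders.groupby("customer_id")["amount"].sum()

# SAFE: only information available at prediction time enters the feature.
safe = (orders[orders["order_date"] < cutoff]
        .groupby("customer_id")["amount"].sum())

print(leaky.to_dict())  # {1: 150.0, 2: 275.0}
print(safe.to_dict())   # {1: 100.0, 2: 275.0}
```

Customer 1's post-cutoff order silently inflates the leaky feature; nothing in the code looks wrong, which is why the panel treats this as infrastructure work rather than something a chart-generating agent can be trusted to get right on its own.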
The conversation also surfaced a second ceiling on self-service: context. Answers are only as good as the definitions, assumptions, and institutional knowledge that accompany them. Krispin’s advice was operational: context should be constructed upstream, not bolted on later. When teams reach a curated “gold” layer of data, that’s where they should “create a context,” expose it, and make it available for both humans and agents. The takeaway is both hopeful and sobering: self-service is now plausible at scale, but it will reward organizations that treat the context layer, governance, and preparation as product-grade infrastructure—not as afterthoughts.
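Krispin's advice to “create a context” at the gold layer can be made concrete as machine-readable metric definitions that both humans and agents consume. The sketch below is one plausible shape for such a layer—the fields and names are illustrative assumptions, not any particular product's schema:

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    """A semantic-layer entry: shared context for humans and agents alike."""
    name: str
    sql: str                 # canonical expression over gold-layer tables
    grain: str               # the level at which the metric is valid
    owner: str               # who to ask when the definition is disputed
    caveats: list = field(default_factory=list)  # institutional knowledge

# Illustrative registry; a real one would be generated from a governed source.
SEMANTIC_LAYER = {
    "monthly_churn_rate": Metric(
        name="monthly_churn_rate",
        sql="churned_customers / active_customers_start_of_month",
        grain="month",
        owner="lifecycle-analytics",
        caveats=["Excludes trial accounts",
                 "Definition changed mid-2024; older dashboards differ"],
    ),
}

def describe(metric_name: str) -> str:
    """Render a metric's definition and caveats, e.g. for an agent's prompt."""
    m = SEMANTIC_LAYER[metric_name]
    caveats = "; ".join(m.caveats) if m.caveats else "none"
    return f"{m.name} ({m.grain} grain) = {m.sql}. Caveats: {caveats}."
```

The caveats field is the point: it is exactly the half-documented exception knowledge that, per the panel, rarely makes it into warehouses—captured upstream once, then exposed to every downstream consumer.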
4) The New Data Professional: Architect, Context Manager, and Verifier
As AI accelerates time-to-insight, the panel’s most practical discussion centered on what happens to the people who used to produce those insights. The consensus was not replacement, but reallocation—away from repetitive construction work and toward higher-order system design. Krispin used a clear analogy for the speed shift: “We are moving from… riding on horses to cars.” When building a feature shrinks from weeks to hours, the limiting factor becomes architecture, not keystrokes.
Krispin described his own role evolving from writing code to being “more architectural”—designing processes rather than individual functions, and building systems that remain debuggable despite rapid code generation. This is not a cosmetic change. Faster creation can produce sprawling, fragile complexity unless teams enforce structure: clear ownership, conventions, testing, and observability.
Jaya Gupta pushed the argument into organizational design. As agents become prolific producers of output, the scarce resource becomes human judgment—people who can audit whether results are true, relevant, and safe to act on. She predicted that humans will increasingly become “verifiers,” and that being literate in data and AI will matter “a 100x more.” The panel’s logic is straightforward: probabilistic systems can be persuasive even when wrong, so expertise is what turns AI from a risk into a multiplier.
That expertise includes domain context. Stakeholders often understand the business better than the data team; the new workflow asks them to participate in tuning, validation, and feedback loops rather than passively consuming reports. Leskovec summed up the direction as “moving up the stack”: defining variables, ensuring quality, translating business problems into predictive questions, and connecting outputs to decisions. For viewers wondering what will happen to data analysts and data scientists with AI, the panel’s closing message was unambiguous: in an agent-driven world, craft knowledge is the leverage that lets you correct, constrain, and ultimately trust what the systems produce.
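The “verifier” role Gupta predicts can be operationalized as a simple routing rule: act automatically only when a model's confidence clears a risk-appropriate threshold, and queue everything else for a human. A minimal sketch—the threshold value is an illustrative policy choice, not a recommendation from the panel:

```python
def route_prediction(score: float, threshold: float = 0.9) -> str:
    """Decide whether a probabilistic output is safe to act on automatically.

    score: model confidence in [0, 1]. The threshold is a per-decision
    policy a data team would tune to the stakes involved, not a constant.
    """
    if not 0.0 <= score <= 1.0:
        raise ValueError("confidence score must be in [0, 1]")
    return "auto_act" if score >= threshold else "human_review"

# High-confidence cases flow through; ambiguous ones wait for a verifier.
routes = [route_prediction(s) for s in (0.97, 0.55, 0.91)]
```

Even this toy version encodes the panel's thesis: the human is not removed from the loop but repositioned at the point of highest leverage, auditing the cases where a persuasive-but-probabilistic system is most likely to be wrong.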