
Build An AI-Native Enterprise

February 2026

Summary

This session is for founders, product leaders, and operators who want a clear view of what it takes to build (or rebuild) a company where AI is central to how value is created and how work gets done—instead of layering AI onto existing workflows.

AI-native thinking starts with a simple test: does the business still make sense if AI is removed? If the answer is no, then AI is not a feature—it’s the engine. The discussion drew a useful line between AI as a customer-facing product capability and AI as the internal operating system that runs finance, HR, marketing, product, and support. That distinction matters because many “AI transformations” stall when teams treat AI as a tool add-on rather than redesigning decisions, workflows, and accountability around it.

The panelists differed on whether established firms can truly become AI-native. Technical feasibility is not the same as organizational reality; legacy process ownership, technical debt, and cultural resistance often keep companies in AI-assisted mode. Still, there is a practical middle path: build AI-native pockets—new products, teams, or lines of business—that can grow faster than the legacy core and eventually reshape it.

On the technology side, the conversation emphasized capabilities over tools: design systems so you can swap models without reworking everything, and protect metadata—the map of a company’s business processes. On hiring and culture, the focus shifted from narrow roles to curiosity, comfort with ambiguity, and the ability to multiply impact with AI—while keeping humans accountable for judgment, relationships, and the final decision.

Key Takeaways:

  • AI native is not “adding ChatGPT”; it’s when the business or operating model “doesn’t make sense without AI.”
  • Expect most incumbents to become AI-assisted, not AI-native—unless they build new AI-native capabilities that eventually dominate the company.
  • Design for model churn: prioritize modular architectures so new models create value without constant re-engineering.
  • Metadata is sensitive IP; controlling how agents handle and retain it is a core security and trust issue.
  • Humans remain essential for accountability, taste/judgment, and relationship-driven work—AI increases execution speed, not responsibility.

Deep Dives

1) What “AI Native” Actually Means (and Why the Buzzword Test Matters)

The term “AI native” is often used loosely, but the panel offered a definition that is stricter and more useful than typical marketing copy. Tadas Jucikas put it plainly: “AI native doesn’t mean that you just added like ChatGPT to your product.” The sharper criterion is economic: “If you are an AI native company, your company doesn’t make sense without AI.” In this framing, AI is not an enhancement. It is the mechanism by which the company creates value, scales delivery, and differentiates itself.

Serge Gershkovich clarified the picture by separating the product from the business that builds it. A company might sell something “plain vanilla” to customers while running internally through automated processes and agents. His analogy—Amazon as a “web-native” bookstore versus a traditional retailer that merely added a website—highlighted that “native” describes the underlying structure, not the surface offering. It is a reminder that an AI-native operating model can show up in how the company runs, even if the product itself looks familiar.

Elizabeth McCalley emphasized that AI nativeness spans “people, process, and technology,” with culture doing as much work as software. The key shift is that AI becomes part of how ideas are formed, evaluated, shipped, and improved—an expectation across the workflow rather than a specialist function. Her operational rule of thumb was direct: assume every workflow should use AI in some way, then build the habits and governance that make that real. The implication is that “AI native” is less a one-time switch and more an operating discipline: repeated decisions that push work toward automation, learning, and iteration.

What makes this definition actionable is that it avoids half measures. If the company could still run comfortably without AI, it may be using AI well—but it is likely AI-assisted, not AI-native. That distinction is not academic; it changes how aggressively a team redesigns workflows, how it invests in data and tooling, and how it measures success (for example: speed, quality, and error rates). The full conversation adds nuance to where that line sits—and why many teams misread which side they’re on.

2) Can an Existing Company Become AI Native—or Only AI Assisted?

The panel did not offer a comforting answer, but it did offer a realistic one. In theory, established companies can “become AI native.” In practice, they run into the weight of what already exists: technical debt, process debt, and what Elizabeth McCalley called “cultural debt.” Serge Gershkovich described the human dimension—ownership, hierarchy, and ego—that makes it hard to unwind legacy ways of working. The friction is rarely about whether automation is possible; it is about whether the organization can accept the redistribution of work and status that automation creates.

Serge’s most practical starting point was to automate the “boring stuff”—the high-volume, low-judgment tasks that few people feel emotionally attached to. Those are the easiest wins because they remove repetitive work without forcing immediate identity fights about who “owns” a process. A useful prompt for teams is: “Why isn’t AI used here yet?” If the answer is “because nobody had time,” “because it’s manual,” or “because it’s inconsistent,” that task is often a strong candidate. The deeper transformation comes later, when automation starts to touch tasks that used to define a team’s value. At that point, AI adoption becomes a governance and change-management problem, not a tooling problem.

Elizabeth took a harder line: “I don’t believe that companies can be AI native unless they’re building from scratch the whole company… There’s no bolt on AI. The architecture is the architecture of the company.” Her argument is structural. AI is not a module you attach; it is an operating architecture that shapes how decisions are made, how work flows, and how the organization scales. Large firms can fund experiments, but their complexity makes true reinvention slow and politically expensive.

Tadas offered a middle path that may be the most actionable for incumbents: most will not become fully AI native, but they can build AI-native “components” that grow larger than the original business. A new AI-native product line, service model, or internal capability can become the growth engine, gradually rebalancing the company around AI-enabled processes. This is not a quick flip; it’s a portfolio strategy. It also reframes the competitive threat: the real risk is not that a rival adds AI features, but that a smaller, faster player rebuilds the value chain around AI and changes cost structures entirely.

If you want the unvarnished version of what blocks transformation—and what actually moves the needle—this segment of the session is where the details surface.

3) The AI-Native Stack: Capabilities Over Tools, and Why Metadata Becomes the Battleground

The conversation about tooling avoided the usual “best stack” checklist, and for good reason: the ecosystem is moving too quickly for any fixed set of products to stay current. The more durable guidance was architectural. Tadas urged teams to focus on capabilities rather than specific tools, and to assume that models will improve—sometimes fast enough to make careful model-specific work obsolete. The practical implication is to avoid building systems that are tightly coupled to one model’s quirks. If the model can be swapped by “changing the API key,” as Serge put it, your system should be designed so that swap is routine rather than disruptive.
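The "swap by changing the API key" idea can be made concrete with a thin abstraction layer: application code depends on a narrow completion interface, and the concrete model is selected by configuration. This is a minimal sketch of that pattern; the names (`ChatModel`, `MODEL_REGISTRY`, the stubbed vendors) are illustrative assumptions, not anything prescribed in the session, and real entries would wrap vendor SDKs.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class ChatModel:
    """Narrow interface the rest of the codebase targets."""
    name: str
    complete: Callable[[str], str]  # prompt -> completion

# Registry of interchangeable backends. Stubbed here so the wiring is
# visible without network calls; production entries would call vendor SDKs.
MODEL_REGISTRY: Dict[str, ChatModel] = {
    "vendor-a": ChatModel("vendor-a", lambda p: f"[vendor-a] {p}"),
    "vendor-b": ChatModel("vendor-b", lambda p: f"[vendor-b] {p}"),
}

def get_model(config: Dict[str, str]) -> ChatModel:
    # Swapping models becomes a config edit, not a code change.
    return MODEL_REGISTRY[config["model"]]

def summarize(text: str, model: ChatModel) -> str:
    # Application logic never imports a vendor SDK directly,
    # so model churn doesn't ripple through the codebase.
    return model.complete(f"Summarize: {text}")

if __name__ == "__main__":
    print(summarize("quarterly report", get_model({"model": "vendor-a"})))
    print(summarize("quarterly report", get_model({"model": "vendor-b"})))
```

The design choice is simply dependency inversion: because every call site targets `ChatModel` rather than a specific provider, a new model is one registry entry and one config change away.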

Yet the panel’s most specific and consequential technical point was not about models at all. It was about metadata—what your tables are called, how systems are structured, and what workflows produce. Elizabeth warned that “renting agents” can become “the new shadow IT, but way worse,” because third-party tools may “scrape your metadata,” turning a company’s operational plan into someone else’s training signal. In an AI-native environment, outputs and traces are not throwaway byproducts; they are strategic assets that reveal priorities, processes, and patterns.

Serge grounded this in SQL DBM’s reality as a data-modeling platform: even when you “only” see table structures rather than raw data, clients are intensely protective because metadata is “the skeleton of your business process.” It describes how the company works. That is why AI features—especially those that learn across users—raise immediate questions about cross-contamination between clients, leakage, retention, and control. A real differentiator may belong to companies that can prove strong boundaries: what stays private, what is stored, and what is allowed to improve shared systems.

For teams building AI-native operations, this creates a design mandate: treat metadata governance as a first-class requirement, not a late-stage compliance task. The right question isn’t “Which agent platform should we buy?” but “What information will this system generate, who owns it, where does it flow, and can we swap models without vendor lock-in?” Only after that is answered does the rest of the stack discussion become meaningful.
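One way to act on "metadata is the skeleton of your business process" is to pseudonymize schema metadata before it crosses a trust boundary, keeping the reverse mapping in-house. The sketch below is an illustrative assumption of how such a boundary control could look; the function and token names are invented here, not drawn from the session or any specific vendor.

```python
import hashlib
from typing import Dict, List, Tuple

def pseudonymize_schema(
    schema: Dict[str, List[str]],
) -> Tuple[Dict[str, List[str]], Dict[str, str]]:
    """Replace real table/column names with opaque tokens before a schema
    is shared with a third-party agent. The token -> name mapping never
    leaves the boundary, so answers can be de-aliased internally."""
    mapping: Dict[str, str] = {}

    def token(name: str, kind: str) -> str:
        t = f"{kind}_{hashlib.sha256(name.encode()).hexdigest()[:8]}"
        mapping[t] = name  # retained internally only
        return t

    redacted = {
        token(table, "t"): [token(f"{table}.{col}", "c") for col in cols]
        for table, cols in schema.items()
    }
    return redacted, mapping

if __name__ == "__main__":
    redacted, mapping = pseudonymize_schema(
        {"customers": ["email", "churn_score"]}
    )
    # Business vocabulary ("customers", "churn_score") stays inside;
    # only opaque tokens are visible to the external tool.
    print(redacted)
```

The point of the sketch is the governance posture, not the hashing: what leaves the company is structurally useful to the agent but semantically stripped, and the organization, not the vendor, holds the key to reconstruct meaning.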

The session’s strongest value here is its realism: AI-native tooling decisions are increasingly security, IP, and architecture decisions presented as product choices.

4) Culture, Process Design, and Hiring: Where Humans Still Matter (and What Changes First)

If AI is going to sit at the center of work, the hard part is not prompting—it is operating. Elizabeth described AI nativeness as cultural infrastructure: curiosity, experimentation, and the discipline to “show your work.” That ethos is both practical and political. When AI contributes to outputs, teams need shared visibility into how conclusions were formed, what inputs were used, and where uncertainty remains. This is not bureaucracy for its own sake; it is how trust holds up in a system that can move faster than human review.

A key reframing was her rejection of the passive tone in “human in the loop.” She argued that it misstates agency: people are not bystanders supervising AI; they are responsible actors using AI as part of an expert process. That reorientation matters because it keeps accountability where it belongs—inside the organization—rather than outsourcing it to a model’s disclaimer.

Serge named three areas where humans remain essential: accountability, taste/judgment, and relationships. AI cannot be held responsible for outages, bad decisions, or reputational damage. It also does not reliably produce strong creative direction without human selection and refinement—a point he illustrated with the familiar experience of asking for “10 blog title ideas” and finding only one worth keeping. Relationships, meanwhile, remain stubbornly human: trust, persuasion, and the rapport that closes deals or calms customers are still hard to automate well.

In hiring, this shifts what “talent” looks like. Tadas argued the highest-ROI hire today is someone who can take “one idea and 10x it with AI,” regardless of domain—paired with systems thinking, high agency, curiosity, and comfort with ambiguity. Elizabeth echoed this, emphasizing broad problem framing over narrow role execution. Her most unusual tactic was evaluative: paid work sprints that test real collaboration and communication rather than polished interview performance.

The throughline is clear: AI-native companies do not simply adopt tools; they recruit and reward a different operating style. If you’re wondering what that looks like in daily practice—and how to assess it without being fooled by AI-assisted resumes—the full session is worth watching closely.
