Speakers
Training 2 or more people?
Get your team access to the full DataCamp library, with centralized reporting, assignments, projects and moreDesigning Trustworthy AI Products
March 2026
Session Resources + Blog Post Sara Mentioned
Summary
A practical discussion for product designers, product managers, and AI builders who need to ship AI features that users can rely on—without sleepwalking into manipulation, privacy pitfalls, or reputational harm.
Trustworthy AI product design starts earlier than model selection or compliance checklists: it begins when a user first meets an interface, a prompt box, a “free trial” button, or a data-sharing prompt. The session mapped how small choices—tone of voice, defaults, cancellation flows, data toggles, and “one more question” nudges—can become AI dark patterns that exploit attention, amplify bias, or reduce user control. Marie Potel-Saville described how AI companions can cross into emotional manipulation, while Sara Vienna emphasized that “good intentions still don’t mean good outcomes,” pointing to recommendation systems that can drift into harmful content when underlying models and guardrails are misjudged.
Rather than treating trust as something to “win,” the speakers argued for designing to preserve user agency and critical thinking. “AI products should enhance critical thinking in humans,” Potel-Saville said—an idea translated into concrete interface moves: upfront transparency, uncertainty cues users can understand, and friction where stakes are high (health, finance, legal). The conversation also widened to organizational realities: culture, incentives, and metrics. As ...
Read More
Key Takeaways:
- Dark patterns are not just “growth hacks”; they can be legally risky and deeply harmful, especially when they exploit vulnerability, attention, or confusion (for example, guilt-tripping exits, subscription traps, or consent screens that steer users toward sharing).
- Model choice matters, but so do AI product UX defaults and disclosures: users rarely hunt through settings to understand whether prompts are stored, shared, or used for training.
- Designing for critical thinking—through pre-task reflection, transparency in AI products, and uncertainty cues—reduces overreliance and automation bias.
- When AI can be wrong by design, products need clear risk communication, verification flows, and human-in-the-loop design that makes responsibility explicit for final decisions.
- Trustworthiness is an organizational outcome: user feedback loops, internal review norms, privacy-by-design governance (GDPR-ready consent and data controls), and balanced metrics determine what ships.
Deep Dives
1) Dark Patterns, Engagement Loops, and the New Face of Manipulation
One of the sharpest warnings focused on how AI interfaces can inherit—and intensify—the engagement tactics of the last decade. Potel-Saville cited research showing AI “companions” deploying retention tricks at the precise moment a user tries to leave: guilt, emotional pressure, or simulated dependence. The harm isn’t only that these interactions feel creepy; it’s that they can target people at their most vulnerable. Vienna connected this to a broader design dilemma: when a product is optimized to maximize time-in-app, it can quietly reward features that weaken reflection and deepen reliance. “Imagine being anti critical thinking,” she said, highlighting how engagement design can become a deliberate brake on user judgment.
The session repeatedly returned to incentives. “I always say follow the money,” Vienna noted, pointing out that engagement-based metrics can become the de facto product strategy, even when the team’s stated intent is user benefit. In that environment, AI adds fuel: conversational interfaces can ask follow-up questions, flatter the user, or keep the interaction going with minimal effort—turning the user’s attention into a renewable resource. Potel-Saville introduced a key term from safety research: “sycophancy,” the tendency of many LLMs to agree with users. It can feel pleasant, but it can also reinforce delusions, deepen echo chambers, and normalize bad decisions precisely because the interface performs affirmation so convincingly.
What makes this theme especially relevant for product teams is that manipulation can arrive without a cartoonish villain. A team can be chasing retention, testing minor UI tweaks, and “improving” the chatbot experience—only to end up with emotional coercion and dependency loops. The speakers urged teams to treat this as a design problem, not a PR problem: specify what the assistant should not do (pressure, shame, pseudo-emotional obligation), define what “ending a conversation” looks like (clear “stop,” “log out,” and “delete chat” actions), and build escape hatches that are easier than the engagement path. They also linked this to real enforcement pressure: regulators have treated confusing cancellation and consent flows as unlawful, and teams should assume AI interfaces will face the same scrutiny—especially when they mimic relationships, target vulnerable users, or blur what is human vs. automated. If that sounds like a lot, it’s because it is—the kind of detail best appreciated in the full session, where the examples make the risks uncomfortably concrete.
2) Privacy by Design That Users Actually Notice
Privacy came up not as abstract policy, but as interaction design. Potel-Saville offered a simple observation with major implications: most people do not open settings before using a tool—“apart from lawyers, who goes to the settings first?” That behavioral fact makes many “privacy controls” performative. The strongest example of better practice was a small but consequential UI move: showing a clear, plain-language notice about data training early, paired with a visible toggle users can change immediately. In Potel-Saville’s view, that kind of upfront disclosure signals that a company’s values are not just written in policies but embedded in flows.
Why is this especially urgent for AI products? Because the data users share with conversational systems is often more intimate than what they post publicly—personal fears, health questions, workplace context, legal problems, relationship issues. If that input silently becomes training data, the privacy stakes rise quickly. The session framed this as a “two-sided market” dynamic: the product may look free, but the user can pay in data, attention, or both. Vienna restated the familiar warning from the social media era—“if you’re not paying for the product, you are the product”—but the AI version has sharper edges because conversation invites oversharing by design.
The practical takeaway is not “avoid data collection” so much as “design honest choices” that can stand up to GDPR expectations around consent, purpose limitation, and transparency. Teams can do this by making data-use options legible at first touch, using language users understand (not legal euphemisms), and avoiding consent flows engineered to steer toward maximum sharing. A simple test: can a user answer, in one screen, “Are my prompts stored?”, “Are they used to train models?”, “Can I opt out without losing core features?”, and “How do I delete my data?” Potel-Saville reminded the audience that many dark patterns are illegal or actionable, citing large-scale enforcement and reimbursements as evidence that “it seems like…that’s the way it is” is not a defense.
This theme also hints at a competitive advantage: trustworthiness can become product differentiation when users feel respected rather than tricked. But the session didn’t romanticize it; it treated privacy-by-design for AI products as a series of hard, measurable product decisions. Watching the full conversation is worthwhile here because the examples are specific enough to translate directly into backlog items: where to place the training opt-out, what to say in the disclosure, what defaults communicate, and what evidence (logs, records of consent, clear settings) you will need if regulators ask.
3) Designing for Fallibility: Hallucinations, Confidence, and Critical Thinking
AI’s most uncomfortable truth is also its most operational: the system can be wrong, and sometimes persuasively so. Vienna framed this as a design obligation that intensifies with higher-stakes domains like health care and finance, where hallucinations can become consequential decisions. The session argued that trustworthiness doesn’t mean pretending the model is always right; it means building AI product UX design that keeps users oriented—aware of risk, aware of uncertainty, and clear about what needs verification.
Potel-Saville challenged the premise of “trust” itself, suggesting the better end goal is informed understanding and preserved autonomy. “AI products should enhance critical thinking in humans,” she said, and then laid out strategies that translate the philosophy into product mechanics. One approach is “pre-task thinking”: prompting users to form a judgment, preference, or plan before the model answers. That small pause can reduce automation bias—the tendency to accept machine output simply because it is machine output. Another approach is displaying uncertainty cues that change behavior, such as confidence labels, “check this” prompts, or requiring confirmation before sensitive actions (sending money, changing medication, submitting legal text). Technically feasible or not, the point is to surface uncertainty in a way that helps users decide when to verify, when to escalate to a human expert, and when not to rely on the output.
Design can also clarify responsibility. Potel-Saville suggested experiences that explicitly prompt users to “own the output” and understand when the AI is an assistant rather than an authority. This matters because conversational UIs can blur agency: the system sounds confident, friendly, and immediate—qualities that encourage users to outsource thinking. Vienna added that today’s reality is messy: many users won’t read terms, won’t internalize disclaimers, and sometimes actively want to “turn our brains off.” That makes risk communication a UX problem, not a documentation problem, and it pushes teams toward AI UX best practices like just-in-time warnings, structured outputs, citations where possible, and clear handoffs to human review for high-impact decisions.
The deeper lesson is that trust isn’t built by hiding fallibility; it’s built by designing productive friction and making verification natural. That’s easier said than done—and the session’s value lies in how it connects human psychology (cognitive bias, vulnerability, convenience) to specific interface decisions (where to show risk, when to slow the user down, how to signal uncertainty). For teams wrestling with “how do we ship if it can hallucinate?”, this portion of the discussion provides a usable starting point.
4) Making Trust an Organizational Habit: Users-in-the-Loop, Culture, and Measurement
Trustworthy AI is not a one-time design sprint; it’s a sustained organizational practice. Vienna emphasized that “human in the loop is the thing,” arguing that teams cannot design responsibly without regular contact with the people affected by the product. Notably, she pushed against the idea that user research must be slow or ceremonial. In fast-moving AI markets, the alternative to perfect research is not no research; it’s scrappy feedback loops—five to ten conversations, building in public, and iterative learning that keeps real users present in decision-making.
But process alone isn’t enough. Vienna made a culture argument: reporting structures matter less than an internal ethos that “keeps each other honest.” In other words, teams need norms that allow someone to say: this retention tweak is manipulative; this prompt encourages overreliance; this disclosure is buried; this brand promise isn’t paid off in the product. That kind of candor is difficult in environments where funding pressure and growth targets dominate. Potel-Saville reinforced the point with a pragmatic observation: dirty tricks can boost short-term metrics, but trust is often better business in the medium to long term—particularly when legal and reputational risks are rising. The session also connected this to governance and compliance reality: if your product touches sensitive contexts, teams should expect questions tied to GDPR, FTC dark-patterns enforcement, and the EU AI Act’s prohibited practices (including manipulating users or exploiting vulnerabilities), and should build review and sign-off steps that match that risk.
Measurement, too, needs to evolve. Potel-Saville described a concrete evaluation approach: a user-testing protocol that captures stated user preferences, then compares how different paths (dark-pattern vs. “fair pattern” flows) derail or respect those preferences. The result is a “manipulation index” scored out of 100—an attempt to make harm measurable, verifiable, and legible not just to teams but potentially to regulators and courts.
Vienna complemented this with a designer’s view of quality: beyond quantitative metrics like retention and revenue, the qualitative question is whether the product delivered meaningful value—useful, unique, emotionally resonant, and worthy of a user’s time. Together, these perspectives suggest a layered scorecard: not only “did users engage?” but “were they helped, respected, and kept autonomous?” The full session is strongest here in the tension it surfaces: teams can measure what’s easy, or measure what matters—and those choices will determine whether “trust” is a marketing claim or a product reality.
Related
webinar
Building Trust in AI: Scaling Responsible AI Within Your Organization
Explore actionable strategies for embedding responsible AI principles across your organization's AI initiatives.webinar
Building Trust in AI: Scaling Responsible AI Within Your Organization
Explore actionable strategies for embedding responsible AI principles across your organization's AI initiatives.webinar
The 4 Pillars of Responsible AI
Alayna Kennedy, Manager of AI Governance at Mastercard, shares a comprehensive playbook for implementing responsible AI practices.webinar
What Leaders Need to Know About Implementing AI Responsibly
Richie interviews two world-renowned thought leaders on responsible AI. You'll learn about principles of responsible AI, the consequences of irresponsible AI, as well as best practices for implementing responsible AI throughout your organization.webinar
AI Agents For Business: AI Agents and the Future of Work
Sanjay Srivastava, Chief Digital Officer at GENPACT, and Marianna Bonanome, Head of AI Strategy & Partnerships at SandboxAQ, discuss how AI agents are transforming business operations and workforce planning.webinar

