
Understanding LLM Inference: How AI Generates Words

Key Takeaways:
  • Learn how large language models generate text.
  • Understand how changing model settings affects output.
  • Learn how to choose the right LLM for your use cases.
Tuesday, April 23, 11 AM ET

Register for the webinar


Description

In the last eighteen months, large language models (LLMs) have become commonplace. For many people, simply being able to use AI chat tools is enough, but for data and AI practitioners it is helpful to understand how the models themselves work.

In this session, you'll learn how large language models generate words, a process known as "inference." Our two experts from NVIDIA will present the core concepts of how LLMs work, then show how large-scale "foundation" LLMs are developed. You'll also see how changes to model parameters and settings affect the output.
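To give a rough, self-contained flavor of what "inference" means here, the sketch below samples a next token from a toy four-word vocabulary with made-up logits; the vocabulary, logit values, and function names are illustrative assumptions, not material from the webinar. It shows how one common setting, temperature, reshapes the next-token probability distribution:

```python
import math

def softmax(logits, temperature=1.0):
    # Illustrative sketch: scale logits by temperature, then
    # normalize into a probability distribution.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token logits for a tiny four-word vocabulary.
vocab = ["the", "cat", "sat", "mat"]
logits = [2.0, 1.0, 0.5, 0.1]

# Low temperature sharpens the distribution (output is more deterministic)...
cold = softmax(logits, temperature=0.5)
# ...high temperature flattens it (output is more varied).
hot = softmax(logits, temperature=2.0)

# Greedy decoding ignores sampling and always picks the most likely token.
greedy = vocab[max(range(len(logits)), key=logits.__getitem__)]
```

In a real LLM the vocabulary has tens of thousands of tokens and the logits come from a neural network, but the final sampling step works on the same principle.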

Presenter Bio

Kyle Kranen, Manager of Deep Learning Algorithms at NVIDIA

Kyle is an engineering leader focused on the intersection of deep learning, real-world applications, and production. He leads a team developing NVIDIA foundation models for enterprise generative AI. His research includes optimization of LLMs and building software tools for AI developers.

Mark Moyou, PhD, Senior Data Scientist & Solutions Architect at NVIDIA

Mark is a customer-facing data scientist who uses machine learning to solve business problems. His work involves helping companies with their AI strategy, as well as with search and recommendation engines. Mark hosts the Caribbean Tech Pioneers and AI Portfolio podcasts and is the director of the Southern Data Science Conference.
