
Understanding LLM Inference: How AI Generates Words

Webinar

In the last eighteen months, large language models (LLMs) have become commonplace. For many people, simply being able to use AI chat tools is enough, but for data and AI practitioners, it is helpful to understand how they work.

In this session, you'll learn how large language models generate words. Two experts from NVIDIA will present the core concepts of how LLMs work and show how large-scale LLMs are developed. You'll also see how changes in model parameters and settings affect the output.

Key Takeaways:

  • Learn how large language models generate text.
  • Understand how changing model settings affects output.
  • Learn how to choose the right LLM for your use cases.
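To make the second takeaway concrete, here is a minimal sketch (not from the webinar itself) of how one common model setting, temperature, changes next-token sampling. The vocabulary and logit scores are made up for illustration; real LLMs apply the same softmax-with-temperature idea over tens of thousands of tokens.

```python
import math
import random

def softmax(logits, temperature=1.0):
    # Divide logits by temperature before normalizing:
    # low temperature sharpens the distribution (more deterministic),
    # high temperature flattens it (more varied output).
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token scores for a tiny vocabulary.
vocab = ["cat", "dog", "car", "tree"]
logits = [2.0, 1.5, 0.3, 0.1]

for t in (0.2, 1.0, 2.0):
    probs = softmax(logits, temperature=t)
    print(f"temperature={t}:", [round(p, 3) for p in probs])

# The model then samples the next token from this distribution.
probs = softmax(logits, temperature=1.0)
next_token = random.choices(vocab, weights=probs)[0]
```

At temperature 0.2 nearly all probability mass lands on the top-scoring token, while at 2.0 the choices are spread much more evenly, which is why lowering temperature makes generations more repeatable and raising it makes them more creative.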
Kyle Kranen

Manager of Deep Learning Algorithms at NVIDIA

Mark Moyou, PhD

Senior Data Scientist & Solutions Architect at NVIDIA
