Course
Google DeepMind: Fine-Tune Your Model
Skill level: Intermediate
Updated 04/2026 · Google Cloud · 8 hours · 40 exercises · 2,000 XP · Certificate of completion
Course description
Prerequisites
There are no prerequisites for this course.

1. Introduction to fine-tuning
In this module, you will explore the motivation for fine-tuning. Even when a pre-trained large language model is available, it may not always do exactly what you want it to do. Here, you will investigate the capabilities and limitations of a pre-trained language model to better understand why it is necessary to fine-tune models to new tasks.
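As a taste of what this probing looks like in practice, here is a minimal sketch using the Hugging Face transformers library, with GPT-2 as a stand-in checkpoint; the course's own model and prompts may differ.

```python
# A minimal sketch of probing a pre-trained model's capabilities,
# assuming the Hugging Face transformers library; "gpt2" is a
# stand-in for the course's model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# A pre-trained model continues text plausibly...
print(generator("The capital of Kenya is", max_new_tokens=10)[0]["generated_text"])

# ...but it was not trained to follow task instructions, so a
# flashcard-style request often yields an unhelpful continuation.
print(generator("Write a revision flashcard about photosynthesis:",
                max_new_tokens=30)[0]["generated_text"])
```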
2. Formatting
In this module, you will appreciate the power and importance of formatting for fine-tuning large language models. You will explore the various formats required to achieve different tasks. You will pre-process a dataset derived from the Africa Galore dataset and transform it into a question-and-answer format. This will then be used in later modules to fine-tune your language model so that it can generate revision study flashcards—a task which it was not pre-trained to do.
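For illustration, here is a minimal sketch of this kind of question-and-answer pre-processing; the record schema and field names are hypothetical, since the actual Africa Galore dataset may be structured differently.

```python
# A minimal sketch of formatting raw records into Q&A training
# examples. The "topic" and "fact" fields are hypothetical.
raw_records = [
    {"topic": "Geography", "fact": "Lake Victoria is Africa's largest lake."},
]

def to_flashcard(record: dict) -> dict:
    """Turn a raw record into a question-and-answer training example."""
    return {
        "question": f"Write a revision flashcard about {record['topic']}.",
        "answer": record["fact"],
    }

formatted = [to_flashcard(r) for r in raw_records]

# Each example is then rendered into a single training string, so the
# fine-tuned model learns to see and produce this exact format.
print(f"Question: {formatted[0]['question']}\nAnswer: {formatted[0]['answer']}")
```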
3. Full-parameter fine-tuning
In this module, you will learn full-parameter fine-tuning: a straightforward method for adapting pre-trained models. You will first take the small language model you built in course 04, Discover The Transformer Architecture, and continue its training on a small, specialized dataset so that it generates the revision study flashcards you created in the previous module. This process lets you compare fine-tuning with training from scratch, observing the key similarities and differences in the development pipeline. You will also consider how AI is understood within cultural contexts by reading a story about AI and then writing your own short piece of fiction. The aim is to explore how narrative can complement anticipation and reflection in revealing cultural meanings and values.
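A minimal full-parameter training loop looks like the sketch below, assuming PyTorch and Hugging Face transformers; the tiny public checkpoint and the single hard-coded example stand in for the course's model and flashcard dataset.

```python
# A minimal sketch of full-parameter fine-tuning. "sshleifer/tiny-gpt2"
# is a stand-in for the small model built earlier in the course.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sshleifer/tiny-gpt2")
model = AutoModelForCausalLM.from_pretrained("sshleifer/tiny-gpt2")

batch = tokenizer(
    "Question: Which is Africa's largest lake?\nAnswer: Lake Victoria.",
    return_tensors="pt",
)

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()

for step in range(10):
    optimizer.zero_grad()
    # With labels equal to input_ids, the model computes the usual
    # next-token cross-entropy loss; every parameter receives
    # gradients and is updated, hence "full-parameter" fine-tuning.
    outputs = model(**batch, labels=batch["input_ids"])
    outputs.loss.backward()
    optimizer.step()
```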
4. Parameter-efficient fine-tuning
In this module, you will explore low-rank adaptation (LoRA), a more computationally efficient alternative to full-parameter fine-tuning. LoRA is a popular parameter-efficient fine-tuning (PEFT) technique. You will investigate LoRA by applying it to fine-tune the Gemma3-1B model, which has one billion parameters. This will enable you to experience first-hand how it is able to achieve excellent results with a fraction of the computational cost of full-parameter fine-tuning.
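Here is a minimal LoRA sketch using the Hugging Face peft library; the exact Gemma checkpoint identifier and the choice of target modules are assumptions to verify against the model card.

```python
# A minimal LoRA sketch with the peft library. The checkpoint name
# "google/gemma-3-1b-pt" and the target module names are assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("google/gemma-3-1b-pt")

config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor applied to the update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)

# Freezes the base weights and injects small trainable low-rank
# matrices, so only a tiny fraction of parameters is updated.
peft_model = get_peft_model(model, config)
peft_model.print_trainable_parameters()  # typically well under 1% of the total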
5. Opportunities and limitations of SFT
In this module, you will consider the limitations of supervised fine-tuning. You will then be given a brief overview of advanced techniques based on reinforcement learning (RL). This will introduce you to how these approaches can better align a model's behavior with human values and preferences.
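As a concrete illustration of the data these RL-based methods consume, here is a hypothetical preference pair of the kind used in RLHF reward modeling or direct preference optimization (DPO); the content is invented for illustration.

```python
# A hypothetical preference pair: humans (or a reward model) rank
# two candidate responses, and RL-based methods use many such pairs
# to push the model toward the preferred behavior.
preference_example = {
    "prompt": "Explain photosynthesis to a primary-school student.",
    "chosen": "Plants use sunlight, water, and air to make their own food.",
    "rejected": "Photosynthesis is the photochemical reduction of CO2 via...",
}
```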
6. Challenge
In this module, you will explore foresight and governance. You will consider how the values and meanings revealed through storytelling can inform foresight reporting and help you to design governance responses, including enforceable rules, transparency, and accountability. This equips you to think about how strong governance can protect communities, ensure equity, and align AI with societal values.
7. Continue your journey
Critically evaluate and design governance approaches that include enforcement, bright-line rules, and burden of proof.