Course
Google DeepMind: Represent Your Language Data
Skill level: Intermediate
Updated 4/2026 · Google Cloud · 4 hours · 35 exercises · 1,750 XP · Certificate of achievement
Course description
Prerequisites
There are no prerequisites for this course.
1
Introduction to text data
In this module, you will learn about the challenges that come with preparing text data so that it is in a format that machines can process. You will consider the course learning objectives and how to most effectively study them. Furthermore, you will learn how the meaning of text depends on social and cultural contexts and why this makes issues like ownership, consent, privacy, and exclusion central to building responsible datasets for LLMs.
2
Preprocessing
In this module, you will practice common automatic techniques for cleaning texts and think about where text data comes from. You will hear from Professor David Adelani about community efforts to create datasets that work well for African languages. Next, you will explore why reflecting on data sourcing, consent and ownership in the African context is crucial in preventing digital data from becoming another form of extraction. You will investigate how issues of transparency, benefit-sharing, and community control shape ethical questions about who owns data, who profits from it, and how it can be used responsibly.
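To make this concrete, here is a minimal Python sketch of the kind of automatic cleaning steps this module covers. The helper name clean_text and the exact sequence of steps are illustrative assumptions, not the course's own code.

import re
import unicodedata

def clean_text(text):
    """Apply a few common, conservative cleaning steps to raw text."""
    # Normalize Unicode so visually identical characters compare equal.
    text = unicodedata.normalize("NFC", text)
    # Lowercase for case-insensitive downstream processing.
    text = text.lower()
    # Strip HTML tags that often survive web scraping.
    text = re.sub(r"<[^>]+>", " ", text)
    # Collapse runs of whitespace into single spaces.
    text = re.sub(r"\s+", " ", text).strip()
    return text

# "Sannu duniya" is Hausa for "hello world".
print(clean_text("  <p>Sannu   DUNIYA!</p> "))  # prints: sannu duniya!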
3
Tokenization
In this module, you will learn about different levels of granularity when splitting texts into tokens. You will first experiment with character-level and word-level tokenizers to understand their different approaches. Then, you will learn about byte pair encoding (BPE), which is a subword tokenizer. This advanced method combines the benefits of both character and word-level approaches, offering a more balanced solution. You will then move on to consider how gaps and biases in LLM training datasets can marginalize African languages and cultures, reinforcing digital exclusion. By reflecting on these disparities, you will see how inclusive data practices and community-driven initiatives are essential for building fairer, more responsible AI systems.
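As a rough illustration, the sketch below first contrasts character-level and word-level splitting, then runs a few BPE merge steps on a toy corpus. The toy word frequencies and function names are assumptions for demonstration, not the course's implementation.

from collections import Counter

text = "tokenization matters"
print(list(text)[:6])   # character-level: ['t', 'o', 'k', 'e', 'n', 'i']
print(text.split())     # word-level: ['tokenization', 'matters']

def pair_counts(vocab):
    """Count adjacent symbol pairs, weighted by word frequency."""
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_pair(pair, vocab):
    """Fuse every occurrence of the pair into a single new symbol."""
    old, new = " ".join(pair), "".join(pair)
    return {word.replace(old, new): freq for word, freq in vocab.items()}

# Toy corpus: word frequencies, with each word pre-split into characters.
vocab = {"l o w": 5, "l o w e r": 2, "n e w e s t": 6, "w i d e s t": 3}

for step in range(1, 6):
    pairs = pair_counts(vocab)
    best = max(pairs, key=pairs.get)   # most frequent adjacent pair
    vocab = merge_pair(best, vocab)
    print(f"merge {step}: {best}")

Each merge creates a new subword symbol, which is why BPE sits between the character and word levels: frequent words end up as single tokens, while rare words decompose into reusable pieces.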
4
Embeddings
In this module, you will investigate how language models represent the meaning of tokens in the form of embeddings. You will design your own “map of meaning”, experiment with Gemma’s embeddings, and learn how to visualize the token meaning representations. Finally, you will use the BPE tokenizer that you implemented in the previous module to prepare a dataset for training a small language model.
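The following minimal sketch shows the core idea behind embeddings using hand-made 4-dimensional vectors and cosine similarity. The vocabulary and vector values are invented for illustration; real models such as Gemma learn much higher-dimensional embeddings from data.

import numpy as np

# Hypothetical 4-dimensional embeddings for a toy vocabulary.
embeddings = {
    "cat": np.array([0.9, 0.1, 0.3, 0.0]),
    "dog": np.array([0.8, 0.2, 0.35, 0.05]),
    "car": np.array([0.1, 0.9, 0.0, 0.4]),
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors; 1.0 means identical direction."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Related tokens should sit closer together on the "map of meaning".
print(cosine_similarity(embeddings["cat"], embeddings["dog"]))  # ~0.99, very close
print(cosine_similarity(embeddings["cat"], embeddings["car"]))  # ~0.19, far apart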
5
Challenge
In this module, you will build on your values-led problem statement from 01 Build Your Own Small Language Model by learning how to design an ethical dataset that supports your solution. You will see how dataset choices shape fairness, representation, and accountability in AI, and why responsible innovation in Africa means creating systems that respect privacy, community ownership, and cultural heritage.
6
Continue your journey
In this module, you will have the opportunity to consult additional resources and further reading to investigate the topics you have covered in more detail. Finally, you will consider your next steps and how you can build on what you have learned in the course.