Cleaning Data with PySpark
Learn how to clean data with Apache Spark in Python.
4 hours · 16 videos · 53 exercises
Course Description
Working with data is tricky, and working with millions or even billions of rows is worse. Did you receive some data processing code written on a laptop with fairly pristine data? Chances are you've been put in charge of moving a basic data process from prototype to production. You may have worked with real-world datasets, with missing fields, bizarre formatting, and orders of magnitude more data. Even if this is all new to you, this course helps you learn what's needed to prepare data processes using Python with Apache Spark.
You’ll learn terminology, methods, and some best practices to create a performant, maintainable, and understandable data processing platform.
In the following tracks: Big Data with PySpark

1. DataFrame details
Free. A review of DataFrame fundamentals and the importance of data cleaning.
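As a taste of those fundamentals, here is a minimal sketch of defining an explicit schema and importing a CSV file with PySpark. The file name and column names are hypothetical stand-ins, not the course's actual dataset.

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

spark = SparkSession.builder.appName('cleaning').getOrCreate()

# An explicit schema skips Spark's inference pass and flags bad rows early
schema = StructType([
    StructField('name', StringType(), nullable=False),
    StructField('age', IntegerType(), nullable=False),
    StructField('city', StringType(), nullable=True),
])

# 'people.csv' and its columns are hypothetical stand-ins
people_df = spark.read.csv('people.csv', header=True, schema=schema)
people_df = people_df.dropna()  # drop rows with missing fields
people_df.show(5)
```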
2. Manipulating DataFrames in the real world
A look at various techniques to modify the contents of DataFrames in Spark; a brief code sketch follows the exercise list below.
- DataFrame column operations (50 xp)
- Filtering column content with Python (100 xp)
- Filtering Question #1 (50 xp)
- Filtering Question #2 (50 xp)
- Modifying DataFrame columns (100 xp)
- Conditional DataFrame column operations (50 xp)
- when() example (100 xp)
- When / Otherwise (100 xp)
- User defined functions (50 xp)
- Understanding user defined functions (50 xp)
- Using user defined functions in Spark (100 xp)
- Partitioning and lazy processing (50 xp)
- Adding an ID Field (100 xp)
- IDs with different partitions (100 xp)
- More ID tricks (100 xp)
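A rough sketch of the column operations this chapter covers: filtering, a conditional column with when()/otherwise(), a user defined function, and ID generation. The DataFrame contents and column names are invented for illustration.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, when, udf, monotonically_increasing_id
from pyspark.sql.types import StringType

spark = SparkSession.builder.appName('dataframes').getOrCreate()
voter_df = spark.createDataFrame(
    [('Councilmember', 'Jane Doe'), ('Mayor', None)],
    ['TITLE', 'VOTER_NAME'])

# Filter out rows where the name column is null
voter_df = voter_df.filter(col('VOTER_NAME').isNotNull())

# Conditional column via when() / otherwise()
voter_df = voter_df.withColumn(
    'role', when(col('TITLE') == 'Mayor', 'leadership').otherwise('member'))

# A user defined function runs arbitrary Python per row (slower than built-ins)
upper_udf = udf(lambda s: s.upper(), StringType())
voter_df = voter_df.withColumn('NAME_UPPER', upper_udf(col('VOTER_NAME')))

# Generated IDs are unique but not consecutive; values depend on partitioning
voter_df = voter_df.withColumn('ROW_ID', monotonically_increasing_id())
voter_df.show()
```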
3. Improving Performance
Improve data cleaning tasks by increasing performance or reducing resource requirements; a short sketch of these techniques follows the exercise list below.
- Caching (50 xp)
- Caching a DataFrame (100 xp)
- Removing a DataFrame from cache (100 xp)
- Improve import performance (50 xp)
- File size optimization (50 xp)
- File import performance (100 xp)
- Cluster configurations (50 xp)
- Reading Spark configurations (100 xp)
- Writing Spark configurations (100 xp)
- Performance improvements (50 xp)
- Normal joins (100 xp)
- Using broadcasting on Spark joins (100 xp)
- Comparing broadcast vs normal joins (100 xp)
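A short sketch of the techniques above, assuming an invented large flights_df and a small airports_df lookup table: reading and writing a Spark configuration, caching, and a broadcast join.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName('performance').getOrCreate()

# Reading and writing Spark configurations
print(spark.conf.get('spark.sql.shuffle.partitions'))
spark.conf.set('spark.sql.shuffle.partitions', '500')

# Invented stand-ins: a large table and a small lookup table
flights_df = spark.range(100000).withColumnRenamed('id', 'flight_id')
airports_df = spark.createDataFrame(
    [(1, 'DFW'), (2, 'ORD')], ['flight_id', 'airport'])

# Cache a DataFrame that will be reused across several actions
flights_df.cache()
print(flights_df.is_cached)  # True

# broadcast() ships the small table to every worker, avoiding a shuffle
joined_df = flights_df.join(broadcast(airports_df), on='flight_id')
joined_df.show(5)

# Remove the DataFrame from cache once it is no longer needed
flights_df.unpersist()
```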
4. Complex processing and data pipelines
Learn how to process complex real-world data using Spark and the basics of pipelines; a condensed pipeline sketch follows the exercise list below.
- Introduction to data pipelines (50 xp)
- Quick pipeline (100 xp)
- Pipeline data issue (50 xp)
- Data handling techniques (50 xp)
- Removing commented lines (100 xp)
- Removing invalid rows (100 xp)
- Splitting into columns (100 xp)
- Further parsing (100 xp)
- Data validation (50 xp)
- Validate rows via join (100 xp)
- Examining invalid rows (100 xp)
- Final analysis and delivery (50 xp)
- Dog parsing (100 xp)
- Per image count (100 xp)
- Percentage dog pixels (100 xp)
- Congratulations and next steps (50 xp)
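A condensed sketch of such a pipeline, assuming a hypothetical tab-separated annotations file: remove commented lines, drop invalid rows, split the remainder into typed columns, and validate rows via a join. The file name, field layout, and folder values are illustrative only.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, size, split

spark = SparkSession.builder.appName('pipeline').getOrCreate()

# 'annotations.txt' and its tab-separated, five-field layout are hypothetical
raw_df = spark.read.text('annotations.txt')  # one row per line, column 'value'

# Remove commented lines
df = raw_df.filter(~col('value').startswith('#'))

# Remove invalid rows: keep only lines with the expected number of fields
df = df.filter(size(split(col('value'), '\t')) == 5)

# Split each line into typed columns
parts = split(col('value'), '\t')
df = df.select(
    parts.getItem(0).alias('folder'),
    parts.getItem(1).alias('filename'),
    parts.getItem(2).cast('int').alias('width'),
    parts.getItem(3).cast('int').alias('height'),
)

# Validate rows via join against a DataFrame of known-good folder names
valid_folders_df = spark.createDataFrame([('02085620',), ('02085782',)], ['folder'])
validated_df = df.join(valid_folders_df, on='folder')
validated_df.show(5)
```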
Datasets
Dallas Council Votes
Dallas Council Voters
Flights - 2014
Flights - 2015
Flights - 2016
Flights - 2017

Collaborators
Mike Metzger
Data Engineer Consultant @ Flexible Creations
Join over 14 million learners and start Cleaning Data with PySpark today!