Cleaning Data with PySpark
Learn how to clean data with Apache Spark in Python.
4 hours · 16 videos · 53 exercises · 27,483 learners · Statement of Accomplishment
Course Description
Working with data is tricky, and working with millions or even billions of rows is worse. Did you receive some data processing code written on a laptop against fairly pristine data? Chances are you've been put in charge of moving a basic data process from prototype to production. You may have worked with real-world datasets, with missing fields, bizarre formatting, and orders of magnitude more data. Even if this is all new to you, this course helps you learn what's needed to prepare data processes using Python with Apache Spark.
You’ll learn terminology, methods, and some best practices to create a performant, maintainable, and understandable data processing platform.
In the following track: Big Data with PySpark

1. DataFrame details
Free. A review of DataFrame fundamentals and the importance of data cleaning.
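To make those fundamentals concrete, here is a minimal sketch of the kind of setup the chapter reviews: defining an explicit schema, loading a CSV, and applying first-pass cleaning. The file name people.csv and its columns are hypothetical examples, not course data.

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

spark = SparkSession.builder.appName("cleaning-demo").getOrCreate()

# Defining a schema up front avoids slow, error-prone type inference
schema = StructType([
    StructField("name", StringType(), nullable=False),
    StructField("age", IntegerType(), nullable=True),
    StructField("city", StringType(), nullable=True),
])

# "people.csv" is a hypothetical file used for illustration
df = spark.read.csv("people.csv", header=True, schema=schema)

# Two of the most common first-pass cleaning steps:
# drop exact duplicate rows and rows with a missing age
df = df.dropDuplicates().na.drop(subset=["age"])
df.show()
```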
2. Manipulating DataFrames in the real world
A look at various techniques to modify the contents of DataFrames in Spark.
- DataFrame column operations (50 xp)
- Filtering column content with Python (100 xp)
- Filtering Question #1 (50 xp)
- Filtering Question #2 (50 xp)
- Modifying DataFrame columns (100 xp)
- Conditional DataFrame column operations (50 xp)
- when() example (100 xp)
- When / Otherwise (100 xp)
- User defined functions (50 xp)
- Understanding user defined functions (50 xp)
- Using user defined functions in Spark (100 xp)
- Partitioning and lazy processing (50 xp)
- Adding an ID Field (100 xp)
- IDs with different partitions (100 xp)
- More ID tricks (100 xp)
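A short sketch of the techniques this chapter names: filtering, when()/otherwise(), a user defined function, and lazy ID generation. It continues from the hypothetical df above; the column logic is illustrative, not the course's actual exercises.

```python
from pyspark.sql import functions as F
from pyspark.sql.types import StringType

# Assumes the `df` (name, age, city) from the earlier sketch

# Filter out rows whose name is null, then normalize casing
df = df.filter(F.col("name").isNotNull()) \
       .withColumn("name", F.upper(F.col("name")))

# when() / otherwise() sets a column value conditionally
df = df.withColumn(
    "age_group",
    F.when(F.col("age") < 18, "minor").otherwise("adult")
)

# A user defined function (UDF) for logic with no built-in equivalent;
# UDFs run Python per row, so prefer native F.* functions when possible
initials = F.udf(lambda name: "".join(w[0] for w in name.split()), StringType())
df = df.withColumn("initials", initials(F.col("name")))

# monotonically_increasing_id() adds unique (not consecutive) row IDs,
# generated lazily per partition
df = df.withColumn("row_id", F.monotonically_increasing_id())
df.show()
```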
3. Improving Performance
Improve data cleaning tasks by increasing performance or reducing resource requirements.
- Caching (50 xp)
- Caching a DataFrame (100 xp)
- Removing a DataFrame from cache (100 xp)
- Improve import performance (50 xp)
- File size optimization (50 xp)
- File import performance (100 xp)
- Cluster configurations (50 xp)
- Reading Spark configurations (100 xp)
- Writing Spark configurations (100 xp)
- Performance improvements (50 xp)
- Normal joins (100 xp)
- Using broadcasting on Spark joins (100 xp)
- Comparing broadcast vs normal joins (100 xp)
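The sketch below illustrates the chapter's performance themes under the same assumptions as above (reusing the hypothetical spark session and df): caching and uncaching, Parquet import, reading and writing configuration, and a broadcast join against a small invented lookup table.

```python
from pyspark.sql.functions import broadcast

# Assumes `spark` and `df` from the earlier sketches

# Cache a DataFrame that is reused across several actions,
# then release it when done
df.cache()
print(df.count())        # the first action materializes the cache
df.unpersist()

# Parquet imports much faster than CSV: columnar, typed, no text parsing
df.write.mode("overwrite").parquet("people.parquet")
parquet_df = spark.read.parquet("people.parquet")

# Read and write cluster configuration values
print(spark.conf.get("spark.sql.shuffle.partitions"))
spark.conf.set("spark.sql.shuffle.partitions", "200")

# Broadcasting a small table copies it to every worker, avoiding a
# full shuffle join; `cities_df` is a hypothetical lookup table
cities_df = spark.createDataFrame([("Dallas", "TX")], ["city", "state"])
joined = parquet_df.join(broadcast(cities_df), on="city")
joined.show()
```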
4. Complex processing and data pipelines
Learn how to process complex real-world data using Spark and the basics of pipelines.
- Introduction to data pipelines (50 xp)
- Quick pipeline (100 xp)
- Pipeline data issue (50 xp)
- Data handling techniques (50 xp)
- Removing commented lines (100 xp)
- Removing invalid rows (100 xp)
- Splitting into columns (100 xp)
- Further parsing (100 xp)
- Data validation (50 xp)
- Validate rows via join (100 xp)
- Examining invalid rows (100 xp)
- Final analysis and delivery (50 xp)
- Dog parsing (100 xp)
- Per image count (100 xp)
- Percentage dog pixels (100 xp)
- Congratulations and next steps (50 xp)
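A minimal pipeline sketch in the spirit of this chapter: read raw text, drop commented and malformed lines, split the remainder into typed columns, and validate rows via a join. The file names, pipe delimiter, and known-good folder list are all assumptions for illustration.

```python
from pyspark.sql import functions as F

# Assumes the `spark` session from the earlier sketches.
# Hypothetical raw file: one pipe-delimited record per line,
# with '#' comment lines mixed in.
raw = spark.read.text("raw_records.txt")

# Remove commented lines, then lines that don't have exactly 3 fields
cleaned = raw.filter(~F.col("value").startswith("#")) \
             .filter(F.size(F.split(F.col("value"), r"\|")) == 3)

# Split each line into typed columns
cols = F.split(F.col("value"), r"\|")
parsed = cleaned.select(
    cols.getItem(0).alias("folder"),
    cols.getItem(1).alias("filename"),
    cols.getItem(2).cast("int").alias("size"),
)

# Validate rows via an inner join against known-good folder names;
# rows with unrecognized folders are silently dropped
valid_folders = spark.createDataFrame([("train",), ("test",)], ["folder"])
validated = parsed.join(valid_folders, on="folder", how="inner")
validated.write.mode("overwrite").parquet("validated_records.parquet")
```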
Datasets
- Dallas Council Votes
- Dallas Council Voters
- Flights - 2014
- Flights - 2015
- Flights - 2016
- Flights - 2017

Collaborators
Mike Metzger
Data Engineer Consultant @ Flexible Creations
Mike is a consultant focusing on data engineering and analysis using SQL, Python, and Apache Spark among other technologies. He has a 20+ year history of working with various technologies in the data, networking, and security space.
Join over 15 million learners and start Cleaning Data with PySpark today!