Building Data Engineering Pipelines in Python
Learn how to build and test data engineering pipelines in Python using PySpark and Apache Airflow.
4 Hours · 14 Videos · 52 Exercises
Course Description
Build a Data Pipeline in Python
Learn how to use Python to build data engineering pipelines with this 4-hour course. In any data-driven company, you will undoubtedly cross paths with data engineers. Among other things, they facilitate work by making data readily available to everyone within the organization and may also bring machine learning models into production.
One way to speed up this process is to build an understanding of what it means to bring processes into production and what characterizes high-grade code. In this course, we’ll look at the various data pipelines data engineers build and at how some of the tools they use can help you get your models into production or run repetitive tasks consistently and efficiently.
Use PySpark to Create a Data Transformation Pipeline
In this course, we illustrate common elements of data engineering pipelines. In Chapter 1, you will learn what a data platform is and how to ingest data. Chapter 2 will go one step further with cleaning and transforming data, using PySpark to create a data transformation pipeline.
In Chapter 3, you will learn how to safely deploy code, looking at the different forms of testing. Finally, in Chapter 4, you will schedule complex dependencies between applications, using the basics of Apache Airflow to trigger the various components of an ETL pipeline on a certain time schedule and execute tasks in a specific order.
Learn How to Manage and Orchestrate Workflows
By the end of this course, you’ll understand how to build data pipelines in Python for data engineering. You’ll also know how to orchestrate and manage your workflows using DAG schedules in Apache Airflow, and how to test an Airflow deployment automatically.
1. Ingesting Data
After completing this chapter, you will be able to explain what a data platform is, how data ends up in it, and how data engineers structure its foundations. You will be able to ingest data from a RESTful API into the data platform’s data lake using a self-written ingestion pipeline built with Singer’s taps and targets. A minimal ingestion sketch follows the exercise list below.
- Components of a data platform (50 xp)
- Dashboards providing business value (50 xp)
- Snapshots in a data lake (50 xp)
- The data catalog (50 xp)
- Introduction to data ingestion with Singer (50 xp)
- Working with JSON (100 xp)
- Specifying the schema of the data (100 xp)
- Running an ingestion pipeline with Singer (50 xp)
- Properly propagating state (50 xp)
- Communicating with an API (100 xp)
- Streaming records (100 xp)
- Chain taps and targets (100 xp)
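To give a feel for what such an ingestion pipeline looks like, here is a minimal sketch of a Singer tap. The endpoint URL, stream name, and schema are hypothetical and chosen only to illustrate the write_schema/write_records pattern:

```python
# A minimal Singer "tap": describe the stream's schema, fetch records from a
# (hypothetical) REST endpoint, and emit them as Singer messages on stdout.
import requests
import singer

# Hypothetical endpoint and stream name, used for illustration only.
API_URL = "https://example.com/api/v1/products"

schema = {
    "properties": {
        "id": {"type": "integer"},
        "name": {"type": "string"},
        "price": {"type": "number"},
    }
}

# Announce the schema of the "products" stream, keyed on "id".
singer.write_schema(stream_name="products", schema=schema, key_properties=["id"])

# Fetch the records from the API and stream them as RECORD messages.
records = requests.get(API_URL).json()
singer.write_records(stream_name="products", records=records)
```

On the command line, a tap like this is typically piped into a Singer target, which writes the records to the data lake; the exact target depends on your setup.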
2. Creating a data transformation pipeline with PySpark
You will learn how to process data in the data lake in a structured way using PySpark. Of course, you must first understand when PySpark is the right choice for the job. A short transformation sketch follows the exercise list below.
- Basic introduction to PySpark (50 xp)
- Reading a CSV file (100 xp)
- Defining a schema (100 xp)
- Cleaning data (50 xp)
- Sensible data types (50 xp)
- Removing invalid rows (100 xp)
- Filling unknown data (100 xp)
- Conditionally replacing values (100 xp)
- Transforming data with Spark (50 xp)
- Selecting and renaming columns (100 xp)
- Grouping and aggregating data (100 xp)
- Packaging your application (50 xp)
- Creating a deployable artifact (100 xp)
- Submitting your Spark job (100 xp)
- Debugging simple errors (50 xp)
- Verifying your pipeline’s output (50 xp)
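As a taste of the transformation step, here is a minimal PySpark sketch that reads a CSV with an explicit schema, cleans it, and aggregates it. The file name, column names, and application name are illustrative rather than the course’s exact dataset:

```python
# Read a CSV with an explicit schema, drop invalid rows, fill unknowns,
# rename a column, and aggregate with PySpark.
from pyspark.sql import SparkSession
from pyspark.sql.functions import avg, col
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("clean_ratings").getOrCreate()

schema = StructType([
    StructField("brand", StringType(), nullable=True),
    StructField("model", StringType(), nullable=True),
    StructField("rating", DoubleType(), nullable=True),
])

ratings = spark.read.csv("ratings.csv", header=True, schema=schema)

cleaned = (
    ratings
    .filter(col("rating").isNotNull())           # remove invalid rows
    .fillna({"model": "unknown"})                # fill unknown data
    .withColumnRenamed("brand", "manufacturer")  # rename columns
)

# Group and aggregate: average rating per manufacturer.
summary = cleaned.groupBy("manufacturer").agg(avg("rating").alias("avg_rating"))
summary.show()
```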
3. Testing your data pipeline
Stating “it works on my machine” is no guarantee that it will work reliably elsewhere, or in the future: requirements for your project will change. In this chapter, we explore different forms of testing and learn how to write unit tests for our PySpark data transformation pipeline, so that we build robust and reusable components. A sample unit test follows the exercise list below.
- On the importance of tests (50 xp)
- Regression errors (50 xp)
- Characteristics of tests (100 xp)
- Writing unit tests for PySpark (50 xp)
- Creating in-memory DataFrames (100 xp)
- Making a function more widely reusable (100 xp)
- Continuous testing (50 xp)
- A high-level view on CI/CD (100 xp)
- Understanding the output of pytest (50 xp)
- Improving style guide compliancy (100 xp)
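Below is a sketch of such a unit test, runnable with pytest. The function prepare_ratings is a hypothetical stand-in for one of your own transformation steps; the point is that the input DataFrame is built in memory, so the test needs no files from the data lake:

```python
# A pytest-style unit test for a PySpark transformation, using an
# in-memory DataFrame instead of reading from the data lake.
from pyspark.sql import Row, SparkSession
from pyspark.sql.functions import col


def prepare_ratings(frame):
    """Keep only rows that have a rating (a hypothetical pipeline step)."""
    return frame.filter(col("rating").isNotNull())


def test_prepare_ratings_drops_rows_without_rating():
    spark = SparkSession.builder.master("local[1]").getOrCreate()
    # Build a small DataFrame in memory from Row objects.
    frame = spark.createDataFrame([
        Row(model="espresso", rating=4.5),
        Row(model="lungo", rating=None),
    ])
    result = prepare_ratings(frame)
    assert result.count() == 1
```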
4. Managing and orchestrating a workflow
We will explore the basics of Apache Airflow, a popular workflow management tool that lets you trigger the various components of an ETL pipeline on a certain time schedule and execute tasks in a specific order. Here too, we illustrate how a deployment of Apache Airflow can be tested automatically. A minimal DAG sketch follows the exercise list below.
- Modern day workflow management (50 xp)
- Specifying the DAG schedule (100 xp)
- Setting up daily tasks (50 xp)
- Specifying operator dependencies (100 xp)
- Building a data pipeline with Airflow (50 xp)
- Preparing a DAG for daily pipelines (100 xp)
- Scheduling bash scripts with Airflow (100 xp)
- Scheduling Spark jobs with Airflow (100 xp)
- Scheduling the full data pipeline with Airflow (100 xp)
- Deploying Airflow (50 xp)
- Airflow’s executors (50 xp)
- Recovering from deployed but broken DAGs (100 xp)
- Running tests on Airflow (100 xp)
- Final thoughts (50 xp)
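A minimal DAG along these lines might look as follows. The task names, script paths, and daily schedule are illustrative, and the import path assumes Airflow 2.x:

```python
# A minimal Airflow DAG that runs daily and chains an ingestion step,
# a Spark cleaning job, and a report, in that order.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="etl_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",  # trigger the pipeline once per day
    catchup=False,
) as dag:
    # Script names below are placeholders for your own pipeline components.
    ingest = BashOperator(task_id="ingest", bash_command="bash ingest.sh")
    clean = BashOperator(
        task_id="clean",
        bash_command="spark-submit --master local clean_ratings.py",
    )
    report = BashOperator(task_id="report", bash_command="bash report.sh")

    # Execute tasks in a specific order: ingest, then clean, then report.
    ingest >> clean >> report
```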
Datasets
- prices.csv
- purchased.csv
- ratings.csv
- ratings_with_incomplete_rows.csv
- ratings_with_invalid_rows.csv

Collaborators

Oliver Willekens
Data Engineer at Data Minded
Oliver is a Data Engineer and Data Scientist, and an educator in these fields. An enthusiastic life-long learner and automator, he has a PhD in photonics (think lasers here). Experienced in Python and Scala, he is often found helping on StackOverflow, usually in the Python and NumPy tags.
Kai Zhang
Data Engineer at Data Minded
Kai is a data engineer, data scientist, and solutions architect who is passionate about delivering business value and actionable insights through well-architected data products. Kai holds a Master's degree in Electrical Engineering from KU Leuven.
Join over 12 million learners and start Building Data Engineering Pipelines in Python today!