Building Data Engineering Pipelines in Python

Learn how to build data engineering pipelines in Python.

4 Hours · 14 Videos · 52 Exercises · 17,496 Learners · 3950 XP

Course Description

In any data-driven company, you will undoubtedly cross paths with data engineers. Among other things, they facilitate your work by making data readily available to everyone in the organization, and they may also help bring machine learning models into production. One way to speed up this process is to build an understanding of what it means to bring processes into production and which features characterize high-grade code. In this course, we'll look at the various data pipelines data engineers build, and at how some of the tools they use can help you get your models into production and run repetitive tasks consistently and efficiently.

In this course, we illustrate common elements of data engineering pipelines. In Chapter 1, you will learn how to ingest data. Chapter 2 will go one step further with cleaning and transforming data. In Chapter 3, you will learn how to safely deploy code. Finally, in Chapter 4 you will schedule complex dependencies between applications.

Building Data Engineering Pipelines covers new technologies and material, so we recommend that you have a strong understanding of the prerequisites to get the most out of this course.

  1. Ingesting Data


After completing this chapter, you will be able to explain what a data platform is, how data ends up in it, and how data engineers structure its foundations. You will be able to ingest data from a RESTful API into the data platform’s data lake using a self-written ingestion pipeline, made using Singer’s taps and targets.

    Components of a data platform
    50 xp
    Dashboards providing business value
    50 xp
    Snapshots in a data lake
    50 xp
    The data catalog
    50 xp
    Introduction to data ingestion with Singer
    50 xp
    Working with JSON
    100 xp
    Specifying the schema of the data
    100 xp
    Running an ingestion pipeline with Singer
    50 xp
    Properly propagating state
    50 xp
    Communicating with an API
    100 xp
    Streaming records
    100 xp
    Chain taps and targets
    100 xp
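The exercises above revolve around Singer's message-based protocol: a tap writes JSON messages (SCHEMA, RECORD, STATE) line by line to stdout, and a target consumes them from stdin, which is what makes taps and targets chainable with a shell pipe. As a rough sketch of that message format (the `items` stream, its fields, and the state key are made up for illustration; the `singer-python` package provides `write_schema`, `write_record`, and `write_state` helpers so you would not normally build these dicts by hand):

```python
import json
import sys

def write_message(message: dict) -> None:
    # Singer taps emit one JSON message per line on stdout;
    # a target reads the same stream from stdin.
    sys.stdout.write(json.dumps(message) + "\n")

# A SCHEMA message describes the shape of the records that follow.
schema = {
    "type": "SCHEMA",
    "stream": "items",
    "schema": {
        "type": "object",
        "properties": {"id": {"type": "integer"}, "name": {"type": "string"}},
    },
    "key_properties": ["id"],
}

# A RECORD message carries one row of actual data.
record = {"type": "RECORD", "stream": "items",
          "record": {"id": 1, "name": "espresso"}}

# A STATE message lets the next run resume where this one stopped.
state = {"type": "STATE", "value": {"items": {"max_id": 1}}}

for message in (schema, record, state):
    write_message(message)
```

Because everything travels over stdout/stdin as newline-delimited JSON, chaining a tap to a target is as simple as `tap-something | target-something-else`.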
  3. Testing your data pipeline

Stating “it works on my machine” is no guarantee it will work reliably elsewhere or in the future. Requirements for your project will change. In this chapter, we explore different forms of testing and learn how to write unit tests for our PySpark data transformation pipeline, so that we build robust and reusable parts.
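The core idea of the chapter is that a transformation written as a pure function can be tested in isolation. The sketch below uses plain dicts instead of PySpark DataFrames so it runs without a Spark installation, and the function name and rating rules are invented for illustration; in the course, the same pytest-style pattern applies to DataFrame transformations:

```python
def clean_ratings(rows):
    """Drop rows without a rating and clamp ratings to the 1-5 range."""
    cleaned = []
    for row in rows:
        if row.get("rating") is None:
            continue  # a missing rating makes the row unusable
        rating = max(1, min(5, row["rating"]))
        cleaned.append({**row, "rating": rating})
    return cleaned

def test_clean_ratings_drops_missing_and_clamps():
    raw = [
        {"id": 1, "rating": 7},     # out of range -> clamped to 5
        {"id": 2, "rating": None},  # missing -> dropped
        {"id": 3, "rating": 3},     # valid -> kept as-is
    ]
    assert clean_ratings(raw) == [
        {"id": 1, "rating": 5},
        {"id": 3, "rating": 3},
    ]

test_clean_ratings_drops_missing_and_clamps()
```

Because the transformation takes data in and returns data out, with no hidden I/O, the test needs no cluster, no database, and no fixtures beyond a few literal rows.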

  4. Managing and orchestrating a workflow

    We will explore the basics of Apache Airflow, a popular piece of software that allows you to trigger the various components of an ETL pipeline on a certain time schedule and execute tasks in a specific order. Here too, we illustrate how a deployment of Apache Airflow can be tested automatically.
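The essence of what Airflow adds is captured by two ideas: tasks declare dependencies (in Airflow, with the `>>` operator between operator objects inside a `DAG`), and a scheduler executes them in an order that respects those dependencies. The toy below is *not* Airflow; it is a library-free sketch of that ordering idea, with made-up task names, mimicking the `>>` syntax:

```python
class Task:
    """A stand-in for an Airflow operator: an id plus its upstream tasks."""

    def __init__(self, task_id):
        self.task_id = task_id
        self.upstream = []

    def __rshift__(self, other):
        # "a >> b" means b runs after a, mirroring Airflow's syntax.
        other.upstream.append(self)
        return other  # returning `other` allows chaining: a >> b >> c

def run_order(tasks):
    """Return task ids in an order that respects every dependency."""
    done, order = set(), []

    def visit(task):
        for dep in task.upstream:
            visit(dep)  # run everything upstream first
        if task.task_id not in done:
            done.add(task.task_id)
            order.append(task.task_id)

    for task in tasks:
        visit(task)
    return order

ingest = Task("ingest")
clean = Task("clean")
deploy = Task("deploy")
ingest >> clean >> deploy  # a tiny three-step ETL dependency chain

print(run_order([deploy]))  # ['ingest', 'clean', 'deploy']
```

Real Airflow layers a lot on top of this (time-based scheduling, retries, logging, a web UI), but the dependency graph above is the mental model the chapter builds on.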





Collaborators: Hadrien Lacroix, Hillary Green-Lerman

Kai Zhang

Data Engineer at Data Minded

Kai is a data engineer, data scientist and solutions architect who is passionate about delivering business value and actionable insights through well architected data products. Kai holds a Master's degree in Electrical Engineering from KU Leuven.

Oliver Willekens

Data Engineer at Data Minded

Oliver is a Data Engineer and Data Scientist, and an educator in both fields. An enthusiastic lifelong learner and automator, he holds a PhD in photonics (think lasers). Experienced in Python and Scala, he can often be found helping out on Stack Overflow, usually under the Python and NumPy tags.

What do other learners have to say?

I've used other sites—Coursera, Udacity, things like that—but DataCamp's been the one that I've stuck with.

Devon Edwards Joseph
Lloyds Banking Group

DataCamp is the top resource I recommend for learning data science.

Louis Maiden
Harvard Business School

DataCamp is by far my favorite website to learn from.

Ronald Bowers
Decision Science Analytics, USAA