Empowering Analytics with Data Pipelines

Data pipelines are the foundation of every analytics project and every strong data platform. You've probably heard the age-old adage that 90% of data science is cleaning and transforming data. Building pipelines is an essential skill for data engineers, who provide incredible value to a business ready to step into a data-driven future. This introductory course will help you hone the skills to build effective, performant, and reliable data pipelines.

Building and Maintaining ETL Solutions

Throughout this course, you'll dive into the complete process of building a data pipeline. You'll grow your skills with Python libraries such as `pandas` and `json` to extract data from structured and unstructured sources before it is transformed and persisted for downstream use. Along the way, you'll grow confident with tools and techniques, such as architecture diagrams, unit tests, and monitoring, that set your data pipelines apart from the rest. As you progress, you'll put your new-found skills to the test with hands-on exercises.

Supercharge Data Workflows

After completing this course, you'll be ready to design, develop, and use data pipelines to supercharge data workflows in your job, new career, or personal project.
Introduction to Data Pipelines (Free)
Get ready to discover how data is collected, processed, and moved using data pipelines. You will explore the qualities of the best data pipelines, and prepare to design and build your own.

- Introducing data pipelines (50 xp)
- What is a data pipeline? (50 xp)
- Components of a data pipeline (100 xp)
- Producers and consumers of data pipelines (100 xp)
- Designing data pipelines (50 xp)
- Architecture diagrams for data pipelines (50 xp)
- Reading architecture diagrams (50 xp)
- Data pipeline design process (100 xp)
- Qualities of great data pipelines (50 xp)
- Building quality data pipelines (50 xp)
- Persisting data throughout a pipeline (50 xp)
- Qualities of sound data pipelines (100 xp)
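The components this chapter introduces — a producer feeding raw data into extract, transform, and load steps before it reaches a consumer — can be sketched in a few lines of Python. The record schema, column names, and function names below are illustrative assumptions, not the course's exact exercises.

```python
import pandas as pd

def extract(records):
    """Extract: build a DataFrame from raw source records."""
    return pd.DataFrame(records)

def transform(df):
    """Transform: keep only completed orders for downstream consumers."""
    return df[df["status"] == "completed"].copy()

def load(df, path):
    """Load: persist the transformed data so it survives the pipeline run."""
    df.to_csv(path, index=False)

# A producer (e.g. an orders system) supplies the raw records
raw = [
    {"order_id": 1, "status": "completed", "amount": 9.99},
    {"order_id": 2, "status": "cancelled", "amount": 4.50},
]
clean = transform(extract(raw))
```

Keeping each stage in its own function makes the pipeline easy to diagram, test, and reuse — qualities the chapter highlights.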
Building ETL Pipelines
Dive into leveraging pandas to extract, transform, and load data as you build your first data pipelines. Learn how to make your ETL logic reusable, and apply logging and exception handling to your pipelines.

- Extracting data from structured sources (50 xp)
- Extracting data from parquet files (100 xp)
- Pulling data from SQL databases (100 xp)
- Building functions to extract data (100 xp)
- Transforming data with pandas (50 xp)
- Filtering pandas DataFrames (100 xp)
- Transforming sales data with pandas (100 xp)
- Validating data transformations (100 xp)
- Persisting data with pandas (50 xp)
- Loading sales data to a CSV file (100 xp)
- Customizing a CSV file (100 xp)
- Persisting data to files (100 xp)
- Monitoring a data pipeline (50 xp)
- Logging within a data pipeline (100 xp)
- Handling exceptions when loading data (100 xp)
- Monitoring and alerting within a data pipeline (100 xp)
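A reusable ETL pipeline with the logging and exception handling this chapter covers might look like the sketch below. The file paths, the `amount` column, and the filter threshold are assumptions for illustration; the pandas calls (`read_parquet`, `to_csv`) and the standard-library `logging` module are the real tools the chapter uses.

```python
import logging
import pandas as pd

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def extract(path):
    """Extract sales data from a parquet file, logging row counts."""
    try:
        df = pd.read_parquet(path)  # pandas can also pull from SQL via read_sql
        logger.info("Extracted %d rows from %s", len(df), path)
        return df
    except FileNotFoundError:
        # Surface the failure in the logs, then re-raise so the run fails loudly
        logger.error("Source file not found: %s", path)
        raise

def transform(df, min_amount):
    """Keep only sales at or above a minimum amount."""
    filtered = df[df["amount"] >= min_amount].copy()
    logger.info("Kept %d of %d rows after filtering", len(filtered), len(df))
    return filtered

def load(df, path):
    """Persist the transformed data to a CSV file for downstream use."""
    df.to_csv(path, index=False)
    logger.info("Wrote %d rows to %s", len(df), path)
```

Because each step is a function, the same logic can be reused across pipelines, and the log lines give the visibility needed for monitoring and alerting.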
Advanced ETL Techniques
Supercharge your workflow with advanced data pipelining techniques, such as working with non-tabular data and persisting DataFrames to SQL databases. Discover tooling to tackle advanced transformations with pandas, and uncover best practices for working with complex data.

- Extracting non-tabular data (50 xp)
- Ingesting JSON data with pandas (100 xp)
- Reading JSON data into memory (100 xp)
- Transforming non-tabular data (50 xp)
- Iterating over dictionaries (100 xp)
- Parsing data from dictionaries (100 xp)
- Transforming JSON data (100 xp)
- Transforming and cleaning DataFrames (100 xp)
- Advanced data transformation with pandas (50 xp)
- Filling missing values with pandas (100 xp)
- Grouping data with pandas (100 xp)
- Applying advanced transformations to DataFrames (100 xp)
- Loading data to a SQL database with pandas (50 xp)
- Loading data to a Postgres database (100 xp)
- Validating data loaded to a Postgres database (100 xp)
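The chapter's flow — reading JSON into memory, parsing nested dictionaries, filling missing values, and grouping — can be sketched as follows. The nested sales schema here is a made-up example, not the course's dataset; loading the result to Postgres would then be a single `summary.to_sql(...)` call against a database connection.

```python
import json
import pandas as pd

# Hypothetical non-tabular JSON, as an API or log export might return it
raw_json = """
[
  {"store": "NYC", "sales": {"amount": 120.5, "units": 3}},
  {"store": "NYC", "sales": {"amount": null, "units": 1}},
  {"store": "BUF", "sales": {"amount": 75.0, "units": 2}}
]
"""

def transform(records):
    """Flatten nested dicts, fill missing values, and aggregate by store."""
    rows = [
        {"store": rec["store"], **rec["sales"]}  # parse the nested dict
        for rec in records
    ]
    df = pd.DataFrame(rows)
    df["amount"] = df["amount"].fillna(0.0)      # fill missing values
    return df.groupby("store", as_index=False).agg(
        total_amount=("amount", "sum"),
        total_units=("units", "sum"),
    )

summary = transform(json.loads(raw_json))
```

`json.loads` handles reading the data into memory, while pandas takes over once the dictionaries are flattened into tabular rows.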
Deploying and Maintaining a Data Pipeline
In this final chapter, you'll create frameworks to validate and test data pipelines before shipping them into production. After you've tested your pipeline, you'll explore techniques to run your data pipeline end-to-end, all while allowing for visibility into pipeline performance.

- Manually testing a data pipeline (50 xp)
- Testing data pipelines (50 xp)
- Validating a data pipeline at "checkpoints" (100 xp)
- Testing a data pipeline end-to-end (100 xp)
- Unit-testing a data pipeline (50 xp)
- Validating a data pipeline with assert and isinstance (100 xp)
- Writing unit tests with pytest (100 xp)
- Creating fixtures with pytest (100 xp)
- Unit testing a data pipeline with fixtures (100 xp)
- Running a data pipeline in production (50 xp)
- Orchestration and ETL tools (50 xp)
- Data pipeline architecture patterns (100 xp)
- Running a data pipeline end-to-end (100 xp)
- Congratulations! (50 xp)
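A unit test combining the chapter's tools — `assert`, `isinstance`, and a pytest fixture — might look like this sketch. The `transform` step under test and its column names are assumptions for illustration, not the course's exact solution.

```python
# test_pipeline.py — run with `pytest test_pipeline.py`
import pandas as pd
import pytest

def transform(df):
    """Pipeline step under test: drop rows with missing amounts."""
    return df.dropna(subset=["amount"]).reset_index(drop=True)

@pytest.fixture
def raw_df():
    """Fixture: a small, known input shared across tests."""
    return pd.DataFrame({"order_id": [1, 2, 3], "amount": [9.99, None, 4.5]})

def test_transform_drops_missing(raw_df):
    result = transform(raw_df)
    assert isinstance(result, pd.DataFrame)  # validate the return type
    assert len(result) == 2                  # the missing-amount row is gone
    assert result["amount"].notna().all()    # no missing values remain
```

The fixture keeps test data in one place, so every test in the file exercises the pipeline against the same known input.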
Jake Roach
Hi all! I'm Jake, a Data Engineer and DataCamp Instructor. I use Python and Airflow to extract, transform, and load data into a state-of-the-art data platform powered by Astronomer, AWS, MongoDB, and Postgres. I was born and raised in Buffalo, NY, so I'm used to seeing a Snowflake or two. When I'm not working with data, you can find me out at the golf course playing a quick nine holes before dark!