What is Data Engineering?
Due to popular demand, DataCamp is getting ready to build a Data Engineering track. Like most terms in the ever-expanding Data Science Universe, there’s a lot of ambiguity around the definition of “Data Engineering.” Some Data Engineers do a lot of reporting and dashboarding. Some spend most of their time working on data pipelines. Others take Python code from Data Scientists and optimize it to run in Java or C.
To start course creation, we’ll need to pick a single definition of “Data Engineer” to work from. After much deliberation, we chose to paraphrase the American television show “Law and Order”:
In the world of Data Science, the data are represented by three separate yet equally important professions:
- The Data Engineers, who use programming languages to ensure clean, reliable, and performant access to data and databases
- The Data Analysts, who use programming languages, spreadsheets, and business intelligence tools to describe and categorize the data that currently exist
- The Data Scientists, who use algorithms to predict future data based on existing information
For example, imagine that a company sells many different types of sofas on their website. Each time a visitor to the website clicks on a particular sofa, a new piece of data is created. A Data Engineer would define how to collect this data, what types of metadata should be appended to each click event, and how to store the data in an easy-to-access format. A Data Analyst would create visualizations to help sales and marketing track who is buying each sofa and how much money the company is making. A Data Scientist would take the data on which customers bought each sofa and use it to predict the perfect sofa for each new visitor to the website.
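As a rough sketch of what collecting such a click event might look like — the field names and schema here are hypothetical, not a real DataCamp or e-commerce API — a Data Engineer might define a record like this:

```python
import json
from datetime import datetime, timezone

def build_click_event(visitor_id, sofa_id, page_url):
    """Assemble one click event with appended metadata (hypothetical schema)."""
    return {
        "event_type": "sofa_click",
        "visitor_id": visitor_id,   # anonymous visitor identifier
        "sofa_id": sofa_id,         # which sofa was clicked
        "page_url": page_url,       # where on the site the click happened
        # Metadata appended by the pipeline, e.g. a UTC timestamp:
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

event = build_click_event("v-123", "sofa-42", "/sofas/mid-century")
print(json.dumps(event))  # serialized to an easy-to-access format like JSON
```

Storing events in a consistent, self-describing format like this is what lets the Analysts and Scientists downstream work with the data at all.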
For many organizations, Data Engineers are the first hires on a data team. Before collected data can be analyzed and leveraged with predictive methods, it needs to be organized and cleaned. Data Engineers begin this process by defining what data is stored and how it is structured, called a data schema. Next, they need to pick a reliable, easily accessible location for storing the data, called a data warehouse. Examples of data warehousing systems include Amazon Redshift and Google BigQuery. Finally, Data Engineers create ETL (Extract, Transform, and Load) processes to make sure that the data gets into the data warehouse.
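A minimal ETL sketch, assuming hypothetical raw click data in CSV form and using an in-memory SQLite database to stand in for a real warehouse:

```python
import csv
import io
import sqlite3

# Hypothetical raw click data, e.g. exported from web server logs.
RAW_CSV = """visitor_id,sofa_id,price
v-1,sofa-42,499.00
v-2,sofa-7,
v-1,sofa-42,499.00
"""

def extract(raw):
    """Extract: parse raw CSV rows into dictionaries."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows):
    """Transform: drop rows with missing prices and cast types."""
    return [
        {"visitor_id": r["visitor_id"], "sofa_id": r["sofa_id"],
         "price": float(r["price"])}
        for r in rows if r["price"]
    ]

def load(rows, conn):
    """Load: write cleaned rows into a warehouse table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS clicks (visitor_id TEXT, sofa_id TEXT, price REAL)"
    )
    conn.executemany(
        "INSERT INTO clicks VALUES (:visitor_id, :sofa_id, :price)", rows
    )

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
print(conn.execute("SELECT COUNT(*) FROM clicks").fetchone()[0])  # 2 clean rows
```

Real pipelines run on far larger data with dedicated tools, but the three-stage shape — extract, transform, load — is the same.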
As an organization grows, Data Engineers are responsible for integrating new data sources into the data ecosystem, and sending the stored data into different analysis tools. When the data warehouse becomes very large, Data Engineers have to find new ways of keeping analyses performant, such as parallelizing analysis or creating smaller subsets for fast querying.
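One common way to create such a smaller subset — sketched here with hypothetical table names and SQLite standing in for a warehouse — is to pre-aggregate a large event table into a compact summary table that dashboards can query cheaply:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE clicks (sofa_id TEXT, price REAL);
INSERT INTO clicks VALUES
    ('sofa-42', 499.0),
    ('sofa-42', 499.0),
    ('sofa-7',  899.0);

-- Pre-aggregate the large clicks table into a small summary table,
-- so analysis tools read a few rows instead of scanning every event.
CREATE TABLE clicks_by_sofa AS
SELECT sofa_id,
       COUNT(*)   AS n_clicks,
       AVG(price) AS avg_price
FROM clicks
GROUP BY sofa_id;
""")

for row in conn.execute("SELECT sofa_id, n_clicks FROM clicks_by_sofa"):
    print(row)
```

The trade-off is freshness: the summary must be rebuilt (or incrementally updated) as new events arrive, which is itself a pipeline the Data Engineer maintains.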
Within the Data Science universe, there is always overlap between the three professions. Data Engineers are often responsible for simple Data Analysis projects or for transforming algorithms written by Data Scientists into more robust formats that can be run in parallel. Data Analysts and Data Scientists need to learn basic Data Engineering skills, especially if they’re working in an early-stage startup where engineering resources are scarce.
At DataCamp, we’re excited to build out our Data Engineering course offerings. We know what we want to teach, and we’re starting to recruit instructors to design these courses. If you’re interested, check out our application and the list of courses we are currently prioritizing.