Feature engineering helps your machine learning models uncover useful patterns in your data. Model building is an iterative process that involves creating new features from existing variables to make your models more effective. In this course, you will explore different data sets and apply a variety of feature engineering techniques to both continuous and discrete variables.
Creating Features from Categorical Data
In this chapter, you will learn how to change categorical features into numerical representations that models can interpret. You'll learn about one-hot encoding and using binning for categorical features.
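The two techniques mentioned above can be sketched briefly. This is an illustrative example, not course material: the column names and data are made up, and it assumes pandas is available.

```python
import pandas as pd

# Hypothetical example data; the "city" column is an assumption.
df = pd.DataFrame({"city": ["NYC", "LA", "NYC", "SF", "Boise"]})

# One-hot encoding: one binary indicator column per category.
one_hot = pd.get_dummies(df["city"], prefix="city")

# Binning for categorical features: group infrequent categories
# into a single "Other" bucket so rare levels don't produce
# many sparse one-hot columns.
counts = df["city"].value_counts()
rare = counts[counts < 2].index
df["city_binned"] = df["city"].where(~df["city"].isin(rare), "Other")
```

Binning before encoding keeps the feature matrix compact when a categorical variable has many rare levels.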
Creating Features from Numeric Data
In this chapter, you will learn how to manipulate numerical features to create meaningful new features that give your model better insight into the data. You will also learn how to work with dates in the context of feature engineering.
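A small sketch of both ideas, combining numeric columns and extracting date parts. The data and column names here are invented for illustration and assume pandas is available:

```python
import pandas as pd

# Hypothetical sales data; column names and values are assumptions.
df = pd.DataFrame({
    "order_date": pd.to_datetime(["2023-01-15", "2023-06-03"]),
    "price": [120.0, 80.0],
    "quantity": [2, 5],
})

# Derive a new numeric feature from existing columns.
df["revenue"] = df["price"] * df["quantity"]

# Extract date components a model can use directly.
df["month"] = df["order_date"].dt.month
df["dayofweek"] = df["order_date"].dt.dayofweek  # Monday=0 ... Sunday=6
```

Raw timestamps are rarely useful to a model directly; components like month or day of week expose seasonality and weekly cycles.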
Transforming Numerical Features
In this chapter, you will learn about using transformation techniques, like Box-Cox and Yeo-Johnson, to address issues with non-normally distributed features. You'll also learn about methods to scale features, including mean centering and z-score standardization.
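As a rough sketch of these transformations, here is one way to apply them with scikit-learn; the sample values are made up to mimic a right-skewed feature:

```python
import numpy as np
from sklearn.preprocessing import PowerTransformer, StandardScaler

# Hypothetical right-skewed feature (e.g., incomes); values are assumptions.
X = np.array([[1.0], [2.0], [3.0], [50.0], [200.0]])

# Yeo-Johnson works with zero and negative values;
# Box-Cox (method="box-cox") requires strictly positive data.
pt = PowerTransformer(method="yeo-johnson")
X_transformed = pt.fit_transform(X)

# Z-score standardization: subtract the mean, divide by the
# standard deviation, giving mean 0 and unit variance.
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)
```

Note that `PowerTransformer` standardizes its output by default, so the transformed feature is also mean-centered.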
In the final chapter, we will use feature crossing to create features from two or more variables. We will also discuss principal component analysis and methods to explore and visualize its results.
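Both techniques can be sketched with scikit-learn; the data here is invented for illustration, and interaction terms are one common way to implement feature crossing:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import PolynomialFeatures

# Hypothetical two-feature data; values are assumptions.
X = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 9.0]])

# Feature crossing: build interaction terms (x1 * x2) from pairs of features.
crosser = PolynomialFeatures(degree=2, interaction_only=True, include_bias=False)
X_crossed = crosser.fit_transform(X)  # columns: x1, x2, x1*x2

# PCA: project the data onto the directions of maximum variance.
pca = PCA(n_components=2)
X_pca = pca.fit_transform(X)

# explained_variance_ratio_ shows how much variance each component captures,
# a common starting point for exploring and visualizing PCA results.
print(pca.explained_variance_ratio_)
```

Inspecting the explained variance ratio is the usual first step when deciding how many components to keep.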
Data Scientist, University of Washington
Jose is a Data Scientist at the University of Washington’s eScience Institute. Jose’s interests include the application of data science methods on sociological and educational data and building open source data tools to facilitate that process. Jose’s research combines theory and practice with data science methods to inform education policymaking. Jose earned his doctorate at the UW, with a focus in statistics and measurement and a Master of Education in policy, also from UW.