In this follow-up course, you will build on the statistical modeling skills you developed in Part 1 and dive into more advanced concepts.
Statistical Modeling in R is a multi-part course designed to get you up to speed with the most important and powerful methodologies in statistics. In Part 2, we'll take a look at effect size and interaction, the concepts of total and partial change, sampling variability and mathematical transforms, and the implications of something called collinearity. This course has been written from scratch, specifically for DataCamp users. As you'll see, by using computing and concepts from machine learning, we'll be able to leapfrog many of the marginal and esoteric topics encountered in traditional 'regression' courses.
Effect sizes were introduced in Part 1 of this course series as a way to quantify how each explanatory variable is connected to the response. In this chapter, you'll meet some high-level tools that make it easier to calculate and visualize effect sizes. You'll see how to extend the notion of effect size to models with a categorical response variable. And you'll start to use interactions in constructing models to reflect the way that one explanatory variable can influence the effect size of another explanatory variable on the response.
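As a rough sketch of what an interaction looks like in practice (the data set and variables here are illustrative choices, not taken from the course), consider R's built-in mtcars data, asking whether the effect of car weight (wt) on fuel economy (mpg) depends on horsepower (hp):

```r
# Fit a model with an interaction between wt and hp.
# The * operator expands to wt + hp + wt:hp.
mod <- lm(mpg ~ wt * hp, data = mtcars)
coef(mod)
# The wt:hp coefficient estimates how the effect size of wt on mpg
# changes as hp changes -- the essence of an interaction.
```

If the wt:hp coefficient were zero, the effect of weight on fuel economy would be the same at every horsepower level.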
This chapter examines the precision with which a model can estimate an effect size. The lack of precision comes from sampling variability, which can be quantified using resampling and bootstrapping. You'll also see some ways to improve precision using mathematical transformations of variables.
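As a minimal sketch of quantifying sampling variability by resampling (an illustrative example, not the course's own code), one can refit a model on bootstrap resamples of the data and look at the spread of the resulting effect sizes:

```r
# Bootstrap the effect size of wt in a model of mpg, using base R.
set.seed(1)
boot_coefs <- replicate(500, {
  # Resample rows of the data with replacement, then refit the model.
  idx <- sample(nrow(mtcars), replace = TRUE)
  coef(lm(mpg ~ wt, data = mtcars[idx, ]))["wt"]
})
sd(boot_coefs)  # bootstrap estimate of the standard error of the effect size
```

The standard deviation across the resamples estimates how much the effect size would vary from sample to sample.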
In many circumstances, an effect size tells you exactly what you need to know: how much the model output will change when one, and only one, explanatory variable changes. This is called partial change. In other situations, you will want to look at total change, which combines the effects of two or more explanatory variables. You'll also see an additional, but limited way of quantifying the extent to which the explanatory variables influence the response: R-squared. Finally, we'll describe the notion of degrees of freedom, a way of describing the complexity of a model.
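As a hedged illustration of these ideas (the model and data are assumptions for demonstration, not from the course), partial change is read off a coefficient while holding the other variables fixed, and R-squared summarizes how much of the response's variation the explanatory variables account for:

```r
mod <- lm(mpg ~ wt + hp, data = mtcars)

# Partial change: the expected change in mpg when wt increases by one
# unit while hp is held fixed.
coef(mod)["wt"]

# R-squared: the fraction of variation in mpg accounted for by wt and hp
# together -- a single number, which is why it is a limited summary.
summary(mod)$r.squared
```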
In this final chapter, you'll learn why you'd want to avoid collinearity, a common phenomenon in statistical modeling. You'll wrap up the course by discussing some of the ways models can be improved by involving the modeler in the design of the data-collection process.
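A small simulated sketch (illustrative, not from the course) shows why collinearity is worth avoiding: when two explanatory variables carry nearly the same information, the precision of each effect-size estimate collapses.

```r
set.seed(2)
x1 <- rnorm(100)
x2 <- x1 + rnorm(100, sd = 0.05)  # x2 is almost a copy of x1
y  <- x1 + rnorm(100)

# Standard error of the x1 effect size, with and without the
# collinear variable in the model:
se_alone    <- summary(lm(y ~ x1))$coefficients["x1", "Std. Error"]
se_together <- summary(lm(y ~ x1 + x2))$coefficients["x1", "Std. Error"]
c(se_alone, se_together)  # the second is far larger
```

The effect size itself is still estimable, but the collinear copy inflates its standard error, making the estimate far less precise.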