**Instructor(s):**

Andrew Conway is a Psychology Professor in the Division of Behavioral and Organizational Sciences at Claremont Graduate University in Claremont, California. He has been teaching introductory statistics to undergraduate students and advanced statistics to graduate students for 20 years at a variety of institutions, including the University of South Carolina, the University of Illinois at Chicago, and Princeton University.

If you have ever taken a math or statistics class, you’ve probably heard the old adage "correlation does not imply causation". The first part of this course explores this idea further and offers a broad overview of correlational analysis. In the second part you will leave descriptive statistics behind and dive into regression, prediction, and inferential statistics.

In the first chapter you will get a broad overview of the concepts behind correlation, along with some examples. You will then walk through the mathematical calculation of the correlation coefficient r, that is, the Pearson product-moment correlation coefficient. Finally, a section covers the assumptions underlying a typical correlational analysis.

- How are correlation coefficients calculated? 50 xp
- Manual computation of correlation coefficients (1) 100 xp
- Manual computation of correlation coefficients (2) 100 xp
- Manual computation of correlation coefficients (3) 100 xp
- The usefulness of correlation coefficients 50 xp
- Creating scatterplots 100 xp
- Correlation matrix 100 xp
- Analysis scatterplots 50 xp
- Get intuitive! #1 50 xp
- Points of caution 50 xp
- Non-representative data samples 100 xp
- Get intuitive! #2 50 xp
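The manual computation of r described above can be sketched in R as follows. The data vectors here are made up for illustration; the formula itself is the standard Pearson product-moment definition, checked against R's built-in `cor()`:

```r
# Hypothetical example data (not from the course exercises)
x <- c(2, 4, 6, 8, 10)
y <- c(1.5, 4.1, 5.9, 8.2, 9.8)

# Pearson's r: sum of cross-deviations divided by the
# square root of the product of the squared deviations
r_manual <- sum((x - mean(x)) * (y - mean(y))) /
  sqrt(sum((x - mean(x))^2) * sum((y - mean(y))^2))

# R's built-in correlation (Pearson is the default method)
r_builtin <- cor(x, y)

all.equal(r_manual, r_builtin)  # the two should agree
```

A scatterplot of the same data, as practiced in the exercises, is a one-liner: `plot(x, y)`.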

In chapter two, the first chapter on linear regression, Professor Conway will give you an overview of regression: What does it do? What is it used for? You will see how to build and run a regression model in R, and what the effect is of adding additional regressors.
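A minimal sketch of what building and extending a regression model in R looks like, using `lm()` on simulated data (the variable names and numbers are invented for illustration, not taken from the course):

```r
# Hypothetical data: predicting an exam score from hours studied and hours slept
set.seed(1)
hours <- runif(50, 0, 10)
sleep <- runif(50, 4, 9)
score <- 40 + 5 * hours + 2 * sleep + rnorm(50, sd = 5)

# Simple regression with a single regressor
model1 <- lm(score ~ hours)

# Adding an additional regressor
model2 <- lm(score ~ hours + sleep)

# Compare the two fits: coefficients and R-squared change
coef(model1)
coef(model2)
summary(model1)$r.squared
summary(model2)$r.squared
```

Note that R-squared can only go up (or stay the same) when a regressor is added, which is one reason the course discusses what additional regressors actually buy you.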

In chapter three you will calculate the regression coefficients yourself in R. Next comes a detailed study of the assumptions underlying a linear regression analysis. The chapter closes with Anscombe's quartet, a famous statistical example that shows the importance of graphing data before analyzing it.
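The manual coefficient calculation can be sketched with the textbook formulas (slope from the correlation and the standard deviations, intercept from the means), verified against `lm()`. Conveniently, Anscombe's quartet ships with base R as the `anscombe` data frame, so the same check illustrates the chapter's closing point:

```r
# Use the first of Anscombe's four data sets (built into base R)
x <- anscombe$x1
y <- anscombe$y1

# Simple regression coefficients by hand:
# slope b1 = r * sd(y) / sd(x), intercept b0 = mean(y) - b1 * mean(x)
b1 <- cor(x, y) * sd(y) / sd(x)
b0 <- mean(y) - b1 * mean(x)

# Check against R's built-in fit
fit <- lm(y ~ x)
round(c(b0, b1), 3)
round(unname(coef(fit)), 3)  # should match

# All four Anscombe sets yield nearly identical coefficients
# (intercept ~3, slope ~0.5) yet look completely different when plotted
sapply(1:4, function(i) {
  coef(lm(anscombe[[paste0("y", i)]] ~ anscombe[[paste0("x", i)]]))
})
```

Plotting the four sets (e.g. `plot(anscombe$x2, anscombe$y2)`) makes the quartet's lesson immediate: identical summary statistics can hide very different data.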