
Foundations of Inference in R

Learn how to draw conclusions about a population from a sample of data via a process known as statistical inference.

4 Hours · 17 Videos · 58 Exercises
33,510 Learners · Statement of Accomplishment

Course Description

One of the foundational aspects of statistical analysis is inference: the process of drawing conclusions about a larger population from a sample of data. Although it may seem counterintuitive, the standard practice is to attempt to disprove a claim that is not of research interest. For example, to show that one medical treatment is better than another, we start by assuming that the two treatments lead to equal survival rates and then ask whether the data are inconsistent with that assumption. Along the way, we introduce the p-value, a measure of the degree of disagreement between the data and the null hypothesis. We also dive into confidence intervals, which measure the magnitude of the effect of interest (e.g. how much better one treatment is than another).
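
As an informal illustration of this logic (not part of the course materials), here is a minimal base-R sketch of the two-treatment example with made-up survival data: assume equal survival rates under the null hypothesis, shuffle the treatment labels to simulate that assumption, and compute a p-value for the observed difference.

    # Hypothetical data (not from the course): survival outcomes under two treatments.
    set.seed(42)
    treatment <- rep(c("A", "B"), each = 100)
    survived  <- c(rbinom(100, 1, 0.70),   # treatment A: 70% survival (assumed)
                   rbinom(100, 1, 0.55))   # treatment B: 55% survival (assumed)

    # Observed statistic of interest: difference in survival proportions.
    obs_diff <- mean(survived[treatment == "A"]) - mean(survived[treatment == "B"])

    # Null hypothesis: the treatments have equal survival rates, i.e. survival is
    # independent of the treatment label. Simulate that null by shuffling labels.
    null_diffs <- replicate(10000, {
      shuffled <- sample(treatment)
      mean(survived[shuffled == "A"]) - mean(survived[shuffled == "B"])
    })

    # p-value: proportion of shuffled differences at least as extreme as observed.
    p_value <- mean(abs(null_diffs) >= abs(obs_diff))
    p_value
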
  1. Introduction to ideas of inference (Free)

    In this chapter, you will investigate how repeated samples taken from a population can vary. It is the variability in samples that allows you to make claims about the population of interest. Keep in mind that the research claims concern the population, while the information available comes only from the sample data. (A minimal R sketch of this randomization workflow follows the exercise list below.)

    Welcome to the course! - 50 xp
    Hypotheses (1) - 50 xp
    Hypotheses (2) - 50 xp
    Randomized distributions - 50 xp
    Working with the NHANES data - 100 xp
    Calculating statistic of interest - 100 xp
    Randomized data under null model of independence - 100 xp
    Randomized statistics and dotplot - 100 xp
    Randomization density - 100 xp
    Using the randomization distribution - 50 xp
    Do the data come from the population? - 100 xp
    What can you conclude? - 50 xp
    Study conclusions - 50 xp
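
    Below is a minimal base-R sketch of this randomization workflow. The data and variable names are made up for illustration (they are not the course's NHANES variables): compute a statistic of interest, shuffle the group labels under the null model of independence, plot the resulting randomization density, and ask how unusual the observed statistic is.

        # Made-up data in the spirit of the NHANES exercises (names are illustrative).
        set.seed(123)
        n <- 500
        gender    <- sample(c("female", "male"), n, replace = TRUE)
        homeowner <- rbinom(n, 1, 0.65)    # home ownership, generated independently of gender

        # Statistic of interest: difference in ownership proportions between the groups.
        diff_prop <- function(outcome, group) {
          mean(outcome[group == "female"]) - mean(outcome[group == "male"])
        }
        obs_stat <- diff_prop(homeowner, gender)

        # Randomize under the null model of independence by permuting group labels,
        # collecting the statistic from each shuffled dataset.
        rand_stats <- replicate(5000, diff_prop(homeowner, sample(gender)))

        # Randomization density with the observed statistic marked.
        plot(density(rand_stats), main = "Randomization distribution")
        abline(v = obs_stat, col = "red", lwd = 2)

        # How unusual is the observed statistic under the null?
        mean(abs(rand_stats) >= abs(obs_stat))
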
  3. Hypothesis testing errors: opportunity cost

    You will continue learning about hypothesis testing with a new example, using the same randomization-test structure. In this chapter, however, the focus is on the two kinds of testing errors (type I and type II): how they are made, when one is worse than the other, and how sample size and effect size affect the error rates, as sketched in the simulation below.

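    The following sketch, which uses simulated data rather than the course's example, estimates both error rates directly: under a true null hypothesis every rejection is a type I error, and under a false null every failure to reject is a type II error.

        # Illustrative simulation (not the course's opportunity-cost example).
        set.seed(1)
        alpha <- 0.05

        # One simulated study: two-sample t-test where `effect` is the true mean difference.
        sim_reject <- function(n, effect) {
          x <- rnorm(n, mean = 0)
          y <- rnorm(n, mean = effect)
          t.test(x, y)$p.value < alpha
        }

        # Type I error rate: the null is true (effect = 0), so every rejection is an error.
        mean(replicate(2000, sim_reject(n = 30, effect = 0)))     # close to alpha

        # Type II error rate: the null is false, so failing to reject is the error.
        mean(!replicate(2000, sim_reject(n = 30, effect = 0.5)))

        # Larger samples or larger effects shrink the type II error rate.
        mean(!replicate(2000, sim_reject(n = 100, effect = 0.5)))
        mean(!replicate(2000, sim_reject(n = 30,  effect = 1.0)))
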
  4. Confidence intervals

    As a complement to hypothesis testing, confidence intervals allow you to estimate a population parameter. Your interest is always in some characteristic of the population, but the sample data give you only incomplete information for estimating that parameter. Here, the parameter is the true proportion of successes in a population, and bootstrapping is used to estimate the variability needed to form the confidence interval; a short bootstrap sketch follows below.

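    Here is a minimal base-R bootstrap sketch with a hypothetical sample (not the course data): resample the observed successes and failures with replacement, recompute the proportion each time, and form an interval from the resulting variability.

        # Hypothetical sample of successes (1) and failures (0), not the course data.
        set.seed(7)
        sample_data <- rbinom(200, 1, 0.6)
        p_hat <- mean(sample_data)

        # Bootstrap: resample the observed sample with replacement and recompute
        # the sample proportion each time to estimate its variability.
        boot_props <- replicate(5000, mean(sample(sample_data, replace = TRUE)))

        # Two common interval forms for the population proportion:
        quantile(boot_props, c(0.025, 0.975))    # percentile interval
        c(p_hat - 2 * sd(boot_props),            # statistic +/- 2 standard errors
          p_hat + 2 * sd(boot_props))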

In the following tracks

Statistical Inference with R
Statistician with R

Collaborators

Nick Carchedi
Tom Jeon

Jo Hardin

Professor at Pomona College

Jo Hardin is a professor of mathematics and statistics at Pomona College. Her statistical research focuses on developing new robust methods for high-throughput data. Recently, she has also worked closely with the statistics education community on ways to integrate data science early into the statistics curriculum. When not working with students or on her research, she loves to put on a pair of running shoes and hit the road.
