
Statistical Thinking in Python (Part 2)

Learn to perform the two key tasks in statistical inference: parameter estimation and hypothesis testing.

4 hours · 15 videos · 66 exercises
90,511 learners · Statement of Accomplishment




Course Description

After completing Statistical Thinking in Python (Part 1), you have the probabilistic mindset and foundational hacker stats skills to dive into data sets and extract useful information from them. In this course, you will do just that, expanding and honing your hacker stats toolbox to perform the two key tasks in statistical inference: parameter estimation and hypothesis testing. You will work with real data sets as you learn, culminating in an analysis of beak measurements of Darwin's famous finches. You will emerge from this course with new knowledge and lots of practice under your belt, ready to attack your own inference problems out in the world.
  1. Parameter estimation by optimization (Free)

    When doing statistical inference, we speak the language of probability. A probability distribution that describes your data has parameters. So, a major goal of statistical inference is to estimate the values of these parameters, which allows us to concisely and unambiguously describe our data and draw conclusions from it. In this chapter, you will learn how to find the optimal parameters, those that best describe your data; a brief code sketch follows the exercise list below.

    Optimal parameters (50 xp)
    How often do we get no-hitters? (100 xp)
    Do the data follow our story? (100 xp)
    How is this parameter optimal? (100 xp)
    Linear regression by least squares (50 xp)
    EDA of literacy/fertility data (100 xp)
    Linear regression (100 xp)
    How is it optimal? (100 xp)
    The importance of EDA: Anscombe's quartet (50 xp)
    The importance of EDA (50 xp)
    Linear regression on appropriate Anscombe data (100 xp)
    Linear regression on all Anscombe data (100 xp)
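
A minimal sketch of the two estimation tasks outlined above, using hypothetical stand-in arrays for the course's no-hitter and literacy/fertility data: for exponentially distributed waiting times the optimal parameter is simply the sample mean, and the least-squares slope and intercept can be obtained with np.polyfit.

```python
import numpy as np

# Hypothetical stand-ins for the course's datasets.
nohitter_times = np.array([843, 1613, 215, 684, 2970, 104, 1309])  # games between no-hitters
illiteracy = np.array([9.5, 49.2, 1.0, 11.2, 9.8, 60.5])           # % illiterate (female)
fertility = np.array([1.769, 2.682, 2.077, 2.132, 1.827, 3.872])   # children per woman

# If waiting times are exponentially distributed, the parameter that best
# describes the data (the maximum-likelihood estimate) is the sample mean.
tau = np.mean(nohitter_times)
print('optimal tau:', tau, 'games between no-hitters')

# Linear regression by least squares: the slope and intercept that minimize
# the sum of the squares of the residuals.
slope, intercept = np.polyfit(illiteracy, fertility, 1)
print('slope:', slope, 'children per woman per percent illiterate')
print('intercept:', intercept, 'children per woman')
```
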
  2. Bootstrap confidence intervals

    To "pull yourself up by your bootstraps" is a classic idiom meaning that you achieve a difficult task by yourself with no help at all. In statistical inference, you want to know what would happen if you could repeat your data acquisition an infinite number of times. This task is impossible, but can we use only the data we actually have to get close to the same result as an infinitude of experiments? The answer is yes! The technique to do it is aptly called bootstrapping. This chapter will introduce you to this extraordinarily powerful tool.

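A minimal sketch of bootstrapping as described above, using a hypothetical data array in place of the course's datasets: resample with replacement many times, compute the statistic of interest on each bootstrap replicate, and read a confidence interval off the percentiles of those replicates.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical observed sample (stand-in for a course dataset).
data = rng.normal(loc=800.0, scale=100.0, size=50)

def bootstrap_replicate_1d(data, func):
    """Resample the data with replacement and apply the summary statistic."""
    return func(rng.choice(data, size=len(data)))

# Many bootstrap replicates of the mean: each one mimics repeating the
# experiment and recomputing the statistic.
bs_replicates = np.array([bootstrap_replicate_1d(data, np.mean)
                          for _ in range(10_000)])

# A 95% bootstrap confidence interval is given by the 2.5th and 97.5th
# percentiles of the replicates.
conf_int = np.percentile(bs_replicates, [2.5, 97.5])
print('95% confidence interval for the mean:', conf_int)
```
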
  3. Introduction to hypothesis testing

    You now know how to define and estimate parameters given a model. But the question remains: how reasonable is it to observe your data if a model is true? This question is addressed by hypothesis tests. They are the icing on the inference cake. After completing this chapter, you will be able to carefully construct and test hypotheses using hacker statistics.

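A minimal sketch of a permutation test, one of the hacker-stats hypothesis tests this chapter introduces, using two hypothetical samples: under the null hypothesis that both groups come from the same distribution, the labels are exchangeable, so the pooled data are shuffled many times and the p-value is the fraction of shuffles that give a test statistic at least as extreme as the observed one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical samples to compare.
sample_a = rng.normal(0.70, 0.15, size=20)
sample_b = rng.normal(0.55, 0.15, size=20)

def diff_of_means(x, y):
    """Test statistic: difference of sample means."""
    return np.mean(x) - np.mean(y)

observed = diff_of_means(sample_a, sample_b)

# Simulate the null hypothesis that both samples come from the same
# distribution: pool the data, shuffle, and re-split into two groups.
pooled = np.concatenate([sample_a, sample_b])
perm_replicates = np.empty(10_000)
for i in range(perm_replicates.size):
    permuted = rng.permutation(pooled)
    perm_replicates[i] = diff_of_means(permuted[:sample_a.size],
                                       permuted[sample_a.size:])

# p-value: fraction of permutation replicates at least as extreme as observed.
p_value = np.sum(perm_replicates >= observed) / perm_replicates.size
print('observed difference of means:', observed)
print('p-value:', p_value)
```
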
  4. Hypothesis test examples

    As you saw from the last chapter, hypothesis testing can be a bit tricky. You need to define the null hypothesis, figure out how to simulate it, and define clearly what it means to be "more extreme" in order to compute the p-value. Like any skill, practice makes perfect, and this chapter gives you some good practice with hypothesis tests.

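A minimal sketch of the one-sample bootstrap test pattern practiced in this chapter, with a hypothetical sample and hypothesized mean: shift the data so their mean equals the null value (simulating the null hypothesis while preserving the sample's variability), bootstrap the shifted data, and compute the p-value as the fraction of replicates at least as extreme as the observed mean.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sample and a hypothesized population mean to test against.
sample = rng.normal(0.55, 0.05, size=25)
mu_null = 0.57

# Simulate the null hypothesis: shift the sample so its mean equals the
# hypothesized value while keeping its spread unchanged.
shifted = sample - np.mean(sample) + mu_null

# Bootstrap replicates of the mean computed under the null hypothesis.
bs_replicates = np.array([np.mean(rng.choice(shifted, size=len(shifted)))
                          for _ in range(10_000)])

# "More extreme" here means a replicate mean as small as or smaller than the
# observed sample mean (a one-sided test).
p_value = np.sum(bs_replicates <= np.mean(sample)) / len(bs_replicates)
print('observed mean:', np.mean(sample))
print('p-value:', p_value)
```
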

Datasets

Anscombe data, Bee sperm counts, Female literacy and fertility, Finch beaks (1975), Finch beaks (2012), Fortis beak depth heredity, Frog tongue data, Major League Baseball no-hitters, Scandens beak depth heredity, Sheffield Weather Station

Collaborators

Yashas Roy
Hugo Bowne-Anderson
Justin Bois, Lecturer at the California Institute of Technology

