
Statistical Thinking in Python (Part 2)

Learn to perform the two key tasks in statistical inference: parameter estimation and hypothesis testing.

4 hours · 15 videos · 66 exercises · 90,587 learners · Statement of Accomplishment



Course Description

After completing Statistical Thinking in Python (Part 1), you have the probabilistic mindset and foundational hacker stats skills to dive into data sets and extract useful information from them. In this course, you will do just that, expanding and honing your hacker stats toolbox to perform the two key tasks in statistical inference: parameter estimation and hypothesis testing. You will work with real data sets as you learn, culminating with an analysis of measurements of the beaks of Darwin's famous finches. You will emerge from this course with new knowledge and lots of practice under your belt, ready to attack your own inference problems out in the world.
  1. Parameter estimation by optimization (Free)

    When doing statistical inference, we speak the language of probability. A probability distribution that describes your data has parameters. So, a major goal of statistical inference is to estimate the values of these parameters, which allows us to concisely and unambiguously describe our data and draw conclusions from it. In this chapter, you will learn how to find the optimal parameters, those that best describe your data. (A minimal sketch of these ideas, using made-up data, follows the exercise list below.)

    Optimal parameters (50 xp)
    How often do we get no-hitters? (100 xp)
    Do the data follow our story? (100 xp)
    How is this parameter optimal? (100 xp)
    Linear regression by least squares (50 xp)
    EDA of literacy/fertility data (100 xp)
    Linear regression (100 xp)
    How is it optimal? (100 xp)
    The importance of EDA: Anscombe's quartet (50 xp)
    The importance of EDA (50 xp)
    Linear regression on appropriate Anscombe data (100 xp)
    Linear regression on all Anscombe data (100 xp)
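To give a flavor of what this chapter covers, here is a minimal sketch (not the course's exercise code) of the two central ideas: for exponentially distributed waiting times, the optimal parameter is simply the sample mean, and a least-squares line can be fit with np.polyfit. All the data below are synthetic, generated for illustration only.

```python
import numpy as np

rng = np.random.default_rng(42)

# --- Optimal parameter of an exponential model ---
# Synthetic "times between events" drawn from an Exponential(tau=800) model.
times = rng.exponential(scale=800, size=250)

# For exponentially distributed data, the parameter that best describes the
# data (the maximum-likelihood estimate of the mean waiting time) is simply
# the sample mean.
tau_hat = np.mean(times)
print(f"Estimated tau: {tau_hat:.1f}")

# --- Linear regression by least squares ---
# Synthetic (x, y) data with a known slope and intercept plus noise.
x = rng.uniform(0, 100, size=150)
y = 0.05 * x + 1.5 + rng.normal(0, 0.5, size=150)

# np.polyfit with degree 1 returns the least-squares slope and intercept.
slope, intercept = np.polyfit(x, y, 1)
print(f"slope = {slope:.3f}, intercept = {intercept:.3f}")
```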
  2. Bootstrap confidence intervals

    To "pull yourself up by your bootstraps" is a classic idiom meaning that you achieve a difficult task by yourself with no help at all. In statistical inference, you want to know what would happen if you could repeat your data acquisition an infinite number of times. This task is impossible, but can we use only the data we actually have to get close to the same result as an infinitude of experiments? The answer is yes! The technique to do it is aptly called bootstrapping. This chapter will introduce you to this extraordinarily powerful tool.

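The core mechanic behind bootstrap confidence intervals can be sketched in a few lines: resample the observed data with replacement many times, compute the statistic of interest on each resample, and take percentiles of the resulting replicates as a confidence interval. This is a hedged illustration with synthetic data, not code from the course; the helper name bootstrap_replicates is just a placeholder.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic "observed" sample standing in for a real dataset.
data = rng.normal(loc=10.0, scale=2.0, size=100)

def bootstrap_replicates(data, func, size=10_000, rng=rng):
    """Draw bootstrap replicates of a statistic by resampling with replacement."""
    reps = np.empty(size)
    for i in range(size):
        resample = rng.choice(data, size=len(data), replace=True)
        reps[i] = func(resample)
    return reps

bs_reps = bootstrap_replicates(data, np.mean)

# A 95% percentile confidence interval spans the 2.5th to 97.5th percentile
# of the bootstrap replicates.
ci_low, ci_high = np.percentile(bs_reps, [2.5, 97.5])
print(f"Observed mean: {np.mean(data):.2f}")
print(f"95% CI: [{ci_low:.2f}, {ci_high:.2f}]")
```

Reporting the interval rather than a single point estimate communicates the uncertainty that comes from having only one finite sample.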
  3. Introduction to hypothesis testing

    You now know how to define and estimate parameters given a model. But the question remains: how reasonable is it to observe your data if a model is true? This question is addressed by hypothesis tests. They are the icing on the inference cake. After completing this chapter, you will be able to carefully construct and test hypotheses using hacker statistics.

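One standard "hacker stats" way to simulate a null hypothesis, in the spirit of what this chapter teaches, is a permutation test: if two samples come from the same distribution, shuffling the labels should not matter. The sketch below uses made-up samples and illustrative names such as perm_reps.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two synthetic samples; null hypothesis: they come from the same distribution.
sample_a = rng.normal(5.0, 1.0, size=60)
sample_b = rng.normal(5.4, 1.0, size=55)

# Test statistic: difference of means.
observed_diff = np.mean(sample_a) - np.mean(sample_b)

# Simulate the null by pooling the data, shuffling, and re-splitting.
pooled = np.concatenate([sample_a, sample_b])
perm_reps = np.empty(10_000)
for i in range(perm_reps.size):
    shuffled = rng.permutation(pooled)
    perm_a, perm_b = shuffled[:len(sample_a)], shuffled[len(sample_a):]
    perm_reps[i] = np.mean(perm_a) - np.mean(perm_b)

# p-value: fraction of permutation replicates at least as extreme as observed
# (two-sided, hence the absolute values).
p_value = np.mean(np.abs(perm_reps) >= np.abs(observed_diff))
print(f"observed diff = {observed_diff:.3f}, p = {p_value:.4f}")
```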
  4. Hypothesis test examples

    As you saw from the last chapter, hypothesis testing can be a bit tricky. You need to define the null hypothesis, figure out how to simulate it, and define clearly what it means to be "more extreme" in order to compute the p-value. Like any skill, practice makes perfect, and this chapter gives you some good practice with hypothesis tests.

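To make the "define what is more extreme" step concrete, here is one hedged example of the kind of test practiced in this chapter: a one-sample bootstrap test of whether a sample mean is consistent with a hypothesized value. The data and the hypothesized mean are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic sample and a hypothesized population mean under the null.
sample = rng.normal(2.3, 0.5, size=80)
null_mean = 2.5

# Shift the sample so that it obeys the null hypothesis exactly,
# then bootstrap the mean of the shifted data.
shifted = sample - np.mean(sample) + null_mean
bs_means = np.array([
    np.mean(rng.choice(shifted, size=len(shifted), replace=True))
    for _ in range(10_000)
])

# "At least as extreme" here means a bootstrap mean as far below the null
# mean as the observed mean was (a one-sided test).
p_value = np.mean(bs_means <= np.mean(sample))
print(f"observed mean = {np.mean(sample):.3f}, p = {p_value:.4f}")
```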

Datasets

Anscombe data
Bee sperm counts
Female literacy and fertility
Finch beaks (1975)
Finch beaks (2012)
Fortis beak depth heredity
Frog tongue data
Major League Baseball no-hitters
Scandens beak depth heredity
Sheffield Weather Station

Collaborators

Yashas Roy
Hugo Bowne-Anderson
Justin Bois

Lecturer at the California Institute of Technology

