Time series data is ubiquitous. Whether it be stock market fluctuations, sensor data recording climate change, or activity in the brain, any signal that changes over time can be described as a time series. Machine learning has emerged as a powerful method for extracting predictions and insights from complex data. This course sits at the intersection of these two worlds, covering feature engineering, spectrograms, and other advanced techniques in order to classify heartbeat sounds and predict stock prices.
Time Series and Machine Learning Primer
This chapter is an introduction to the basics of machine learning, time series data, and the intersection between the two.

- Time series kinds and applications (50 xp)
- Identifying a time series (50 xp)
- Plotting a time series (I) (100 xp)
- Plotting a time series (II) (100 xp)
- Machine learning basics (50 xp)
- Fitting a simple model: classification (100 xp)
- Predicting using a classification model (100 xp)
- Fitting a simple model: regression (100 xp)
- Predicting using a regression model (100 xp)
- Machine learning and time series data (50 xp)
- Inspecting the classification data (100 xp)
- Inspecting the regression data (100 xp)
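The "fitting a simple model" exercises above follow the standard scikit-learn fit/predict pattern. A minimal sketch of that pattern, using synthetic stand-ins for the course's data (two classes of sine waves at assumed frequencies, not the real datasets):

```python
# Hypothetical example of fitting a simple classifier in scikit-learn.
# The signals here are synthetic; the course uses real recordings.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Two toy classes of time series: slow vs. fast sine waves plus noise
t = np.linspace(0, 1, 100)
X = np.vstack([np.sin(2 * np.pi * f * t) + 0.1 * rng.normal(size=t.size)
               for f in [1] * 20 + [5] * 20])
y = np.array([0] * 20 + [1] * 20)

# scikit-learn expects samples as rows and features as columns, so
# each raw time point of a series is treated as one feature
model = LinearSVC()
model.fit(X, y)
predictions = model.predict(X)
print((predictions == y).mean())
```

Treating raw time points as features is the simplest possible setup; the later chapters replace it with engineered features that are more robust.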
Time Series as Inputs to a Model
The easiest way to incorporate time series into your machine learning pipeline is to use them as features in a model. This chapter covers common features that are extracted from time series in order to do machine learning.

- Classifying a time series (50 xp)
- Many repetitions of sounds (100 xp)
- Invariance in time (100 xp)
- Build a classification model (100 xp)
- Improving features for classification (50 xp)
- Calculating the envelope of sound (100 xp)
- Calculating features from the envelope (100 xp)
- Derivative features: The tempogram (100 xp)
- The spectrogram (50 xp)
- Spectrograms of heartbeat audio (100 xp)
- Engineering spectral features (100 xp)
- Combining many features in a classifier (100 xp)
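The envelope exercises above rest on a simple idea: rectify the raw signal, smooth it, and summarize the result as a few numbers per recording. A minimal sketch with NumPy, using a synthetic amplitude-modulated tone in place of the course's heartbeat audio:

```python
# Sketch of envelope-based features, assuming a 1-D NumPy signal.
# A real pipeline would load audio from disk; this signal is synthetic.
import numpy as np

sfreq = 1000
time = np.arange(0, 2, 1 / sfreq)
audio = np.sin(2 * np.pi * 5 * time) * np.sin(2 * np.pi * 220 * time)

# Rectify, then smooth with a rolling mean to obtain the envelope
rectified = np.abs(audio)
window = 50
envelope = np.convolve(rectified, np.ones(window) / window, mode="same")

# Collapse the envelope into a fixed-length feature vector, so that
# recordings of any duration yield the same number of features
features = np.array([envelope.mean(), envelope.std(), envelope.max()])
print(features)
```

The key property is the last step: however long the recording, the summary statistics give a fixed-length vector that any standard classifier can consume.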
Predicting Time Series Data
If you want to predict patterns from data over time, there are special considerations for how you choose and construct your model. This chapter covers how to gain insights into the data before fitting your model, as well as best practices in using predictive modeling for time series data.

- Predicting data over time (50 xp)
- Introducing the dataset (100 xp)
- Fitting a simple regression model (100 xp)
- Visualizing predicted values (100 xp)
- Advanced time series prediction (50 xp)
- Visualizing messy data (100 xp)
- Imputing missing values (100 xp)
- Transforming raw data (100 xp)
- Handling outliers (100 xp)
- Creating features over time (50 xp)
- Engineering multiple rolling features at once (100 xp)
- Percentiles and partial functions (100 xp)
- Using "date" information (100 xp)
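The "rolling features" and "percentiles and partial functions" exercises above combine pandas rolling windows with `functools.partial` to fix a percentile value, turning `np.percentile` into a one-argument function a window can be passed to. A minimal sketch on a synthetic random-walk series (a stand-in for the course's stock prices):

```python
# Sketch of engineering several rolling features at once with pandas.
# The price series here is synthetic, not the course's real data.
from functools import partial
import numpy as np
import pandas as pd

dates = pd.date_range("2020-01-01", periods=100, freq="D")
prices = pd.Series(np.cumsum(np.random.default_rng(1).normal(size=100)),
                   index=dates)

# partial fixes q, leaving a one-argument function for each window
percentile_funcs = [partial(np.percentile, q=q) for q in (10, 50, 90)]

# Apply every percentile function over a 20-day rolling window
rolling_features = pd.concat(
    [prices.rolling(20).apply(f, raw=True) for f in percentile_funcs],
    axis=1)
rolling_features.columns = ["p10", "p50", "p90"]
print(rolling_features.dropna().head())
```

`raw=True` passes each window to the function as a plain NumPy array, which is both what `np.percentile` expects and noticeably faster than passing a Series.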
Validating and Inspecting Time Series Models
Once you've got a model for predicting time series data, you need to decide if it's a good or a bad model. This chapter covers the basics of generating predictions with models in order to validate them against "test" data.

- Creating features from the past (50 xp)
- Creating time-shifted features (100 xp)
- Special case: Auto-regressive models (100 xp)
- Visualize regression coefficients (100 xp)
- Auto-regression with a smoother time series (100 xp)
- Cross-validating time series data (50 xp)
- Cross-validation with shuffling (100 xp)
- Cross-validation without shuffling (100 xp)
- Time-based cross-validation (100 xp)
- Stationarity and stability (50 xp)
- Stationarity (50 xp)
- Bootstrapping a confidence interval (100 xp)
- Calculating variability in model coefficients (100 xp)
- Visualizing model score variability over time (100 xp)
- Accounting for non-stationarity (100 xp)
- Wrap-up (50 xp)
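Two of the ideas above, time-shifted features and time-based cross-validation, fit together naturally: lagged copies of a series become the inputs of an auto-regressive model, and scikit-learn's `TimeSeriesSplit` ensures each validation block is predicted only from earlier data. A minimal sketch on a synthetic sine series (not the course's dataset):

```python
# Sketch of lag features plus time-based cross-validation, assuming
# a pandas Series; the data here is a synthetic smooth signal.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import TimeSeriesSplit, cross_val_score

values = pd.Series(np.sin(np.linspace(0, 20, 200)))

# Shifted copies of the series become the inputs (an auto-regressive
# setup); dropna removes rows made incomplete by the shifting
lags = pd.concat({f"lag_{i}": values.shift(i) for i in (1, 2, 3)}, axis=1)
data = lags.dropna()
X, y = data.values, values.loc[data.index].values

# Each fold trains only on data that precedes its validation block,
# unlike shuffled K-fold, which would leak the future into training
cv = TimeSeriesSplit(n_splits=5)
scores = cross_val_score(LinearRegression(), X, y, cv=cv)
print(scores)
```

Shuffled cross-validation would score this model optimistically, because neighboring time points end up split across train and test; the time-ordered splits above are the honest estimate.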
Fellow at the Berkeley Institute for Data Science
Chris Holdgraf is a fellow at the Berkeley Institute for Data Science at UC Berkeley. He has a PhD in cognitive neuroscience from UC Berkeley. His work is at the boundary between technology, open-source software, and scientific workflows. He's a core member of Project Jupyter and contributes to several other open source tools for data analytics and education.