Michael Kane

Assistant Professor at Yale University

Michael Kane is an Assistant Professor at Yale University. His research is in the area of scalable statistical/machine learning and applied probability.

Simon Urbanek

Member of the R-Core; Lead Inventive Scientist at AT&T Labs Research

Simon Urbanek is a member of the R-Core and Lead Inventive Scientist at AT&T Labs Research. His research is in the areas of R, statistical computing, visualization, and interactive graphics.

Collaborator(s)
  • Richie Cotton
  • Sumedh Panchadhar

Course Description

Datasets are often larger than available RAM, which is a problem for R programmers because, by default, all variables are stored in memory. You'll learn tools for processing, exploring, and analyzing data directly from disk. You'll also implement the split-apply-combine approach and learn how to write scalable code using the bigmemory and iotools packages. In this course, you'll make use of the Federal Housing Finance Agency's data, a publicly available data set chronicling all mortgages held or securitized by the Federal National Mortgage Association (Fannie Mae) and the Federal Home Loan Mortgage Corporation (Freddie Mac) from 2009 to 2015.
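
A minimal sketch of the core idea before the chapter breakdown: instead of loading a CSV into RAM, bigmemory can import it into a file-backed big.matrix that lives on disk. The file names and column type below are hypothetical placeholders, not the course's actual data files.

```r
# Sketch: import a CSV that may exceed available RAM as a file-backed
# big.matrix. File names here are hypothetical placeholders.
library(bigmemory)

mort <- read.big.matrix(
  "mortgage-sample.csv",
  header         = TRUE,
  type           = "integer",       # a big.matrix holds one atomic type
  backingfile    = "mortgage.bin",  # the data live on disk, not in RAM
  descriptorfile = "mortgage.desc"  # metadata used to re-attach later
)

# A later R session can re-attach the same data without re-importing:
# mort <- attach.big.matrix("mortgage.desc")
dim(mort)
```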

  1. Working with increasingly large data sets (Free)

    In this chapter, we cover why you need new techniques when data sets are larger than available RAM. We show that importing and exporting data using the base R functions can be slow, along with some easy ways to remedy this (see the import sketch after this chapter list). Finally, we introduce the bigmemory package.

  2. Processing and Analyzing Data with bigmemory

    Now that you've got some experience using bigmemory, we'll go through some simple data exploration and analysis techniques. In particular, we'll see how to create tables and implement the split-apply-combine approach (sketched after this list).

  3. Working with iotools

    We'll use the iotools package, which can process both numeric and string data, and introduce the concept of chunk-wise processing (sketched after this list).

  4. Case Study: A Preliminary Analysis of the Housing Data

    In the previous chapters, we've introduced the housing data and shown how to compute with data that is about as big as, or bigger than, the available RAM on a single machine. In this chapter, we'll go through a preliminary analysis of the data, comparing various trends over time (a minimal trend sketch follows this list).
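
Chapter 1 mentions that base R import and export can be slow, with easy remedies. Here is a minimal sketch of two common ones (not necessarily the exact remedies covered in the course), assuming a hypothetical mortgage-sample.csv with three columns of illustrative types:

```r
# 1. Declaring colClasses lets read.csv() skip type inference,
#    which speeds up parsing considerably on large files.
mort_df <- read.csv("mortgage-sample.csv",
                    colClasses = c("integer", "integer", "numeric"))

# 2. Cache a binary copy: readRDS() is much faster than re-parsing text.
saveRDS(mort_df, "mortgage-sample.rds")
mort_df <- readRDS("mortgage-sample.rds")
```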
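Chapter 2's tabulation and split-apply-combine steps might look like this sketch. It assumes the descriptor file from the import sketch above and hypothetical column names year and borrower_income; bigtable() comes from the companion bigtabulate package:

```r
library(bigmemory)
library(bigtabulate)

# Re-attach the file-backed matrix created earlier (hypothetical files).
mort <- attach.big.matrix("mortgage.desc")

# Tabulate counts by year without copying the whole data set into RAM.
bigtable(mort, "year")

# Split-apply-combine: split row indices by year, apply a summary to
# each group, and combine the results into a named vector.
idx_by_year <- split(seq_len(nrow(mort)), mort[, "year"])
sapply(idx_by_year, function(rows) mean(mort[rows, "borrower_income"]))
```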
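Chapter 3's chunk-wise processing with iotools could be sketched as below: chunk.apply() streams the file in bounded-size raw chunks, mstrsplit() parses each chunk into a matrix, and CH.MERGE combines the per-chunk results. The file name is hypothetical, and the file is assumed to have no header row:

```r
library(iotools)

# Sum each column of a large CSV without ever holding it all in RAM.
chunk_sums <- chunk.apply(
  "mortgage-sample.csv",          # hypothetical file, no header row
  function(chunk) {
    m <- mstrsplit(chunk, sep = ",", type = "integer")
    colSums(m)                    # one row of column sums per chunk
  },
  CH.MERGE = rbind,               # stack the per-chunk sums as rows
  CH.MAX.SIZE = 64 * 1024         # read at most ~64 KB of text per chunk
)
colSums(chunk_sums)               # combine the per-chunk sums
```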
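Finally, the case study's trend comparisons might be sketched as a per-year summary plotted over time, again with a hypothetical descriptor file and column names:

```r
library(bigmemory)
mort <- attach.big.matrix("mortgage.desc")  # hypothetical descriptor file

# Hypothetical trend: mean loan amount by year, via split-apply-combine.
rows_by_year <- split(seq_len(nrow(mort)), mort[, "year"])
trend <- sapply(rows_by_year,
                function(rows) mean(mort[rows, "loan_amount"]))

plot(as.integer(names(trend)), trend, type = "b",
     xlab = "Year", ylab = "Mean loan amount")
```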