Interactive Course

Introduction to Spark SQL with Python

Learn how to manipulate data and create machine learning feature sets in Spark using SQL in Python.

  • 4 hours
  • 15 Videos
  • 52 Exercises
  • 1,320 Participants
  • 4,200 XP

Loved by learners at thousands of top companies:

T-Mobile · Deloitte · Airbnb · IKEA · eBay · Siemens

Course Description

If you're familiar with SQL and have heard great things about Apache Spark, then this course is for you! Apache Spark is a computing framework for processing big data, and Spark SQL is the component of Apache Spark that works with tabular data. Window functions, an advanced feature of SQL, take Spark to a new level of usefulness.

In this course you will use Spark SQL to analyze time series, extract the most common sequences of words from a text document, and create feature sets from natural language text, using them to predict the last word in a sentence with logistic regression. Spark combines the power of distributed computing with the ease of use of Python and SQL.

The course uses a natural language text dataset that is easy to understand: sentences are sequences of words, and window functions are well suited to manipulating sequence data. The same techniques apply to other sequences, such as song, video, or podcast IDs. Exercises include discovering frequent word sequences and converting word sequences into machine learning feature data for training a text classifier.
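For orientation, here is a minimal sketch of the workflow the course builds on: wrap data in a DataFrame, register it as a SQL view, and query it with plain SQL. The tiny `text` table and its columns are invented for illustration and are not the course's actual dataset.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark-sql-intro").getOrCreate()

# Any DataFrame can be exposed to SQL by registering it as a temporary view.
df = spark.createDataFrame(
    [(0, "the"), (1, "quick"), (2, "brown"), (3, "fox")],
    schema=["id", "word"],
)
df.createOrReplaceTempView("text")

# From here on, plain SQL does the work.
spark.sql("SELECT id, word FROM text ORDER BY id").show()
```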

  1. PySpark SQL

    Free

    In this chapter you will learn how to create and query a SQL table in Spark. Spark SQL brings the expressiveness of SQL to Spark. You will also learn how to use SQL window functions, which perform a calculation across the rows related to the current row and greatly simplify results that are difficult to express with joins and traditional aggregations alone. We'll use window functions for running sums, running differences, and other operations that are challenging in basic SQL (see the running-total sketch after the chapter list).

  2. Using Window Function SQL for Natural Language Processing

    In this chapter you will load natural language text, then apply a moving window analysis to find frequent word sequences (sketched below the chapter list).

  3. Caching, Logging, and the Spark UI

    In the previous chapters you learned how to use the expressiveness of window function SQL. That expressiveness makes it important to understand how to properly cache DataFrames and SQL tables, and how to evaluate your application, which you'll learn to do using the Spark UI. You'll also learn a best practice for logging in Spark. Spark SQL brings with it another useful tool for tuning query performance: the query execution plan. You will learn how to use the execution plan to evaluate the provenance of a DataFrame (see the caching sketch after the chapter list).

  4. Text Classification

    Previous chapters gave you the tools for loading raw text, tokenizing it, and extracting word sequences. This is very useful for analysis on its own, but it also feeds machine learning. Here everything comes together: you'll use logistic regression to classify text. By the end of this chapter, you will have loaded raw natural language text data and used it to train a text classifier (see the pipeline sketch after the chapter list).
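To make chapter 1 concrete, here is a hedged sketch of a running total and a row-to-row difference computed with SQL window functions. The `sales` table and its columns are made up; only the window-function syntax is the point.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("window-functions").getOrCreate()

spark.createDataFrame(
    [("2023-01-01", 10), ("2023-01-02", 30), ("2023-01-03", 20)],
    schema=["day", "amount"],
).createOrReplaceTempView("sales")

spark.sql("""
    SELECT
        day,
        amount,
        -- running sum: everything from the first row up to the current row
        SUM(amount) OVER (ORDER BY day) AS running_total,
        -- running difference: current row minus the previous row
        amount - LAG(amount, 1) OVER (ORDER BY day) AS delta
    FROM sales
""").show()
```

Without a PARTITION BY clause the window spans the whole table; Spark will warn that this moves all the data to a single partition, which is harmless at this size.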
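Chapter 2's moving-window idea can be sketched like this: LEAD pulls the next one and two words onto each row, turning a word list into 3-grams that can be grouped and counted. The six-word `words` view is invented for illustration.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("word-sequences").getOrCreate()

words = "to be or not to be".split()
spark.createDataFrame(
    list(enumerate(words)), schema=["pos", "word"]
).createOrReplaceTempView("words")

spark.sql("""
    SELECT w1, w2, w3, COUNT(*) AS n
    FROM (
        SELECT
            word AS w1,
            LEAD(word, 1) OVER (ORDER BY pos) AS w2,
            LEAD(word, 2) OVER (ORDER BY pos) AS w3
        FROM words
    ) AS grams
    WHERE w3 IS NOT NULL    -- drop incomplete windows at the end of the text
    GROUP BY w1, w2, w3
    ORDER BY n DESC
""").show()
```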
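A hedged sketch of the tooling chapter 3 covers: caching a DataFrame, caching a SQL table by name, and printing a query's execution plan. The `numbers` view is illustrative.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("caching-and-plans").getOrCreate()

df = spark.range(1_000_000).withColumnRenamed("id", "n")
df.createOrReplaceTempView("numbers")

df.cache()                           # mark the DataFrame for caching
spark.catalog.cacheTable("numbers")  # or cache the SQL table by name

result = spark.sql(
    "SELECT n % 10 AS bucket, COUNT(*) AS cnt FROM numbers GROUP BY n % 10"
)
result.explain()  # print the physical execution plan for this query

spark.catalog.uncacheTable("numbers")
df.unpersist()
```

Caching is lazy: nothing is stored until an action materializes the data, but once the table is marked, the plan printed by explain() reads from the in-memory relation rather than recomputing it.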
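Finally, a hedged sketch of where chapter 4 ends up: a spark.ml pipeline that tokenizes sentences, hashes the tokens into fixed-length feature vectors, and fits a logistic regression classifier. The course predicts the last word of a sentence; this sketch uses an invented two-sentence training set with binary labels just to keep the pipeline shape visible.

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import Tokenizer, HashingTF
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("text-classifier").getOrCreate()

train = spark.createDataFrame(
    [("spark makes big data feel small", 1.0),
     ("a completely unrelated sentence", 0.0)],
    schema=["sentence", "label"],
)

pipeline = Pipeline(stages=[
    Tokenizer(inputCol="sentence", outputCol="words"),
    HashingTF(inputCol="words", outputCol="features", numFeatures=1024),
    LogisticRegression(maxIter=10),  # reads "features" and "label" by default
])

model = pipeline.fit(train)
model.transform(train).select("sentence", "prediction").show(truncate=False)
```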


What do other learners have to say?

“I've used other sites, but DataCamp's been the one that I've stuck with.”

Devon Edwards Joseph

Lloyd's Banking Group

“DataCamp is the top resource I recommend for learning data science.”

Louis Maiden

Harvard Business School

“DataCamp is by far my favorite website to learn from.”

Ronald Bowers

Decision Science Analytics @ USAA

Mark Plutowski

Big Data Architect & Scientist @ Flipboard

Mark Plutowski received his PhD in Computer Science from UCSD, with an emphasis in machine learning and econometrics. He holds 18 published patents, has worked with cloud APIs and the Hadoop ecosystem for over ten years, and has been involved in hundreds of interviews.
