Interactive Course

Big Data Fundamentals via PySpark

Learn the fundamentals of working with big data using PySpark.

  • 4 hours
  • 16 Videos
  • 55 Exercises
  • 5,164 Participants
  • 4,600 XP

Loved by learners at thousands of top companies:

EA · PayPal · eBay · Intel · 3M · IKEA

Course Description

There's been a lot of buzz about Big Data over the past few years, and it's finally become mainstream for many companies. But what is Big Data? This course covers the fundamentals of Big Data via PySpark. Spark is a "lightning fast" cluster-computing framework for Big Data. It provides a general data processing platform and lets you run programs up to 100x faster in memory, or 10x faster on disk, than Hadoop. You'll use PySpark, a Python package for Spark programming, and its powerful higher-level libraries such as Spark SQL and MLlib (for machine learning) to interact with the works of William Shakespeare, analyze FIFA 2018 football data, and perform clustering of genomic datasets. By the end of this course, you will have an in-depth understanding of PySpark and its application to general Big Data analysis.
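
To give a flavor of the hands-on exercises, here is a minimal sketch of starting a PySpark session, assuming PySpark is installed locally (pip install pyspark); on the course platform a SparkContext is typically provided for you:

    # A minimal local PySpark session (hypothetical setup, not course code).
    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .master("local[*]")               # run Spark on all local cores
             .appName("big-data-fundamentals")
             .getOrCreate())
    sc = spark.sparkContext                    # classic entry point used for RDDs

    print(spark.version)                       # confirm the session is up
    spark.stop()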

  1. Introduction to Big Data analysis with Spark

    Free

    This chapter introduces the exciting world of Big Data, along with the key concepts and frameworks for processing it. You will understand why Apache Spark is considered one of the best frameworks for Big Data.

  2. Programming in PySpark RDDs

    The main abstraction Spark provides is the resilient distributed dataset (RDD), the fundamental data type of the engine. This chapter introduces RDDs and shows how to create them and work with them using transformations and actions (see the RDD sketch after this chapter list).

  3. PySpark SQL & DataFrames

    In this chapter, you'll learn about Spark SQL, a Spark module for structured data processing. It provides a programming abstraction called DataFrames and can also act as a distributed SQL query engine. This chapter shows how Spark SQL lets you use DataFrames in Python (a DataFrame sketch follows this list).

  4. Machine Learning with PySpark MLlib

    PySpark MLlib is Apache Spark's scalable machine learning library for Python, consisting of common learning algorithms and utilities. In this final chapter, you'll apply several important machine learning algorithms: you will build a movie recommendation engine and a spam filter, and use k-means clustering (an MLlib sketch follows this list).
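
The RDD sketch referenced in chapter 2: a minimal, hypothetical word count in the spirit of the Shakespeare exercise, assuming a local SparkSession. Transformations are lazy; only the final action triggers computation.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.master("local[*]").appName("rdd-demo").getOrCreate()
    sc = spark.sparkContext

    # Transformations (lazy): build up a word-count pipeline.
    lines = sc.parallelize(["to be or not to be", "all the world's a stage"])
    counts = (lines.flatMap(lambda line: line.split())  # split lines into words
                   .map(lambda word: (word, 1))         # pair each word with 1
                   .reduceByKey(lambda a, b: a + b))    # sum counts per word

    # Action (eager): collect() triggers the actual execution.
    print(counts.collect())   # e.g. [('to', 2), ('be', 2), ...]
    spark.stop()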
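
The DataFrame sketch referenced in chapter 3: the same engine exposed through both the DataFrame API and SQL. The player rows here are made up for illustration.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.master("local[*]").appName("sql-demo").getOrCreate()

    # Build a small DataFrame in memory (column names are illustrative).
    df = spark.createDataFrame(
        [("Messi", 31), ("Ronaldo", 33), ("Mbappe", 19)],
        ["name", "age"],
    )

    # The same query via the DataFrame API...
    df.filter(df.age < 30).show()

    # ...and via SQL against a temporary view.
    df.createOrReplaceTempView("players")
    spark.sql("SELECT name FROM players WHERE age < 30").show()
    spark.stop()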
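
The MLlib sketch referenced in chapter 4: k-means clustering with the RDD-based pyspark.mllib API, on toy 2-D points standing in for the genomic features used in the course.

    from pyspark.sql import SparkSession
    from pyspark.mllib.clustering import KMeans

    spark = SparkSession.builder.master("local[*]").appName("mllib-demo").getOrCreate()
    sc = spark.sparkContext

    # Toy 2-D feature vectors forming two obvious clusters.
    points = sc.parallelize([[0.0, 0.0], [1.0, 1.0], [9.0, 8.0], [8.0, 9.0]])

    # Train k-means with k=2 on the RDD of vectors.
    model = KMeans.train(points, k=2, maxIterations=10)
    print(model.clusterCenters)        # learned centroids
    print(model.predict([0.5, 0.5]))   # cluster id for a new point
    spark.stop()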

What do other learners have to say?

“I've used other sites, but DataCamp's been the one that I've stuck with.”

Devon Edwards Joseph

Lloyd's Banking Group

“DataCamp is the top resource I recommend for learning data science.”

Louis Maiden

Harvard Business School

“DataCamp is by far my favorite website to learn from.”

Ronald Bowers

Decision Science Analytics @ USAA

Upendra Kumar Devisetty

Science Analyst at CyVerse

Upendra Kumar Devisetty is a Science Analyst at CyVerse, where he works with biologists, bioinformaticians, programming teams, and other members of the CyVerse team. He also coordinates development across projects and facilitates integration and cross-communication. His current work focuses on integrative analysis of Big Data using high-throughput methods on advanced computing systems. Because scientific computing is becoming indispensable for Big Data research, he has been building a community to develop and propagate a set of best practices, including continuous testing, version control, virtualization, sharing code through notebooks, and standard data structures.
