Project: Cleaning an Orders Dataset with PySpark
You are a Data Engineer at Voltmart, an electronics e-commerce company. A peer Machine Learning team has asked you to clean a dataset containing information about orders placed last year; they plan to use the cleaned data to build a demand forecasting model, and have shared their requirements for the desired output table format.
An analyst shared a parquet file called "orders_data.parquet" for you to clean and preprocess.
You can see the dataset schema below along with the cleaning requirements:
orders_data.parquet

| column | data type | description | cleaning requirements |
|---|---|---|---|
| order_date | timestamp | Date and time when the order was made | Modify: remove orders placed between 12am and 5am (inclusive); convert from timestamp to date |
| time_of_day | string | Period of the day when the order was made | New column containing (lower bound inclusive, upper bound exclusive): "morning" for orders placed between 5am and 12pm, "afternoon" for 12pm to 6pm, and "evening" for 6pm to 12am |
| order_id | long | Order ID | N/A |
| product | string | Name of a product ordered | Remove rows containing "TV" as the company has stopped selling this product; ensure all values are lowercase |
| product_ean | double | Product ID | N/A |
| category | string | Broader category of a product | Ensure all values are lowercase |
| purchase_address | string | Address line where the order was made ("House Street, City, State Zipcode") | N/A |
| purchase_state | string | US State of the purchase address | New column containing: the State that the purchase was ordered from |
| quantity_ordered | long | Number of product units ordered | N/A |
| price_each | double | Price of a product unit | N/A |
| cost_price | double | Cost of production per product unit | N/A |
| turnover | double | Total amount paid for a product (quantity x price) | N/A |
| margin | double | Profit made by selling a product (turnover - cost) | N/A |
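Before diving in, it can be worth confirming that the file's schema matches the table above. A minimal sketch, assuming the parquet file sits in the working directory (the app name 'schema_check' is an arbitrary placeholder):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName('schema_check').getOrCreate()

# Print the schema to confirm it matches the requirements table above
spark.read.parquet('orders_data.parquet').printSchema()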
from pyspark.sql import (
    SparkSession,
    functions as F,
)

spark = (
    SparkSession
    .builder
    .appName('cleaning_orders_dataset_with_pyspark')
    .getOrCreate()
)

# Load the raw orders data shared by the analyst
orders_data = spark.read.parquet('orders_data.parquet')
orders_data.toPandas().head()

# Remove orders placed between 12am and 5am (inclusive)
orders_data = orders_data.filter(F.hour(orders_data.order_date) > 5)
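As a sanity check on the filter (not part of the deliverable), the distinct hours remaining should all be greater than 5. Note this must run before order_date is converted to a date below, while the column is still a timestamp:

# Distinct order hours left after the filter; nothing in the 0-5 range should appear
orders_data.select(F.hour("order_date").alias("hour")).distinct().orderBy("hour").show()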
orders_data = orders_data.withColumn("time_of_day",
when(F.hour("order_date") < 12, "morning")
.when(F.hour("order_date") < 18, "afternoon")
.otherwise("evening")
)orders_data = orders_data.withColumn("order_date", to_date("order_date"))orders_data = orders_data.withColumn("product", F.lower("product"))
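To eyeball the new buckets before moving on, a quick per-period count can help (inspection only, not part of the required output):

# Row counts per time_of_day bucket
orders_data.groupBy("time_of_day").count().show()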
orders_data = orders_data.filter(~orders_data.product.contains("tv"))orders_data = orders_data.withColumn("category", F.lower("category"))orders_data = orders_data.withColumn("purchase_address_split", F.split("purchase_address", ","))
orders_data = orders_data.withColumn("purchase_state_with_zip", orders_data.purchase_address_split.getItem(F.size("purchase_address_split")-1))
orders_data = orders_data.drop("purchase_address_split")from pyspark.sql.functions import udf
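Before moving on, a quick check that the discontinued TV rows are really gone (a sketch; the count should come back as 0):

# Expect 0 rows matching "tv" after the filter above
orders_data.filter(orders_data.product.contains("tv")).count()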
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType
# Define a UDF that pulls the two-letter state code out of a value
# like " CA 90001" (position 0 is the leading space left by the split)
def get_first_two_items(s):
    return s[1:3]

# Register the UDF with a string return type
get_first_two_items_udf = udf(get_first_two_items, StringType())

# Apply the UDF to 'purchase_state_with_zip' to build 'purchase_state'
orders_data_clean = orders_data.withColumn(
    'purchase_state',
    get_first_two_items_udf(orders_data['purchase_state_with_zip'])
)
orders_data_clean.toPandas().head()

# List the distinct states found in the cleaned data
unique_purchase_states = orders_data_clean.select("purchase_state").distinct()
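The same extraction also works without a Python UDF, using only built-in column functions; this avoids UDF serialization overhead and handles nulls gracefully. A minimal alternative sketch:

# Trim the leading space, then take the first two characters natively
orders_data_clean = orders_data.withColumn(
    "purchase_state",
    F.substring(F.trim("purchase_state_with_zip"), 1, 2)
)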
unique_purchase_states.toPandas().head()
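Since the ML team will consume this table downstream, the final step would typically be persisting it. A sketch; the output filename 'orders_data_clean.parquet' is an assumption, not part of the shared requirements:

# Write the cleaned dataset back to parquet for the ML team
# (output path is an assumed placeholder)
orders_data_clean.write.parquet('orders_data_clean.parquet', mode='overwrite')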