As a Data Engineer at Voltmart, an electronics e-commerce company, you have been asked by a peer Machine Learning team to clean a dataset containing information about last year's orders. They plan to use the cleaned data to build a demand forecasting model and have shared their requirements for the desired output table format.

An analyst shared a parquet file called "orders_data.parquet" for you to clean and preprocess.

You can see the dataset schema below along with the cleaning requirements:

orders_data.parquet

| column | data type | description | cleaning requirements |
|---|---|---|---|
| order_date | timestamp | Date and time when the order was made | Modify: remove orders placed between 12am and 5am (inclusive); convert from timestamp to date |
| time_of_day | string | Period of the day when the order was made | New column containing (lower bound inclusive, upper bound exclusive): "morning" for orders placed 5am-12pm, "afternoon" for 12pm-6pm, and "evening" for 6pm-12am |
| order_id | long | Order ID | N/A |
| product | string | Name of a product ordered | Remove rows containing "TV" as the company has stopped selling this product; ensure all values are lowercase |
| product_ean | double | Product ID | N/A |
| category | string | Broader category of a product | Ensure all values are lowercase |
| purchase_address | string | Address line where the order was made ("House Street, City, State Zipcode") | N/A |
| purchase_state | string | US state of the purchase address | New column containing the state that the purchase was ordered from |
| quantity_ordered | long | Number of product units ordered | N/A |
| price_each | double | Price of a product unit | N/A |
| cost_price | double | Cost of production per product unit | N/A |
| turnover | double | Total amount paid for a product (quantity × price) | N/A |
| margin | double | Profit made by selling a product (turnover − cost) | N/A |

from pyspark.sql import SparkSession

spark = (
    SparkSession
    .builder
    .appName('cleaning_orders_dataset_with_pyspark')
    .getOrCreate()
)
orders_data = spark.read.parquet('orders_data.parquet')
orders_data.toPandas().head()
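
# Optional: confirm the dtypes match the schema table above
orders_data.printSchema()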
from pyspark.sql.functions import (
    col, to_timestamp, to_date, hour, when, lower, regexp_extract
)

# Step 0: Clean column names (if necessary)
orders_data = orders_data.toDF(*[c.strip().lower().replace(" ", "_") for c in orders_data.columns])
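
# Optional: verify the normalized column names
print(orders_data.columns)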

# Step 1: Convert order_date to timestamp
orders_data = orders_data.withColumn("order_date", to_timestamp("order_date"))

# Step 2: Remove orders placed between 12am and 5am
# (hours 0-4 are dropped; the 5am hour is kept so it can fall into the
# "morning" bucket, which is lower-bound inclusive per the requirements)
orders_data = orders_data.filter(hour("order_date") >= 5)
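
# Optional sanity check: no rows earlier than 5am should remain
assert orders_data.filter(hour("order_date") < 5).count() == 0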

# Step 3: Extract hour
orders_data = orders_data.withColumn("order_hour", hour("order_date"))

# Step 4: Create time_of_day column
orders_data = orders_data.withColumn(
    "time_of_day",
    when((col("order_hour") >= 5) & (col("order_hour") < 12), "morning")
    .when((col("order_hour") >= 12) & (col("order_hour") < 18), "afternoon")
    .when((col("order_hour") >= 18) & (col("order_hour") < 24), "evening")
)
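
# Optional sanity check: after the 5am filter, every remaining hour falls
# into one of the three buckets, so time_of_day should contain no nulls
assert orders_data.filter(col("time_of_day").isNull()).count() == 0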

# Step 5: Strip time to keep only date
orders_data = orders_data.withColumn("order_date", to_date("order_date"))

# Step 6: Clean product and category columns
orders_data = orders_data.withColumn("product", lower(col("product")))
orders_data = orders_data.withColumn("category", lower(col("category")))

# Step 7: Remove rows with 'tv' in product name
orders_data = orders_data.filter(~col("product").contains("tv"))
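
# Optional sanity check: no product names containing "tv" should survive.
# Note that contains() matches substrings, so any product whose lowercased
# name merely includes "tv" is dropped, which matches the stated requirement.
assert orders_data.filter(col("product").contains("tv")).count() == 0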

# Step 8: Extract purchase_state from purchase_address
orders_data = orders_data.withColumn(
    "purchase_state",
    regexp_extract(col("purchase_address"), r",\s([A-Z]{2})\s\d{5}$", 1)
)
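
# The regex assumes the documented address format "House Street, City, ST 12345".
# A quick check on a single made-up address (hypothetical value, not from the data):
sample = spark.createDataFrame(
    [("917 1st St, Dallas, TX 75001",)], ["purchase_address"]
)
sample.select(
    regexp_extract("purchase_address", r",\s([A-Z]{2})\s\d{5}$", 1).alias("state")
).show()  # expected: TX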

# Step 9: Drop helper column
orders_data = orders_data.drop("order_hour")

# Step 10: Save cleaned data
orders_data.write.mode("overwrite").parquet("orders_data_clean.parquet")

print("✅ Cleaned file saved as orders_data_clean.parquet")