Online shopping has been rapidly evolving over the last few years, making our lives easier: it is simple to buy almost any product with a click and have it delivered to your door. But behind the scenes, e-commerce companies face a complex challenge.
Uncertainty plays a big role in how supply chains plan and organize their operations to ensure that products are delivered on time. These uncertainties can lead to stockouts, delayed deliveries, and increased operational costs.
You work for the Sales & Operations Planning (S&OP) team at a multinational e-commerce company. They need your help planning for the upcoming end-of-year sales, and they want to use your insights to plan promotional opportunities and manage their inventory. The goal is to have the right products in stock when needed, so that customers are satisfied with prompt delivery to their doorstep.
The Data
You are provided with a sales dataset. A summary and preview are shown below.
Online Retail.csv
| Column | Description |
|---|---|
| 'InvoiceNo' | A 6-digit number uniquely assigned to each transaction |
| 'StockCode' | A 5-digit number uniquely assigned to each distinct product |
| 'Description' | The product name |
| 'Quantity' | The quantity of each product (item) per transaction |
| 'UnitPrice' | Product price per unit |
| 'CustomerID' | A 5-digit number uniquely assigned to each customer |
| 'Country' | The name of the country where each customer resides |
| 'InvoiceDate' | The day and time when each transaction was generated ("d/M/yyyy H:mm") |
| 'Year' | The year when each transaction was generated |
| 'Month' | The month when each transaction was generated |
| 'Week' | The week when each transaction was generated (1-52) |
| 'Day' | The day of the month when each transaction was generated (1-31) |
| 'DayOfWeek' | The day of the week when each transaction was generated (0 = Monday, 6 = Sunday) |
# Import required libraries
import pyspark
from pyspark.sql import SparkSession
from pyspark.ml.feature import StringIndexer, VectorAssembler
from pyspark.ml import Pipeline
from pyspark.ml.regression import RandomForestRegressor
from pyspark.sql.functions import col, dayofmonth, month, year, to_date, to_timestamp, weekofyear, dayofweek
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.ml.tuning import ParamGridBuilder, CrossValidator
# Initialize Spark session
my_spark = SparkSession.builder.appName("SalesForecast").getOrCreate()
print(my_spark)
print(pyspark.__version__)

# Importing sales data
sales_data = my_spark.read.csv(
"Online Retail.csv", header=True, inferSchema=True, sep=",")
# Convert InvoiceDate to datetime
sales_data = sales_data.withColumn("InvoiceDate", to_date(
to_timestamp(col("InvoiceDate"), "d/M/yyyy H:mm")))
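
A quick sanity check, not part of the original pipeline: to_timestamp returns null for rows that do not match the "d/M/yyyy H:mm" pattern, so a null count of zero after the conversion confirms the pattern fits the raw data.

# Rows whose raw timestamp failed to parse become null after the conversion
print(sales_data.filter(col("InvoiceDate").isNull()).count())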
sales_data.show(5)

# Aggregate the data: total quantity and average unit price per day, product, and country
group_sales_data = sales_data.groupBy(
    "Country", "StockCode", "InvoiceDate", "Year", "Month", "Day", "Week", "DayOfWeek"
).agg({"Quantity": "sum", "UnitPrice": "avg"})
group_sales_data = group_sales_data.withColumnRenamed("sum(Quantity)", "Quantity")
group_sales_data = group_sales_data.withColumnRenamed("avg(UnitPrice)", "AvgUnitPrice")
group_sales_data.show(5)

# Check the date range covered by the data
df = group_sales_data.selectExpr("max(InvoiceDate)", "min(InvoiceDate)")
df.show(5)

The dataset contains records from 12 January 2010 to 10 December 2011. Since we are forecasting the upcoming end-of-year sales (the fourth quarter of the calendar year), records up to 30 September 2011 will be used as training data, and records after 30 September 2011 will be used as test data.
# Split the data into training and test sets
# sales_train contains data up to "2011-09-30"
# sales_test contains data after "2011-09-30"
train_data = group_sales_data.filter(group_sales_data.InvoiceDate <= "2011-09-30")
test_data = group_sales_data.filter(group_sales_data.InvoiceDate > "2011-09-30")

train_data.show(5)
test_data.show(5)
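
An optional sanity check, not in the original flow: row counts on each side of the cutoff confirm the filter split the data as intended.

# Verify the split sizes (count() triggers a full scan, so this can be slow)
print(train_data.count(), test_data.count())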
# Use pandas to check whether there are any missing values
pd_sales_data = group_sales_data.toPandas()
pd_sales_data.info()
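
The pandas detour works here because the aggregated table is small. For larger data, an alternative sketch is to count nulls natively in Spark without ever calling toPandas():

from pyspark.sql import functions as F

# Count nulls per column without leaving Spark
null_counts = group_sales_data.select(
    [F.count(F.when(F.col(c).isNull(), c)).alias(c) for c in group_sales_data.columns]
)
null_counts.show()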
The next step is to select the features for training a forecasting model. Categorical columns are converted to numeric indexes because machine learning models can only process numeric data. The InvoiceDate and AvgUnitPrice columns are dropped because they are not relevant features for forecasting the sales quantity for any given week.

# Create indexers for categorical columns
country_indexer = StringIndexer(inputCol="Country", outputCol="CountryIndex").setHandleInvalid("keep")
stock_code_indexer = StringIndexer(inputCol="StockCode", outputCol="StockCodeIndex").setHandleInvalid("keep")
# Feature selection
feature_col = ['CountryIndex', 'StockCodeIndex', 'Year', 'Month', 'Day', 'Week', 'DayOfWeek']
label_col = ['Quantity']
# Combine all features into a single feature vector
assembler = VectorAssembler(inputCols=feature_col, outputCol="features")

# Initialize a random forest regressor
# maxBins must be at least the number of categories in any categorical feature;
# StockCodeIndex alone has thousands of distinct values
rf = RandomForestRegressor(featuresCol="features", labelCol="Quantity", maxBins=4000, seed=123)
# Create pipeline
pipeline = Pipeline(stages=[country_indexer, stock_code_indexer, assembler, rf])
# Create the model by fitting the pipeline to training data
model = pipeline.fit(train_data)
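
As an optional inspection step (not part of the original flow), the fitted forest is the last stage of the pipeline model, and its featureImportances vector lines up with feature_col:

# Inspect which features the forest relied on most
rf_model = model.stages[-1]
for name, importance in zip(feature_col, rf_model.featureImportances.toArray()):
    print(name, round(float(importance), 3))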
# Predict the test data
test_predictions = model.transform(test_data)
test_predictions = test_predictions.withColumn('prediction', col("prediction").cast("double"))
test_predictions.select("Quantity", "prediction").show()

# Evaluate the model predictions using Mean Absolute Error (MAE)
mae_evaluator = RegressionEvaluator(labelCol="Quantity", predictionCol="prediction", metricName="mae")
mae = mae_evaluator.evaluate(test_predictions)
print(f"Mean Absolute Error (MAE): {mae}")
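
ParamGridBuilder and CrossValidator are imported at the top but never used. A minimal sketch of how they could slot in to tune the forest, reusing the same pipeline and evaluator. The grid values below are illustrative assumptions rather than tuned settings, and note that plain k-fold cross-validation shuffles across time, which is a simplification for time-indexed data; it also retrains the pipeline many times, so it can be slow.

# Illustrative hyperparameter tuning with 3-fold cross-validation
param_grid = (ParamGridBuilder()
              .addGrid(rf.numTrees, [20, 50])
              .addGrid(rf.maxDepth, [5, 10])
              .build())

cv = CrossValidator(estimator=pipeline,
                    estimatorParamMaps=param_grid,
                    evaluator=mae_evaluator,
                    numFolds=3,
                    seed=123)
cv_model = cv.fit(train_data)
print(mae_evaluator.evaluate(cv_model.transform(test_data)))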