Walmart is the largest retailer in the United States, and like other retailers it has been expanding the e-commerce side of its business. By the end of 2022, e-commerce represented a roaring $80 billion in sales, about 13% of Walmart's total sales. One of the main factors that affects those sales is public holidays and major events, such as the Super Bowl, Labor Day, Thanksgiving, and Christmas.
In this project, you have been tasked with creating a data pipeline for the analysis of supply and demand around the holidays, along with conducting a preliminary analysis of the data. You will be working with two data sources: grocery sales and complementary data. You have been provided with the grocery_sales table in a PostgreSQL database with the following features:
grocery_sales
- "index" - unique ID of the row
- "Store_ID" - the store number
- "Date" - the week of sales
- "Weekly_Sales" - sales for the given store
Also, you have the extra_data.parquet file that contains complementary data:
extra_data.parquet
- "IsHoliday" - whether the week contains a public holiday (1 if yes, 0 if no)
- "Temperature" - temperature on the day of sale
- "Fuel_Price" - cost of fuel in the region
- "CPI" - the prevailing consumer price index
- "Unemployment" - the prevailing unemployment rate
- "MarkDown1", "MarkDown2", "MarkDown3", "MarkDown4" - number of promotional markdowns
- "Dept" - department number in each store
- "Size" - size of the store
- "Type" - type of the store (depends on the Size column)
You will need to merge these two sources and perform some data manipulations. The transformed DataFrame can then be stored as the clean_data variable, containing the following columns:
"Store_ID""Month""Dept""IsHoliday""Weekly_Sales""CPI"- "
"Unemployment""
After merging and cleaning the data, you will have to analyze Walmart's monthly sales and store the results of your analysis as the agg_data variable, which should look like:
| Month | Weekly_Sales |
|---|---|
| 1.0 | 33174.178494 |
| 2.0 | 34333.326579 |
| ... | ... |
Finally, you should save clean_data and agg_data as CSV files.
It is recommended to use pandas for this project.
-- Write your SQL query here
SELECT
    *
FROM
    grocery_sales gs

import pandas as pd
import numpy as np
import os
# Extract function is already implemented for you
def extract(store_data, extra_data):
    # Read the complementary parquet file and merge it with the store data on the shared "index" column
    extra_df = pd.read_parquet(extra_data)
    merged_df = store_data.merge(extra_df, on="index")
    return merged_df

# Call the extract() function and store the result as the "merged_df" variable
merged_df = extract(grocery_sales, "extra_data.parquet")
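Before transforming, it can help to eyeball the merge result. The checks below are an optional sketch, not part of the project's required steps.

# Optional sanity checks on the merged DataFrame
print(merged_df.shape)          # row and column counts after the merge
print(merged_df.isna().sum())   # missing values that transform() will impute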
# Create the transform() function with one parameter: "raw_data"
def transform(raw_data):
    # Drop the columns that will not be used in the analysis
    raw_data = raw_data.drop(columns=['Temperature', 'Fuel_Price', 'MarkDown1', 'MarkDown2', 'MarkDown3', 'MarkDown4', 'MarkDown5', 'Type', 'Size'])
    # Replace NaN values in numeric columns with the mean of each column
    numeric_columns = raw_data.select_dtypes(include=['number']).columns
    raw_data[numeric_columns] = raw_data[numeric_columns].fillna(raw_data[numeric_columns].mean())
    # Keep only the rows with weekly sales above 10,000
    crit = raw_data['Weekly_Sales'] > 10000
    raw_data = raw_data[crit].copy()  # .copy() avoids chained-assignment warnings below
    # Date transformations: parse the date string and extract the month
    raw_data['Date'] = pd.to_datetime(raw_data['Date'], format='%Y-%m-%d')
    raw_data['Month'] = raw_data['Date'].dt.month
    # Drop the columns that are no longer needed after these calculations
    raw_data = raw_data.drop(columns=['index', 'Date'])
    return raw_data
# Call the transform() function and pass the merged DataFrame
clean_data = transform(merged_df)
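A quick way to confirm the transform behaved as intended, offered as an optional check rather than a project step:

# The imputation should leave no missing numeric values, and the filter enforces the 10,000 floor
print(clean_data.isna().sum())             # per-column missing-value counts
print(clean_data['Weekly_Sales'].min())    # expected: above 10000
print(clean_data.columns.tolist())         # should match the column list above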
# Create the avg_weekly_sales_per_month function that takes in the cleaned data from the last step
def avg_weekly_sales_per_month(clean_data):
    # Select only the columns needed for the aggregation, then aggregate by month
    group_data = clean_data[['Month', 'Weekly_Sales']]
    group_data = (group_data
                  .groupby('Month')
                  .agg(Weekly_Sales=('Weekly_Sales', 'mean'))  # named to match the expected agg_data layout
                  .reset_index()
                  .round(2)
                  )
    return group_data
# Call the avg_weekly_sales_per_month() function and pass the cleaned DataFrame
agg_data = avg_weekly_sales_per_month(clean_data)
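Printing the first rows is an easy way to compare the result against the sample table above:

print(agg_data.head())   # expect one row per Month with the mean Weekly_Sales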
# Create the load() function that takes in the cleaned DataFrame and the aggregated one with the paths where they are going to be stored
def load(full_data, full_data_file_path, agg_data, agg_data_file_path):
    # Write both DataFrames to CSV without the pandas index column
    full_data.to_csv(full_data_file_path, index=False)
    agg_data.to_csv(agg_data_file_path, index=False)
# Call the load() function and pass the cleaned and aggregated DataFrames with their paths
load(clean_data, "clean_data.csv", agg_data, "agg_data.csv")
# Create the validation() function with one parameter: file_path - to check whether the previous function was correctly executed
def validation(file_path):
    # Check whether a file exists at the given path
    check_file = os.path.exists(file_path)
    # If it does not, raise an exception: load() did not write the file
    if not check_file:
        raise Exception(f'No file found at the path: {file_path}')
validation("clean_data.csv")
validation("agg_data.csv")