
Walmart is the largest retailer in the United States, and like its competitors it has been expanding the e-commerce side of its business. By the end of 2022, e-commerce represented a roaring $80 billion in sales, 13% of Walmart's total. One of the main factors that affects sales is public holidays, such as the Super Bowl, Labor Day, Thanksgiving, and Christmas.

In this project, you have been tasked with creating a data pipeline for the analysis of supply and demand around the holidays, along with conducting a preliminary analysis of the data. You will be working with two data sources: grocery sales and complementary data. You have been provided with the grocery_sales table in a PostgreSQL database with the following features (a loading sketch follows the list):

grocery_sales

  • "index" - unique ID of the row
  • "Store_ID" - the store number
  • "Date" - the week of sales
  • "Weekly_Sales" - sales for the given store

You also have the extra_data.parquet file, which contains complementary data (a quick inspection sketch follows the list):

extra_data.parquet

  • "IsHoliday" - Whether the week contains a public holiday - 1 if yes, 0 if no.
  • "Temperature" - Temperature on the day of sale
  • "Fuel_Price" - Cost of fuel in the region
  • "CPI" – Prevailing consumer price index
  • "Unemployment" - The prevailing unemployment rate
  • "MarkDown1", "MarkDown2", "MarkDown3", "MarkDown4" - number of promotional markdowns
  • "Dept" - Department Number in each store
  • "Size" - size of the store
  • "Type" - type of the store (depends on Size column)

You will need to merge those files and perform some data manipulations. The transformed DataFrame can then be stored as the clean_data variable containing the following columns:

  • "Store_ID"
  • "Month"
  • "Dept"
  • "IsHoliday"
  • "Weekly_Sales"
  • "CPI"
  • ""Unemployment""

After merging and cleaning the data, you will have to analyze Walmart's monthly sales and store the results of your analysis as the agg_data variable, which should look like this:

  Month    Weekly_Sales
  1.0      33174.178494
  2.0      34333.326579
  ...      ...

Finally, you should save clean_data and agg_data as CSV files.

It is recommended to use pandas for this project.

The result of the query below is available as the grocery_sales DataFrame:
-- Write your SQL query here
SELECT * FROM grocery_sales
import pandas as pd
import os

# Extract function is already implemented for you 
def extract(store_data, extra_data):
    try:
        extra_df = pd.read_parquet(extra_data)
        merged_df = store_data.merge(extra_df, on="index")
        print("Merge complete.")
        return merged_df
    except Exception as e:
        print(e)

# Call the extract() function and store it as the "merged_df" variable
merged_df = extract(grocery_sales, "extra_data.parquet")
# Looking at the data
print(merged_df.head())
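
A quick sanity check on the merge result can catch a bad join key early; this is an optional addition, not one of the project steps:

# Optional: confirm the join key stayed unique and see which columns have gaps.
print(merged_df.shape)
print(merged_df["index"].is_unique)
print(merged_df.isna().sum())
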
# Create the transform() function with one parameter: "raw_data"
def transform(raw_data):
    try:
        # Fill missing numeric cells with their column mean values
        raw_data.fillna(
            {
                'Weekly_Sales': raw_data['Weekly_Sales'].mean(),
                'CPI': raw_data['CPI'].mean(),
                'Unemployment': raw_data['Unemployment'].mean(),
            },
            inplace=True
        )
        print("Filled missing values")

        # Convert Date to datetime format
        raw_data['Date'] = pd.to_datetime(raw_data['Date'], format='%Y-%m-%d')
        print("Converted Date column to datetime type")
        # Create Month column
        raw_data["Month"] = raw_data['Date'].dt.month
        print("Month column created.")

        # Drop rows with weekly sales $10,000 or less
        raw_data = raw_data.loc[raw_data['Weekly_Sales'] > 10000, :]
        print("Dropped rows with weekly sales < 10000")
        
        # Filtering unnecessary columns
        raw_data = raw_data[["Store_ID", "Month", "Dept", 
                 "IsHoliday", "Weekly_Sales", "CPI", "Unemployment"]]
        print("Filtered unnecessary columns")

        print("Transform complete.")
        print(raw_data.head())
        return raw_data
        
    except Exception as e:
        print(e)
# Call the transform() function and pass the merged DataFrame
clean_data = transform(merged_df)
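
Before aggregating, it is worth confirming that clean_data matches the expected schema; an optional assertion-style check:

# Optional: verify the required columns are present and the filled columns have no gaps.
expected_cols = ["Store_ID", "Month", "Dept", "IsHoliday", "Weekly_Sales", "CPI", "Unemployment"]
assert list(clean_data.columns) == expected_cols
assert clean_data[["Weekly_Sales", "CPI", "Unemployment"]].notna().all().all()
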
# Create the avg_weekly_sales_per_month function that takes in the cleaned data from the last step
def avg_weekly_sales_per_month(clean_data):
    try:
        # Slicing columns
        clean_data = clean_data[["Month", "Weekly_Sales"]]
        print("done column slice")
        
        # Grouping by month, finding the avg weekly sales, resetting the index, and rounding to 2 decimals
        clean_data = clean_data.groupby("Month").agg({'Weekly_Sales': 'mean'}).reset_index().round(2)
        print("done group by month and finding avg")

        # Renaming weekly_sales column
        clean_data.rename(columns={'Weekly_Sales': 'Avg_Sales'}, inplace=True)
        
        print(clean_data.head())

        print("Avg monthly sales complete.")
        return clean_data
    except Exception as e:
        print(e)
# Call the avg_weekly_sales_per_month() function and pass the cleaned DataFrame
agg_data = avg_weekly_sales_per_month(clean_data)
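
To eyeball how average sales move across holiday-heavy months, the aggregated table can be plotted; a minimal sketch with matplotlib, assuming it is available in the environment:

import matplotlib.pyplot as plt

# Optional: bar chart of average weekly sales per month.
agg_data.plot(x="Month", y="Avg_Sales", kind="bar", legend=False)
plt.ylabel("Avg weekly sales ($)")
plt.title("Average weekly sales per month")
plt.show()
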
# Create the load() function that takes in the cleaned DataFrame and the aggregated one with the paths where they are going to be stored
def load(full_data, full_data_file_path, agg_data, agg_data_file_path):
    # Write your code here
    try:
        # Load full data to csv
        full_data.to_csv(full_data_file_path, index=False)
        print(f'Full data loaded to: {full_data_file_path}')

        # Load aggregated data to CSV
        agg_data.to_csv(agg_data_file_path, index=False)
        print(f'Agg. data loaded to: {agg_data_file_path}')
        
    except Exception as e:
        print(e)
# Call the load() function and pass the cleaned and aggregated DataFrames with their paths  
load(clean_data, 'clean_data.csv', agg_data, 'agg_data.csv')
# Create the validation() function with one parameter: file_path - to check whether the previous function was correctly executed
def validation(file_path):
    # Write your code here
    try:
        if os.path.exists(file_path):
            print(f'File Path: {file_path} exists in the current directory!')
        else:
            print(f'File Path: {file_path} does not exist!')
    except Exception as e:
        print(e)
# Call the validation() function on the cleaned DataFrame path first, then the aggregated DataFrame path
validation('clean_data.csv')
validation('agg_data.csv')
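
Beyond checking that the files exist, you could read one back to confirm it round-trips cleanly; an optional final check:

# Optional: reload the aggregated CSV and compare shapes with the in-memory frame.
reloaded = pd.read_csv('agg_data.csv')
print(reloaded.shape == agg_data.shape)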