Project: Building a Retail Data Pipeline
    Walmart is the largest retailer in the United States and, like many of its competitors, it has been expanding the e-commerce side of its business. By the end of 2022, e-commerce represented a roaring $80 billion in sales, around 13% of Walmart's total sales. One of the main factors affecting their sales is public holidays, such as the Super Bowl, Labor Day, Thanksgiving, and Christmas.

    In this project, you have been tasked with creating a data pipeline for the analysis of demand and supply around the holidays, and with running a preliminary analysis of the data. You will be working with two data sources: grocery sales and complementary data. You have been provided with the grocery_sales table in a PostgreSQL database and the extra_data.parquet file that contains complementary data.

    Here is information about all the available columns in the two data files:

    • "index" - unique ID of the row
    • "Store_ID" - the store number
    • "Date" - the week of sales
    • "Weekly_Sales" - sales for the given store
    • "IsHoliday" - Whether the week contains a public holiday - 1 if yes, 0 if no.
    • "Temperature" - Temperature on the day of sale
    • "Fuel_Price" - Cost of fuel in the region
    • "CPI" – Prevailing consumer price index
    • "Unemployment" - The prevailing unemployment rate
    • "MarkDown1", "MarkDown2", "MarkDown3", "MarkDown4" - number of promotional markdowns
    • "Dept" - Department Number in each store
    • "Size" - size of the store
    • "Type" - type of the store (depends on Size column)

    You will need to merge those files for further data manipulation and store the merged data in the clean_data.csv file, which should contain the following columns:

    • "Store_ID"
    • "Month"
    • "Dept"
    • "IsHoliday"
    • "Weekly_Sales"
    • "CPI"
    • ""Unemployment""

    After merging and cleaning the data, you will have to analyze the monthly sales of Walmart and store the results of your analysis in the agg_sales.csv file, which should look like:

    Month    Weekly_Sales
    1.0      33174.178494
    2.0      34333.326579
    ...      ...

    It is recommended to use pandas for this project.
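    As a minimal sketch (with toy numbers, not the real data) of the kind of pandas group-by that produces a table of this shape:

    import pandas as pd

    toy = pd.DataFrame({
        "Month": [1, 1, 2],
        "Weekly_Sales": [30000.0, 36000.0, 34000.0],
    })
    # Average weekly sales per month; Month becomes the index
    print(toy.groupby("Month").agg({"Weekly_Sales": "mean"}))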

    -- Load the grocery sales table; the result is available as the store_df DataFrame
    SELECT * FROM public.grocery_sales;
    -- Load the complementary data; the result is available as the parquet_df DataFrame
    SELECT * FROM 'extra_data.parquet';
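    Outside this notebook's database integration, a roughly equivalent way to obtain the same two DataFrames with plain pandas might look like the sketch below. The connection string is hypothetical; store_df and parquet_df mirror the variables the cells above expose.

    import pandas as pd
    from sqlalchemy import create_engine

    # Hypothetical connection string - replace with your own credentials
    engine = create_engine("postgresql://user:password@localhost:5432/walmart")
    store_df = pd.read_sql("SELECT * FROM public.grocery_sales", engine)
    parquet_df = pd.read_parquet("extra_data.parquet")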
    # Import required packages (numpy, logging, and datetime were imported
    # originally but are unused in this pipeline)
    import pandas as pd
    import os
    
    
    
    # Start coding here
    def extract():
        '''
        Combine the two data sources: the grocery sales queried from the
        database (store_df) and the complementary data from the parquet
        file (parquet_df).
        Returns a pandas DataFrame with the merged data.
        '''
        # Keep only the complementary columns we need, plus the join key
        p_cols = ['index', 'IsHoliday', 'CPI', 'Unemployment']
        return pd.merge(store_df, parquet_df[p_cols], on='index', how='inner')
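    # Optional sanity check (illustrative, not part of the required pipeline):
    # an inner join on 'index' keeps only rows present in both sources, so the
    # merged frame can never be longer than either input.
    # print(len(store_df), len(parquet_df), len(extract()))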
    
    def transform(merged_df):
        '''
        Take the merged DataFrame, extract the month number from the Date
        column, and keep only the requested columns, filtering out rows
        with Weekly_Sales of $10,000 or less.
        '''
        # Extract the month number from the weekly date
        merged_df['Month'] = merged_df['Date'].dt.month.astype('Int64')
        # Keep only the rows with weekly sales above $10,000
        condition = merged_df['Weekly_Sales'] > 10000
        columns = ["Store_ID", "Month", "Dept", "IsHoliday",
                   "Weekly_Sales", "CPI", "Unemployment"]
        merged_df = merged_df[condition]
        return merged_df[columns]
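    # Note (assumption): .dt.month requires the Date column to already have a
    # datetime dtype. If it arrives as plain strings, convert it first, e.g.:
    # merged_df['Date'] = pd.to_datetime(merged_df['Date'])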
    
    def avg_monthly_sales(clean_data_df):
        '''
        Group the clean data by the Month column and aggregate
        Weekly_Sales with the mean.
        Returns a new DataFrame with the average sales per month.
        '''
        return clean_data_df.groupby(by='Month').agg(
            {'Weekly_Sales': 'mean'})
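    # An equivalent form (assuming either output shape is acceptable) that
    # keeps Month as a regular column instead of the index:
    # clean_data_df.groupby('Month', as_index=False)['Weekly_Sales'].mean()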
    
    
    def load(clean_data, clean_data_path, agg_sales, agg_sales_path):
        '''
        Write both DataFrames to their destination CSV files.
        Takes 4 arguments: the clean DataFrame and its file path, then the
        aggregated DataFrame and its file path.
        '''
        # Save the clean data, dropping the pandas index so that only the
        # requested columns end up in the file
        clean_data.to_csv(clean_data_path, index=False)
        # Save the aggregated sales; here the index is the Month column,
        # so it is kept
        agg_sales.to_csv(agg_sales_path)
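    # Illustrative round-trip check: read one of the files back to confirm
    # it was written as expected.
    # print(pd.read_csv('clean_data.csv').head())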
    
    def validation():
        '''
        Check that both output files were created in the current
        working directory.
        Returns True if both files exist, False otherwise.
        '''
        # Define the file names
        file1 = "clean_data.csv"
        file2 = "agg_sales.csv"

        # Get the current working directory
        current_directory = os.getcwd()

        # Check if both files exist in the current directory
        return os.path.isfile(os.path.join(current_directory, file1)) and os.path.isfile(os.path.join(current_directory, file2))
    
    merged_df = extract()
    
    clean_data_df = transform(merged_df)
    
    agg_sales = avg_monthly_sales(clean_data_df)
    
    
    load(clean_data_df, 'clean_data.csv', agg_sales, 'agg_sales.csv')
    
    print(validation())
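    The chart below was produced with the notebook's built-in chart cell. A rough pandas/matplotlib equivalent (illustrative only, assuming agg_sales as computed above) might be:

    import matplotlib.pyplot as plt

    # Plot average weekly sales per month as a bar chart
    agg_sales.plot(kind='bar', y='Weekly_Sales', legend=False)
    plt.xlabel('Month')
    plt.ylabel('Weekly_Sales')
    plt.title('Walmart sales report by month')
    plt.show()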
    [Bar chart: Walmart sales report by month - X-axis: Month, Y-axis: Weekly_Sales]