A DVD rental company needs your help! They want to predict how many days a customer will rent a DVD for, based on a set of features, and they have approached you for help. They want you to try out some regression models that predict the number of rental days. The company wants a model that yields an MSE of 3 or less on a test set; such a model will help them plan their DVD inventory more efficiently.
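For reference, the MSE criterion is simply the average of the squared differences between actual and predicted rental lengths; a minimal illustration with made-up numbers (not from the company's data):

import numpy as np

# Toy example: actual vs. predicted rental lengths in days (made-up values)
actual = np.array([3, 5, 2, 7])
predicted = np.array([2, 5, 4, 6])
mse = np.mean((actual - predicted) ** 2)  # (1 + 0 + 4 + 1) / 4 = 1.5
print(mse)  # 1.5 is below the target of 3, so a model this accurate would be acceptable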

The data they have provided is in the CSV file rental_info.csv, which has the following features (a quick sanity check of the raw file is sketched after the list):

  • "rental_date": The date (and time) the customer rents the DVD.
  • "return_date": The date (and time) the customer returns the DVD.
  • "amount": The amount paid by the customer for renting the DVD.
  • "amount_2": The square of "amount".
  • "rental_rate": The rate at which the DVD is rented for.
  • "rental_rate_2": The square of "rental_rate".
  • "release_year": The year the movie being rented was released.
  • "length": Lenght of the movie being rented, in minuites.
  • "length_2": The square of "length".
  • "replacement_cost": The amount it will cost the company to replace the DVD.
  • "special_features": Any special features, for example trailers/deleted scenes that the DVD also has.
  • "NC-17", "PG", "PG-13", "R": These columns are dummy variables of the rating of the movie. It takes the value 1 if the move is rated as the column name and 0 otherwise. For your convinience, the reference dummy has already been dropped.

Data Preparation

import pandas as pd
import numpy as np

from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Import any additional modules and start coding below
rental = pd.read_csv("rental_info.csv")
# Converting the date columns to datetime
rental["return_date"] = pd.to_datetime(rental["return_date"])
rental["rental_date"] = pd.to_datetime(rental["rental_date"])

# Creating the target column "rental_length_date": rental length in whole days
rental["rental_length_date"] = (rental["return_date"] - rental["rental_date"]).dt.days

#Displaying the new column
rental["rental_length_date"].head()
#Creating dummy variable for deleted scenes
rental["deleted_scenes"] = np.where(rental["special_features"].str.contains("Deleted Scenes"), 1, 0)

#Creating dummy variable for behind the scenes 
rental["behind_the_scenes"] = np.where(rental["special_features"].str.contains("Behind the Scenes"), 1, 0)

#Splitting into feature and target sets 
cols_to_drop = ["special_features", "rental_date", "return_date", "rental_length_date"]
X = rental.drop(cols_to_drop, axis=1)
y = rental["rental_length_date"]

random_state = 9

#Splitting the data into training and test data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = random_state)
target_MSE = 3
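Before fitting anything, a quick optional look at the split can catch obvious problems; a small sketch, assuming the cells above have run:

# Optional sanity check on the train/test split
print(X_train.shape, X_test.shape)
print(y_train.describe())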

Experimenting Using Various ML Models

from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor

lin_reg = LinearRegression()
dt = DecisionTreeRegressor()
rf = RandomForestRegressor()
#Performing Feature Selection
from sklearn.linear_model import Lasso

SEED = 9

lasso = Lasso(alpha = 0.3, random_state = SEED)

lasso.fit(X_train, y_train)
lasso_coef = lasso.coef_

# Keeping only the features with positive Lasso coefficients
X_lasso_train, X_lasso_test = X_train.iloc[:, lasso_coef > 0], X_test.iloc[:, lasso_coef > 0]
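To see which features the Lasso mask actually keeps, the same boolean mask can be applied to the column names; a short optional check, assuming the cells above have run:

# Optional: list the features retained by the Lasso mask
print(X_train.columns[lasso_coef > 0].tolist())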

#Performing Linear Regression
lin_reg = lin_reg.fit(X_lasso_train, y_train)
y_test_pred = lin_reg.predict(X_lasso_test)

mse_lin_reg_lasso = mean_squared_error(y_test, y_test_pred)
# Performing Random Forest regression
param_dist = {
    "n_estimators" : np.arange(1,101,1),
    "max_depth" : np.arange(1,11,1)
}

# Hyperparameter tuning with randomized search
# (defaults: 10 sampled candidates, 5-fold cross-validation)
from sklearn.model_selection import RandomizedSearchCV
rand_search = RandomizedSearchCV(
    rf, param_distributions = param_dist, random_state = SEED
)

rand_search.fit(X_train, y_train)

#Finding the best parameter
hyper_params = rand_search.best_params_

# Refitting a random forest with the best hyperparameters
rf = RandomForestRegressor(n_estimators = hyper_params["n_estimators"],
                           max_depth = hyper_params["max_depth"],
                           random_state = SEED)

rf.fit(X_train, y_train)
rf_pred = rf.predict(X_test)
mse_random_forest = mean_squared_error(y_test, rf_pred)
print(f"Linear Regression: {mse_lin_reg_lasso**1/2}")
print(f"Random forst: {mse_random_forest**1/2}")
# The random forest gives the lower MSE, so keep it as the best model
best_model = rf
best_mse = mse_random_forest
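Finally, the result can be checked against the company's requirement, stored earlier as target_MSE; a small closing check, assuming the cells above have run:

# Confirm the chosen model meets the company's requirement of an MSE of 3 or less
print(f"Best MSE: {best_mse:.3f} (target: {target_MSE})")
print("Meets target:", best_mse <= target_MSE)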