
A DVD rental company needs your help! They want to figure out how many days a customer will rent a DVD for, based on some features, and have approached you for help. They want you to try out some regression models to predict the number of days a customer will rent a DVD for. The company wants a model that yields an MSE of 3 or less on a test set. The model you build will help the company plan its inventory more efficiently.
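
For reference, MSE (mean squared error) is the average of the squared differences between predicted and actual rental lengths, so an MSE of 3 corresponds to predictions that are off by roughly 1.7 days on average (since sqrt(3) ≈ 1.73). A minimal illustration with scikit-learn's mean_squared_error, using made-up toy numbers:

from sklearn.metrics import mean_squared_error

# Toy example only: actual vs. predicted rental lengths in days (values are made up).
y_true = [4, 6, 5]
y_pred = [5, 5, 6]
print(mean_squared_error(y_true, y_pred))  # ((-1)**2 + 1**2 + (-1)**2) / 3 = 1.0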

The data they provided is in the CSV file rental_info.csv. It has the following features (a quick schema check is sketched after the list):

  • "rental_date": The date (and time) the customer rents the DVD.
  • "return_date": The date (and time) the customer returns the DVD.
  • "amount": The amount paid by the customer for renting the DVD.
  • "amount_2": The square of "amount".
  • "rental_rate": The rate at which the DVD is rented out.
  • "rental_rate_2": The square of "rental_rate".
  • "release_year": The year the movie being rented was released.
  • "length": Length of the movie being rented, in minutes.
  • "length_2": The square of "length".
  • "replacement_cost": The amount it will cost the company to replace the DVD.
  • "special_features": Any special features the DVD has, for example trailers or deleted scenes.
  • "NC-17", "PG", "PG-13", "R": These columns are dummy variables for the movie's rating. Each takes the value 1 if the movie has the rating named in the column and 0 otherwise. For your convenience, the reference dummy has already been dropped.
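
Before modelling, it is worth a quick look at the raw file to confirm the column names, the inferred types, and the format of "special_features". A minimal sketch, assuming rental_info.csv sits in the working directory:

import pandas as pd

preview = pd.read_csv("rental_info.csv")
print(preview.dtypes)                      # column names and inferred types
print(preview["special_features"].head())  # the raw strings the dummy columns will be built from
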
import pandas as pd
import numpy as np

from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Import any additional modules and start coding below

πŸ“¦ Cell 1: Import libraries and read data

import pandas as pd
import numpy as np

from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# For Lasso feature selection
# (StandardScaler is imported in case you want to standardize features first; it is not used below)
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

# OLS
from sklearn.linear_model import LinearRegression

# Random Forest
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV

# Read the rental data
df_rental = pd.read_csv("rental_info.csv")

πŸ•’ Cell 2: Compute rental duration and add dummy variables

# Compute rental duration in days
df_rental["rental_length"] = (
    pd.to_datetime(df_rental["return_date"]) 
    - pd.to_datetime(df_rental["rental_date"])
)
df_rental["rental_length_days"] = df_rental["rental_length"].dt.days

# Create dummy columns
df_rental["deleted_scenes"] = np.where(
    df_rental["special_features"].str.contains("Deleted Scenes", na=False), 1, 0
)
df_rental["behind_the_scenes"] = np.where(
    df_rental["special_features"].str.contains("Behind the Scenes", na=False), 1, 0
)
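
Note that .dt.days keeps only the whole-day component of the timedelta and truncates any remaining hours. If fractional days were ever needed, a hedged alternative (not used in the cells below; the column name is illustrative only) would be:

# Hypothetical alternative: express the rental length in fractional days.
df_rental["rental_length_days_exact"] = (
    df_rental["rental_length"].dt.total_seconds() / 86400
)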

🧹 Cell 3: Prepare features (X) and target (y); train/test split

# Drop the target columns, the raw date columns used to derive them,
# and the text column already encoded as dummies
cols_to_drop = [
    "special_features", "rental_length", 
    "rental_length_days", "rental_date", "return_date"
]
X = df_rental.drop(cols_to_drop, axis=1)
y = df_rental["rental_length_days"]  # Target

# 80/20 split
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=9
)
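
An optional sanity check, confirming the 80/20 split sizes and that no target-derived columns slipped into the feature matrix:

# Optional check: split shapes and no target leakage in X.
print(X_train.shape, X_test.shape)
assert "rental_length_days" not in X_train.columns
assert "return_date" not in X_train.columns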

πŸ” Cell 4: Lasso feature selection + OLS modeling

# Train Lasso
lasso = Lasso(alpha=0.3, random_state=9)
lasso.fit(X_train, y_train)
lasso_coef = lasso.coef_

# Keep only features with positive coefficients
mask = lasso_coef > 0
X_lasso_train = X_train.iloc[:, mask]
X_lasso_test = X_test.iloc[:, mask]

# Train OLS on selected features
ols = LinearRegression()
ols.fit(X_lasso_train, y_train)
y_pred_ols = ols.predict(X_lasso_test)
mse_lin_reg_lasso = mean_squared_error(y_test, y_pred_ols)
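
To see what the Lasso filter actually kept, the coefficients can be paired with the column names. This is an optional inspection step, not required for the final model:

# Optional: list the surviving features and report the OLS test error.
selected_features = X_train.columns[mask]
print("Lasso-selected features:", list(selected_features))
print(f"OLS on Lasso-selected features, test MSE: {mse_lin_reg_lasso:.3f}")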

🌲 Cell 5: Random forest + hyperparameter search

# Hyperparameter grid
param_dist = {
    'n_estimators': np.arange(1, 101),
    'max_depth': np.arange(1, 11)
}

# Randomized search
rf = RandomForestRegressor(random_state=9)
rand_search = RandomizedSearchCV(
    rf, param_distributions=param_dist, cv=5, random_state=9
)
rand_search.fit(X_train, y_train)
hyper_params = rand_search.best_params_
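
It can be useful to glance at what the randomized search settled on before retraining. With no scoring argument, RandomizedSearchCV falls back to the regressor's default score, which is R² rather than MSE:

# Optional: winning hyperparameters and the corresponding cross-validated R^2.
print("Best hyperparameters:", hyper_params)
print(f"Best CV R^2: {rand_search.best_score_:.3f}")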

πŸ† Cell 6: Final model training, evaluation, and saving best model

# Train final model
best_model = RandomForestRegressor(
    n_estimators=hyper_params["n_estimators"],
    max_depth=hyper_params["max_depth"],
    random_state=9
)
best_model.fit(X_train, y_train)

# Predict and evaluate
rf_pred = best_model.predict(X_test)
best_mse = mean_squared_error(y_test, rf_pred)

print(f"MSE (OLS after Lasso): {mse_lin_reg_lasso:.2f}")
print(f"MSE (Random Forest): {best_mse:.2f}")

# Sanity check
assert best_mse <= 3, "Best model MSE must be 3 or less."
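
The cell title mentions saving the best model, so here is a minimal sketch using joblib (installed as a scikit-learn dependency; the filename "best_model.joblib" is an arbitrary choice):

# Persist the tuned random forest for later use in inventory planning.
import joblib
joblib.dump(best_model, "best_model.joblib")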

Conclusion

Two regression models were compared on the held-out test set: an OLS model fit on Lasso-selected features and a random forest tuned with randomized search. The final assertion checks that the tuned random forest meets the company's requirement of a test MSE of 3 or less; if it passes, the saved random forest is the model to hand over for inventory planning.