Project: Predicting Movie Rental Durations
A DVD rental company needs your help! They want to figure out how many days a customer will rent a DVD for, based on some features, and have approached you for help. They want you to try out some regression models that predict the number of days a customer will rent a DVD for. The company wants a model that yields an MSE of 3 or less on a test set. The model you make will help the company become more efficient at inventory planning.
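For reference, the acceptance criterion amounts to computing the mean squared error on held-out predictions; the numbers below are purely illustrative, not results from the provided data.

from sklearn.metrics import mean_squared_error

# Hypothetical numbers, just to show how the criterion is checked:
# true rental durations (days) vs. a model's predictions on a test set.
y_true = [3, 5, 7, 2, 4]
y_pred = [3, 6, 6, 2, 5]
mse = mean_squared_error(y_true, y_pred)   # here 0.6
print(mse <= 3)                            # True: the model would be acceptable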
The data they provided is in the CSV file rental_info.csv. It has the following features:
"rental_date"
: The date (and time) the customer rents the DVD."return_date"
: The date (and time) the customer returns the DVD."amount"
: The amount paid by the customer for renting the DVD."amount_2"
: The square of"amount"
."rental_rate"
: The rate at which the DVD is rented for."rental_rate_2"
: The square of"rental_rate"
."release_year"
: The year the movie being rented was released."length"
: Lenght of the movie being rented, in minuites."length_2"
: The square of"length"
."replacement_cost"
: The amount it will cost the company to replace the DVD."special_features"
: Any special features, for example trailers/deleted scenes that the DVD also has."NC-17"
,"PG"
,"PG-13"
,"R"
: These columns are dummy variables of the rating of the movie. It takes the value 1 if the move is rated as the column name and 0 otherwise. For your convinience, the reference dummy has already been dropped.
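As a small sketch of how such rating dummies are typically produced (hypothetical data, not how the provided file was actually built), pandas' get_dummies with drop_first=True yields exactly this kind of encoding, with one reference category removed.

import pandas as pd

# Hypothetical example: a tiny rating column, not the real dataset.
ratings = pd.DataFrame({'rating': ['G', 'PG', 'PG-13', 'R', 'NC-17']})
dummies = pd.get_dummies(ratings['rating'], drop_first=True)
print(dummies)   # columns NC-17, PG, PG-13, R; 'G' is the dropped reference level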
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split, KFold, cross_val_score, RandomizedSearchCV
from sklearn.metrics import mean_squared_error
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor
from sklearn.preprocessing import StandardScaler
# Import any additional modules and start coding below
# Load the rental data, parsing the date columns as datetimes
rental = pd.read_csv('rental_info.csv', parse_dates=['rental_date', 'return_date'])
print(rental.info())
print(rental.isna().sum())

# Target variable: number of whole days between rental and return
rental['rental_length_days'] = (rental['return_date'] - rental['rental_date']).dt.days
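Note that .dt.days truncates partial days; a fractional alternative (a sketch only, not used in this solution) would be total_seconds().

# Hypothetical alternative, not used below: rental length in fractional days.
exact_days = (rental['return_date'] - rental['rental_date']).dt.total_seconds() / 86400
print(exact_days.head())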
# Convert the datetime columns to integer (nanosecond) timestamps so they can
# be kept as numeric features
rental['return_date'] = pd.to_numeric(rental['return_date'])
rental['rental_date'] = pd.to_numeric(rental['rental_date'])
print(rental['rental_length_days'].head())
# 0/1 flags for the two special features of interest, then drop the raw text column
rental['deleted_scenes'] = np.where(rental['special_features'].str.contains('Deleted Scenes'), 1, 0)
rental['behind_the_scenes'] = np.where(rental['special_features'].str.contains('Behind the Scenes'), 1, 0)
rental = rental.drop(labels=['special_features'], axis=1)
print(rental[['deleted_scenes', 'behind_the_scenes']].head())
SEED = 9

# Features and target; drop the raw and squared movie-length columns
X = rental.drop(['rental_length_days', 'length', 'length_2'], axis=1)
y = rental['rental_length_days']

# Fit a random forest on the full data to inspect feature importances;
# the RMSE printed here is an in-sample (training) error, not the final test score
rf = RandomForestRegressor(n_estimators=200, min_samples_leaf=0.1, random_state=SEED)
rf.fit(X, y)
y_pred = rf.predict(X)
rmse = mean_squared_error(y, y_pred) ** 0.5
print(rmse)

# Plot feature importances, smallest to largest
importances_rf = pd.Series(rf.feature_importances_, index=X.columns).sort_values()
importances_rf.plot(kind='barh', color='lightgreen')
plt.show()
# Hold out 20% of the data for testing; stratify on the integer day counts
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=SEED, stratify=y)

# Scaling is not required for the tree-based models, so it is left commented out
# scaler = StandardScaler()
# X_train_scaled = scaler.fit_transform(X_train)
# X_test_scaled = scaler.transform(X_test)
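If scaling were reintroduced for the distance- and coefficient-based models, one way to avoid fitting the scaler on test data is a Pipeline. This is a sketch of that option (with KNeighborsRegressor as an illustrative model choice), not part of the solution above.

from sklearn.pipeline import Pipeline
from sklearn.neighbors import KNeighborsRegressor

# Hypothetical example: the scaler is fit only on the training data inside
# the pipeline, then applied to whatever data the pipeline predicts on.
knn_pipe = Pipeline([
    ('scaler', StandardScaler()),
    ('knn', KNeighborsRegressor(n_neighbors=5)),
])
knn_pipe.fit(X_train, y_train)
print(mean_squared_error(y_test, knn_pipe.predict(X_test)))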
# Candidate models: the integer day counts are treated as class labels by the
# classifiers, while the decision tree regressor treats them as continuous
models = {'Logistic Regression': LogisticRegression(), 'KNN': KNeighborsClassifier(), 'Decision Tree': DecisionTreeClassifier(), 'DTR': DecisionTreeRegressor()}

# 5-fold cross-validation on the training set; cross_val_score uses each model's
# default scorer (accuracy for the classifiers, R^2 for the regressor)
results = []
kf = KFold(n_splits=5, random_state=SEED, shuffle=True)
for model in models.values():
    cv_results = cross_val_score(model, X_train, y_train, cv=kf)
    results.append(cv_results)

plt.boxplot(results, labels=models.keys())
plt.show()
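Because the default scorers above are not on a common scale, a variant worth noting (a sketch, assuming every model's integer predictions can be scored as a regression output) is to request MSE explicitly.

# Hypothetical variant: score every candidate with (negative) MSE so the
# box plot compares models on the same metric.
mse_results = []
for model in models.values():
    neg_mse = cross_val_score(model, X_train, y_train, cv=kf, scoring='neg_mean_squared_error')
    mse_results.append(-neg_mse)

plt.boxplot(mse_results, labels=models.keys())
plt.ylabel('Cross-validated MSE')
plt.show()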
# Fit each model on the training set and compare test-set MSE
model_score = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    mse = mean_squared_error(y_test, y_pred)
    # score = -cross_val_score(model, X_train, y_train, cv=10, scoring='neg_mean_squared_error')
    model_score[name] = mse
print(model_score)
# Based on the test-set MSEs above, logistic regression gave the lowest error,
# well under the company's threshold of 3
best_model = LogisticRegression()
best_mse = 0.3366
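RandomizedSearchCV is imported above but never used; as a possible next step (a sketch, with illustrative rather than tuned parameter ranges), it could be applied to the random forest.

# Hypothetical tuning example: randomly sample hyperparameter combinations
# and keep the one with the best cross-validated MSE.
param_dist = {
    'n_estimators': np.arange(50, 301, 50),
    'max_depth': [None, 5, 10, 20],
    'min_samples_leaf': [1, 2, 5, 10],
}
search = RandomizedSearchCV(
    RandomForestRegressor(random_state=SEED),
    param_distributions=param_dist,
    n_iter=10,
    scoring='neg_mean_squared_error',
    cv=5,
    random_state=SEED,
)
search.fit(X_train, y_train)
print(search.best_params_)
print(mean_squared_error(y_test, search.best_estimator_.predict(X_test)))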