Commercial banks receive many applications for credit cards. A large share of them are rejected, for reasons such as high loan balances, low income, or too many inquiries on the applicant's credit report. Manually reviewing these applications is mundane, error-prone, and time-consuming (and time is money!). Luckily, this task can be automated with machine learning, and virtually every commercial bank now does so. In this notebook, we will build an automatic credit card approval predictor using machine learning techniques, just as real banks do.
You have been provided with a small subset of the credit card applications a bank receives, in the file cc_approvals.data. You will load it as a pandas DataFrame and start from there.
Import libraries and load the credit card approval dataset.
# Data handling
import pandas as pd
import numpy as np

# Modeling, preprocessing, and evaluation
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

# The file ships without a header row, so pandas assigns integer column names
cc_apps = pd.read_csv("cc_approvals.data", header=None)
cc_apps.head()
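Before preprocessing, it is worth confirming how missing entries are represented. As an optional check (a sketch assuming the cc_apps DataFrame loaded above, and that this file follows the UCI convention of marking missing values with "?"), we can count the placeholders per column:
# Count "?" placeholders per column; non-zero counts mark columns with missing entries
print((cc_apps == "?").sum())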
Preprocess the dataset by removing unnecessary columns and splitting it into training and testing sets.
# Drop columns 11 and 13 (driver's license and zip code), which add little predictive value
cc_apps.drop([11, 13], axis=1, inplace=True)
# Hold out 33% of the applications as a test set
cc_apps_train, cc_apps_test = train_test_split(cc_apps, test_size=0.33, random_state=42)
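Splitting before any imputation or encoding keeps test-set information out of the preprocessing statistics computed later. A quick sanity check on the split, using the variables defined above:
# Roughly 67% of the rows should land in the training set and 33% in the test set
print(cc_apps_train.shape, cc_apps_test.shape)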
Impute missing values in numeric columns with mean values and in categorical columns with the most frequent values.
# The dataset encodes missing entries as "?"; convert them to NaN so they can be imputed
cc_apps_train = cc_apps_train.replace("?", np.nan)
cc_apps_test = cc_apps_test.replace("?", np.nan)

cc_apps_train_imputed = cc_apps_train.copy()
cc_apps_test_imputed = cc_apps_test.copy()

# Impute numeric columns with the training-set mean (using train statistics avoids test-set leakage)
numeric_columns = cc_apps_train.select_dtypes(include=np.number).columns
for column in numeric_columns:
    mean_value = cc_apps_train[column].astype(float).mean()
    cc_apps_train_imputed[column] = cc_apps_train_imputed[column].fillna(mean_value)
    cc_apps_test_imputed[column] = cc_apps_test_imputed[column].fillna(mean_value)

# Impute categorical columns with the most frequent value in the training set
object_columns = cc_apps_train_imputed.select_dtypes(include='object').columns
for column in object_columns:
    most_frequent_value = cc_apps_train_imputed[column].value_counts().idxmax()
    cc_apps_train_imputed[column] = cc_apps_train_imputed[column].fillna(most_frequent_value)
    cc_apps_test_imputed[column] = cc_apps_test_imputed[column].fillna(most_frequent_value)
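As a sanity check that imputation left no gaps, we can count the remaining missing values in both frames (a small optional snippet using the imputed frames above):
# Both totals should be zero after imputation
print(cc_apps_train_imputed.isna().sum().sum())
print(cc_apps_test_imputed.isna().sum().sum())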
Perform one-hot encoding on categorical variables.
# One-hot encode categorical variables; drop_first=True keeps a single dummy per category,
# so the "+"/"-" target in the last column becomes one boolean column instead of two
# complementary columns (which would leak the label into the features sliced off below)
cc_apps_train_cat_encoding = pd.get_dummies(cc_apps_train_imputed, drop_first=True)
cc_apps_test_cat_encoding = pd.get_dummies(cc_apps_test_imputed, drop_first=True)
# Align the test columns with the training columns, filling categories unseen in test with 0
cc_apps_test_cat_encoding = cc_apps_test_cat_encoding.reindex(columns=cc_apps_train_cat_encoding.columns, fill_value=0)
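A brief optional check that encoding and reindexing left the train and test frames with identical columns in identical order:
# Must print True; the model expects the same feature layout in both frames
print(cc_apps_train_cat_encoding.columns.equals(cc_apps_test_cat_encoding.columns))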
Segregate features and labels, and then perform feature rescaling.
# Separate the features (all but the last column) from the encoded label (the last column)
X_train, y_train = cc_apps_train_cat_encoding.iloc[:, :-1].to_numpy(), cc_apps_train_cat_encoding.iloc[:, -1]
X_test, y_test = cc_apps_test_cat_encoding.iloc[:, :-1].to_numpy(), cc_apps_test_cat_encoding.iloc[:, -1]

# Rescale all features to the [0, 1] range, fitting the scaler on the training data only
scaler = MinMaxScaler(feature_range=(0, 1))
rescaledX_train = scaler.fit_transform(X_train)
rescaledX_test = scaler.transform(X_test)
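To see the effect of min-max scaling, we can inspect the range of the transformed training features (an optional check on the arrays above):
# Every feature now lies within [0, 1] on the training data
print(rescaledX_train.min(), rescaledX_train.max())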
Train a logistic regression model and evaluate its performance using a confusion matrix.
# Train a logistic regression model on the rescaled training data
logreg = LogisticRegression()
logreg.fit(rescaledX_train, y_train)
# Evaluate on the test set; matrix rows are true classes, columns are predicted classes
y_pred = logreg.predict(rescaledX_test)
print(confusion_matrix(y_test, y_pred))
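A raw confusion matrix can be hard to interpret on its own. As an optional complement (assuming y_test and y_pred from above), the same predictions can be summarized as an overall accuracy plus per-class precision and recall:
from sklearn.metrics import accuracy_score, classification_report

# Overall fraction of correct predictions, plus per-class precision, recall, and F1
print(accuracy_score(y_test, y_pred))
print(classification_report(y_test, y_pred))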
Perform a hyperparameter search to improve the model's performance. Print the best performance score and the corresponding model parameters.
# Hyperparameter search over the solver tolerance and iteration budget
tol = [0.01, 0.001, 0.0001]
max_iter = [100, 150, 200]
param_grid = dict(tol=tol, max_iter=max_iter)

# 5-fold cross-validated grid search on the training data
grid_model = GridSearchCV(estimator=logreg, param_grid=param_grid, cv=5)
grid_model_result = grid_model.fit(rescaledX_train, y_train)

# best_score_ is the best mean cross-validated accuracy; best_params_ achieved it
best_score, best_params = grid_model_result.best_score_, grid_model_result.best_params_
best_model = grid_model_result.best_estimator_
print(f"Best Performance Score: {best_score}")
print(f"Best Model Parameters: {best_params}")