Commercial banks receive many credit card applications. A large share of them get rejected for reasons such as high loan balances, low income levels, or too many inquiries on an individual's credit report. Manually analyzing these applications is mundane, error-prone, and time-consuming (and time is money!). Luckily, this task can be automated with machine learning, and pretty much every commercial bank does so nowadays. In this workbook, you will build an automatic credit card approval predictor using machine learning techniques, just as real banks do.
The Data
The data is a small subset of the Credit Card Approval dataset from the UCI Machine Learning Repository showing the credit card applications a bank receives. This dataset has been loaded as a pandas DataFrame called cc_apps. The last column in the dataset is the target value.
# Import necessary libraries
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.model_selection import RandomizedSearchCV, KFold
# Load the dataset
cc_apps = pd.read_csv("cc_approvals.data", header=None)
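Before cleaning anything, it can help to look at the column dtypes and the distribution of the decision column. This is only an optional peek, not part of the original workflow; in the UCI version of the data the decision in the last column is coded as "+" (approved) and "-" (rejected).
# Optional: column dtypes/non-null counts and the distribution of the target (last column)
cc_apps.info()
print(cc_apps[cc_apps.columns[-1]].value_counts())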
cc_apps.head()

# Handle missing data
# Replace the '?'s with NaN in dataset
cc_apps_nans_replaced = cc_apps.replace("?", np.nan)
# Confirm that the dataset now contains missing values
cc_apps_nans_replaced.isna().any()
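If you want to know how much is missing in each column rather than just whether anything is missing, a per-column count is a small optional addition:
# Optional: number of missing values per column
print(cc_apps_nans_replaced.isna().sum())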
# Make copy of DataFrame
cc_apps_imputed = cc_apps_nans_replaced.copy()
# Iterate over each column of cc_apps_nans_replaced and impute the most frequent value for object data types and the mean for numeric data types
for col in cc_apps_imputed.columns:
    # Check if the column is of object (categorical) type
    if cc_apps_imputed[col].dtypes == "object":
        # Impute with the most frequent value
        cc_apps_imputed[col] = cc_apps_imputed[col].fillna(
            cc_apps_imputed[col].value_counts().index[0]
        )
    else:
        # Impute with the column mean
        cc_apps_imputed[col] = cc_apps_imputed[col].fillna(cc_apps_imputed[col].mean())
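The same column-wise imputation can also be expressed with scikit-learn's SimpleImputer. This is only an equivalent sketch (the name cc_apps_sk is made up here), not part of the original workflow:
# Sketch: most frequent value for categorical columns, mean for numeric columns
from sklearn.impute import SimpleImputer

obj_cols = cc_apps_nans_replaced.select_dtypes(include="object").columns
num_cols = cc_apps_nans_replaced.select_dtypes(exclude="object").columns

cc_apps_sk = cc_apps_nans_replaced.copy()
cc_apps_sk[obj_cols] = SimpleImputer(strategy="most_frequent").fit_transform(cc_apps_sk[obj_cols])
cc_apps_sk[num_cols] = SimpleImputer(strategy="mean").fit_transform(cc_apps_sk[num_cols])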
# Convert categorical data into numeric
cc_apps_dummies = pd.get_dummies(cc_apps_imputed, drop_first=True)

# Define feature variables
X = cc_apps_dummies.drop(cc_apps_dummies.columns[-1], axis=1).values
# Define target variable (last column in DataFrame)
y = cc_apps_dummies[cc_apps_dummies.columns[-1]].values
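If get_dummies with drop_first=True is unfamiliar, here is a tiny made-up frame showing the effect: each category becomes an indicator column, and the first category is dropped as the baseline.
# Illustration only: made-up data, not part of the credit card dataset
toy = pd.DataFrame({"colour": ["red", "blue", "red"], "amount": [1.0, 2.5, 3.0]})
print(pd.get_dummies(toy, drop_first=True))
# "blue" is dropped as the baseline, leaving the numeric column plus a single colour_red indicator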
# Split data into train and test sets
# Fixing random_state keeps the split reproducible across runs
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
# Scale data
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

# Instantiate the model
logreg = LogisticRegression()
# Train model
logreg.fit(X_train_scaled, y_train)
# Get predictions
y_pred = logreg.predict(X_test_scaled)
print(confusion_matrix(y_test, y_pred))
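The accuracy that also appears in the classification report below can be read straight off this confusion matrix: the diagonal counts the correct predictions.
# Optional check: accuracy is the trace of the confusion matrix over the total count
cm = confusion_matrix(y_test, y_pred)
print(np.trace(cm) / cm.sum())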
print(classification_report(y_test, y_pred))

# Create the logreg parameter space
params = {"penalty": ["l1", "l2"],
"tol": np.linspace(0.0001, 1.0, 50),
"C": np.linspace(0.1, 1, 50),
"class_weight": ["balanced", {0:0.8, 1:0.2}]}
kf = KFold(n_splits=5, shuffle=True)
# Instantiate the RandomizedSearchCV object
logreg_cv = RandomizedSearchCV(logreg, params, cv=kf)
# Fit the data to the model
logreg_cv.fit(X_train_scaled, y_train)
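Before pulling out the best estimator, it can be useful to inspect the best mean cross-validated score and the winning hyperparameters on the fitted search object:
# Best cross-validated accuracy and the hyperparameters that achieved it
print(logreg_cv.best_score_)
print(logreg_cv.best_params_)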
# The logreg model with the optimal hyperparameters
logreg_best = logreg_cv.best_estimator_

# Fit the optimal logreg model to the training data
logreg_best.fit(X_train_scaled, y_train)
# Evaluate the model accuracy on the test data
best_score = logreg_best.score(X_test_scaled, y_test)
print(best_score)
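A single accuracy number hides where the tuned model still makes mistakes. If you want the fuller picture used earlier, the same confusion matrix and classification report apply to the tuned model as well; a minimal sketch:
# Detailed evaluation of the tuned model on the held-out test set
y_pred_best = logreg_best.predict(X_test_scaled)
print(confusion_matrix(y_test, y_pred_best))
print(classification_report(y_test, y_pred_best))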