Commercial banks receive a lot of credit card applications. Many of them are rejected for reasons such as high loan balances, low income levels, or too many inquiries on the applicant's credit report. Manually analyzing these applications is mundane, error-prone, and time-consuming (and time is money!). Luckily, this task can be automated with machine learning, and pretty much every commercial bank does so nowadays. In this workbook, you will build an automatic credit card approval predictor using machine learning techniques, just like real banks do.
The Data
The data is a small subset of the Credit Card Approval dataset from the UCI Machine Learning Repository, showing the credit card applications a bank receives. This dataset has been loaded as a pandas DataFrame called cc_apps. The last column in the dataset is the target value.
# Import necessary libraries
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import GridSearchCV
# Load the dataset
cc_apps = pd.read_csv("cc_approvals.data", header=None)
cc_apps.head()
A quick look at the data...
cc_apps
cc_apps.info()
From a quick look at the dataset, there are no technically missing (NaN) values, but there are entries containing '?'. To make sure pandas treats these as missing, we replace them with NaNs.
# Replace the '?'s with NaN in the dataset
cc_apps_nan_replaced = cc_apps.replace("?", np.nan)
cc_apps_imputed = cc_apps_nan_replaced.copy()
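As an optional sanity check (my addition, not part of the original workbook flow), we can count how many missing values each column now contains:
# Count missing values per column after the replacement
cc_apps_nan_replaced.isna().sum()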
The strategy for imputing missing values depends on the type of data in the column of interest. We do not know much about the columns, as many of the attribute names and values have been changed to meaningless symbols to protect the confidentiality of the data. We can, however, assume that the object columns are not continuous and are therefore probably categorical. The choice in this project is to impute missing object data with the most common value of the column, and missing continuous data with the mean of the variable. We can find the most common value by taking the value counts of a column and selecting the first entry (indexed at 0), since the counts are sorted from largest to smallest, as in the loop below.
for col in cc_apps_imputed.columns:
    # Impute categorical (object) columns with the most frequent value
    if cc_apps_imputed[col].dtypes == "object":
        cc_apps_imputed[col] = cc_apps_imputed[col].fillna(
            cc_apps_imputed[col].value_counts().index[0]
        )
    # Impute numeric columns with the column mean
    else:
        cc_apps_imputed[col] = cc_apps_imputed[col].fillna(cc_apps_imputed[col].mean())
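A short, optional check (again my addition) confirms that the imputation left no missing values behind:
# Verify that every NaN has been imputed; expected output is 0
print(cc_apps_imputed.isna().sum().sum())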
# Dummify the categorical variables
cc_apps_encoded = pd.get_dummies(cc_apps_imputed, drop_first=True)
cc_apps_encoded.head()
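To see what the dummification did, a quick comparison of the shapes before and after encoding (an optional check, not in the original flow) shows how many dummy columns were created:
# Compare the number of columns before and after one-hot encoding
print(cc_apps_imputed.shape, cc_apps_encoded.shape)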
To prepare the data for modelling, we define the target and feature variables. The target is y and the features are X. The target variable is the last column in our dataset, which contains a positive or a negative value denoting approval or rejection.
X = cc_apps_encoded.iloc[:, :-1].values
y = cc_apps_encoded.iloc[:, -1].values
We need to split the dataset into training and test sets. This is easily done with the train_test_split function. I chose a test size of 0.2 in this case.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
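As a final optional check of this step, the shapes of the resulting arrays should reflect the 80/20 split:
# Confirm the 80/20 split of rows between train and test sets
print(X_train.shape, X_test.shape)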