Commercial banks receive many credit card applications, and a large share are rejected, for reasons such as high loan balances, low income levels, or too many inquiries on the applicant's credit report. Manually analyzing these applications is mundane, error-prone, and time-consuming (and time is money!). Fortunately, this task can be automated with machine learning, and nearly every commercial bank does so today. In this notebook, we will build an automatic credit card approval predictor using machine learning techniques, just like real banks do.
You have been provided with a small subset of the credit card applications a bank receives. We will load the dataset into a pandas DataFrame and start from there.
# Import necessary libraries
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
# Load the dataset (the raw file has no header row, hence header=None)
cc_apps = pd.read_csv("cc_approvals.data", header=None)
cc_apps.head()

Exploring the data
cc_apps.describe()  # summary statistics for the numeric columns
cc_apps.nunique()   # number of distinct values per column

# Drop features 11 and 13 (commonly identified as driver's license and zip code
# in this dataset), which carry little predictive signal
cc_apps = cc_apps.drop([11, 13], axis=1)
# Split into train and test sets before any preprocessing, so the test data
# never informs the imputation, encoding, or scaling steps below
cc_apps_train, cc_apps_test = train_test_split(cc_apps, test_size=0.33, random_state=42)
cc_apps_train.shape
cc_apps_test.shape

Assessing Data Quality: Missing Values
In this section, we will assess the quality of the data by identifying and handling missing values. Note that this dataset marks missing entries with the character '?', which pandas does not recognize as missing, so the counts below will not reflect them until we convert those placeholders to NaN.
cc_apps_test.isnull().sum()
cc_apps_train.isnull().sum()

# Replace the '?'s with NaN in the train and test sets
cc_apps_train_nans_replaced = cc_apps_train.replace("?", np.nan)
cc_apps_test_nans_replaced = cc_apps_test.replace("?", np.nan)
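With the placeholders converted, re-running the missing-value count reveals the gaps that the earlier isnull() check missed:

# Re-count missing values: the former '?' entries now show up as NaN
print(cc_apps_train_nans_replaced.isna().sum())
print(cc_apps_test_nans_replaced.isna().sum())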
# Impute missing numeric values with the training-set column means (reused for
# the test set so no information leaks from the test data)
train_means = cc_apps_train_nans_replaced.mean(numeric_only=True)
cc_apps_train_imputed = cc_apps_train_nans_replaced.fillna(train_means)
cc_apps_test_imputed = cc_apps_test_nans_replaced.fillna(train_means)
# Iterate over each column of cc_apps_train_imputed
for col in cc_apps_train_imputed.columns:
    # Check if the column is of object type
    if cc_apps_train_imputed[col].dtype == "object":
        # Impute that column (and that column only) with the most frequent
        # value from the training set; filling the whole frame with a scalar
        # would overwrite NaNs in unrelated columns as well
        most_frequent = cc_apps_train_imputed[col].value_counts().index[0]
        cc_apps_train_imputed[col] = cc_apps_train_imputed[col].fillna(most_frequent)
        cc_apps_test_imputed[col] = cc_apps_test_imputed[col].fillna(most_frequent)
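Both sets should now be complete; a quick assertion confirms it before we move on to encoding:

# Sanity check: no missing values should remain in either set
assert cc_apps_train_imputed.isna().sum().sum() == 0
assert cc_apps_test_imputed.isna().sum().sum() == 0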
# One-hot encode the categorical features in the train and test sets independently.
# drop_first=True keeps the binary target from being expanded into two complementary
# columns, which would otherwise leak the label into the feature matrix built below.
cc_apps_train_cat_encoding = pd.get_dummies(cc_apps_train_imputed, drop_first=True)
cc_apps_test_cat_encoding = pd.get_dummies(cc_apps_test_imputed, drop_first=True)

# Reindex the columns of the test set, aligning them with the train set
cc_apps_test_cat_encoding = cc_apps_test_cat_encoding.reindex(
columns=cc_apps_train_cat_encoding.columns, fill_value=0
)
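Because the two sets were encoded independently, it is worth verifying that the reindex produced matching column layouts:

# The test set must now have exactly the train set's columns, in the same order
assert list(cc_apps_test_cat_encoding.columns) == list(cc_apps_train_cat_encoding.columns)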
# Segregate features and labels into separate variables; iloc[:, -1] (rather
# than [[-1]]) keeps the labels one-dimensional, as scikit-learn expects
X_train, y_train = (
    cc_apps_train_cat_encoding.iloc[:, :-1].values,
    cc_apps_train_cat_encoding.iloc[:, -1].values,
)
X_test, y_test = (
    cc_apps_test_cat_encoding.iloc[:, :-1].values,
    cc_apps_test_cat_encoding.iloc[:, -1].values,
)

# Instantiate MinMaxScaler and use it to rescale X_train and X_test
scaler = MinMaxScaler(feature_range=(0, 1))
rescaledX_train = scaler.fit_transform(X_train)  # fit the scaler on training data only
rescaledX_test = scaler.transform(X_test)        # reuse the training min/max on the test set
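With the features rescaled, everything is in place to fit a classifier. Below is a minimal sketch of the next step using the estimators imported at the top; the default LogisticRegression hyperparameters are an assumption here, and GridSearchCV can be used afterward to tune them:

# Fit a logistic regression classifier on the rescaled training data
logreg = LogisticRegression()
logreg.fit(rescaledX_train, y_train)

# Evaluate on the held-out test set
y_pred = logreg.predict(rescaledX_test)
print("Test accuracy:", logreg.score(rescaledX_test, y_test))
print(confusion_matrix(y_test, y_pred))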