Commercial banks receive many applications for credit cards, and a large share of them get rejected for reasons such as high loan balances, low income levels, or too many inquiries on an applicant's credit report. Manually analyzing these applications is mundane, error-prone, and time-consuming (and time is money!). Luckily, this task can be automated with the power of machine learning, and pretty much every commercial bank does so nowadays. In this workbook, you will build an automatic credit card approval predictor using machine learning techniques, just like real banks do.

The Data

The data is a small subset of the Credit Card Approval dataset from the UCI Machine Learning Repository, representing the credit card applications a bank receives. The dataset has been loaded as a pandas DataFrame called cc_apps; the last column is the target.
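
Before preprocessing, a quick inspection helps confirm what the description claims: anonymized features, '?' marking missing entries, and the target in the last column. The sketch below assumes the same cc_approvals.data file the workbook loads; the exact shape depends on the subset you have.

import pandas as pd

cc_apps = pd.read_csv("cc_approvals.data", header=None)
print(cc_apps.shape)    # The full UCI set has 690 rows and 16 columns; a subset may be smaller
print(cc_apps.head())   # Anonymized features; '?' marks missing entries
print(cc_apps.dtypes)   # Columns containing '?' are read in as object dtype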

# Import necessary libraries
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, accuracy_score

# Load dataset (no headers)
cc_apps = pd.read_csv("cc_approvals.data", header=None)

# 1. Data Cleaning
# Replace the '?' placeholders with NaN (replace already returns a new DataFrame)
cc_apps_clean = cc_apps.replace("?", np.nan)

# 2. Data Imputation
for col in cc_apps_clean.columns:
    if cc_apps_clean[col].dtype == 'object':
        # Categorical columns (including numeric-looking columns read in as
        # strings because of '?'): impute the most frequent value
        cc_apps_clean[col] = cc_apps_clean[col].fillna(cc_apps_clean[col].mode()[0])
    else:
        # Numerical columns: impute the column mean
        cc_apps_clean[col] = cc_apps_clean[col].fillna(cc_apps_clean[col].mean())
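
# Optional sanity check (not part of the original steps): imputation should
# leave no missing values behind
assert cc_apps_clean.isna().sum().sum() == 0, "Unexpected NaNs remain after imputation"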

# 3. Feature Engineering
# Convert categorical to numerical (drop first to avoid dummy trap)
cc_apps_final = pd.get_dummies(cc_apps_clean, drop_first=True)
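
# Optional: one-hot encoding widens the frame; the exact number of dummy
# columns depends on the categories present in the data
print("Columns before/after encoding:", cc_apps_clean.shape[1], "->", cc_apps_final.shape[1])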

# 4. Prepare Data for Modeling
# Note: the encoded target ends up as the last column because get_dummies
# appends dummy columns in original column order and the raw label was the
# dataset's final column
X = cc_apps_final.iloc[:, :-1].values  # All features
y = cc_apps_final.iloc[:, -1].values   # Last column as target

# 5. Train-Test Split (33% test size)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, 
    test_size=0.33, 
    random_state=42,
    stratify=y  # Preserve class distribution
)
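
# Optional: confirm stratification preserved the class balance on both sides
print("Train label counts:", np.unique(y_train, return_counts=True))
print("Test label counts: ", np.unique(y_test, return_counts=True))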

# 6. Feature Scaling
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)  # Fit the scaler on training data only to avoid leakage

# 7. Baseline Model
logreg = LogisticRegression(max_iter=1000, random_state=42)
logreg.fit(X_train_scaled, y_train)

# 8. Model Evaluation
train_pred = logreg.predict(X_train_scaled)
print("Training Confusion Matrix:\n", confusion_matrix(y_train, train_pred))
print("Training Accuracy:", accuracy_score(y_train, train_pred))

# 9. Hyperparameter Tuning
param_grid = {
    'tol': [0.01, 0.001, 0.0001],
    'C': [0.1, 1, 10],  # Inverse regularization strength
    'penalty': ['l1', 'l2'],
    'solver': ['liblinear']  # Handles both l1 and l2; the default lbfgs does not support l1
}

grid_search = GridSearchCV(
    LogisticRegression(max_iter=1000, random_state=42),
    param_grid,
    cv=5,
    scoring='accuracy'
)
grid_search.fit(X_train_scaled, y_train)

# 10. Final Evaluation
best_model = grid_search.best_estimator_
test_pred = best_model.predict(X_test_scaled)
best_score = accuracy_score(y_test, test_pred)

print("\nBest Parameters:", grid_search.best_params_)
print("Test Confusion Matrix:\n", confusion_matrix(y_test, test_pred))
print("Final Test Accuracy:", best_score)