Optimising a Credit Risk Strategy
Scenario
I am a Quantitative Analyst at ZNB bank. I have been provided with historical credit application data and must recommend a new strategy that expands approvals of credit card applications while keeping portfolio risk within tolerance.
Task 1: Analyse the Current Credit Risk Strategy & Recommend a Revised Strategy
- Calculate the portfolio-level default rate under the current strategy
- Identify any score bands that could be included or excluded to optimise volume without exceeding a 3% default rate
- Model the impact of your proposed strategy, including the change in approval volume, estimated portfolio default rate, and projected revenue and loss
| Score | % Applications | Approval rate | Default rate (6 months) |
|---|---|---|---|
| 750+ | 20% | 95% | 0.5% |
| 700–749 | 30% | 85% | 1.5% |
| 650–699 | 25% | 60% | 3.0% |
| 600–649 | 15% | 30% | 5.5% |
| <600 | 10% | 10% | 12.0% |
| Current Strategy | Credit Card Terms |
|---|---|
| Approve all applicants with scores ≥ 675 | Credit limit: $5,000 |
| Decline all others | Annual product revenue: 20% |
| Portfolio default rate target: ≤ 1.65% | Average utilisation: 40% |
| Applicant volume is fixed | Loss given default (LGD): 90% |
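These terms pin down the unit economics used throughout the model below. A minimal worked sketch, assuming (as the model does) that both revenue and loss accrue on the utilised balance:

# per-account economics implied by the terms above
credit_limit = 5000
utilisation = 0.40
apr = 0.20
lgd = 0.90
revenue_per_approval = credit_limit * utilisation * apr  # 5000 x 0.40 x 0.20 = $400 per year
loss_per_default = credit_limit * utilisation * lgd      # 5000 x 0.40 x 0.90 = $1,800

So each approval earns roughly $400 a year while each default costs roughly $1,800, which is why a small rise in the default rate can erase the margin from extra approvals.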
Task 2: Economic Stress Test
- Assume default rates rise by 50% in a mild recession (see the worked example below)
- How would this affect your recommended policy?
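Under this assumption every band's default rate simply scales by 1.5, so for example the 650–699 band moves from 3.0% to 4.5%:

# illustrative only: apply the 1.5x stress multiplier to each band's base default rate
base_rates = [0.005, 0.015, 0.030, 0.055, 0.120]
stressed_rates = [r * 1.5 for r in base_rates]  # approx. [0.0075, 0.0225, 0.045, 0.0825, 0.18]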
Task 3: Strategy rules
The table below shows the results of four refer rules within the decision strategy: the number of applications that triggered each rule (referred applications are sent to a team for manual decisioning), the override volume (the number of applications manually approved after review), and the bad rate on those manually approved applications. Based on these results, what should ZNB do to improve its decision strategy, and why? A quick sketch quantifying the overrides follows the table.
| Rule | Number of applications | Overrides | Bad rate on drawn lending |
|---|---|---|---|
| 1 | 250 | 10 | 10% |
| 2 | 700 | 350 | 5% |
| 3 | 385 | 345 | 2% |
| 4 | 590 | 400 | 6% |
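A minimal sketch of how these rules can be quantified (figures come from the table above; treating a "good" refer rule as one that is rarely overridden, or whose overrides perform well, is my own assumption):

import pandas as pd

# referral rule economics: how often is each rule overridden, and how do the
# manually approved (overridden) accounts then perform?
rules = pd.DataFrame({
    "rule": [1, 2, 3, 4],
    "applications": [250, 700, 385, 590],
    "overrides": [10, 350, 345, 400],
    "bad_rate": [0.10, 0.05, 0.02, 0.06],
})
rules["override_rate"] = rules["overrides"] / rules["applications"]
print(rules)

Rule 3 stands out: it is overridden roughly 90% of the time (345/385) and those approvals show only a 2% bad rate, suggesting it refers applicants who could be auto-approved. Rule 1 is the opposite: it is rarely overridden (4%), and its overrides perform poorly at a 10% bad rate.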
Task 4: Other strategy rules
- What other product, policy or credit strategy rules should ZNB be considering within its Credit Card Decision Strategy?
- What are some alternative data sources that may enable further strategy optimisation?
- What regulations need to be considered?
Getting started
The cell below is essentially prep work: converting percentages stored as strings (e.g. '20%') into proper decimal values (e.g. 0.20) so we can do math on them, and converting the credit score bands into min and max lists so that I can pull the specific score values for each customer.
import pandas as pd
import matplotlib.pyplot as plt
# read in data as a dataframe
asd_df = pd.read_csv("task_data.csv")
asd_df.head()

# convert spreadsheet values from strings to float data types
newfloats = ['Applications percent', 'Approval rate', 'Default rate 6m']
for col in newfloats:
asd_df[col] = asd_df[col].str.rstrip('%').astype(float) / 100
# truncate columns for ease / base columns for new model
asd_df.rename(columns={
'Credit score': 'csbands',
'Applications percent': 'applic',
'Approval rate': 'approv',
'Default rate 6m': 'default'
}, inplace=True)
# check above
asd_df.dtypes

# convert credit score bands into min and max values for modelling
asd_df["score_min"] = [750, 700, 650, 600, 0]
asd_df["score_max"] = [999, 749, 699, 649, 599]
# re-order columns with above added
asd_df = asd_df[['csbands', 'score_min','score_max', 'applic', 'approv', 'default']]
asd_df.head()

Build a model to review strategies
# build a function that calculates credit strategy metrics based on a cut-off score
def portfolio_metrics(df, # clean dataframe to populate
cutoff=675, # score threshold for approvals
stress_multiplier=1.0, # 1.0 is normal, 1.5 is stress test (later)
credit_limit=5000, # customer's card limit
                      utilisation=0.4, # average share of the credit limit that customers draw
apr=0.20, # interest
lgd=0.90, # loss given default (potential bank loss)
total_applicants=100): # base number for easier scaling
#make a fresh copy to keep original data intact
data = df.copy()
# calculate what proportion of each band gets included
def inclusion_share(row):
min_score = row["score_min"]
max_score = row["score_max"]
if cutoff <= min_score:
return 1.0 # full band included
elif cutoff > max_score:
return 0.0 # no part of band included
else:
# if cutoff splits the band, include partial share
included_range = max_score - cutoff
total_range = max_score - min_score
            return included_range / total_range # e.g. cutoff 675 in the 650-699 band: 24/49 ≈ 0.49
# apply the function across all bands
data["included_share"] = data.apply(inclusion_share, axis=1)
"""
GEN-AI WITH ASSIST!
Struggled with the below 'bank math' here so asked AI to step in and explain how to work out each variable.
"""
# estimate number of applicants per band
# total applicants × % in this band × share included
data["applicants"] = data["applic"] * total_applicants * data["included_share"]
# calculate how many get approved in each band
data["approvals"] = data["applicants"] * data["approv"]
# estimate number of defaults
# apply stress multiplier (1.5 = 50% more defaults in a downturn)
data["defaults"] = data["approvals"] * data["default"] * stress_multiplier
# calculate revenue and loss per band
# revenue = approvals × limit × utilisation × APR
unit_revenue = credit_limit * utilisation * apr
# loss = defaults × limit × utilisation × LGD
unit_loss = credit_limit * utilisation * lgd
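    # with the default parameters: unit_revenue = 5000 * 0.40 * 0.20 = $400 per approval per year
    # and unit_loss = 5000 * 0.40 * 0.90 = $1,800 per defaulted account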
"""
THANK YOU DEAREST MACHINES...
Back to my own mediocre code.
"""
data["revenue"] = data["approvals"] * unit_revenue
data["loss"] = data["defaults"] * unit_loss
# calculate portfolio totals
total_approvals = data["approvals"].sum()
total_defaults = data["defaults"].sum()
portfolio_bad_rate = (total_defaults / total_approvals * 100) if total_approvals > 0 else None
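    # worked example with the default inputs: ~0.70 defaults / ~51.8 approvals x 100 ≈ 1.35%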
# package the results
summary = {
"Approvals": total_approvals,
"Defaults": total_defaults,
"Portfolio Bad Rate %": portfolio_bad_rate,
"Revenue": data["revenue"].sum(),
"Loss": data["loss"].sum(),
"Net Profit": data["revenue"].sum() - data["loss"].sum()
}
return data, summary
# data: band-level calculations (approvals, defaults, revenue, loss)
# summary: portfolio-level totals and risk metrics
# run the portfolio_metrics function using the prepared dataframe (asd_df)
# and the current strategy's cutoff score (675)
band_results, portfolio_summary = portfolio_metrics(asd_df, cutoff=675)
# view defaults by score band
print("Band-level Results:")
print(band_results)
# call out the portfolio-level default rate from the summary dictionary
portfolio_default_rate = portfolio_summary["Portfolio Bad Rate %"]
# display the result, rounded to 2 decimal places
print(f"\nPortfolio Default Rate: {portfolio_default_rate:.2f}%")
Visualising potential strategy & safety margin
# define score range and increment (20 points)
cutoff_scores = range(500, 900, 20)
# create empty lists to collect metrics during loop
bad_rates = []
revenues = []
losses = []
# loop through each cutoff score and store the outputs
for cutoff in cutoff_scores:
_, summary = portfolio_metrics(asd_df, cutoff=cutoff)
# append each metric to its respective list
bad_rates.append(summary["Portfolio Bad Rate %"])
revenues.append(summary["Revenue"])
losses.append(summary["Loss"])
# visualise above, adding custom markers for current & potential strategy
plt.figure(figsize=(10, 6))
plt.plot(cutoff_scores, bad_rates, marker='o', label='Portfolio Default Rate (%)')
plt.title('Portfolio Default Rate vs Cutoff Score, intervals of 20')
plt.xlabel('Cutoff Score')
plt.ylabel('Default Rate (%)')
plt.axhline(1.65, color='red', linestyle=':', linewidth=2, label='Default rate target')
plt.axvline(635, color='green', linestyle=':', linewidth=2, label='Min score for rate target')
plt.axvline(675, color='blue', linestyle=':', linewidth=2, label='Current ZNB strategy (>=675)')
plt.axvspan(635, 675, color='green', alpha=0.1, label='Potential volume')
plt.grid(True)
plt.legend(loc='right')
plt.tight_layout()
plt.show()

Summary of findings:
Based on the analysis, the current ZNB strategy (cutoff ≥675) results in a portfolio default rate of 1.35%, well below the 1.65% risk target.
The model shows that we could safely lower the cutoff to 635 while still remaining within the bank’s risk appetite.
Comparing strategies - current (≥ 675) vs. proposed (≥ 635)
# define the strategies as variables
cutoff_current = 675
cutoff_proposed = 635
# run model for the current strategy
_, current_summary = portfolio_metrics(asd_df, cutoff=cutoff_current)
# run model for the proposed strategy
_, proposed_summary = portfolio_metrics(asd_df, cutoff=cutoff_proposed)
# extract key metrics from each and build a comparison table
comparison_data = {
"Cut-off Strategy": [f"≥ {cutoff_current}", f"≥ {cutoff_proposed}"],
"Approvals": [current_summary["Approvals"], proposed_summary["Approvals"]],
"Defaults": [current_summary["Defaults"], proposed_summary["Defaults"]],
"Default Rate (%)": [current_summary["Portfolio Bad Rate %"], proposed_summary["Portfolio Bad Rate %"]],
"Revenue ($)": [current_summary["Revenue"], proposed_summary["Revenue"]],
"Loss ($)": [current_summary["Loss"], proposed_summary["Loss"]],
"Net Profit ($)": [current_summary["Net Profit"], proposed_summary["Net Profit"]]
}
# create a dataframe for display
comparison_df = pd.DataFrame(comparison_data).round(2)
display(comparison_df)
# compare profit outcomes between current and proposed strategies
profit_1 = float(comparison_df["Net Profit ($)"].iloc[0])
profit_2 = float(comparison_df["Net Profit ($)"].iloc[1])
profit_diff = profit_2 - profit_1
percent_increase = (profit_diff / profit_1) * 100
print(f"Reducing the cut-off threshold from 675 to 635 would potentially:")
print(f"• Increase total net profit by: ${profit_diff:.2f} per 100 customers (on average)")
print(f"• This represents an approximate percentage increase of: {percent_increase:.2f}%")
Factoring in economic uncertainty with a stress test
# stress test function
def stress_test_row(original_summary, cutoff_label, unit_loss):
# simulate a recession by increasing defaults and bad rate by 50%
defaults = original_summary["Defaults"] * 1.5
bad_rate = original_summary["Portfolio Bad Rate %"] * 1.5
    revenue = original_summary["Revenue"]  # revenue held flat: a simplifying assumption
loss = defaults * unit_loss
net_profit = revenue - loss
# build and return the stress-tested row
return pd.Series([
cutoff_label,
original_summary["Approvals"],
defaults,
bad_rate,
revenue,
loss,
net_profit
], index=[
"Strategy",
"Approvals",
"Defaults",
"Default Rate (%)",
"Revenue ($)",
"Loss ($)",
"Net Profit ($)"
])
# calculate unit loss once (limit × utilisation × LGD, matching portfolio_metrics)
unit_loss = 5000 * 0.40 * 0.90
# pass it into the stress test & generate recession rows from original summaries
recession_row_675 = stress_test_row(current_summary, "Cutoff ≥675 (Mild Recession)", unit_loss)
recession_row_635 = stress_test_row(proposed_summary, "Cutoff ≥635 (Mild Recession)", unit_loss)
# combine them into a df
recession_df = pd.DataFrame([recession_row_675, recession_row_635])
# append to existing comparison table
comparison_df = pd.concat([comparison_df, recession_df], ignore_index=True)
# ensure all values are float for plotting
comparison_df["Net Profit ($)"] = comparison_df["Net Profit ($)"].astype(float)