Insurance companies invest a lot of time and money into optimizing their pricing and accurately estimating the likelihood that customers will make a claim. In many countries it is a legal requirement to have car insurance in order to drive a vehicle on public roads, so the market is very large!
Knowing all of this, On the Road car insurance have requested your services in building a model to predict whether a customer will make a claim on their insurance during the policy period. As they have very little expertise and infrastructure for deploying and monitoring machine learning models, they've asked you to identify the single feature that results in the best performing model, as measured by accuracy, so they can start with a simple model in production.
They have supplied you with their customer data as a csv file called `car_insurance.csv`, along with a table detailing the column names and descriptions below.
The dataset
| Column | Description |
|---|---|
| id | Unique client identifier |
| age | Client's age group |
| gender | Client's gender |
| driving_experience | Years the client has been driving: `0-9y`, `10-19y`, `20-29y`, `30y+` |
| education | Client's level of education: `none`, `high school`, `university` |
| income | Client's income level: `poverty`, `working class`, `middle class`, `upper class` |
| credit_score | Client's credit score (between zero and one) |
| vehicle_ownership | Client's vehicle ownership status |
| vehicle_year | Year of vehicle registration: `before 2015`, `after 2015` |
| married | Client's marital status |
| children | Client's number of children |
| postal_code | Client's postal code |
| annual_mileage | Number of miles driven by the client each year |
| vehicle_type | Type of car: `sedan`, `sports car` |
| speeding_violations | Total number of speeding violations received by the client |
| duis | Number of times the client has been caught driving under the influence of alcohol |
| past_accidents | Total number of previous accidents the client has been involved in |
| outcome | Whether the client made a claim on their car insurance (response variable) |
# Import required modules
import pandas as pd
import numpy as np
from statsmodels.formula.api import logit
import seaborn as sns
import matplotlib.pyplot as plt
df = pd.read_csv('car_insurance.csv')
df.head(20)
# Inspect the distinct values of each column
for col in df.columns:
    print(df[col].value_counts(), '\n')
# Map ordered categorical columns to integer codes
cat_map = {
    'driving_experience': {"0-9y": 0, "10-19y": 1, "20-29y": 2, "30y+": 3},
    'education': {"none": 0, "high school": 1, "university": 2},
    'income': {"poverty": 0, "working class": 1, "middle class": 2, "upper class": 3},
    'vehicle_year': {"before 2015": 0, "after 2015": 1},
    'vehicle_type': {"sedan": 0, "sports car": 1},
}
for key, value_map in cat_map.items():
    df[key] = df[key].replace(value_map)
df.head(20)
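A quick way to sanity-check a mapping like this is to confirm on a toy series (my own example, not the project data) that `Series.replace` with a dict leaves only the intended integer codes behind:

```python
import pandas as pd

# Toy version of one cat_map entry: map ordered categories to integers
edu_map = {"none": 0, "high school": 1, "university": 2}
s = pd.Series(["none", "university", "high school", "none"])
mapped = s.replace(edu_map)
print(mapped.tolist())  # -> [0, 2, 1, 0]
```

Any category string missing from the dict would survive `replace` unchanged, so checking the mapped dtypes (e.g. with `df.info()`) catches typos in the mapping.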
from sklearn.impute import KNNImputer

# Impute missing values using K-Nearest Neighbors
# (.ravel() flattens the (n, 1) output so it can be assigned back as a column)
imputer = KNNImputer(n_neighbors=200)
df['credit_score'] = imputer.fit_transform(df[['credit_score']]).ravel()
df['annual_mileage'] = imputer.fit_transform(df[['annual_mileage']]).ravel()
df.isna().sum()
df.info()
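One caveat worth knowing: because the imputer is given only a single column at a time, a row whose one value is NaN has no other features to measure distance with, and scikit-learn's `KNNImputer` then falls back to the column mean. A minimal sketch on toy data (my own, not the project data) illustrates this:

```python
import numpy as np
from sklearn.impute import KNNImputer

# With a single feature, a row whose only value is NaN has no defined
# distance to any other row, so KNNImputer falls back to the column mean.
x = np.array([[1.0], [2.0], [np.nan], [4.0]])
imputed = KNNImputer(n_neighbors=2).fit_transform(x)
print(imputed.ravel())  # the NaN becomes the mean of 1.0, 2.0 and 4.0
```

To get genuinely neighbor-based imputation, the imputer would need to be fitted on several numeric columns at once.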
n = len(df.columns) - 1  # Subtract 1 to exclude the 'outcome' column

# Set up the matplotlib figure with subplots
fig, axes = plt.subplots(nrows=n, figsize=(8, n * 5))  # Adjust the size as necessary

# Iterate over the DataFrame columns and create a regplot for each
for i, col in enumerate(df.columns):
    if col != 'outcome':  # Exclude the dependent variable ('outcome' is the last column)
        ax = axes[i]  # Select the appropriate subplot
        sns.regplot(x=col, y="outcome", data=df, ci=None, logistic=True,
                    line_kws={"color": "blue"}, ax=ax)
        ax.set_title(f'Logistic Regression: {col} vs. outcome')
plt.tight_layout()
plt.show()
features = df.drop(columns=["id", "outcome"]).columns
models = []
rows = []
for col in features:
    model = logit(f"outcome ~ {col}", data=df).fit(disp=0)  # disp=0 suppresses the fit output
    models.append(model)
    intercept = model.params['Intercept']
    slope = model.params[col]
    # pred_table() cross-tabulates actual (rows) vs. predicted (columns) at a 0.5 threshold
    conf_matrix = model.pred_table()
    TN, FP, FN, TP = conf_matrix[0, 0], conf_matrix[0, 1], conf_matrix[1, 0], conf_matrix[1, 1]
    # Calculate performance metrics, guarding against division by zero
    accuracy = (TP + TN) / (TN + TP + FN + FP)
    sensitivity = TP / (TP + FN) if TP + FN != 0 else float('nan')
    specificity = TN / (TN + FP) if TN + FP != 0 else float('nan')
    # Collect the results for this model (DataFrame.append was removed in pandas 2.0,
    # so build a list of dicts and create the DataFrame once at the end)
    rows.append({
        'feature': col,
        'intercept': intercept,
        'slope': slope,
        'TN': TN,
        'TP': TP,
        'FN': FN,
        'FP': FP,
        'sensitivity': sensitivity,
        'specificity': specificity,
        'accuracy': accuracy
    })
results = pd.DataFrame(rows)
results.sort_values('accuracy', ascending=False)
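As an optional cross-check (not part of the brief), a single-feature accuracy can also be computed with scikit-learn. The helper name `single_feature_accuracy` is my own, and a small synthetic frame stands in for the real CSV:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def single_feature_accuracy(df: pd.DataFrame, feature: str, target: str = "outcome") -> float:
    """Fit a one-feature logistic regression and return its training accuracy."""
    X = df[[feature]].to_numpy()
    y = df[target].to_numpy()
    clf = LogisticRegression().fit(X, y)
    return clf.score(X, y)  # mean accuracy on the data used for fitting

# Synthetic stand-in data: more past accidents -> more likely to claim
rng = np.random.default_rng(0)
demo = pd.DataFrame({"past_accidents": rng.integers(0, 5, size=200)})
demo["outcome"] = (demo["past_accidents"] + rng.normal(0, 1, size=200) > 2).astype(int)

acc = single_feature_accuracy(demo, "past_accidents")
print(f"past_accidents accuracy: {acc:.3f}")
```

The numbers will not match `pred_table()` exactly (scikit-learn applies L2 regularization by default), but the ranking of features by accuracy should be a useful consistency check.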