Insurance companies invest a lot of time and money into optimizing their pricing and accurately estimating the likelihood that customers will make a claim. In many countries it is a legal requirement to have car insurance in order to drive a vehicle on public roads, so the market is very large!
Knowing all of this, On the Road car insurance have requested your services in building a model to predict whether a customer will make a claim on their insurance during the policy period. As they have very little expertise and infrastructure for deploying and monitoring machine learning models, they've asked you to identify the single feature that results in the best-performing model, as measured by accuracy, so they can start with a simple model in production.
They have supplied you with their customer data as a csv file called `car_insurance.csv`, along with a table detailing the column names and descriptions below.
The dataset
Column | Description |
---|---|
id | Unique client identifier |
age | Client's age group |
gender | Client's gender |
driving_experience | Years the client has been driving |
education | Client's level of education |
income | Client's income level |
credit_score | Client's credit score (between zero and one) |
vehicle_ownership | Client's vehicle ownership status |
vehicle_year | Year of vehicle registration |
married | Client's marital status |
children | Client's number of children |
postal_code | Client's postal code |
annual_mileage | Number of miles driven by the client each year |
vehicle_type | Type of car |
speeding_violations | Total number of speeding violations received by the client |
duis | Number of times the client has been caught driving under the influence of alcohol |
past_accidents | Total number of previous accidents the client has been involved in |
outcome | Whether the client made a claim on their car insurance (response variable) |
# Import required modules
import pandas as pd
import numpy as np
from statsmodels.formula.api import logit
# Start coding!
# Load the data and inspect the first five rows
df = pd.read_csv("car_insurance.csv")
df.head()
# Inspect null values for cleaning purposes
df.isna().sum()
# Check the distribution of vehicle types, including any missing values
df["vehicle_type"].value_counts(dropna=False)
# Given that credit_score and annual_mileage each have null values, those will need to be filled.
# credit_score is likely to be related to income, so performing a groupby aggregation of average
# credit_score by income group will likely be beneficial. For annual_mileage, postal_code might
# prove to be a better predictor.
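As a quick illustration of the imputation strategy described above (group means mapped back onto the rows with missing values), here is a minimal sketch on a made-up DataFrame; the column names mirror the real data, but the values are invented for demonstration:

```python
import pandas as pd
import numpy as np

# Toy data: one missing credit_score in each income group
toy = pd.DataFrame({
    "income": ["poverty", "poverty", "upper class", "upper class"],
    "credit_score": [0.30, np.nan, 0.80, np.nan],
})

# Mean credit score per income group, as a dictionary
group_means = toy.groupby("income")["credit_score"].mean().to_dict()

# Fill each missing score with the mean of the matching income group
toy["credit_score"] = toy["credit_score"].fillna(toy["income"].map(group_means))
print(toy)
```

Each null inherits the mean of its own group, which preserves the group-level structure better than a single global mean would.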
# Plotting mean and median miles driven by postal_code
df.groupby("postal_code")["annual_mileage"].agg(["mean", "median"]).plot(kind="bar", title="Mean and Median Miles Traveled by Postal Code", rot=0, ylabel="Miles Traveled", xlabel="Postal Code");
# Credit score has been "MinMaxScaled" so that all values are between 0 and 1
ax = df.groupby("income")["credit_score"].agg(["mean", "median"]).plot(kind="bar", title="Mean and Median Credit Score by Income Group", ylabel="Credit Score Rated 0-1", xlabel="Income Group", rot=0)
ax.legend(["Mean", "Median"])
ax.set_xticks([0, 1, 2, 3], ["Middle Class", "Poverty", "Upper Class", "Working Class"]);
# Handling nulls in the credit_score column: map each income group to its mean credit score
credit_dict = df.groupby("income")["credit_score"].mean().to_dict()
# Fill the nulls by mapping the dictionary onto the income column
df["credit_score"] = df["credit_score"].fillna(df["income"].map(credit_dict))
# No more missing values for credit_score!
df.isna().sum()
# Handling nulls in the annual_mileage column: map each postal code to its mean annual mileage
miles_dict = df.groupby("postal_code")["annual_mileage"].mean().to_dict()
print("Miles Dictionary:", miles_dict)
df["annual_mileage"] = df["annual_mileage"].fillna(df["postal_code"].map(miles_dict))
# Sanity Check
print("Credit Nulls:", df["credit_score"].isna().sum(), "Mileage Nulls:", df["annual_mileage"].isna().sum(), "Total Nulls:", df.isna().sum().sum())
# Now that null values have been addressed, we can begin the process of determining which individual independent variable has the greatest impact on whether a claim is filed
y = df["outcome"]
X = df.drop(columns=["id", "outcome"])
# Iterate through the columns, fitting a single-feature logistic regression for each
# We want to extract the confusion matrix from each model to compute its accuracy
feature_df = pd.DataFrame(columns=["Feature", "Accuracy"])
columns = list(X.columns)
for col in columns:
    model = logit(f"outcome ~ {col}", data=df).fit(disp=0)
    matrix = model.pred_table()
    tn = matrix[0, 0]
    tp = matrix[1, 1]
    fn = matrix[1, 0]
    fp = matrix[0, 1]
    accuracy = (tp + tn) / (tn + tp + fn + fp)
    feature_df.loc[len(feature_df)] = {"Feature": col, "Accuracy": accuracy}
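The brief asks for the single feature with the highest accuracy, so the last step is to sort `feature_df` and keep the top row. A minimal sketch of that selection, using a hypothetical `feature_df` (the feature names echo the dataset, but the accuracy values here are invented for illustration; in the notebook this would run on the table built by the loop above):

```python
import pandas as pd

# Hypothetical accuracy table; the real feature_df is produced by the modeling loop
feature_df = pd.DataFrame({
    "Feature": ["age", "driving_experience", "credit_score"],
    "Accuracy": [0.7747, 0.7771, 0.7054],
})

# Sort by accuracy (descending) and keep the best single feature
best_feature_df = (
    feature_df.sort_values("Accuracy", ascending=False)
    .head(1)
    .reset_index(drop=True)
)
print(best_feature_df)
```

Returning a one-row DataFrame (rather than just the feature name) keeps the winning accuracy alongside the feature, which is useful when reporting the result back to the client.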