Insurance companies invest a lot of time and money into optimizing their pricing and accurately estimating the likelihood that customers will make a claim. In many countries it is a legal requirement to have car insurance in order to drive a vehicle on public roads, so the market is very large!
(Source: https://www.accenture.com/_acnmedia/pdf-84/accenture-machine-leaning-insurance.pdf)
Knowing all of this, On the Road car insurance have requested your services in building a model to predict whether a customer will make a claim on their insurance during the policy period. As they have very little expertise and infrastructure for deploying and monitoring machine learning models, they've asked you to identify the single feature that results in the best-performing model, as measured by accuracy, so they can start with a simple model in production.
They have supplied you with their customer data as a CSV file called `car_insurance.csv`, along with a table detailing the column names and descriptions below.
The dataset

| Column | Description |
|---|---|
| id | Unique client identifier |
| age | Client's age group (categorical) |
| gender | Client's gender (categorical) |
| driving_experience | Years the client has been driving (categorical) |
| education | Client's level of education (categorical) |
| income | Client's income level (categorical) |
| credit_score | Client's credit score (between zero and one) |
| vehicle_ownership | Client's vehicle ownership status (categorical) |
| vehicle_year | Year of vehicle registration (categorical) |
| married | Client's marital status (categorical) |
| children | Client's number of children |
| postal_code | Client's postal code |
| annual_mileage | Number of miles driven by the client each year |
| vehicle_type | Type of car (categorical) |
| speeding_violations | Total number of speeding violations received by the client |
| duis | Number of times the client has been caught driving under the influence of alcohol |
| past_accidents | Total number of previous accidents the client has been involved in |
| outcome | Whether the client made a claim on their car insurance (response variable) |
# Import required modules
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from statsmodels.formula.api import logit
# Start coding!

Reading in and exploring the dataset

df = pd.read_csv("car_insurance.csv")
# Count missing values in each column
df.isna().sum()
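Before imputing anything, it also helps to confirm the columns line up with the data dictionary above. A minimal sketch (the dtypes printed are whatever `read_csv` inferred, nothing is assumed about them):

# Quick structural check: shape, dtypes, and a few sample rows
print(df.shape)
print(df.dtypes)
print(df.head())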
# Plot the distribution of the columns with missing data
sns.displot(data=df["credit_score"])
sns.displot(data=df["annual_mileage"])
plt.show()

Both columns (credit_score and annual_mileage) are approximately normally distributed.
Therefore, it's reasonable to replace the missing values with the mean of each column.
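As a quick numerical cross-check of that visual impression, skewness near zero supports mean imputation (a minimal sketch; treating values close to zero as "roughly symmetric" is a rule of thumb, not part of the brief):

# Skewness close to zero is consistent with a roughly normal distribution
print(df[["credit_score", "annual_mileage"]].skew())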
Filling Missing Values
# Replace the missing values with the mean of each column
# (column assignment avoids the deprecated chained fillna(inplace=True) pattern)
df["credit_score"] = df["credit_score"].fillna(df["credit_score"].mean())
df["annual_mileage"] = df["annual_mileage"].fillna(df["annual_mileage"].mean())
# Check for null values one more time
print("Null values in both columns:", df["credit_score"].isna().sum(), df["annual_mileage"].isna().sum())

Preparing for Modeling
We are trying to predict whether a customer will make a claim or not.
Since the outcome is binary, we can encode the customer's behaviour as:
- 1: Made a claim
- 0: Did not make a claim
Because the prediction is either 0 or 1, we will use logistic regression, with the following details:
- Response variable: "outcome"
- Explanatory variables: all the available features, one at a time
A separate model is fitted for each feature; our aim is to find the single feature that best predicts the outcome. The sketch after this paragraph shows what one such fit looks like, and the loop below then repeats it for every feature.
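For intuition, here is one single-feature fit on its own (a minimal sketch; credit_score is just an arbitrary example feature, and `disp=0` silences the optimizer's convergence printout):

# Fit outcome ~ credit_score and inspect the coefficient table
example_model = logit("outcome ~ credit_score", data=df).fit(disp=0)
print(example_model.summary())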
# Create a list to hold the fitted models
models = []
features = df.drop(columns=["id", "outcome"]).columns
# Loop through all the features
for feature in features:
    # Create a model with each feature
    model = logit(f"outcome ~ {feature}", data=df).fit()
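    # (Optionally pass disp=0 to fit() to silence the per-model convergence output)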
    # Add each model to the list
    models.append(model)

Assessing Accuracy
# Create a list to store the accuracy scores
accuracies = []
# Loop through the fitted models
for model in models:
    # Compute the confusion matrix (rows: actual, columns: predicted)
    conf_matrix = model.pred_table()
    # True negatives
    tn = conf_matrix[0, 0]
    # True positives
    tp = conf_matrix[1, 1]
    # False negatives
    fn = conf_matrix[1, 0]
    # False positives
    fp = conf_matrix[0, 1]
    # Compute accuracy
    acc = (tn + tp) / (tn + fn + fp + tp)
    accuracies.append(acc)
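Before picking the winner, it can be worth eyeballing all the scores side by side; a minimal sketch using the `features` and `accuracies` pairs built above:

# Rank features by single-feature model accuracy, best first
print(pd.Series(accuracies, index=features).sort_values(ascending=False))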
Final Answer

# Store the best feature
best_feature = features[accuracies.index(max(accuracies))]
# Store the answer as a single-row DataFrame
best_feature_df = pd.DataFrame(
    {"best_feature": best_feature, "best_accuracy": max(accuracies)},
    index=[0],
)
print(best_feature_df)
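As a final sanity check (an optional extra step, not requested in the brief), the winning single-feature model can be refitted and its coefficients reviewed before handing it over:

# Refit the best single-feature model and review its summary
best_model = logit(f"outcome ~ {best_feature}", data=df).fit(disp=0)
print(best_model.summary())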