Insurance companies invest a lot of time and money into optimizing their pricing and accurately estimating the likelihood that customers will make a claim. In many countries it is a legal requirement to have car insurance in order to drive a vehicle on public roads, so the market is very large!
(Source: https://www.accenture.com/_acnmedia/pdf-84/accenture-machine-leaning-insurance.pdf)
Knowing all of this, On the Road car insurance has requested your services in building a model to predict whether a customer will make a claim on their insurance during the policy period. As they have very little expertise and infrastructure for deploying and monitoring machine learning models, they've asked you to identify the single feature that results in the best-performing model, as measured by accuracy, so they can start with a simple model in production.
They have supplied you with their customer data as a csv file called car_insurance.csv, along with a table detailing the column names and descriptions below.
The dataset
| Column | Description |
|---|---|
| id | Unique client identifier |
| age | Client's age |
| gender | Client's gender |
| driving_experience | Years the client has been driving |
| education | Client's level of education |
| income | Client's income level |
| credit_score | Client's credit score (between zero and one) |
| vehicle_ownership | Client's vehicle ownership status |
| vehicle_year | Year of vehicle registration |
| married | Client's marital status |
| children | Client's number of children |
| postal_code | Client's postal code |
| annual_mileage | Number of miles driven by the client each year |
| vehicle_type | Type of car |
| speeding_violations | Total number of speeding violations received by the client |
| duis | Number of times the client has been caught driving under the influence of alcohol |
| past_accidents | Total number of previous accidents the client has been involved in |
| outcome | Whether the client made a claim on their car insurance (response variable) |
# import required modules
import pandas as pd
import numpy as np
from statsmodels.formula.api import logit
import matplotlib.pyplot as plt
# Start coding!
# importing dataset
car_insurance_df = pd.read_csv('car_insurance.csv')
car_insurance_df.head()
# get info
print(car_insurance_df.info())
print(car_insurance_df.shape)
Dealing with Missing Data
Missing data is a problem because it can distort the distributions of values, making the sample less representative of the population under study; certain groups may be disproportionately represented. Missing data can therefore lead to drawing wrong conclusions.
Missing data can be addressed by:
- Dropping missing values if they form 5% or less of the total values
- Imputing with the mean, median, or mode, depending on the distribution and context
- Imputing by subgroup
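The third option, imputing by subgroup, is not demonstrated in the code below, so here is a minimal sketch on a made-up toy frame (column values are assumptions, not taken from car_insurance.csv): each missing annual_mileage is filled with the median of its own income subgroup rather than a global value.

```python
import pandas as pd
import numpy as np

# Toy data for illustration only; the income labels and mileages are invented
df = pd.DataFrame({
    "income": ["upper class", "upper class", "poverty", "poverty", "poverty"],
    "annual_mileage": [12000.0, np.nan, 16000.0, 14000.0, np.nan],
})

# Group-wise median imputation: each NaN is replaced by the median
# of its own income subgroup rather than the overall median
df["annual_mileage"] = (
    df.groupby("income")["annual_mileage"]
      .transform(lambda s: s.fillna(s.median()))
)

print(df)
```

The `transform` call returns a series aligned to the original index, which is what makes the assignment back into the column safe.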
# number of missing values in each column
print(car_insurance_df.isna().sum())
# plotting missing values
car_insurance_df.isna().sum().plot(kind='bar')
plt.show()
# check if number of missing values is greater than threshold
threshold = len(car_insurance_df) * 0.05
print(threshold)
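The 5% rule from the list above can also be applied programmatically: compare each column's missing count to the threshold and sort columns into "safe to drop rows" versus "better to impute". A small sketch on synthetic data (the column names and missing counts are assumptions for illustration):

```python
import pandas as pd
import numpy as np

# Toy frame: 100 rows, one column with 3% missing (droppable),
# one with 20% missing (a candidate for imputation instead)
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "credit_score": rng.random(100),
    "annual_mileage": rng.random(100) * 20000,
})
df.loc[:2, "credit_score"] = np.nan      # 3 missing values (3%)
df.loc[:19, "annual_mileage"] = np.nan   # 20 missing values (20%)

threshold = len(df) * 0.05               # 5% of the rows
missing = df.isna().sum()

# Columns at or under the threshold: dropping those rows loses little data;
# columns over the threshold: imputation preserves more of the sample
to_drop_rows = missing[missing <= threshold].index.tolist()
to_impute = missing[missing > threshold].index.tolist()
print(to_drop_rows, to_impute)
```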
# calculate mean values of the credit score and annual mileage columns
mean_credit_score = car_insurance_df['credit_score'].mean()
mean_annual_mileage = car_insurance_df['annual_mileage'].mean()
# replacing missing values in the two columns with the calculated mean values
car_insurance_df['credit_score'] = car_insurance_df['credit_score'].fillna(mean_credit_score)
car_insurance_df['annual_mileage'] = car_insurance_df['annual_mileage'].fillna(mean_annual_mileage)
print(car_insurance_df.isna().sum())
# empty list for the models
model_list = []
# defining features
features = car_insurance_df.drop(columns=['id', 'outcome']).columns
features
Logistic Regression
A logistic regression model is a type of generalized linear model, used when the response variable is binary (here, whether or not the client made a claim).
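Before fitting one model per feature, it may help to see what a single `logit` fit looks like in isolation. A minimal sketch on synthetic data (the variable names and probabilities below are invented for illustration):

```python
import numpy as np
import pandas as pd
from statsmodels.formula.api import logit

# Toy data: a binary predictor x and a binary outcome whose
# probability depends on x, so the model has real signal to find
rng = np.random.default_rng(42)
df = pd.DataFrame({"x": rng.integers(0, 2, 200)})
df["outcome"] = rng.binomial(1, np.where(df["x"] == 1, 0.8, 0.2))

# disp=0 suppresses the per-iteration optimizer output
model = logit("outcome ~ x", data=df).fit(disp=0)
print(model.params)        # intercept and slope on the log-odds scale
print(model.pred_table())  # 2x2 confusion matrix at the 0.5 threshold
```

`pred_table()` is what the accuracy loop below relies on: rows are observed classes, columns are predicted classes, so its diagonal holds the correct predictions.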
# Loop through features
for col in features:
    # Fit a single-feature logistic regression model
    model = logit(f"outcome ~ {col}", data=car_insurance_df).fit()
    # Add each model to the models list
    model_list.append(model)
# Empty list to store accuracies
accuracies = []
# Loop through models
for model in model_list:
    # Confusion matrix: rows are observed classes, columns predicted
    conf_matrix = model.pred_table()
    # True negatives
    tn = conf_matrix[0, 0]
    # True positives
    tp = conf_matrix[1, 1]
    # False negatives
    fn = conf_matrix[1, 0]
    # False positives
    fp = conf_matrix[0, 1]
    # Compute accuracy: correct predictions over all predictions
    acc = (tn + tp) / (tn + fn + fp + tp)
    accuracies.append(acc)
# feature with the best accuracy
best_feature = features[accuracies.index(max(accuracies))]
# create best_feature_df
best_feature_df = pd.DataFrame({"best_feature": best_feature,
"best_accuracy": max(accuracies)},
index=[0])
best_feature_df
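Since the client may also want to know how close the runners-up are, a ranking of every feature by accuracy can be built the same way. A sketch with invented example values (the feature names and accuracies below are hypothetical, not results from the fitted models):

```python
import pandas as pd

# Hypothetical values standing in for the real features/accuracies lists
example_features = ["driving_experience", "age", "income"]
example_accuracies = [0.78, 0.71, 0.69]

# Pair each feature with its accuracy and sort from best to worst
ranking = (
    pd.DataFrame({"feature": example_features,
                  "accuracy": example_accuracies})
      .sort_values("accuracy", ascending=False)
      .reset_index(drop=True)
)
print(ranking)
```

Applied to the real `features` and `accuracies` lists, the top row of this table would match `best_feature_df`.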