Insurance companies invest a lot of time and money into optimizing their pricing and accurately estimating the likelihood that customers will make a claim. In many countries it is a legal requirement to have car insurance in order to drive a vehicle on public roads, so the market is very large!
(Source: https://www.accenture.com/_acnmedia/pdf-84/accenture-machine-leaning-insurance.pdf)
Knowing all of this, On the Road car insurance have requested your services in building a model to predict whether a customer will make a claim on their insurance during the policy period. As they have very little expertise and infrastructure for deploying and monitoring machine learning models, they've asked you to identify the single feature that results in the best performing model, as measured by accuracy, so they can start with a simple model in production.
They have supplied you with their customer data as a CSV file called car_insurance.csv, along with a table detailing the column names and descriptions below.
The dataset
| Column | Description |
|---|---|
| id | Unique client identifier |
| age | Client's age group (ordered categories 0-3) |
| gender | Client's gender (binary) |
| driving_experience | Years the client has been driving (0-9y, 10-19y, 20-29y, 30y+) |
| education | Client's level of education (none, high school, university) |
| income | Client's income level (poverty, working class, middle class, upper class) |
| credit_score | Client's credit score (between zero and one) |
| vehicle_ownership | Client's vehicle ownership status (binary) |
| vehicle_year | Year of vehicle registration (before 2015, after 2015) |
| married | Client's marital status (binary) |
| children | Client's number of children |
| postal_code | Client's postal code |
| annual_mileage | Number of miles driven by the client each year |
| vehicle_type | Type of car (sedan, sports car) |
| speeding_violations | Total number of speeding violations received by the client |
| duis | Number of times the client has been caught driving under the influence of alcohol |
| past_accidents | Total number of previous accidents the client has been involved in |
| outcome | Whether the client made a claim on their car insurance (response variable) |
# Import required modules
import pandas as pd
import numpy as np
from statsmodels.formula.api import logit
car_insurance = pd.read_csv('car_insurance.csv')
car_insurance.head()
# Start coding!
The columns "credit_score" and "annual_mileage" have missing values: their non-null counts in the summary statistics are lower than the count of the "id" column.
car_insurance.describe()
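As a quick cross-check (a minimal sketch that only uses pandas calls already imported above), the per-column missing counts can also be printed directly:
# Count missing values per column; per the note above, only credit_score and annual_mileage should be non-zero
car_insurance.isna().sum()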
cols_missing = ['credit_score', 'annual_mileage']
for col in cols_missing:
    skewness = car_insurance[col].skew()
    if -0.25 <= skewness <= 0.25:  # near-symmetrical distribution: mean imputation
        print(f'{col} skewness: {skewness}\nMean imputation performed')
        car_insurance[col] = car_insurance[col].fillna(car_insurance[col].mean())
    else:  # moderately to highly skewed distribution: median imputation
        print(f'{col} skewness: {skewness}\nMedian imputation performed')
        car_insurance[col] = car_insurance[col].fillna(car_insurance[col].median())
car_insurance.describe()
car_insurance.info()
Check for inconsistencies in values
for col in car_insurance.columns:
    unique_values = car_insurance[col].unique()
    if len(unique_values) <= 50:
        print(f'{col} --> {unique_values}')
Remapping data types, values, and categorical ordering
replace_map = {
    'driving_experience': {'0-9y': 0, '10-19y': 1, '20-29y': 2, '30y+': 3},
    'education': {'none': 0, 'high school': 1, 'university': 2},
    'income': {'poverty': 0, 'working class': 1, 'middle class': 2, 'upper class': 3},
    'vehicle_year': {'before 2015': 0, 'after 2015': 1},
    'vehicle_type': {'sedan': 0, 'sports car': 1}
}
dtype_map = {
    'category': ['age', 'driving_experience', 'education', 'income'],
    'bool': ['gender', 'vehicle_ownership', 'vehicle_year', 'married', 'vehicle_type'],
    'int32': ['children', 'postal_code', 'speeding_violations', 'duis', 'past_accidents', 'outcome'],
    'float16': ['credit_score', 'annual_mileage']
}
orderedcategory_map = {
    'age': [0, 1, 2, 3],
    'driving_experience': [0, 1, 2, 3],
    'education': [0, 1, 2],
    'income': [0, 1, 2, 3]
}
for col, values_map in replace_map.items():
    car_insurance[col] = car_insurance[col].replace(values_map)
for dtype, cols in dtype_map.items():
    for col in cols:
        car_insurance[col] = car_insurance[col].astype(dtype)
for col, order in orderedcategory_map.items():
    dtype = pd.CategoricalDtype(categories=order, ordered=True)
    car_insurance[col] = car_insurance[col].astype(dtype)
car_insurance.info()
for col in car_insurance.columns:
    unique_values = car_insurance[col].unique()
    if len(unique_values) <= 50:
        print(f'{col} --> {unique_values}')
features = car_insurance.columns.drop(['id', 'outcome'])
features
models_list = []
for feature in features:
    # Fit a single-feature logistic regression and score it on the training data
    model = logit(f'outcome ~ {feature}', data=car_insurance).fit()
    CfMtx = model.pred_table()  # confusion matrix at the default 0.5 threshold
    accuracy = (CfMtx[0][0] + CfMtx[1][1]) / CfMtx.sum()
    models_list.append([feature, accuracy])
models_list_sorted = sorted(models_list, key=lambda x: x[1], reverse=True)
models_list_sorted
best_feature_df = pd.DataFrame({'best_feature': [models_list_sorted[0][0]],
                                'best_accuracy': [models_list_sorted[0][1]]})
best_feature_df
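As an optional sanity check (a minimal sketch, not part of the requested deliverable: it refits the winning single-feature model and reads the same accuracy off the diagonal of pred_table(), which uses a 0.5 threshold by default):
# Refit the best single-feature model and recompute its accuracy from the confusion matrix trace
best_feature = models_list_sorted[0][0]
best_model = logit(f'outcome ~ {best_feature}', data=car_insurance).fit()
best_cm = best_model.pred_table()
print(f'{best_feature} accuracy: {np.trace(best_cm) / best_cm.sum()}')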