
Insurance companies invest a lot of time and money into optimizing their pricing and accurately estimating the likelihood that customers will make a claim. In many countries it is a legal requirement to have car insurance in order to drive a vehicle on public roads, so the market is very large!

Knowing all of this, On the Road car insurance have requested your services in building a model to predict whether a customer will make a claim on their insurance during the policy period. As they have very little expertise and infrastructure for deploying and monitoring machine learning models, they've asked you to identify the single feature that results in the best-performing model, as measured by accuracy, so they can start with a simple model in production.

They have supplied you with their customer data as a CSV file called car_insurance.csv, along with a table detailing the column names and descriptions below.

The dataset

Column                 Description
id                     Unique client identifier
age                    Client's age:
                         • 0: 16-25
                         • 1: 26-39
                         • 2: 40-64
                         • 3: 65+
gender                 Client's gender:
                         • 0: Female
                         • 1: Male
driving_experience     Years the client has been driving:
                         • 0: 0-9
                         • 1: 10-19
                         • 2: 20-29
                         • 3: 30+
education              Client's level of education:
                         • 0: No education
                         • 1: High school
                         • 2: University
income                 Client's income level:
                         • 0: Poverty
                         • 1: Working class
                         • 2: Middle class
                         • 3: Upper class
credit_score           Client's credit score (between zero and one)
vehicle_ownership      Client's vehicle ownership status:
                         • 0: Does not own their vehicle (paying off finance)
                         • 1: Owns their vehicle
vehicle_year           Year of vehicle registration:
                         • 0: Before 2015
                         • 1: 2015 or later
married                Client's marital status:
                         • 0: Not married
                         • 1: Married
children               Client's number of children
postal_code            Client's postal code
annual_mileage         Number of miles driven by the client each year
vehicle_type           Type of car:
                         • 0: Sedan
                         • 1: Sports car
speeding_violations    Total number of speeding violations received by the client
duis                   Number of times the client has been caught driving under the influence of alcohol
past_accidents         Total number of previous accidents the client has been involved in
outcome                Whether the client made a claim on their car insurance (response variable):
                         • 0: No claim
                         • 1: Made a claim
# Import required modules
import pandas as pd
import numpy as np
from statsmodels.formula.api import logit

df = pd.read_csv("car_insurance.csv")

EDA

Check for missing values and data types.

df.info()
df.describe()
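For an explicit per-column tally of missing values, a quick check like the following can complement df.info() (a minimal sketch using pandas' standard isna/sum methods):

# Count missing values in each column
df.isna().sum()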

Handling Missing Values

In our dataset, we noticed that the credit_score and annual_mileage columns contain some missing values. To keep every row available for modeling, we will fill these missing values with the mean of their respective columns. Mean imputation is a simple baseline that leaves each column's mean unchanged, although it does slightly shrink the variance. After filling the missing values, we will display the dataframe information to confirm that the gaps have been addressed.

df["credit_score"].fillna(df["credit_score"].mean(), inplace=True)
df["annual_mileage"].fillna(df["annual_mileage"].mean(), inplace=True)

df.info()

Evaluating Feature Importance with Logistic Regression

To understand the predictive power of each feature in our dataset, we employed logistic regression models. Our goal was to determine how well each individual feature could predict the outcome.

We began by defining a function, cal_accuracy, to compute the accuracy from the confusion matrix. This function would serve as the backbone of our evaluation process.

Next, we iterated over each feature in the dataframe df, excluding the first column (id) and the last column (outcome). For each feature, we fitted a logistic regression model to predict the outcome. The accuracy of each model was then calculated and stored in a dictionary, models_accuracy.

This methodical approach allowed us to quantify the contribution of each feature, providing valuable insights into which attributes hold the most predictive power in our dataset.



models_accuracy = {}
features = df.columns[1:-1]  # every column except id (first) and outcome (last)

def cal_accuracy(array):
    """Compute accuracy from a 2x2 confusion matrix."""
    tn = array[0][0]  # true negatives
    fp = array[0][1]  # false positives
    fn = array[1][0]  # false negatives
    tp = array[1][1]  # true positives

    accuracy = (tn + tp) / (tn + fp + fn + tp)
    return accuracy

# Fit a one-feature logistic regression per feature and record its accuracy
for col in features:
    formula = "outcome ~ " + col
    model = logit(formula, df).fit()
    # pred_table() tabulates actual (rows) vs. predicted (columns) outcomes at a 0.5 threshold
    models_accuracy[col] = cal_accuracy(model.pred_table())

models_accuracy
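To make the ranking easier to scan, the accuracies can also be viewed in descending order (a minimal sketch; converting the dictionary to a pandas Series is just for convenient sorting):

# Rank features by their single-feature model accuracy
pd.Series(models_accuracy).sort_values(ascending=False)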

Identifying the Best Predictive Feature

After calculating the accuracy of each feature using logistic regression, our next step was to identify the single most predictive feature. To achieve this, we created a DataFrame to store the feature with the highest accuracy from our models_accuracy dictionary.

We began by initializing a variable, highest_accuracy, to zero. This variable keeps track of the highest accuracy encountered while iterating through the models_accuracy dictionary. As we examined each feature's accuracy, we compared it to the current highest_accuracy; whenever a feature's accuracy surpassed it, we updated highest_accuracy and remembered the feature. Once the loop completes, the winning feature and its accuracy are stored in a one-row DataFrame.

This systematic approach allowed us to pinpoint the feature with the greatest predictive power, providing a clear direction for further analysis and model refinement. Below is the code that facilitated this crucial step in our evaluation process:

# Initialize trackers for the best feature and its accuracy
highest_accuracy = 0
best_feature = None

# Iterate through the models_accuracy dictionary
for feature, accuracy in models_accuracy.items():
    if accuracy > highest_accuracy:
        highest_accuracy = accuracy
        best_feature = feature

# Store the winning feature and its accuracy in a one-row DataFrame
best_feature_df = pd.DataFrame({
    "best_feature": [best_feature],
    "best_accuracy": [highest_accuracy]
})

# Display the DataFrame with the best feature and its accuracy
best_feature_df
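Equivalently, the winning feature can be picked out without an explicit loop using Python's built-in max over the dictionary keys (a minimal sketch producing the same one-row DataFrame):

# Select the key whose accuracy value is largest
best_feature = max(models_accuracy, key=models_accuracy.get)
best_feature_df = pd.DataFrame({
    "best_feature": [best_feature],
    "best_accuracy": [models_accuracy[best_feature]]
})
best_feature_df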