Insurance companies invest a lot of time and money into optimizing their pricing and accurately estimating the likelihood that customers will make a claim. In many countries it is a legal requirement to have car insurance in order to drive a vehicle on public roads, so the market is very large!

(Source: https://www.accenture.com/_acnmedia/pdf-84/accenture-machine-leaning-insurance.pdf)

Knowing all of this, On the Road car insurance has requested your services in building a model to predict whether a customer will make a claim on their insurance during the policy period. As they have very little expertise and infrastructure for deploying and monitoring machine learning models, they've asked you to identify the single feature that results in the best-performing model, as measured by accuracy, so they can start with a simple model in production.

They have supplied you with their customer data as a CSV file called car_insurance.csv, along with a table detailing the column names and descriptions below.

The dataset

Column - Description

id - Unique client identifier
age - Client's age:
  • 0: 16-25
  • 1: 26-39
  • 2: 40-64
  • 3: 65+
gender - Client's gender:
  • 0: Female
  • 1: Male
driving_experience - Years the client has been driving:
  • 0: 0-9
  • 1: 10-19
  • 2: 20-29
  • 3: 30+
education - Client's level of education:
  • 0: No education
  • 1: High school
  • 2: University
income - Client's income level:
  • 0: Poverty
  • 1: Working class
  • 2: Middle class
  • 3: Upper class
credit_score - Client's credit score (between zero and one)
vehicle_ownership - Client's vehicle ownership status:
  • 0: Does not own their vehicle (paying off finance)
  • 1: Owns their vehicle
vehicle_year - Year of vehicle registration:
  • 0: Before 2015
  • 1: 2015 or later
married - Client's marital status:
  • 0: Not married
  • 1: Married
children - Client's number of children
postal_code - Client's postal code
annual_mileage - Number of miles driven by the client each year
vehicle_type - Type of car:
  • 0: Sedan
  • 1: Sports car
speeding_violations - Total number of speeding violations received by the client
duis - Number of times the client has been caught driving under the influence of alcohol
past_accidents - Total number of previous accidents the client has been involved in
outcome - Whether the client made a claim on their car insurance (response variable):
  • 0: No claim
  • 1: Made a claim
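Because every categorical column is stored as an integer code, it can help during exploration to map codes back to the human-readable labels from the data dictionary. A minimal sketch, using hypothetical mapping dictionaries built from the table above and a small stand-in DataFrame (not the real car_insurance.csv):

```python
import pandas as pd

# Hypothetical label maps transcribed from the data dictionary above
age_labels = {0: "16-25", 1: "26-39", 2: "40-64", 3: "65+"}
income_labels = {0: "Poverty", 1: "Working class", 2: "Middle class", 3: "Upper class"}

# A small stand-in frame using the same integer encoding as the dataset
sample = pd.DataFrame({"age": [0, 2, 3], "income": [1, 3, 0]})

# Series.map() translates integer codes into readable labels for reports and plots
sample["age_label"] = sample["age"].map(age_labels)
sample["income_label"] = sample["income"].map(income_labels)
print(sample)
```

Keeping the model itself on the integer codes is fine; the labels are purely for readability.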
# Import required modules
import pandas as pd
import numpy as np
from statsmodels.formula.api import logit
# Reading the file
cars = pd.read_csv('car_insurance.csv')

# Display the first few rows
print("First few rows of the dataset:")
print(cars.head())

# Display the summary of the DataFrame
print("\nDataFrame info:")
print(cars.info())

# Check for missing values
print("\nMissing values in each column:")
print(cars.isnull().sum())

# Summary statistics for numeric columns
print("\nSummary statistics for numeric columns:")
print(cars.describe())
# credit_score and annual_mileage contain null values
# Fill missing values with the column mean (assignment avoids pandas
# chained-assignment warnings triggered by inplace=True on a column)
cars['credit_score'] = cars['credit_score'].fillna(cars['credit_score'].mean())
cars['annual_mileage'] = cars['annual_mileage'].fillna(cars['annual_mileage'].mean())

# Check if there are still any missing values
missing_values = cars.isnull().sum()

# Print missing values if any
print("Missing values after initial fill:\n", missing_values)
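Mean imputation can also be done for several columns in one call by passing a Series of column means to DataFrame.fillna. A minimal sketch on a tiny synthetic frame (stand-in values, not the real data):

```python
import numpy as np
import pandas as pd

# Tiny stand-in frame; the real columns with nulls are credit_score and annual_mileage
df = pd.DataFrame({
    "credit_score": [0.5, np.nan, 0.7],
    "annual_mileage": [12000.0, 10000.0, np.nan],
})

# A Series of per-column means; fillna matches it to columns by index label
means = df[["credit_score", "annual_mileage"]].mean()
df = df.fillna(means)

print(df.isnull().sum().sum())  # no missing values remain
```

This is equivalent to filling each column separately, just more compact when many columns need the same treatment.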
# Empty list to store model results
models = []

# Feature columns
features = cars.drop(columns=['id','outcome']).columns

# Fit a separate logistic regression model for each feature to see how well
# it predicts the outcome on its own

# Loop through each column (feature) we're using to predict the outcome
for col in features:
    # Build a logistic regression model using just this one feature
    # (disp=0 suppresses the per-model convergence output)
    model = logit(f"outcome ~ {col}", data=cars).fit(disp=0)
    models.append(model)
# Create an empty list to store accuracy values for each model
accuracies = []

# Loop through each fitted model
for model in models:
    # Confusion matrix from pred_table(): rows are actual outcomes,
    # columns are the model's predictions
    conf_matrix = model.pred_table()

    # True negatives: model correctly predicted 0
    tn = conf_matrix[0, 0]
    # True positives: model correctly predicted 1
    tp = conf_matrix[1, 1]
    # False negatives: model predicted 0 but actual was 1
    fn = conf_matrix[1, 0]
    # False positives: model predicted 1 but actual was 0
    fp = conf_matrix[0, 1]

    # Accuracy: proportion of total predictions that were correct
    acc = (tn + tp) / (tn + fn + fp + tp)

    # Add the accuracy of the current model to the list
    accuracies.append(acc)

# Find the feature with the largest accuracy
best_feature = features[accuracies.index(max(accuracies))] 

# Create best_feature_df
best_feature_df = pd.DataFrame({"best_feature": best_feature,
                                "best_accuracy": max(accuracies)},
                                index=[0])
best_feature_df
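Since pred_table() returns a 2x2 NumPy array with correct predictions on the diagonal, the accuracy calculation above can be written more compactly with np.trace. A minimal sketch using made-up confusion-matrix counts (not results from the real data):

```python
import numpy as np

# Hypothetical confusion matrix in pred_table() layout:
# rows = actual class, columns = predicted class
conf_matrix = np.array([[700.0, 50.0],
                        [120.0, 130.0]])

# Accuracy = correct predictions (the diagonal) over all predictions
acc = np.trace(conf_matrix) / conf_matrix.sum()
print(acc)  # (700 + 130) / 1000 = 0.83
```

This is numerically identical to summing tn and tp by hand; it just avoids indexing each cell separately.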