Insurance companies invest a lot of time and money into optimizing their pricing and accurately estimating the likelihood that customers will make a claim. In many countries it is a legal requirement to have car insurance in order to drive a vehicle on public roads, so the market is very large!
Knowing all of this, On the Road car insurance have requested your services in building a model to predict whether a customer will make a claim on their insurance during the policy period. As they have very little expertise and infrastructure for deploying and monitoring machine learning models, they've asked you to identify the single feature that results in the best performing model, as measured by accuracy, so they can start with a simple model in production.
They have supplied you with their customer data as a CSV file called `car_insurance.csv`, along with a table detailing the column names and descriptions below.
The dataset
| Column | Description |
|---|---|
| id | Unique client identifier |
| age | Client's age |
| gender | Client's gender |
| driving_experience | Years the client has been driving |
| education | Client's level of education |
| income | Client's income level |
| credit_score | Client's credit score (between zero and one) |
| vehicle_ownership | Client's vehicle ownership status |
| vehicle_year | Year of vehicle registration |
| married | Client's marital status |
| children | Client's number of children |
| postal_code | Client's postal code |
| annual_mileage | Number of miles driven by the client each year |
| vehicle_type | Type of car |
| speeding_violations | Total number of speeding violations received by the client |
| duis | Number of times the client has been caught driving under the influence of alcohol |
| past_accidents | Total number of previous accidents the client has been involved in |
| outcome | Whether the client made a claim on their car insurance (response variable) |
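Before modelling, it is worth confirming how each of these columns is actually stored in the file (numeric, integer-coded category, or string). Below is a minimal inspection sketch; it assumes the CSV column names match the table above, and the `driving_experience` check is just an illustrative example.

# Sketch: inspect how the columns are stored (assumes car_insurance.csv matches the table above)
import pandas as pd

df_preview = pd.read_csv("car_insurance.csv")
print(df_preview.dtypes)                                  # storage type of each column
print(df_preview.nunique().sort_values())                 # number of distinct values per column
print(df_preview["driving_experience"].value_counts())    # example: labels used by one categorical column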
# Import required modules
import pandas as pd
import numpy as np
from statsmodels.formula.api import logit

# Start coding!
# Create a pandas DataFrame from the CSV file
df = pd.read_csv('car_insurance.csv')

# Preview the DataFrame
df.head()

# Examine the data types
df.info()

# Get a summary of the DataFrame
df.describe()

# Check for missing values
df.isna().sum()

# Fill in the missing values for credit_score with the column mean
df['credit_score'] = df['credit_score'].fillna(df['credit_score'].mean())

# Fill in the missing values for annual_mileage with the column mean
df['annual_mileage'] = df['annual_mileage'].fillna(df['annual_mileage'].mean())

# Recheck for missing values
df.isna().sum()
# Create an empty list to store the fitted models
models = []

# Candidate features: every column except the response (outcome) and the id
features = df.drop(columns=["outcome", "id"])
# Loop through each feature and create a Logistic Regression model
for col in features:
# Fit the model using the current feature and the outcome
formula = f'outcome ~ {col}'
model = logit(formula, data=df).fit()
# Append the model to the models list
models.append(model)
# print the name of the feature being processed
print(f"Model for {col} created and fitted.")# Create an empty list to store model accuracies
# Create an empty list to store model accuracies
accuracies = []

# Initialize lists to store confusion matrices and individual metrics
confusion_matrices = []
true_negatives = []
true_positives = []
false_negatives = []
false_positives = []

# Iterate over the fitted models by index
for i in range(len(models)):
    # Access the model at index i
    model = models[i]
    # Use the pred_table() method to get the confusion matrix
    # (rows are actual values, columns are predicted values at a 0.5 threshold)
    conf_matrix = model.pred_table()
    # Store the confusion matrix
    confusion_matrices.append(conf_matrix)
    # Extract the individual counts
    tn = conf_matrix[0, 0]
    tp = conf_matrix[1, 1]
    fn = conf_matrix[1, 0]
    fp = conf_matrix[0, 1]
    # Store the individual counts
    true_negatives.append(tn)
    true_positives.append(tp)
    false_negatives.append(fn)
    false_positives.append(fp)
    # Compute accuracy and store it
    accuracy = (tn + tp) / (tn + tp + fn + fp)
    accuracies.append(accuracy)
# Find the index of the model with the highest accuracy
max_accuracy_index = accuracies.index(max(accuracies))
print(f"Index of the model with the highest accuracy: {max_accuracy_index}")
print(f"Highest accuracy: {accuracies[max_accuracy_index]}")
# Map the highest accuracy to the feature
best_feature = features.columns[max_accuracy_index]
best_accuracy = accuracies[max_accuracy_index]
# Create a pandas DataFrame with the best feature and its accuracy
best_feature_df = pd.DataFrame(
    {"best_feature": [best_feature], "best_accuracy": [best_accuracy]},
    index=[0],
)
print(best_feature_df)
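If On the Road want to take the winning single-feature model further, the same formula can be refitted and used to generate predictions. The following is a minimal sketch, assuming `df`, `best_feature`, and `logit` from the steps above are still in scope, and using the same 0.5 threshold that pred_table() applies by default.

# Sketch: refit the winning single-feature model and predict with it
# (assumes df, best_feature, and logit from the steps above are in scope)
best_model = logit(f"outcome ~ {best_feature}", data=df).fit()

# Predicted probability of a claim for each client, then a 0/1 label at a 0.5 threshold
pred_prob = best_model.predict(df)
pred_label = (pred_prob >= 0.5).astype(int)

# Accuracy computed directly from the predictions; this should match the
# pred_table() accuracy reported above
print((pred_label == df["outcome"]).mean())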