Insurance companies invest a lot of time and money into optimizing their pricing and accurately estimating the likelihood that customers will make a claim. In many countries it is a legal requirement to have car insurance in order to drive a vehicle on public roads, so the market is very large!
Knowing all of this, On the Road car insurance have requested your services in building a model to predict whether a customer will make a claim on their insurance during the policy period. As they have very little expertise and infrastructure for deploying and monitoring machine learning models, they've asked you to identify the single feature that results in the best performing model, as measured by accuracy, so they can start with a simple model in production.
They have supplied you with their customer data as a CSV file called car_insurance.csv, along with a table detailing the column names and descriptions below.
The dataset
Column | Description |
---|---|
id | Unique client identifier |
age | Client's age |
gender | Client's gender |
driving_experience | Years the client has been driving |
education | Client's level of education |
income | Client's income level |
credit_score | Client's credit score (between zero and one) |
vehicle_ownership | Client's vehicle ownership status |
vehicle_year | Year of vehicle registration |
married | Client's marital status |
children | Client's number of children |
postal_code | Client's postal code |
annual_mileage | Number of miles driven by the client each year |
vehicle_type | Type of car |
speeding_violations | Total number of speeding violations received by the client |
duis | Number of times the client has been caught driving under the influence of alcohol |
past_accidents | Total number of previous accidents the client has been involved in |
outcome | Whether the client made a claim on their car insurance (response variable) |
Project Summary
This project analyzes a car insurance dataset to identify the single feature that best predicts whether a client will make a claim. The steps below cover data preprocessing (imputing missing values), fitting a univariate logistic regression model for each candidate feature, and comparing the models' in-sample accuracies to find the strongest single predictor.
# Import required modules
import pandas as pd
import numpy as np
from statsmodels.formula.api import logit
# Start coding!
# Read the car insurance data from a CSV file into a DataFrame
car_insurance = pd.read_csv('car_insurance.csv')
# Display the DataFrame
car_insurance
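# Optional quick inspection before modelling (a small sketch, not required for the steps below):
# it surfaces the column dtypes, which columns contain missing values, and the encoded
# categories of one of the categorical columns.
car_insurance.info()
# Count missing values per column
print(car_insurance.isna().sum())
# Example: inspect the category codes used in 'driving_experience'
print(car_insurance['driving_experience'].unique())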
# Convert the DataFrame columns to a list
car_insurance_cols = car_insurance.columns.to_list()
# Print the list of column names
print(car_insurance_cols)
# Fill missing values in 'credit_score' with the column's mean
# (assignment is used instead of inplace=True on a column selection, which newer pandas versions warn about)
car_insurance['credit_score'] = car_insurance['credit_score'].fillna(car_insurance['credit_score'].mean())
# Fill missing values in 'annual_mileage' with the column's mean
car_insurance['annual_mileage'] = car_insurance['annual_mileage'].fillna(car_insurance['annual_mileage'].mean())
# Print a message to confirm the action
print("Missing values in 'credit_score' and 'annual_mileage' columns have been replaced with the respective column means.")
# Generate descriptive statistics for the car_insurance DataFrame
car_insurance.describe()
# Get the candidate feature columns by dropping 'id' and 'outcome' from the DataFrame
car_features = car_insurance.drop(columns=['id', 'outcome']).columns
# Convert the Index object containing the feature names to a list
car_features_list = car_features.tolist()
# Print the list of features to verify the result
print(car_features_list)
# List to store the fitted logistic regression models
models = []
# Iterate over each candidate feature
for col in car_features:
    # Formula for a univariate logistic regression, using the current feature to predict 'outcome'
    formula = f"outcome ~ {col}"
    # Create and fit the logistic regression model using the formula and the car_insurance DataFrame
    model = logit(formula, data=car_insurance).fit()
    # Append the fitted model to the 'models' list
    models.append(model)
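# Optional: inspect one of the fitted models to see its coefficients and fit statistics
# (index 0 simply picks the model for the first feature; any index would do)
print(models[0].summary())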
# Loop through the fitted models and calculate each one's in-sample accuracy
accuracies = []
for model in models:
    # Confusion matrix of in-sample predictions: rows are actual classes, columns are predicted classes
    conf_matrix = model.pred_table()
    # True negatives and true positives lie on the diagonal
    tn = conf_matrix[0, 0]
    tp = conf_matrix[1, 1]
    # False negatives and false positives lie off the diagonal
    fn = conf_matrix[1, 0]
    fp = conf_matrix[0, 1]
    # Compute accuracy: (TN + TP) / (TN + FN + FP + TP)
    acc = (tn + tp) / (tn + fn + fp + tp)
    # Append the computed accuracy to the 'accuracies' list
    accuracies.append(acc)
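# Optional: pair each feature with its accuracy to see the full ranking rather than only the best model
feature_accuracies = pd.DataFrame({"feature": car_features_list, "accuracy": accuracies})
print(feature_accuracies.sort_values("accuracy", ascending=False))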
# Find the index of the model with the highest accuracy
best_index = accuracies.index(max(accuracies))
# Get the corresponding feature name and accuracy
best_feature = car_features_list[best_index]
best_accuracy = accuracies[best_index]
# Create a single-row DataFrame summarizing the best feature and its accuracy
best_feature_df = pd.DataFrame({
    "best_feature": [best_feature],
    "best_accuracy": [best_accuracy]
})
# Display the best performing model results
print("Best performing model results:")
print(best_feature_df)
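# Optional cross-check (a small illustrative sketch): refit the best single-feature model
# and confirm that its in-sample accuracy matches the value reported above.
best_model = logit(f"outcome ~ {best_feature}", data=car_insurance).fit()
best_conf = best_model.pred_table()
print((best_conf[0, 0] + best_conf[1, 1]) / best_conf.sum())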