
Insurance companies invest a lot of time and money into optimizing their pricing and accurately estimating the likelihood that customers will make a claim. In many countries it is a legal requirement to have car insurance in order to drive a vehicle on public roads, so the market is very large!

Knowing all of this, On the Road car insurance have requested your services in building a model to predict whether a customer will make a claim on their insurance during the policy period. As they have very little expertise and infrastructure for deploying and monitoring machine learning models, they've asked you to use simple Logistic Regression, identifying the single feature that results in the best performing model, as measured by accuracy.

They have supplied you with their customer data as a CSV file called car_insurance.csv, along with a table detailing the column names and descriptions below.

The dataset

id: Unique client identifier
age: Client's age
  • 0: 16-25
  • 1: 26-39
  • 2: 40-64
  • 3: 65+
gender: Client's gender
  • 0: Female
  • 1: Male
driving_experience: Years the client has been driving
  • 0: 0-9
  • 1: 10-19
  • 2: 20-29
  • 3: 30+
education: Client's level of education
  • 0: No education
  • 1: High school
  • 2: University
income: Client's income level
  • 0: Poverty
  • 1: Working class
  • 2: Middle class
  • 3: Upper class
credit_score: Client's credit score (between zero and one)
vehicle_ownership: Client's vehicle ownership status
  • 0: Does not own their vehicle (paying off finance)
  • 1: Owns their vehicle
vehicle_year: Year of vehicle registration
  • 0: Before 2015
  • 1: 2015 or later
married: Client's marital status
  • 0: Not married
  • 1: Married
children: Client's number of children
postal_code: Client's postal code
annual_mileage: Number of miles driven by the client each year
vehicle_type: Type of car
  • 0: Sedan
  • 1: Sports car
speeding_violations: Total number of speeding violations received by the client
duis: Number of times the client has been caught driving under the influence of alcohol
past_accidents: Total number of previous accidents the client has been involved in
outcome: Whether the client made a claim on their car insurance (response variable)
  • 0: No claim
  • 1: Made a claim
# Import required modules
import pandas as pd
import numpy as np

# Start coding!
# Load the data and take a first look
df = pd.read_csv('car_insurance.csv')
df.info()
print(df.describe())
print(df.head(10))
print(df[['age', 'driving_experience']])

# Count missing values per column
df.isnull().sum()

We can see that annual_mileage and credit_score each have ~1,000 missing values. We can either drop the rows with missing values or impute them with the column mean. I'll keep both versions as df_impt and df_drop and study the effect of each on the results.

# Keep two versions of the data: one with rows dropped, one with means imputed
df_drop = df.copy()
df_impt = df.copy()

# Impute missing values with the column mean
df_impt['credit_score'] = df_impt['credit_score'].fillna(df_impt['credit_score'].mean())
df_impt['annual_mileage'] = df_impt['annual_mileage'].fillna(df_impt['annual_mileage'].mean())
df_impt.isnull().sum()

# Drop rows with any missing values
df_drop = df_drop.dropna()
df_drop.isnull().sum()

# Create dummy variables for the categorical columns
df_drop = pd.get_dummies(df_drop, columns=['driving_experience', 'education', 'income', 'vehicle_year', 'vehicle_type'])
df_impt = pd.get_dummies(df_impt, columns=['driving_experience', 'education', 'income', 'vehicle_year', 'vehicle_type'])
df_drop.info()
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Since the brief asks for the single best feature, I'll use a loop to fit a
# one-feature model for each column first. Then I'll use l1 regularisation and
# check if the two approaches match, or at least give similar results.

# Split the data into training and testing sets, starting with the dropped set
X_drop = df_drop.drop('outcome', axis=1)
y_drop = df_drop['outcome']
X_train, X_test, y_train, y_test = train_test_split(X_drop, y_drop, test_size=0.2,random_state=42)

# Initialize an empty dictionary to store feature accuracies
feature_accuracies = {}

# Iterate over each feature
for feature in X_train.columns:
    # Create a Logistic Regression model
    model = LogisticRegression()
    
    # Train the model
    model.fit(X_train[[feature]], y_train)
    
    # Make predictions on the test set
    y_pred = model.predict(X_test[[feature]])
    
    # Calculate accuracy
    accuracy = accuracy_score(y_test, y_pred)
    
    # Store the accuracy in the dictionary
    feature_accuracies[feature] = accuracy

best_feature = max(feature_accuracies, key=feature_accuracies.get)
best_accuracy = feature_accuracies[best_feature]
best_feature_df = pd.DataFrame({'best_feature': [best_feature], 'best_accuracy': [best_accuracy]})
print(best_feature)
print(feature_accuracies)

Based on the simple one-feature logistic regressions, age appears to be the best feature for predicting whether or not a client will make a claim.
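To double-check that conclusion, it helps to sort the stored accuracies rather than eyeball the printed dictionary. A quick sketch reusing the feature_accuracies dict from the loop above:

# Rank the single-feature accuracies from best to worst
ranking = pd.Series(feature_accuracies).sort_values(ascending=False)
print(ranking.head())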

# Split the data into training and testing sets, this time with the imputed set
X_impt = df_impt.drop('outcome', axis=1)
y_impt = df_impt['outcome']
X_train, X_test, y_train, y_test = train_test_split(X_impt, y_impt, test_size=0.2,random_state=42)

# Initialize an empty dictionary to store feature accuracies
feature_accuracies = {}

# Iterate over each feature
for feature in X_train.columns:
    # Create a Logistic Regression model
    model = LogisticRegression()
    
    # Train the model
    model.fit(X_train[[feature]], y_train)
    
    # Make predictions on the test set
    y_pred = model.predict(X_test[[feature]])
    
    # Calculate accuracy
    accuracy = accuracy_score(y_test, y_pred)
    
    # Store the accuracy in the dictionary
    feature_accuracies[feature] = accuracy

best_feature = max(feature_accuracies, key=feature_accuracies.get)
best_accuracy = feature_accuracies[best_feature]
best_feature_df = pd.DataFrame({'best_feature': [best_feature], 'best_accuracy': [best_accuracy]})
best_feature_df.head()
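To actually study the effect of the two missing-value strategies side by side, the per-feature loop can be wrapped in a helper and run on both DataFrames. A minimal sketch, where evaluate_features is a hypothetical helper name (not part of the brief), reusing the imports above:

def evaluate_features(data, target='outcome', seed=42):
    # Fit a one-feature logistic regression per column; return sorted test accuracies
    X = data.drop(target, axis=1)
    y = data[target]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=seed)
    accs = {}
    for feature in X_tr.columns:
        model = LogisticRegression()
        model.fit(X_tr[[feature]], y_tr)
        accs[feature] = accuracy_score(y_te, model.predict(X_te[[feature]]))
    return pd.Series(accs).sort_values(ascending=False)

# Compare the top features under both preprocessing strategies
print(evaluate_features(df_drop).head(3))
print(evaluate_features(df_impt).head(3))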

Now let's build a more complex model using L1 regularization.

from sklearn.model_selection import GridSearchCV

X_drop = df_drop.drop('outcome', axis=1)
y_drop = df_drop['outcome']
X_train, X_test, y_train, y_test = train_test_split(X_drop, y_drop, test_size=0.2, random_state=42)


# Define the parameter grid
# Note: only the 'liblinear' and 'saga' solvers support the l1 penalty
param_grid = {
    'C': [0.001, 0.01, 0.1, 1, 10, 100],
    'penalty': ['l1'],  # we only look at l1 regularisation
    'solver': ['liblinear', 'saga'],
    'max_iter': [100, 500, 1000]
}

# Perform grid search with a fresh estimator (not the last single-feature model)
grid_search = GridSearchCV(LogisticRegression(), param_grid, cv=5)
grid_search.fit(X_train, y_train)

# Get the best parameters and best score
best_params = grid_search.best_params_
best_score = grid_search.best_score_

print("Best Parameters:", best_params)
print("Best Score:", best_score)
    
# Refit the model with the best parameters found by the grid search
bestmodel = LogisticRegression(C=10, max_iter=500, penalty='l1', solver='liblinear')
bestmodel.fit(X_train, y_train)  # train on the full training set
print(bestmodel.coef_[0])

# Identify the feature with the largest absolute coefficient
# (use X_train.columns, not df_drop.columns, which still contain 'outcome')
best_coef = np.max(np.abs(bestmodel.coef_[0]))
print(X_train.columns[np.abs(bestmodel.coef_[0]) == best_coef])
vals = dict(zip(X_train.columns, bestmodel.coef_[0]))
print(vals)
valdf = pd.DataFrame(vals.items(), columns=['Parameter', 'Coef in model'])
print(valdf.head())

We get a best cross-validated accuracy of ~85%.
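That figure comes from the grid search's cross-validation on the training folds; the tuned model hasn't been scored on the held-out split yet. A minimal sketch to close that gap, reusing bestmodel and the X_test/y_test split from above:

# Evaluate the tuned L1 model on the held-out test set
y_pred = bestmodel.predict(X_test)
print('Test accuracy:', accuracy_score(y_test, y_pred))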