Insurance companies invest a lot of time and money into optimizing their pricing and accurately estimating the likelihood that customers will make a claim. In many countries it is a legal requirement to have car insurance in order to drive a vehicle on public roads, so the market is very large!
Knowing all of this, On the Road car insurance has requested your services in building a model to predict whether a customer will make a claim on their insurance during the policy period. As they have very little expertise and infrastructure for deploying and monitoring machine learning models, they've asked you to use simple Logistic Regression, identifying the single feature that results in the best-performing model, as measured by accuracy.
They have supplied you with their customer data as a CSV file called car_insurance.csv, along with a table detailing the column names and descriptions below.
The dataset
| Column | Description |
|---|---|
| id | Unique client identifier |
| age | Client's age |
| gender | Client's gender |
| driving_experience | Years the client has been driving |
| education | Client's level of education |
| income | Client's income level |
| credit_score | Client's credit score (between zero and one) |
| vehicle_ownership | Client's vehicle ownership status |
| vehicle_year | Year of vehicle registration |
| married | Client's marital status |
| children | Client's number of children |
| postal_code | Client's postal code |
| annual_mileage | Number of miles driven by the client each year |
| vehicle_type | Type of car |
| speeding_violations | Total number of speeding violations received by the client |
| duis | Number of times the client has been caught driving under the influence of alcohol |
| past_accidents | Total number of previous accidents the client has been involved in |
| outcome | Whether the client made a claim on their car insurance (response variable) |
# Import required modules
import pandas as pd
import numpy as np
from statsmodels.formula.api import logit
# Start coding!
car_insurance = pd.read_csv("car_insurance.csv")
car_insurance.head()

# Install category_encoders for one-hot encoding the categorical features later on
!pip install category_encoders

car_insurance.info()

# Get the length of the dataframe
df_len = len(car_insurance)

# Iterate over each column and print the percentage of missing values
for col in car_insurance.columns:
    miss_percent = (car_insurance[col].isna().sum() / df_len) * 100
    print(col, miss_percent)

# Drop rows with missing values since the proportion of missing data is small
car_insurance.dropna(inplace=True)
car_insurance.info()
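# Optional sanity check: recompute the per-column missing percentages in one
# vectorized call; after the dropna above every value should be 0.0.
print(car_insurance.isna().mean() * 100)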
# Compute pairwise correlations between the numeric columns
corr_ = car_insurance.select_dtypes("number").corr()
display(corr_)
# Import seaborn and visualise the correlation matrix as a heatmap
import seaborn as sns
sns.heatmap(corr_)

# Get the column names
car_insurance.columns

# Split data into target vector (y) and feature matrix (X)
y = car_insurance["outcome"]
X = car_insurance.drop(columns=["outcome", "id", "duis", "postal_code"])
print(X.columns)

# Import modules for the predictive model
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from category_encoders import OneHotEncoder

# Extract and save the training and test data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Instantiate the one-hot encoder and logistic regression, then combine them in a pipeline
OHE = OneHotEncoder(use_cat_names=True)
logreg = LogisticRegression()
pipeline = make_pipeline(OHE, logreg)

# Fit the model on the training data and predict labels for the test set
pipeline.fit(X_train, y_train)
label = pipeline.predict(X_test)
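# Optional aside: if claim probabilities are wanted rather than hard labels, the
# fitted pipeline also exposes predict_proba; column 1 holds the probability of a
# claim (outcome = 1).
proba = pipeline.predict_proba(X_test)[:, 1]
proba[:5]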
# Preview the first few predicted labels
label[:5]

# Import the accuracy metric and evaluate the model on the test set
from sklearn.metrics import accuracy_score
score = accuracy_score(y_test, label)
score
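# The brief asks for the single feature that yields the best-performing model as
# measured by accuracy. A minimal sketch of that search, reusing the same
# encoder/estimator combination as above (max_iter=1000 is only an assumption to
# avoid convergence warnings, not a tuned setting):
single_feature_scores = {}
for col in X_train.columns:
    single_pipe = make_pipeline(OneHotEncoder(use_cat_names=True),
                                LogisticRegression(max_iter=1000))
    single_pipe.fit(X_train[[col]], y_train)
    # For classifiers, score() reports accuracy on the held-out data
    single_feature_scores[col] = single_pipe.score(X_test[[col]], y_test)

best_feature = max(single_feature_scores, key=single_feature_scores.get)
print(best_feature, single_feature_scores[best_feature])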