Insurance companies invest a lot of time and money into optimizing their pricing and accurately estimating the likelihood that customers will make a claim. In many countries it is a legal requirement to have car insurance in order to drive a vehicle on public roads, so the market is very large!
Knowing all of this, On the Road car insurance has requested your services in building a model to predict whether a customer will make a claim on their insurance during the policy period. As they have very little expertise and infrastructure for deploying and monitoring machine learning models, they've asked you to use simple Logistic Regression, identifying the single feature that results in the best-performing model, as measured by accuracy.
They have supplied you with their customer data as a CSV file called car_insurance.csv, along with a table (below) detailing the column names and descriptions.
## The dataset
| Column | Description |
|---|---|
| id | Unique client identifier |
| age | Client's age |
| gender | Client's gender |
| driving_experience | Years the client has been driving |
| education | Client's level of education |
| income | Client's income level |
| credit_score | Client's credit score (between zero and one) |
| vehicle_ownership | Client's vehicle ownership status |
| vehicle_year | Year of vehicle registration |
| married | Client's marital status |
| children | Client's number of children |
| postal_code | Client's postal code |
| annual_mileage | Number of miles driven by the client each year |
| vehicle_type | Type of car |
| speeding_violations | Total number of speeding violations received by the client |
| duis | Number of times the client has been caught driving under the influence of alcohol |
| past_accidents | Total number of previous accidents the client has been involved in |
| outcome | Whether the client made a claim on their car insurance (response variable): 0 = no claim, 1 = claim made |
# Import required libraries
library(readr)
library(dplyr)
library(glue)
library(yardstick)
# Start coding!
data <- read_csv("car_insurance.csv")
str(data)

# Check for missing values
data %>%
  summarise(across(everything(), ~ sum(is.na(.))))

# Check mean & median credit score for all clients
data %>%
  filter(!is.na(credit_score)) %>%
  summarise(mean = mean(credit_score), median = median(credit_score))
# Check for differences by postal code
data %>%
  group_by(postal_code) %>%
  filter(!is.na(credit_score)) %>%
  summarise(mean = mean(credit_score), median = median(credit_score))

# Inspect rows with missing values
data %>%
  filter(is.na(credit_score))
# Replace missing values with the median of the column
newData2 <- data %>%
  mutate(clean_credit_score = ifelse(is.na(credit_score), median(credit_score, na.rm = TRUE), credit_score))

# Check new mean and median
newData2 %>%
  summarise(mean = mean(clean_credit_score), median = median(clean_credit_score))
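Since median credit score may differ by postal code (checked above), an alternative to imputing with the overall median is a group-wise median per postal code. A minimal sketch of that option (the column name `clean_credit_score_grouped` is illustrative, not part of the project):

```r
# Impute each missing credit score with the median of its postal_code group
data_grouped <- data %>%
  group_by(postal_code) %>%
  mutate(clean_credit_score_grouped = ifelse(
    is.na(credit_score),
    median(credit_score, na.rm = TRUE),
    credit_score
  )) %>%
  ungroup()
```

This only helps if the group medians genuinely differ; with near-identical medians across postal codes, the simpler overall median used here is fine.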
# Check mean & median annual mileage for all clients
data %>%
  filter(!is.na(annual_mileage)) %>%
  summarise(mean = mean(annual_mileage), median = median(annual_mileage))

# Check for differences by age
data %>%
  group_by(age) %>%
  filter(!is.na(annual_mileage)) %>%
  summarise(mean = mean(annual_mileage), median = median(annual_mileage))
# Replace missing values with the median of the column
newData2 <- newData2 %>%
  mutate(clean_annual_mileage = ifelse(is.na(annual_mileage), median(annual_mileage, na.rm = TRUE), annual_mileage))

# Check new mean and median
newData2 %>%
  summarise(mean = mean(clean_annual_mileage), median = median(clean_annual_mileage))

# Check for missing values in newData2 (should be 0 for the new columns)
newData2 %>%
  summarise(across(everything(), ~ sum(is.na(.))))
# Remove the id, credit_score and annual_mileage columns
clean_no_id <- newData2 %>%
  select(-id, -credit_score, -annual_mileage)
head(clean_no_id)

# Find the feature with the best predictive performance for a car insurance claim
# Outcome = 1 -> an insurance claim was made

# Extract candidate features
features <- setdiff(names(clean_no_id), "outcome")
# Create results table
results <- tibble(
  feature = character(),
  accuracy = numeric()
)
for (feat in features) {
  # Build dynamic formula
  f <- as.formula(glue::glue("outcome ~ {feat}"))
  # Fit logistic regression
  fit <- glm(f, data = clean_no_id, family = binomial())
  # Predict probabilities
  p <- predict(fit, type = "response")
  # Convert probabilities to classes (threshold 0.5)
  pred_class <- factor(ifelse(p >= 0.5, 1, 0), levels = c(0, 1))
  # Save the actual outcome as a factor
  truth <- factor(clean_no_id$outcome, levels = c(0, 1))
  # Calculate the accuracy of the predictions
  acc <- accuracy_vec(truth = truth, estimate = pred_class)
  # Save the results per feature
  results <- add_row(results, feature = feat, accuracy = acc)
}
# Extract the feature with the highest accuracy
best_feature_df <- results %>%
  slice_max(accuracy, n = 1) %>%
  rename(best_feature = feature, best_accuracy = accuracy)

# Make sure best_feature_df is a data frame, as requested
best_feature_df <- as.data.frame(best_feature_df)
best_feature_df
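Note that the accuracies above are computed on the same rows used to fit each model. As a sanity check, the winning feature could be re-evaluated on a held-out split; a minimal sketch using base R's `sample()` (the 80/20 ratio and the placeholder feature name `best_feature` are illustrative choices, not part of the brief):

```r
set.seed(42)  # reproducible split
train_idx <- sample(nrow(clean_no_id), size = floor(0.8 * nrow(clean_no_id)))
train <- clean_no_id[train_idx, ]
test  <- clean_no_id[-train_idx, ]

# Substitute the feature found in best_feature_df for `best_feature`
fit <- glm(as.formula("outcome ~ best_feature"), data = train, family = binomial())
p <- predict(fit, newdata = test, type = "response")
pred_class <- factor(ifelse(p >= 0.5, 1, 0), levels = c(0, 1))
truth <- factor(test$outcome, levels = c(0, 1))
accuracy_vec(truth = truth, estimate = pred_class)
```

With single-feature models the in-sample and held-out accuracies are usually close, but the check guards against a feature that only looks good because it was scored on its own training data.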