Modeling Car Insurance Claim Outcomes (R)
    Insurance companies invest a lot of time and money into optimizing their pricing and accurately estimating the likelihood that customers will make a claim. In many countries it is a legal requirement to have car insurance in order to drive a vehicle on public roads, so the market is very large!

    Knowing all of this, On the Road car insurance have requested your services in building a model to predict whether a customer will make a claim on their insurance during the policy period. As they have very little expertise and infrastructure for deploying and monitoring machine learning models, they've asked you to use simple Logistic Regression, identifying the single feature that results in the best performing model, as measured by accuracy.

    They have supplied you with their customer data as a CSV file called car_insurance.csv, along with a table detailing the column names and descriptions below.

    The dataset

    Column - Description
    id - Unique client identifier
    age - Client's age:
    • 0: 16-25
    • 1: 26-39
    • 2: 40-64
    • 3: 65+
    gender - Client's gender:
    • 0: Female
    • 1: Male
    driving_experience - Years the client has been driving:
    • 0: 0-9
    • 1: 10-19
    • 2: 20-29
    • 3: 30+
    education - Client's level of education:
    • 0: No education
    • 1: High school
    • 2: University
    income - Client's income level:
    • 0: Poverty
    • 1: Working class
    • 2: Middle class
    • 3: Upper class
    credit_score - Client's credit score (between zero and one)
    vehicle_ownership - Client's vehicle ownership status:
    • 0: Does not own their vehicle (paying off finance)
    • 1: Owns their vehicle
    vehicle_year - Year of vehicle registration:
    • 0: Before 2015
    • 1: 2015 or later
    married - Client's marital status:
    • 0: Not married
    • 1: Married
    children - Client's number of children
    postal_code - Client's postal code
    annual_mileage - Number of miles driven by the client each year
    vehicle_type - Type of car:
    • 0: Sedan
    • 1: Sports car
    speeding_violations - Total number of speeding violations received by the client
    duis - Number of times the client has been caught driving under the influence of alcohol
    past_accidents - Total number of previous accidents the client has been involved in
    outcome - Whether the client made a claim on their car insurance (response variable):
    • 0: No claim
    • 1: Made a claim
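
    The categorical columns above are stored as integer codes. As a quick, optional illustration of the coding scheme (not part of the required solution), the sketch below attaches human-readable labels to the age column using base R; the file name and the label strings come straight from the table above.

    # Optional sketch: decode the age column using the mapping in the table above
    preview <- read.csv('car_insurance.csv')
    preview$age_label <- factor(preview$age, levels = 0:3,
                                labels = c('16-25', '26-39', '40-64', '65+'))
    table(preview$age_label)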
    # Import required libraries and suppress messages
    suppressMessages(library(dplyr))
    suppressMessages(library(readr))
    suppressMessages(library(glue))
    suppressMessages(library(yardstick))
    
    # Start coding!
    
    
    # Read in dataset
    cars <- read_csv('car_insurance.csv')
    
    # View data types
    str(cars)
    
    # Missing values per column
    colSums(is.na(cars))
    
    # Distribution of credit_score
    summary(cars$credit_score)
    
    # Distribution of annual_mileage
    summary(cars$annual_mileage)
    
    # Fill missing values with the mean
    cars$credit_score[is.na(cars$credit_score)] <- mean(cars$credit_score, na.rm = TRUE)
    cars$annual_mileage[is.na(cars$annual_mileage)] <- mean(cars$annual_mileage, na.rm = TRUE)
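
    # Optional sanity check (a sketch, not required): confirm the imputation
    # above left no missing values in either column
    stopifnot(!anyNA(cars$credit_score), !anyNA(cars$annual_mileage))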
    
    # Feature columns
    features <- names(subset(cars, select = -c(id,outcome)))
    
    # Empty vector to store accuracies
    accuracies <- c()
    
    # Loop through features
    for (col in features) {
        # Fit a one-feature logistic regression model
        model <- glm(as.formula(glue('outcome ~ {col}')), data = cars, family = 'binomial')
        # Convert fitted probabilities to class predictions (0.5 threshold)
        predictions <- round(fitted(model))
        # Build a confusion matrix of predictions vs. actual outcomes
        outcomes <- table(predictions, cars$outcome)
        # If the model only ever predicts one class, the table has a single row
        if (dim(outcomes)[1] == 1) {
            # Accuracy is the first value divided by all values
            accuracy <- outcomes[1] / sum(outcomes)
        } else {
            # Accuracy is the sum of the diagonal values divided by all values
            accuracy <- (outcomes[1, 1] + outcomes[2, 2]) / sum(outcomes)
        }
        # Append accuracy to accuracies
        accuracies <- c(accuracies, accuracy)
    }
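
    # Optional cross-check (a sketch, not part of the required solution): the
    # yardstick package loaded above can compute the same accuracy metric for a
    # single model; 'driving_experience' is used purely as an example feature
    check_model <- glm(outcome ~ driving_experience, data = cars, family = 'binomial')
    check_preds <- factor(round(fitted(check_model)), levels = c(0, 1))
    check_truth <- factor(cars$outcome, levels = c(0, 1))
    accuracy_vec(truth = check_truth, estimate = check_preds)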
    
    # Find the feature with the largest accuracy
    best_feature <- features[which.max(accuracies)]
    best_accuracy <- max(accuracies)
    
    # Create best_feature_df
    best_feature_df <- data.frame(best_feature, best_accuracy)
    
    # Run in a new cell to check your solution
    best_feature_df
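
    # Optional follow-up (a sketch, not required by the brief): refit the
    # logistic regression on the single best feature found above and inspect
    # its coefficients
    final_model <- glm(as.formula(glue('outcome ~ {best_feature}')), data = cars, family = 'binomial')
    summary(final_model)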
    
    
    
    