Dive into the heart of data science with a project that combines healthcare insights and predictive analytics. As a Data Scientist at a top Health Insurance company, you have the opportunity to predict customer healthcare costs using the power of machine learning. Your insights will help tailor services and guide customers in planning their healthcare expenses more effectively.
Dataset Summary
Meet your primary tool: the insurance.csv dataset. Packed with information on health insurance customers, this dataset is your key to unlocking patterns in healthcare costs. Here's what you need to know about the data you'll be working with:
insurance.csv
| Column | Data Type | Description |
|---|---|---|
| age | int | Age of the primary beneficiary. |
| sex | object | Gender of the insurance contractor (male or female). |
| bmi | float | Body mass index, a key indicator of body fat based on height and weight. |
| children | int | Number of dependents covered by the insurance plan. |
| smoker | object | Indicates whether the beneficiary smokes (yes or no). |
| region | object | The beneficiary's residential area in the US, divided into four regions. |
| charges | float | Individual medical costs billed by health insurance. |
A bit of data cleaning is key to ensure the dataset is ready for modeling. Once your model is built using the insurance.csv dataset, the next step is to apply it to the validation_dataset.csv. This new dataset, similar to your training data minus the charges column, tests your model's accuracy and real-world utility by predicting costs for new customers.
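To see the relationship between the two files for yourself, here is a minimal sketch (assuming both CSVs sit in your working directory) that checks the validation set carries every training column except charges:

import pandas as pd

# Load both files and compare their columns.
train = pd.read_csv('insurance.csv')
validation = pd.read_csv('validation_dataset.csv')

# Expect the only difference to be the 'charges' column.
print(set(train.columns) - set(validation.columns))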
Let's Get Started!
This project is your playground for applying data science in a meaningful way, offering insights that have real-world applications. Ready to explore the data and uncover insights that could revolutionize healthcare planning? Let's begin this exciting journey!
# Import required libraries
import pandas as pd
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score
# Loading the insurance dataset
insurance_data_path = 'insurance.csv'
insurance = pd.read_csv(insurance_data_path)
#insurance.head()
# Drop rows with missing age or sex before cleaning
insurance.dropna(subset=['age', 'sex'], inplace=True)
columns = insurance.columns
# Data cleaning on age column
insurance = insurance[insurance['age'] > 0]
# Data cleaning on sex column
gender = {'woman': 'female', 'F': 'female', 'man': 'male', 'M': 'male'}
insurance['sex'] = insurance['sex'].replace(gender)
# Data cleaning on bmi column
insurance['bmi'] = insurance['bmi'].fillna(insurance['bmi'].mean())
insurance['bmi'] = insurance['bmi'].round(2)
# Data cleaning in children column
insurance['children'] = np.where(insurance['children'] < 0, 0, insurance['children'])
insurance['children'] = insurance['children'].fillna(0)
# Data cleaning in smoker column
insurance['smoker'] = insurance['smoker'].fillna('no')
# Data cleaning in region column
insurance['region'] = insurance['region'].str.lower()
insurance['region'] = insurance['region'].fillna('southeast')
# Data cleaning in charges column
insurance['charges'] = insurance['charges'].str.replace('$', '', regex=False).astype(float).round(2)
insurance['charges'] = insurance['charges'].fillna(insurance['charges'].median())
# Inspect the unique values in each column after cleaning
for c in columns:
    print(c, insurance[c].unique())
# Confirm no missing values remain
insurance.isna().sum()
# Multicollinearity checks
import matplotlib.pyplot as plt
import seaborn as sns
heat_map = insurance[['age', 'bmi', 'children']].corr()
sns.heatmap(heat_map, annot=True)
plt.show()
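# The heatmap above shows pairwise correlations; as an optional extra check (not part of
# the original brief), variance inflation factors give a complementary view. This sketch
# assumes statsmodels is installed; a VIF close to 1 indicates little multicollinearity.
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant

numeric_predictors = add_constant(insurance[['age', 'bmi', 'children']])
vif = pd.Series(
    [variance_inflation_factor(numeric_predictors.values, i) for i in range(numeric_predictors.shape[1])],
    index=numeric_predictors.columns,
)
print(vif.drop('const'))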
# Implement model creation and training here
# Use as many cells as you need
X = pd.get_dummies(data=insurance, columns=['smoker', 'region', 'sex'], drop_first=True)
X.drop(columns='charges', inplace=True)
y = insurance['charges']
# Splitting the data into training and testing
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.33, shuffle=True, random_state=21)
# Cross validation to train the data effectively
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold
from sklearn.metrics import mean_squared_error
# K-fold validation
model_1 = LinearRegression()
kv = KFold(n_splits=10, shuffle = True, random_state=21)
cv_result = cross_val_score(model_1, X_train, y_train, cv=kv)
cv_interval = np.quantile(cv_result, [0.025, 0.975])
print(f'The middle 95% of K-fold R2 scores lies between {cv_interval[0]:.3f} and {cv_interval[1]:.3f}\n')
model_1.fit(X_train, y_train)
y_pred = model_1.predict(X_test)
test_r2 = model_1.score(X_test, y_test)
print(f'The R2 score on the held-out test set is {test_r2:.3f}\n')
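# Since mean_squared_error is already imported, a quick RMSE on the same held-out split
# expresses the error in dollar terms (a sketch; the brief sets no target threshold).
rmse = np.sqrt(mean_squared_error(y_test, y_pred))
print(f'Test RMSE: ${rmse:,.2f}')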
# Prediction on validation dataset
test_2 = pd.read_csv('validation_dataset.csv')
validation_data = pd.get_dummies(data=test_2, columns=['smoker', 'region', 'sex'], drop_first=True)
# Align the dummy-encoded columns with the training features before predicting
validation_data = validation_data.reindex(columns=X.columns, fill_value=0)
validation_data['predicted_charges'] = model_1.predict(validation_data).round(2)
# Enforce a minimum predicted charge of 1000
validation_data['predicted_charges'] = np.where(
    validation_data['predicted_charges'] < 1000,
    1000,
    validation_data['predicted_charges']
)
print(validation_data)
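# Optionally persist the predictions for sharing; the output file name here is just a placeholder.
validation_data.to_csv('validation_predictions.csv', index=False)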