
Artwork source: @allison_horst (https://github.com/allisonhorst/penguins)

You have been asked to support a team of researchers who have been collecting data about penguins in Antarctica! The data is available in CSV format as penguins.csv.

Origin of this data: Data were collected and made available by Dr. Kristen Gorman and the Palmer Station, Antarctica LTER, a member of the Long Term Ecological Research Network.

The dataset consists of 5 columns.

Column            | Description
------------------|--------------------
culmen_length_mm  | culmen length (mm)
culmen_depth_mm   | culmen depth (mm)
flipper_length_mm | flipper length (mm)
body_mass_g       | body mass (g)
sex               | penguin sex

Unfortunately, they have not been able to record the species of penguin, but they know that there are at least three species that are native to the region: Adelie, Chinstrap, and Gentoo. Your task is to apply your data science skills to help them identify groups in the dataset!

# Import Required Packages
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Loading and examining the dataset
penguins_df = pd.read_csv("penguins.csv")
penguins_df.head()

Preprocessing the data

# Check for missing values in each column
penguins_df.isna().sum().sort_values()

The data is clean, but the categorical sex column needs to be encoded and the numeric features standardized.
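The missing-value counts above are all zero here; if they had not been, a minimal cleanup (assuming we are happy to simply drop incomplete rows) could look like this:

# Hypothetical step, only needed if the isna() check reports missing values:
# drop incomplete rows and reset the index so it stays contiguous
penguins_df = penguins_df.dropna().reset_index(drop=True)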

# Create dummy variables for the categorical feature (sex)
penguins_df = pd.get_dummies(penguins_df, drop_first=False)
penguins_df.columns

# EDA: inspect the mean and spread of each feature
penguins_df.describe()

We can observe that the features take values on very different scales. Because K-means relies on distances, features with large ranges would dominate the clustering. Therefore, we will scale the features so that they contribute comparably.

# Instantiate the scaler
scaler = StandardScaler()

# Scale the values and wrap them in a new DataFrame
X = scaler.fit_transform(penguins_df)
penguins_processed = pd.DataFrame(data=X, columns=penguins_df.columns)

# Display the first rows
penguins_processed.head(5)
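As a quick sanity check (not part of the original notebook), each standardized column should now have a mean of roughly 0 and a standard deviation of roughly 1:

# Sanity check: per-column mean should be ~0 and standard deviation ~1 after scaling
print(penguins_processed.mean().round(2))
print(penguins_processed.std().round(2))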

The data has been pre-processed, so we can now build the clustering model. Since we have no labels, we will use K-means, an unsupervised learning algorithm. To choose the number of clusters, we record the inertia (the within-cluster sum of squared distances) for a range of k and look for the "elbow" where it stops dropping sharply.

# Find a good number of clusters using the elbow method (inertia)

inertia = []

for k in range(1, 10):
    # Instantiate the model with k clusters
    kmeans = KMeans(n_clusters=k, random_state=42)

    # Fit the model to the pre-processed data
    kmeans.fit(penguins_processed)

    # Store the inertia (within-cluster sum of squared distances) for this k
    inertia.append(kmeans.inertia_)

# Plot inertia against the number of clusters (elbow plot)
plt.plot(range(1,10), inertia)
plt.xlabel("Number of clusters")
plt.ylabel("Inertia")
plt.xticks(range(1,10))
plt.show()

The elbow of the curve is located at k = 4, so we will use n_clusters = 4 from here onwards.
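Judging the elbow by eye can be ambiguous, so as an optional cross-check (a sketch, not part of the original workflow) we can compare silhouette scores over the same range of k; higher values indicate better-separated clusters.

# Optional cross-check of the chosen k using silhouette scores (higher is better)
from sklearn.metrics import silhouette_score

for k in range(2, 10):
    labels = KMeans(n_clusters=k, random_state=42).fit_predict(penguins_processed)
    print(k, round(silhouette_score(penguins_processed, labels), 3))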

# Instantiate the final model
final_model = KMeans(n_clusters=4, random_state=42)

# Fit the model to the pre-processed data
final_model.fit(penguins_processed)

# Attach the cluster labels to the original DataFrame
penguins_df["label"] = final_model.labels_

# Visualize the clusters against culmen length
plt.scatter(penguins_df["label"], penguins_df["culmen_length_mm"], c=final_model.labels_, cmap="viridis")
plt.xlabel("Cluster")
plt.ylabel("Culmen length (mm)")
plt.xticks(range(int(penguins_df['label'].min()), int(penguins_df['label'].max()) + 1))
plt.title("K-means Clustering (K = 4)")
plt.show()
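The scatter above plots one feature against the cluster index. To see how the clusters separate in feature space, an additional view (not part of the original code) could plot two measurements against each other, coloured by cluster:

# Additional view: culmen length vs flipper length, coloured by cluster label
plt.scatter(penguins_df["culmen_length_mm"], penguins_df["flipper_length_mm"],
            c=penguins_df["label"], cmap="viridis")
plt.xlabel("Culmen length (mm)")
plt.ylabel("Flipper length (mm)")
plt.title("K-means clusters in feature space (K = 4)")
plt.show()
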
# Create the final `stat_penguins` DataFrame: mean measurements per cluster
numeric_columns = ['culmen_length_mm', 'culmen_depth_mm', 'flipper_length_mm', 'label']
stat_penguins = penguins_df[numeric_columns].groupby('label').mean()
stat_penguins
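To help interpret the summary table, it is also useful to know how many penguins fall into each cluster (a small addition, using the label column created above):

# Number of penguins assigned to each cluster
penguins_df["label"].value_counts().sort_index()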