
Artwork: @allison_horst, https://github.com/allisonhorst/penguins

You have been asked to support a team of researchers who have been collecting data about penguins in Antarctica! The data is available in CSV format as penguins.csv.

Origin of this data : Data were collected and made available by Dr. Kristen Gorman and the Palmer Station, Antarctica LTER, a member of the Long Term Ecological Research Network.

The dataset consists of 5 columns.

| Column            | Description         |
|-------------------|---------------------|
| culmen_length_mm  | culmen length (mm)  |
| culmen_depth_mm   | culmen depth (mm)   |
| flipper_length_mm | flipper length (mm) |
| body_mass_g       | body mass (g)       |
| sex               | penguin sex         |

Unfortunately, they have not been able to record the species of each penguin, but they know that at least three species are native to the region: Adelie, Chinstrap, and Gentoo. Your task is to apply your data science skills to help them identify groups in the dataset!

# Import Required Packages
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
import seaborn as sns
import numpy as np

# Loading and examining the dataset
penguins_df = pd.read_csv("penguins.csv")
penguins_df.head()
# Known species in the region (kept for reference when interpreting clusters)
penguin_types = ['Adelie', 'Chinstrap', 'Gentoo']

Note that describe() and the pairplot below consider only the numeric columns by default.

# Structure, dtypes, and missing-value counts
penguins_df.info()
# Summary statistics for the numeric columns
penguins_df.describe()
# Class balance of the categorical column
penguins_df['sex'].value_counts()
# Pairwise scatterplots with KDE distributions on the diagonal
sns.pairplot(penguins_df, diag_kind='kde')
  • We can see initial groupings in the data: in the scatterplot of culmen_length_mm vs. flipper_length_mm, three clusters start to emerge.
  • The KDE plot for each variable looks roughly normal.

Clustering penguins

One-hot encoding is not generally recommended for k-means clustering

K-means clustering relies on Euclidean distance to measure similarity between points. One-hot encoding a categorical variable (e.g., turning sex into binary sex_MALE and sex_FEMALE columns taking values 0 or 1) creates binary vectors, and the Euclidean distance between such vectors does not meaningfully grade how similar two categories are; the short check below illustrates this.

  • If you proceed with one-hot encoding anyway, make sure that all features are scaled (e.g., standardized or normalized) so that features with larger magnitudes do not dominate the distance.
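To make the point concrete, here is a small check (reusing the numpy import from above): every pair of distinct one-hot categories sits at exactly the same Euclidean distance, √2, so the encoding treats all categories as equally dissimilar.

# One-hot vectors for three hypothetical categories
a = np.array([1, 0, 0])
b = np.array([0, 1, 0])
c = np.array([0, 0, 1])

# Each pair of distinct categories is exactly sqrt(2) apart, so the
# distance carries no information about which categories are more alike
print(np.linalg.norm(a - b))  # 1.4142...
print(np.linalg.norm(a - c))  # 1.4142...
print(np.linalg.norm(b - c))  # 1.4142...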
# One-hot encode the categorical 'sex' column so that every feature is numeric
# (the original Palmer data has a few rows with missing values; dropping them
# here is an assumption -- StandardScaler cannot handle NaNs)
penguins = pd.get_dummies(penguins_df.dropna())
penguins.head()
# Standardize all features to zero mean and unit variance so that no single
# feature (e.g., body_mass_g) dominates the Euclidean distance
scaler = StandardScaler()
X_scaled = scaler.fit_transform(penguins)
X_scaled
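The pairplot hinted at three groups and three species are known to live in the region, but an inertia (elbow) curve is a cheap sanity check on the choice of k. A minimal sketch over k = 1..9:

# Fit k-means for a range of k and record the inertia (within-cluster
# sum of squared distances); the curve should flatten near the true k
inertias = []
ks = range(1, 10)
for k in ks:
    km = KMeans(n_clusters=k, random_state=42, n_init=10)
    km.fit(X_scaled)
    inertias.append(km.inertia_)

plt.plot(ks, inertias, marker='o')
plt.xlabel('number of clusters k')
plt.ylabel('inertia')
plt.title('Elbow plot')
plt.show()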
# Fit k-means with k = 3, matching the three species known to be native
# to the region, and attach the cluster labels to the dataframe
kmeans = KMeans(n_clusters=3, random_state=42, n_init=10)
penguins['cluster'] = kmeans.fit_predict(X_scaled)
penguins.sample(n=5, random_state=42)
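Finally, a quick way to interpret the result is to summarize each cluster by its mean feature values and replot the two features where the grouping was visible earlier; Gentoo penguins are the largest of the three species, so one cluster typically stands out in body mass and flipper length. A sketch:

# Mean feature values per cluster
print(penguins.groupby('cluster')[
    ['culmen_length_mm', 'culmen_depth_mm', 'flipper_length_mm', 'body_mass_g']
].mean())

# Clusters on the two features where groups were visible in the pairplot
plt.scatter(penguins['culmen_length_mm'], penguins['flipper_length_mm'],
            c=penguins['cluster'], cmap='viridis')
plt.xlabel('culmen_length_mm')
plt.ylabel('flipper_length_mm')
plt.title('K-means clusters (k = 3)')
plt.show()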