
Arctic Penguin Exploration: Unraveling Clusters in the Icy Domain with K-means Clustering

Artwork by @allison_horst: https://github.com/allisonhorst/penguins

You have been asked to support a team of researchers who have been collecting data about penguins in Antarctica!

Origin of this data: Data were collected and made available by Dr. Kristen Gorman and the Palmer Station, Antarctica LTER, a member of the Long Term Ecological Research Network.

The dataset consists of 5 columns.

  • culmen_length_mm: culmen length (mm)
  • culmen_depth_mm: culmen depth (mm)
  • flipper_length_mm: flipper length (mm)
  • body_mass_g: body mass (g)
  • sex: penguin sex

Unfortunately, they have not been able to record the species of each penguin, but they know that three species are native to the region: Adelie, Chinstrap, and Gentoo. Your task is to apply your data science skills to help them identify groups in the dataset!

# Import Required Packages
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Loading and examining the dataset
penguins_df = pd.read_csv("data/penguins.csv")
penguins_columns = penguins_df.columns
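# A quick look before cleaning (a sketch; head/info/describe are standard
# pandas calls, and the exact output depends on the CSV provided):
print(penguins_df.head())        # first few rows
penguins_df.info()               # dtypes and non-null counts per column
print(penguins_df.describe())    # summary statistics, handy for spotting outliers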



# Task 1: Clean the dataframe
# Drop rows with missing values and remove the single row whose 'sex' entry
# is '.', as noted during inspection. Inspection also revealed a negative
# value and an excessively large value in flipper_length_mm, filtered below.
penguins_df = penguins_df.dropna()
penguins_df = penguins_df[penguins_df['sex'] != '.']

flipper_len_con = (penguins_df['flipper_length_mm'] > 0) & (penguins_df['flipper_length_mm'] < 700)
penguins_clean = penguins_df[flipper_len_con].copy()  # .copy() so adding the label column later does not trigger SettingWithCopyWarning

print(penguins_df.shape)
print(penguins_clean.shape)
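
# How the bad flipper values could be spotted in the first place (a sketch;
# the 0 and 700 mm bounds above were chosen by eye from output like this):
print(penguins_df['flipper_length_mm'].describe())             # min/max reveal the outliers
print(penguins_df['flipper_length_mm'].sort_values().head())   # the negative value
print(penguins_df['flipper_length_mm'].sort_values().tail())   # the excessively large value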



# Task 2: Preprocess: one-hot encode 'sex', then standard-scale all features
# (the '.' row was already removed in Task 1, so no stray dummy column appears)
penguins_preprocessed = pd.get_dummies(penguins_clean, columns=['sex'])
penguins_preprocessed_cols = penguins_preprocessed.columns

scaler = StandardScaler()
penguins_preprocessed = scaler.fit_transform(penguins_preprocessed)
penguins_preprocessed = pd.DataFrame(penguins_preprocessed, columns=penguins_preprocessed_cols)
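
# Sanity check on the scaling (a sketch): after StandardScaler every column
# should have mean ~0 and standard deviation ~1 (pandas' .std() uses the
# sample estimator, so it lands slightly above 1).
print(penguins_preprocessed.mean().round(3))
print(penguins_preprocessed.std().round(3))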





# Task 3: Perform PCA, determine desired number of components (variance ratio > 10%)
pca = PCA()
pca.fit(penguins_preprocessed)

features = range(pca.n_components_)
explained_var = pca.explained_variance_ratio_

plt.bar(features, explained_var)
plt.xticks(features)
plt.xlabel('PCA features')
plt.ylabel('Explained variance')
plt.show()
print(explained_var)
n_components = int((explained_var > 0.1).sum())  # keep components whose variance ratio exceeds 10%

pca = PCA(n_components=n_components)
pca.fit(penguins_preprocessed)
penguins_PCA = pca.transform(penguins_preprocessed)
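
# How much of the total variance the selected components retain (a sketch;
# the exact figure depends on the data that survives cleaning):
print(f"{n_components} components retained")
print(f"Cumulative explained variance: {pca.explained_variance_ratio_.sum():.1%}")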



# Task 4: Determine optimal number of clusters with elbow analysis.
nc_range = range(1, 10)
model_inertia = []
for nc in nc_range:
    model = KMeans(n_clusters=nc, random_state=42)
    model.fit(penguins_PCA)
    model_inertia.append(model.inertia_)
    
plt.plot(nc_range, model_inertia, '-o')
plt.xticks(nc_range)
plt.xlabel('Number of clusters')
plt.ylabel('Inertia')
plt.show()

# The elbow plot flattens noticeably after 4 clusters, so choose 4.
# (A silhouette-score sweep, sketched after the scatter plot below, offers a
# more quantitative alternative to eyeballing the curve.)
n_clusters = 4
kmeans = KMeans(n_clusters=n_clusters, random_state=42).fit(penguins_PCA)
plt.scatter(penguins_PCA[:, 0], penguins_PCA[:, 1], c=kmeans.labels_)
plt.xlabel('First principal component')
plt.ylabel('Second principal component')
plt.show()
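
# A more quantitative alternative to eyeballing the elbow (a sketch using
# scikit-learn's silhouette score; higher is better, and k = 2..9 is an
# assumed search range, not part of the original analysis):
from sklearn.metrics import silhouette_score

for k in range(2, 10):
    labels = KMeans(n_clusters=k, random_state=42).fit_predict(penguins_PCA)
    print(f"k={k}: silhouette={silhouette_score(penguins_PCA, labels):.3f}")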



# Task 5: Add label column to penguins_clean
penguins_clean['label'] = kmeans.labels_



# Task 6: Create a statistical table of per-cluster feature means
stat_penguins = penguins_clean.groupby('label').mean(numeric_only=True)  # numeric_only avoids a TypeError on the string 'sex' column in pandas 2.x
print(stat_penguins)
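
# Cluster sizes as a sanity check (a sketch): with three species expected and
# sex encoded separately, the label counts hint at how the clusters split.
print(penguins_clean['label'].value_counts().sort_index())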
