
Arctic Penguin Exploration: Unraveling Clusters in the Icy Domain with K-means clustering

Artwork source: @allison_horst (https://github.com/allisonhorst/penguins)

You have been asked to support a team of researchers who have been collecting data about penguins in Antarctica!

Origin of this data: Data were collected and made available by Dr. Kristen Gorman and the Palmer Station, Antarctica LTER, a member of the Long Term Ecological Research Network.

The dataset consists of 5 columns:

  • culmen_length_mm: culmen length (mm)
  • culmen_depth_mm: culmen depth (mm)
  • flipper_length_mm: flipper length (mm)
  • body_mass_g: body mass (g)
  • sex: penguin sex

Unfortunately, they have not been able to record the species of each penguin, but they know that three species are native to the region: Adelie, Chinstrap, and Gentoo. Your task is to apply your data science skills to help them identify groups in the dataset!

# Import Required Packages
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Loading and examining the dataset
penguins_df = pd.read_csv("data/penguins.csv")
# Display the first few rows of the dataset
print(penguins_df.head())

# Get information about the dataset
# (info() prints directly and returns None, so no print() wrapper is needed)
penguins_df.info()

# Summary statistics of numerical columns
print(penguins_df.describe())

# Check for missing values
print(penguins_df.isnull().sum())
# Remove null values
penguins_clean = penguins_df.dropna()
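
# For reference (an illustrative check, not part of the original workflow):
# compare row counts before and after dropping missing values.
print("Rows before dropna:", len(penguins_df), "| after:", len(penguins_clean))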

# Remove outliers
# Assuming flipper_length_mm shouldn't be negative
penguins_clean = penguins_clean[penguins_clean['flipper_length_mm'] > 0]

# Sort the DataFrame by 'flipper_length_mm' column in descending order
sorted_df = penguins_clean.sort_values(by='flipper_length_mm', ascending=False)

# Print the sorted DataFrame
print(sorted_df)

# Filter out rows where 'flipper_length_mm' is equal to 5000
penguins_clean = penguins_clean[penguins_clean['flipper_length_mm'] != 5000]
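
# As a cross-check (a sketch; the 1.5*IQR rule is a common outlier heuristic,
# not part of the original cleaning), flag flipper lengths far outside the quartiles.
q1 = penguins_clean['flipper_length_mm'].quantile(0.25)
q3 = penguins_clean['flipper_length_mm'].quantile(0.75)
iqr = q3 - q1
iqr_outliers = penguins_clean[
    (penguins_clean['flipper_length_mm'] < q1 - 1.5 * iqr)
    | (penguins_clean['flipper_length_mm'] > q3 + 1.5 * iqr)
]
print("Remaining IQR outliers:", len(iqr_outliers))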

# Confirm the removal by printing the DataFrame shape
print("DataFrame shape after removing outlier:", penguins_clean.shape)

# Summary of the cleaned dataset
penguins_clean.info()

# Summary statistics of numerical columns after cleaning
print(penguins_clean.describe())
# Create Dummy Variables and Remove Original Categorical Feature
penguins_preprocessed = pd.get_dummies(penguins_clean, drop_first=True)
penguins_preprocessed.info()

# Scale the features so each contributes equally to distance-based methods
scaler = StandardScaler()
X = scaler.fit_transform(penguins_preprocessed)
penguins_preprocessed = pd.DataFrame(data=X, columns=penguins_preprocessed.columns)
print(penguins_preprocessed.head(10))
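
# Quick sanity check (illustrative): after StandardScaler, each column should
# have mean ~0 and standard deviation ~1.
print(penguins_preprocessed.mean().round(2))
print(penguins_preprocessed.std().round(2))
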
# Perform Principal Component Analysis (PCA)
pca = PCA()
pca.fit(penguins_preprocessed)

# Determine the desired number of components: keep only those that each
# explain at least 10% of the total variance
explained_variance_ratio_threshold = 0.10
n_components = sum(pca.explained_variance_ratio_ >= explained_variance_ratio_threshold)

print("Desired number of components:", n_components)

# Execute PCA using the desired number of components
pca = PCA(n_components=n_components)
penguins_PCA = pca.fit_transform(penguins_preprocessed)
# Perform elbow analysis to determine the number of clusters
inertia = []
for n_cluster in range(1, 11):
    kmeans = KMeans(n_clusters=n_cluster, random_state=42)
    kmeans.fit(penguins_PCA)
    inertia.append(kmeans.inertia_)

# Plot the elbow curve
plt.plot(range(1, 11), inertia, marker='o')
plt.xlabel('Number of Clusters')
plt.ylabel('Inertia')
plt.title('Elbow Analysis')
plt.show()
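
# As a cross-check on the elbow curve (a sketch, not part of the original
# analysis): the silhouette score measures cluster separation; higher is better.
from sklearn.metrics import silhouette_score
for n in range(2, 7):
    labels = KMeans(n_clusters=n, random_state=42).fit_predict(penguins_PCA)
    print(f"{n} clusters -> silhouette: {silhouette_score(penguins_PCA, labels):.3f}")
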
# Save the optimal number of clusters (read off the elbow curve) in a variable called n_clusters
n_clusters = 4

# Create and fit the final KMeans model
kmeans = KMeans(n_clusters=n_clusters, random_state=42)
kmeans.fit(penguins_PCA)

# Confirm the shape of the PCA-transformed data
print(penguins_PCA.shape)

# Visualize the clusters using the first two principal components
plt.scatter(penguins_PCA[:, 0], penguins_PCA[:, 1], c=kmeans.labels_, cmap='viridis')
plt.xlabel('Principal Component 1')
plt.ylabel('Principal Component 2')
plt.title('K-Means Clustering')
plt.colorbar(label='Cluster')
plt.show()
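
# Optionally (illustrative), overlay the fitted cluster centers on the same
# two principal components to see where each cluster sits.
plt.scatter(penguins_PCA[:, 0], penguins_PCA[:, 1], c=kmeans.labels_, cmap='viridis')
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1],
            c='red', marker='X', s=120, label='Cluster centers')
plt.xlabel('Principal Component 1')
plt.ylabel('Principal Component 2')
plt.legend()
plt.show()
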
# Add the cluster labels to the penguins_clean DataFrame
penguins_clean = penguins_clean.copy()  # work on a copy to avoid SettingWithCopyWarning on a filtered frame
penguins_clean['label'] = kmeans.labels_

# Create a list containing the names of the numeric columns
numeric_columns = ['culmen_length_mm', 'culmen_depth_mm', 'flipper_length_mm', 'body_mass_g']

# Create a final characteristic DataFrame: the mean of each numeric column per cluster
stat_penguins = penguins_clean.groupby('label')[numeric_columns].mean()

# Display the statistical table
print(stat_penguins)
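
# A quick look at cluster sizes (illustrative) helps judge whether any group
# is suspiciously small.
print(penguins_clean['label'].value_counts().sort_index())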