
Penguin artwork by @allison_horst (https://github.com/allisonhorst/penguins)

You have been asked to support a team of researchers who have been collecting data about penguins in Antarctica! The data is available in CSV format as penguins.csv.

Origin of this data: Data were collected and made available by Dr. Kristen Gorman and the Palmer Station, Antarctica LTER, a member of the Long Term Ecological Research Network.

The dataset consists of 5 columns.

Column               Description
culmen_length_mm     culmen length (mm)
culmen_depth_mm      culmen depth (mm)
flipper_length_mm    flipper length (mm)
body_mass_g          body mass (g)
sex                  penguin sex

Unfortunately, they have not been able to record the species of penguin, but they know that there are at least three species that are native to the region: Adelie, Chinstrap, and Gentoo. Your task is to apply your data science skills to help them identify groups in the dataset!

1 - Perform preprocessing steps on the dataset to create dummy variables

# Import Required Packages
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Loading and examining the dataset
penguins_df = pd.read_csv("penguins.csv")
penguins_df.head()

# Drop any rows with missing values (if present) so that scaling and clustering run on complete cases
penguins_df = penguins_df.dropna().reset_index(drop=True)

# Creating dummy variables for the 'sex' column
penguins_df = pd.get_dummies(penguins_df, columns=['sex'], drop_first=True)

# Standardizing the numerical columns
scaler = StandardScaler()
penguins_scaled = scaler.fit_transform(penguins_df)

# Convert the scaled data back to a DataFrame for easier handling
penguins_scaled = pd.DataFrame(penguins_scaled, columns=penguins_df.columns)

Step 1: Preprocessing the Dataset

In the first step, we prepared the penguins dataset for clustering. After dropping any rows with missing values, we encoded the categorical feature sex with pd.get_dummies(), which replaces the original sex column with an indicator column; drop_first=True keeps only one indicator so the encoding carries no redundant information. We then standardized the features with StandardScaler() so that each one contributes equally to the distance calculations used by K-Means, and stored the result in penguins_scaled for the clustering steps that follow.
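To make the encoding concrete, here is a minimal sketch of what pd.get_dummies(..., drop_first=True) does to a two-level categorical column. It uses a small made-up frame rather than the real penguins.csv.

# Toy example illustrating the dummy encoding (hypothetical values, not the real data)
import pandas as pd

toy = pd.DataFrame({"sex": ["MALE", "FEMALE", "FEMALE", "MALE"]})

# drop_first=True keeps a single indicator column (e.g. sex_MALE); a second
# indicator would be redundant for a two-level category
print(pd.get_dummies(toy, columns=["sex"], drop_first=True))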

2 - Detect the optimal number of clusters for k-means clustering

# List to store inertia values
inertia_values = []

# Perform Elbow analysis for number of clusters ranging from 1 to 9
for i in range(1, 10):
    kmeans = KMeans(n_clusters=i, random_state=42)
    kmeans.fit(penguins_scaled)
    inertia_values.append(kmeans.inertia_)

# Plot the Elbow curve
plt.figure(figsize=(8, 5))
plt.plot(range(1, 10), inertia_values, marker='o')
plt.title('Elbow Analysis for Optimal K')
plt.xlabel('Number of clusters')
plt.ylabel('Inertia')
plt.grid(True)
plt.show()

Step 2: Determining the Optimal Number of Clusters

In the second step, we performed an Elbow analysis to determine the optimal number of clusters for K-Means. We trained K-Means models with cluster counts from 1 to 9 on the standardized data and recorded the inertia of each model. Inertia is the sum of squared distances from each point to the centroid of its assigned cluster, so lower values indicate tighter clusters, although it always decreases as more clusters are added. Plotting inertia against the number of clusters gives the Elbow curve; the "elbow", where the decrease in inertia starts to level off, suggests the most reasonable number of clusters.
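As a sanity check on what inertia measures, it can be recomputed by hand from a fitted model. The sketch below assumes the kmeans object and the penguins_scaled frame from the code above and simply sums the squared distances of each point to its assigned centroid.

import numpy as np

# Recompute inertia manually for the last fitted model: the sum of squared
# distances from each point to the centre of its assigned cluster
X = penguins_scaled.to_numpy()
assigned_centres = kmeans.cluster_centers_[kmeans.labels_]
manual_inertia = ((X - assigned_centres) ** 2).sum()
print(manual_inertia, kmeans.inertia_)  # the two values should match closely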

3 - Run the k-means clustering algorithm

# Define the optimal number of clusters obtained from the elbow method
n_clusters = 3 

# Run the K-means clustering algorithm on the standardized data
kmeans = KMeans(n_clusters=n_clusters, random_state=42)
penguins_df['cluster'] = kmeans.fit_predict(penguins_scaled)

# Visualize the clusters
plt.figure(figsize=(10, 6))
plt.scatter(penguins_df['culmen_length_mm'], penguins_df['culmen_depth_mm'], c=penguins_df['cluster'], cmap='viridis')
plt.xlabel('Culmen Length (mm)')
plt.ylabel('Culmen Depth (mm)')
plt.title('K-means Clustering of Penguins')
plt.colorbar(label='Cluster')
plt.show()

Step 3: Running the K-Means Clustering Algorithm

In the third step, we ran K-Means with the optimal number of clusters identified above on the standardized data and assigned each penguin a cluster label. Storing these labels in the DataFrame groups the penguins into distinct clusters, and the scatter plot of culmen length against culmen depth, coloured by cluster, shows how the groups separate on their physical measurements.
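As a quick, optional check on the result (assuming the scaler and kmeans objects fitted above), we can look at the cluster sizes and map the centroids back to the original measurement units with scaler.inverse_transform:

# Number of penguins assigned to each cluster
print(penguins_df['cluster'].value_counts().sort_index())

# Cluster centroids expressed in the original (unscaled) units
centroids_original_units = pd.DataFrame(
    scaler.inverse_transform(kmeans.cluster_centers_),
    columns=penguins_scaled.columns
)
print(centroids_original_units)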

4 - Create a final statistical DataFrame for each cluster

# Create a list of numeric columns
numeric_columns = ['culmen_length_mm', 'culmen_depth_mm', 'flipper_length_mm', 'body_mass_g']

# Add the cluster labels to the DataFrame (these match the 'cluster' column assigned in Step 3)
penguins_df['label'] = kmeans.labels_

# Create the final characteristic DataFrame for each cluster
stat_penguins = penguins_df.groupby('label')[numeric_columns].mean()

# Display the final DataFrame
stat_penguins

Step 4: Creating a Final Statistical DataFrame for Each Cluster

In the final step of the project, we created a summary DataFrame called stat_penguins to represent the average characteristics of each cluster identified by the K-means algorithm. First, we identified the numeric columns in the dataset and added a new column named label to store the cluster assignments for each penguin. Then, using the groupby method combined with the mean function, we calculated the mean values of the numeric features for each cluster. This resulted in a DataFrame that provides a clear statistical overview of the key characteristics defining each penguin group.
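If a richer summary is wanted, the same groupby can aggregate several statistics at once. This is an optional extension, not part of the original task:

# Mean, standard deviation, and count of each numeric feature per cluster
cluster_summary = penguins_df.groupby('label')[numeric_columns].agg(['mean', 'std', 'count'])
print(cluster_summary)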

Conclusion:

Based on the final statistical DataFrame stat_penguins, we can conclude that the k-means clustering algorithm successfully identified distinct groups within the penguins dataset, reflecting potential species differences. By analyzing the average values of key characteristics such as culmen length, culmen depth, flipper length, and body mass for each cluster, we can gain insights into the unique traits of each group. These findings can serve as a valuable resource for the research team to classify and study the different penguin species in Antarctica more effectively. This project showcases the power of unsupervised learning techniques in revealing underlying patterns in data, even when specific labels or categories are not initially available.
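One possible follow-up, not part of the original brief, is to quantify how well separated the clusters are with a silhouette score. The sketch below assumes the penguins_scaled features and the cluster labels produced earlier:

from sklearn.metrics import silhouette_score

# Silhouette score ranges from -1 to 1; values closer to 1 indicate
# compact, well-separated clusters
score = silhouette_score(penguins_scaled, penguins_df['cluster'])
print(f"Silhouette score for k={n_clusters}: {score:.3f}")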