
Artwork: @allison_horst (https://github.com/allisonhorst/penguins)

You have been asked to support a team of researchers who have been collecting data about penguins in Antarctica! The data is available in CSV format as penguins.csv.

Origin of this data: Data were collected and made available by Dr. Kristen Gorman and the Palmer Station, Antarctica LTER, a member of the Long Term Ecological Research Network.

The dataset consists of 5 columns.

| Column            | Description         |
|-------------------|---------------------|
| culmen_length_mm  | culmen length (mm)  |
| culmen_depth_mm   | culmen depth (mm)   |
| flipper_length_mm | flipper length (mm) |
| body_mass_g       | body mass (g)       |
| sex               | penguin sex         |

Unfortunately, they have not been able to record the species of penguin, but they know that there are at least three species that are native to the region: Adelie, Chinstrap, and Gentoo. Your task is to apply your data science skills to help them identify groups in the dataset!
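Before clustering, it can help to confirm that the file really has the five columns documented in the table above. A minimal sketch, using a small hand-made frame as a hypothetical stand-in for penguins.csv (the real file is not bundled here):

```python
import pandas as pd

# Hypothetical stand-in for penguins.csv: same schema, two made-up rows
sample = pd.DataFrame({
    "culmen_length_mm": [39.1, 46.5],
    "culmen_depth_mm": [18.7, 17.9],
    "flipper_length_mm": [181.0, 192.0],
    "body_mass_g": [3750.0, 3500.0],
    "sex": ["MALE", "FEMALE"],
})

expected = ["culmen_length_mm", "culmen_depth_mm",
            "flipper_length_mm", "body_mass_g", "sex"]

# Fail loudly if the schema drifts from the documented five columns
assert list(sample.columns) == expected
print(sample.dtypes)
```

The same `assert` can be run against the real `penguins_df` right after `read_csv`.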

# Import Required Packages
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Loading and examining the dataset
penguins_df = pd.read_csv("penguins.csv")
print(penguins_df.head())

# Finding nulls and checking dtypes
# (df.info() prints its own report, so it is not wrapped in print())
penguins_df.info()
print(penguins_df.isnull().sum())

# Dropping rows with missing values
penguins_df = penguins_df.dropna()
# Finding outliers
penguins_df.boxplot()
plt.show()

# Removing outliers: flipper lengths above 4000 mm or below 0 mm are
# physically impossible, so drop those rows by condition rather than
# hard-coding their index labels
outlier_mask = (penguins_df['flipper_length_mm'] > 4000) | (penguins_df['flipper_length_mm'] < 0)
print(penguins_df[outlier_mask])

penguins_clean = penguins_df[~outlier_mask].copy()
penguins_clean.boxplot()
plt.show()
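The two impossible flipper-length rows were spotted by eye from the boxplot. A more general way to flag such values is an interquartile-range fence; this is not the project's method, just a common alternative, sketched here on synthetic numbers:

```python
import pandas as pd

# Synthetic flipper lengths with two impossible entries (5000 and -132)
s = pd.Series([181, 186, 190, 195, 210, 215, 220, 5000, -132], dtype=float)

q1, q3 = s.quantile(0.25), s.quantile(0.75)
iqr = q3 - q1
low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr  # Tukey's fences

outliers = s[(s < low) | (s > high)]
print(outliers.tolist())  # flags the two impossible values
```

On the real data, the same mask could replace the hand-picked thresholds.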
# Create dummy variables for the categorical 'sex' column
df = pd.get_dummies(penguins_clean, columns=['sex'])
print(df.head())
# Pre-processing: standardise features before PCA
scaler = StandardScaler()
X = scaler.fit_transform(df)

penguins_preprocessed = pd.DataFrame(data=X, columns=df.columns)
print(penguins_preprocessed.head(10))
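Standardising first matters because PCA is variance-driven: an unscaled column such as body mass (thousands of grams) would dominate the millimetre-scale columns. A small sketch with made-up numbers showing what the scaler guarantees:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Two hypothetical features on very different scales (mm vs g)
mm = rng.normal(190, 10, size=100)
g = rng.normal(4200, 800, size=100)
X = np.column_stack([mm, g])

Z = StandardScaler().fit_transform(X)

# After scaling, each column has mean ~0 and unit variance, so no single
# feature dominates the covariance structure that PCA analyses
print(Z.mean(axis=0).round(6), Z.std(axis=0).round(6))
```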
# Performing PCA
from sklearn.decomposition import PCA

pca = PCA()
pca.fit(penguins_preprocessed)
exp_variance = pca.explained_variance_ratio_

# Plotting explained variance per component
fig, ax = plt.subplots()
ax.bar(range(pca.n_components_), exp_variance)
ax.set_xlabel('Principal Component #')
ax.set_ylabel('Explained variance ratio')
ax.axhline(y=0.1, linestyle='--')
plt.show()

# Only components 0 and 1 explain more than 10% of the variance,
# so re-run PCA keeping just those two

n_components = 2

pca = PCA(n_components=n_components, random_state=42)

penguins_PCA = pca.fit_transform(penguins_preprocessed)
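One way to double-check a choice like `n_components=2` is the cumulative explained-variance ratio. A sketch on synthetic data that is essentially two-dimensional (made-up numbers, not the penguin data):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
# 5-column data built from 2 latent factors plus a little noise,
# so almost all variance lives in the first two components
base = rng.normal(size=(200, 2))
extra = base @ rng.normal(size=(2, 3)) + 0.01 * rng.normal(size=(200, 3))
X = np.column_stack([base, extra])

pca = PCA().fit(X)
cumulative = np.cumsum(pca.explained_variance_ratio_)
print(cumulative.round(3))  # first two entries are close to 1.0
```

On the penguin data, `np.cumsum(exp_variance)` gives the same check.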
# Finding the optimum number of clusters with the elbow method
inertia = []

for k in range(1, 10):
    # n_init is set explicitly for stable behaviour across sklearn versions
    kmeans = KMeans(n_clusters=k, random_state=42, n_init=10).fit(penguins_PCA)
    inertia.append(kmeans.inertia_)
plt.plot(range(1, 10), inertia, marker='o')
plt.xlabel('Number of clusters')
plt.ylabel('Inertia')
plt.title('Elbow Method')
plt.show()
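The elbow can be ambiguous to read off a plot; the silhouette score gives a complementary, directly comparable number per k. A sketch on synthetic blobs (a stand-in for the penguin data, not the project's actual evaluation):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Three well-separated synthetic clusters
X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.6, random_state=42)

scores = {}
for k in range(2, 7):
    labels = KMeans(n_clusters=k, random_state=42, n_init=10).fit_predict(X)
    scores[k] = silhouette_score(X, labels)  # higher is better, range [-1, 1]

best_k = max(scores, key=scores.get)
print(best_k)  # expected to be 3 for well-separated blobs
```

Replacing `X` with `penguins_PCA` runs the same check on the real data.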
# Define the number of clusters: the elbow plot suggests 3, which matches
# the three species known to be native to the region
n_clusters = 3

# Running the k-means clustering algorithm
kmeans = KMeans(n_clusters=n_clusters, random_state=42, n_init=10).fit(penguins_PCA)

plt.scatter(penguins_PCA[:, 0], penguins_PCA[:, 1], c=kmeans.labels_, cmap='viridis')
plt.xlabel('First Principal Component')
plt.ylabel('Second Principal Component')
plt.title(f'K-means Clustering (K={n_clusters})')
plt.show()
# Summarising the clusters: attach labels and compute per-cluster means
penguins_clean['label'] = kmeans.labels_
numeric_columns = ['culmen_length_mm', 'culmen_depth_mm', 'flipper_length_mm',
                   'label']
stat_penguins = penguins_clean[numeric_columns].groupby('label').mean()

print(stat_penguins)