source: @allison_horst https://github.com/allisonhorst/penguins
You have been asked to support a team of researchers who have been collecting data about penguins in Antarctica! The data is available in CSV format as penguins.csv.
Origin of this data: Data were collected and made available by Dr. Kristen Gorman and the Palmer Station, Antarctica LTER, a member of the Long Term Ecological Research Network.
The dataset consists of 5 columns.
| Column | Description |
|---|---|
| culmen_length_mm | culmen length (mm) |
| culmen_depth_mm | culmen depth (mm) |
| flipper_length_mm | flipper length (mm) |
| body_mass_g | body mass (g) |
| sex | penguin sex |
Unfortunately, they have not been able to record the species of penguin, but they know that there are at least three species that are native to the region: Adelie, Chinstrap, and Gentoo. Your task is to apply your data science skills to help them identify groups in the dataset!
# Import Required Packages
from sklearn.metrics import silhouette_score
from sklearn.manifold import TSNE
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from yellowbrick.cluster import KElbowVisualizer
# Loading and examining the dataset
penguins_df = pd.read_csv("penguins.csv")
penguins_df.head()
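The original Palmer Station data contains a handful of rows with missing measurements; assuming this csv may too, a minimal check-and-drop sketch before encoding (skip if your file is already clean):

# Count missing values per column and drop incomplete rows
# (assumes the csv, like the original Palmer data, may contain NaN entries)
print(penguins_df.isna().sum())
penguins_df = penguins_df.dropna()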
Since the 'sex' column contains categorical values, we'll convert it to a binary indicator using one-hot encoding to prepare the data for clustering.
penguin_dummies = pd.get_dummies(penguins_df, drop_first=True)
# get_dummies prefixes the dummy column with the original column name,
# so the indicator is called 'sex_MALE'; rename it for readability
penguin_dummies = penguin_dummies.rename(columns={'sex_MALE': 'isMale'})
We now have a fully numeric DataFrame ready for clustering. The column isMale is the binary encoding of the sex variable.
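As a quick sanity check, we can confirm that every column is now numeric (or boolean, which scikit-learn treats as numeric):

# All dtypes should now be numeric/bool so scikit-learn can consume them
penguin_dummies.dtypes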
Now that the data is cleaned and ready, we can explore how many distinct groups exist in our penguin population using clustering techniques. We will fit KMeans for each number of clusters from 3 to 9, visualize the resulting groups, record the inertia of each fit, and then apply the elbow rule to choose the optimal number of clusters.
k_clusters = np.arange(3, 10)
scaler = StandardScaler()
tsne = TSNE(n_components=2, random_state=1808)
# Standardize the features before t-SNE so body_mass_g (thousands of grams)
# does not dominate the distance computations
peng_tsne = tsne.fit_transform(scaler.fit_transform(penguin_dummies))
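Before clustering, it helps to eyeball the embedding itself; visually separated blobs hint at how many groups to expect. A quick sketch:

# Plot the raw t-SNE embedding (no cluster labels yet)
sns.scatterplot(x=peng_tsne[:, 0], y=peng_tsne[:, 1])
plt.title('t-SNE embedding of the penguin features')
plt.show()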
%matplotlib inline
inertia = []
for n in k_clusters:
    model = KMeans(n_clusters=n, n_init=10, random_state=1808)
    pipeline = make_pipeline(scaler, model)
    labels = pipeline.fit_predict(peng_tsne)
    inertia.append(model.inertia_)
    # Visualize the clustering for this value of k
    sns.scatterplot(x=peng_tsne[:, 0], y=peng_tsne[:, 1], hue=labels,
                    palette='tab10', legend=False)
    plt.title(f'KMeans clustering with k={n}')
    plt.show()
df = pd.DataFrame({'k': k_clusters, 'inertia': inertia})
sns.lineplot(data=df, x='k', y='inertia', marker='o')
plt.title('Inertia vs. number of clusters')
plt.show()
elbow_vis = KElbowVisualizer(KMeans(n_init=10, random_state=1808), k=(2, 10))
# Scale the raw features first so the elbow is not driven by body_mass_g alone
elbow_vis.fit(scaler.fit_transform(penguin_dummies))
elbow_vis.show()
Since the elbow of the inertia line plot occurs at 4 and the t-SNE visualization supports this, we will use 4 as the optimal number of clusters for KMeans.
optimalK=4
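As a further sanity check on the elbow choice, we can use silhouette_score (imported above but not yet used) for each candidate k; the score is highest for the best-separated clustering. A minimal sketch on the t-SNE embedding:

# Silhouette score for each candidate k; higher is better
for n in k_clusters:
    km = KMeans(n_clusters=n, n_init=10, random_state=1808)
    km_labels = km.fit_predict(peng_tsne)
    print(f'k={n}: silhouette={silhouette_score(peng_tsne, km_labels):.3f}')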
Visualization of our model using the optimal number of clusters:
finalModel = KMeans(n_clusters=optimalK, n_init=10, random_state=1808)
%matplotlib inline
labels = finalModel.fit_predict(peng_tsne)
sns.scatterplot(x=peng_tsne[:, 0], y=peng_tsne[:, 1], hue=labels, palette='tab10')
plt.title('Final KMeans clusters on the t-SNE embedding')
plt.show()
penguins_df['labels'] = labels
# Group by the cluster label; numeric_only avoids errors from the 'sex' column
stat_penguins = penguins_df.groupby('labels').mean(numeric_only=True)
stat_penguins
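It is also worth checking how many penguins fall into each cluster; severely lopsided group sizes would be a warning sign for the clustering:

# Number of penguins assigned to each cluster
penguins_df['labels'].value_counts().sort_index()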
Based on our analysis, KMeans clustering successfully identified four main groups in the data. Further work could involve validating this assumption with labeled data.