Supervised Learning with scikit-learn
# Importing modules
import pandas as pd
import numpy as np
# Importing the course datasets
diabetes_df = pd.read_csv('datasets/diabetes_clean.csv')
music_df = pd.read_csv('datasets/music_clean.csv')
sales_df = pd.read_csv('datasets/advertising_and_sales_clean.csv')
churn_df = pd.read_csv("datasets/telecom_churn_clean.csv")
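A quick, optional check (not part of the course template) is to inspect one of the DataFrames to confirm it loaded correctly; a minimal sketch using churn_df:
# Preview the churn data: dimensions and first rows
print(churn_df.shape)
print(churn_df.head())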
1. Classification
1.1 Binary classification
- In the video, you saw that there are two types of supervised learning: classification and regression.
- Recall that binary classification is used to predict a target variable that has only two labels, typically represented numerically with a zero or a one.
- A dataset, churn_df, has been preloaded for you in the console. Your task is to examine the data and choose which column could be the target variable for binary classification.
'churn'
Correct! churn has values of 0 or 1, so it can be predicted using a binary classification model.
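As an optional sanity check (an assumption beyond the exercise itself), you could confirm that "churn" really holds only two labels; a minimal sketch:
# Count how many observations fall into each label of the candidate target
print(churn_df["churn"].value_counts())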
1.1.2 The supervised learning workflow
- Recall that scikit-learn offers a repeatable workflow for using supervised learning models to predict the target variable values when presented with new data.
- Reorder the pseudo-code provided so it accurately represents the workflow of building a supervised learning model and making predictions.
1 => from sklearn import Model
2 => model = Model()
3 => model.fit(X, y)
4 => model.predict(X_new)
Great work! You can see how scikit-learn enables predictions to be made in only a few lines of code!
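To make the ordered pseudo-code concrete, here is a minimal runnable sketch of the same four steps using KNeighborsClassifier on made-up toy arrays (the data values are illustrative assumptions, not course data):
# 1. Import a model class
from sklearn.neighbors import KNeighborsClassifier
import numpy as np

# 2. Instantiate the model
model = KNeighborsClassifier(n_neighbors=3)

# 3. Fit the model to labeled training data (toy values for illustration)
X = np.array([[1.0], [2.0], [3.0], [10.0], [11.0], [12.0]])
y = np.array([0, 0, 0, 1, 1, 1])
model.fit(X, y)

# 4. Predict labels for previously unseen data
X_new = np.array([[2.5], [10.5]])
print(model.predict(X_new))  # prints [0 1] for these toy values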
1.2 The classification challenge
1.2.1 k-Nearest Neighbors: Fit
- In this exercise, you will build your first classification model using the churn_df dataset, which has been preloaded for the remainder of the chapter.
- The features to use will be "account_length" and "customer_service_calls".
- The target, "churn", needs to be a single column with the same number of observations as the feature data.
- You will convert the features and the target variable into NumPy arrays, create an instance of a KNN classifier, and then fit it to the data.
# Import KNeighborsClassifier
from sklearn.neighbors import KNeighborsClassifier
# Create arrays for the features and the target variable
y = churn_df["churn"].values
X = churn_df[["account_length","customer_service_calls"]].values
# Create a KNN classifier with 6 neighbors
knn = KNeighborsClassifier(n_neighbors=6)
# Fit the classifier to the data
knn.fit(X, y)
Excellent! Now that your KNN classifier has been fit to the data, it can be used to predict the labels of new data points.
1.2.2 k-Nearest Neighbors: Predict
- Now that you have fit a KNN classifier, you can use it to predict the labels of new data points.
- All available data was used for training; fortunately, there are new observations available.
- These have been preloaded for you as X_new.
- The model knn, which you created and fit to the data in the last exercise, has been preloaded for you.
- You will use your classifier to predict the labels of a set of new data points:
X_new = np.array([[30.0, 17.5],
                  [107.0, 24.1],
                  [213.0, 10.9]])
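The section ends before the prediction call itself; a minimal sketch of that final step, assuming the knn classifier and the X_new array defined above:
# Predict the labels for the new data points
y_pred = knn.predict(X_new)
print("Predictions: {}".format(y_pred))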