
Machine Learning in R for beginners

This small tutorial is meant to introduce you to the basics of machine learning in R: it will show you how to use R to work with KNN.
Nov 2018  · 24 min read

Introducing: Machine Learning in R

Machine learning is a branch of computer science that studies the design of algorithms that can learn. Typical machine learning tasks include concept learning, function learning or “predictive modeling”, clustering, and finding predictive patterns. These tasks are learned from available data that was observed through experience or instruction, for example. The hope is that incorporating this experience into its tasks will eventually improve the learning. The ultimate goal is to improve the learning in such a way that it becomes automatic, so that humans don’t need to interfere any more.

This small tutorial is meant to introduce you to the basics of machine learning in R: more specifically, it will show you how to use R to work with the well-known machine learning algorithm called “KNN” or k-nearest neighbors.

If you’re interested in following a course, consider checking out our Introduction to Machine Learning with R or DataCamp’s Unsupervised Learning in R course!

Using R For k-Nearest Neighbors (KNN)

The KNN or k-nearest neighbors algorithm is one of the simplest machine learning algorithms and is an example of instance-based learning, where new data are classified based on stored, labeled instances.

More specifically, the distance between the stored data and the new instance is calculated by means of a similarity measure. This measure is typically expressed by a distance metric such as the Euclidean or Manhattan distance, or by a similarity score such as cosine similarity.
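
In R, for instance, the built-in dist() function computes several of these metrics; here is a quick illustration on two made-up points:

# Two made-up points in 2D
p <- rbind(c(1, 2), c(4, 6))

# Euclidean distance: sqrt(3^2 + 4^2) = 5
dist(p, method = "euclidean")

# Manhattan distance: |3| + |4| = 7
dist(p, method = "manhattan")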

In other words, the similarity to the data that was already in the system is calculated for any new data point that you input into the system.

Then, you use this similarity value to perform predictive modeling. Predictive modeling is either classification, assigning a label or a class to the new instance, or regression, assigning a value to the new instance. Whether you classify or assign a value to the new instance of course depends on how you compose your model with KNN.

The k-nearest neighbors algorithm adds to this basic idea: after the distance of the new point to all stored data points has been calculated, the distance values are sorted and the k nearest neighbors are determined. The labels of these neighbors are gathered and a majority vote or weighted vote is used for classification or regression purposes.

In other words, the more of the k neighbors that share a certain label, the more likely the new instance is to receive that same label. In the case of regression, the value that is assigned to the new data point is the mean of its k nearest neighbors.
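
To make this concrete, here is a minimal sketch of KNN classification in base R. Note that this is just an illustration of the idea, not the implementation used later in this tutorial; the knn_predict() helper and its arguments are made up for this example:

# Toy KNN classifier (illustrative sketch only)
knn_predict <- function(train, labels, new_point, k = 3) {
  # Euclidean distance from the new point to every stored instance
  dists <- sqrt(rowSums(sweep(train, 2, new_point)^2))
  # Labels of the k nearest stored instances
  nearest <- labels[order(dists)[1:k]]
  # Majority vote decides the class (ties resolved by first match here)
  names(which.max(table(nearest)))
}

# Example: classify one made-up flower with the built-in iris data
knn_predict(iris[, 1:4], iris$Species, c(5.0, 3.5, 1.5, 0.2), k = 5)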



Step One. Get your Data

Machine learning usually starts from observed data. You can take your own data set or browse through other sources to find one.

Built-in Datasets of R

This tutorial uses the Iris data set, which is very well-known in the area of machine learning. This dataset is built into R, so you can take a look at this dataset by typing the following into your console:

iris

UC Irvine Machine Learning Repository

If you want to download the data set instead of using the one that is built into R, you can go to the UC Irvine Machine Learning Repository and look up the Iris data set.


Tip: don’t only check out the data folder of the Iris data set, but also take a look at the data description page!

Then, use the following command to load in the data:

# Read in `iris` data
iris <- read.csv(url("http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"),
                 header = FALSE)

# Print first lines
head(iris)

# Add column names
names(iris) <- c("Sepal.Length", "Sepal.Width", "Petal.Length", "Petal.Width", "Species")

# Check the result
iris

This command reads the .csv or “Comma Separated Value” file from the website. The header argument is set to FALSE because the Iris data set from this source does not contain a header row with the attribute names.

Instead of the attribute names, you will see generic column names such as “V1” or “V2” when you inspect the iris data with a function such as head(). These are the default placeholder names that read.csv() assigns when no header is present.

To simplify working with the data set, it is a good idea to set the column names yourself: you can do this with the function names(), which gets or sets the names of an object. Concatenate the names of the attributes as you would like them to appear. In the code chunk above, these are Sepal.Length, Sepal.Width, Petal.Length, Petal.Width and Species.

Once again, these names don’t come out of the blue: take a look at the description of the data set that is linked above; you’ll see all these names listed there.


Step Two. Know your Data

Now that you have loaded the Iris data set into RStudio, you should try to get a thorough understanding of what your data is about. Just looking or reading about your data is certainly not enough to get started!

You need to get your hands dirty, explore and visualize your data set and even gather some more domain knowledge if you feel the data is way over your head.

Probably you’ll already have the domain knowledge that you need, but just as a reminder: flowers have sepals and petals. The sepals enclose the petals and are typically green and leaf-like, while the petals are typically colored. For iris flowers, this is just a little bit different, as you can see in the following picture:

[Figure: the sepals and petals of an iris flower]

Initial Overview of the Data Set

First, you can already try to get an idea of your data by making some graphs, such as histograms or boxplots. In this case, however, scatter plots can give you a great idea of what you’re dealing with: it can be interesting to see how much one variable is affected by another.

In other words, you want to see if there is any correlation between two variables.

You can make scatterplots with the ggvis package, for example.

Note that you first need to load the ggvis package:

# Load in `ggvis`
library(ggvis)

# Iris scatter plot
iris %>% ggvis(~Sepal.Length, ~Sepal.Width, fill = ~Species) %>% layer_points()

[Scatter plot of sepal length versus sepal width, colored by species]

You see that there is a high correlation between the sepal length and the sepal width of the Setosa iris flowers, while the correlation is somewhat weaker for the Virginica and Versicolor flowers: their data points are more spread out over the graph and don’t form the tight cluster that you see for the Setosa flowers.

The scatter plot that maps the petal length and the petal width tells a similar story:

iris %>% ggvis(~Petal.Length, ~Petal.Width, fill = ~Species) %>% layer_points()

[Scatter plot of petal length versus petal width, colored by species]

You see that this graph indicates a positive correlation between the petal length and the petal width for all the species included in the Iris data set. Of course, you probably need to test this hypothesis a bit further if you want to be really sure:

# Overall correlation between `Petal.Length` and `Petal.Width`
cor(iris$Petal.Length, iris$Petal.Width)

# Return values of `iris` levels
x <- levels(iris$Species)

# Print Setosa correlation matrix
print(x[1])
cor(iris[iris$Species == x[1], 1:4])

# Print Versicolor correlation matrix
print(x[2])
cor(iris[iris$Species == x[2], 1:4])

# Print Virginica correlation matrix
print(x[3])
cor(iris[iris$Species == x[3], 1:4])

You see that when you combine all three species, the correlation is a bit stronger than when you look at each species separately: the overall correlation is 0.96, while for Versicolor it is 0.79. Setosa and Virginica, on the other hand, have petal length and width correlations of 0.33 and 0.32, respectively, when you round the numbers.

Tip: are you curious about ggvis, graphs or histograms in particular? Check out our histogram tutorial and/or ggvis course.

After this visual overview of the data, you can also inspect the data set itself by entering:

# Return all `iris` data
iris

# Return first lines of `iris`
head(iris)

# Return structure of `iris`
str(iris)

However, as you will see from the result of this command, printing the full data set really isn’t the best way to inspect it thoroughly: the data takes up a lot of space in the console, which makes it hard to form a clear idea about your data. It is therefore a better idea to inspect the data set by executing head(iris) or str(iris).

Note that the last command will help you to clearly distinguish the data type num and the three levels of the Species attribute, which is a factor. This is very convenient, since many R machine learning classifiers require that the target feature is coded as a factor.

Remember that factor variables represent categorical variables in R. They can thus take on a limited number of different values.
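
For example, this tiny snippet (not part of the tutorial’s own code) shows how a character vector becomes a factor with a fixed set of levels:

# A factor stores categorical values as a fixed set of levels
sizes <- factor(c("small", "large", "small", "medium"))
levels(sizes)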

A quick look at the Species attribute through table() tells you that the division of the species of flowers is 50-50-50. If you want to check the percentual division of the Species attribute instead, you can ask for a table of proportions:

# Division of `Species`
table(iris$Species)

# Percentual division of `Species`
round(prop.table(table(iris$Species)) * 100, digits = 1)

Note that the round() function rounds the values of its first argument, prop.table(table(iris$Species)) * 100, to the specified number of digits, here one digit after the decimal point. You can easily adjust this by changing the value of the digits argument.
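
For instance, setting digits to 2 would report the proportions with two decimal places instead:

# Same table of proportions, now with two digits after the decimal point
round(prop.table(table(iris$Species)) * 100, digits = 2)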

Profound Understanding of your Data

Let’s not remain at this high-level overview of the data! R gives you the opportunity to go more in depth with the summary() function. This will give you the minimum value, first quartile, median, mean, third quartile and maximum value for the numeric attributes of the Iris data set. For the class variable, the count per factor level is returned:

# Summary overview of `iris`
summary(iris)

# Refined summary overview
summary(iris[c("Petal.Width", "Sepal.Width")])

As you can see, the c() function is added to the original command: the column names Petal.Width and Sepal.Width are concatenated, and a summary is then asked of just these two columns of the Iris data set.

Step Three. Where to go Now?

After you have acquired a good understanding of your data, you have to decide on the use cases that would be relevant for your data set. In other words, you think about what your data set might teach you or what you think you can learn from your data. From there on, you can think about what kind of algorithms you would be able to apply to your data set in order to get the results that you think you can obtain.

Tip: keep in mind that the more familiar you are with your data, the easier it will be to assess the use cases for your specific data set. The same also holds for finding the appropriate machine learning algorithm.

For this tutorial, the Iris data set will be used for classification, which is an example of predictive modeling. The last attribute of the data set, Species, will be the target variable or the variable that you want to predict in this example.

Note that you could also take one of the numerical attributes as the target variable if you wanted to use KNN for regression.

Step Four. Prepare your Workspace

Many of the algorithms used in machine learning are not incorporated by default into R. You will most probably need to download the packages that you want to use when you want to get started with machine learning.


Tip: got an idea of which learning algorithm you may use, but not of which package you want or need? You can find a pretty complete overview of all the packages that are used in R right here.

To illustrate the KNN algorithm, this tutorial works with the package class:

library(class)

If you don’t have this package yet, you can quickly and easily install it by typing the following line of code:

install.packages("<package name>")

Remember the nerd tip: if you’re not sure if you have this package, you can run the following command to find out!

any(grepl("<name of your package>", installed.packages()))

Step Five. Prepare your Data

After exploring your data and preparing your workspace, you can finally focus back on the task ahead: making a machine learning model. However, before you can do this, it’s important to also prepare your data. The following section will outline two ways in which you can do this: by normalizing your data (if necessary) and by splitting your data into training and test sets.

Normalization

As part of your data preparation, you might need to normalize your data so that it’s consistent. For this introductory tutorial, just remember that normalization makes it easier for the KNN algorithm to learn. There are two types of normalization (a small code sketch of both follows the list below):

  • example normalization is the adjustment of each example individually, while
  • feature normalization indicates that you adjust each feature in the same way across all examples.
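
Here is a minimal sketch of the difference, using the four numeric columns of iris as an illustration; scale() is used here as one common form of feature normalization, and rescaling each row to unit length as one form of example normalization:

# Numeric part of the iris data, used for illustration
df <- iris[, 1:4]

# Feature normalization: standardize each column across all examples
df_feature <- as.data.frame(scale(df))

# Example normalization: rescale each row individually, e.g. to unit length
df_example <- t(apply(df, 1, function(row) row / sqrt(sum(row^2))))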

So when do you need to normalize your dataset?

In short: when you suspect that the data is not consistent.

You can easily see this when you go through the results of the summary() function. Look at the minimum and maximum values of all the (numerical) attributes. If one attribute has a much wider range of values than the others, you will need to normalize your dataset, because the distance will otherwise be dominated by this feature.

For example, if your dataset has just two attributes, X and Y, and X has values that range from 1 to 1000, while Y has values that only go from 1 to 100, then Y’s influence on the distance function will usually be overpowered by X’s influence.

When you normalize, you actually adjust the range of all features, so that features with larger ranges are not over-emphasized in the distance computation.
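
You can see this domination at work in a small sketch of the hypothetical X/Y example above:

# Two made-up observations: X ranges 1-1000, Y ranges 1-100
a <- c(X = 10, Y = 10)
b <- c(X = 900, Y = 90)

# Unnormalized Euclidean distance: almost entirely determined by X
sqrt(sum((a - b)^2))

# After min-max scaling both features to [0, 1], Y contributes comparably
a_scaled <- c(X = (10 - 1) / 999, Y = (10 - 1) / 99)
b_scaled <- c(X = (900 - 1) / 999, Y = (90 - 1) / 99)
sqrt(sum((a_scaled - b_scaled)^2))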

Tip: go back to the result of summary(iris) and try to figure out if normalization is necessary.

The Iris data set doesn’t need to be normalized: the Sepal.Length attribute has values that go from 4.3 to 7.9 and Sepal.Width contains values from 2 to 4.4, while Petal.Length’s values range from 1 to 6.9 and Petal.Width goes from 0.1 to 2.5. All values of all attributes are contained within the range of 0.1 to 7.9, which you can consider acceptable.

Nevertheless, it’s still a good idea to study normalization and its effect, especially if you’re new to machine learning. You can perform feature normalization, for example, by first making your own normalize() function.

You can then use this function in another command, where you put the results of the normalization in a data frame through as.data.frame(), after the function lapply() returns a list of the same length as the data set that you give it. Each element of that list is the result of applying normalize to the corresponding column of the input:

YourNormalizedDataSet <- as.data.frame(lapply(YourDataSet, normalize))

Test this in the code chunk below:

# Build your own `normalize()` function
normalize <- function(x) {
  num <- x - min(x)
  denom <- max(x) - min(x)
  return (num / denom)
}

# Normalize the `iris` data
iris_norm <- as.data.frame(lapply(iris[1:4], normalize))

# Summarize `iris_norm`
summary(iris_norm)

For the Iris dataset, you apply the normalize function to the four numerical attributes (Sepal.Length, Sepal.Width, Petal.Length, Petal.Width) and put the results in a data frame.

Tip: to more thoroughly illustrate the effect of normalization on the data set, compare the following result to the summary of the Iris data set that was given in step two.

Training and Test Sets

In order to assess your model’s performance later, you will need to divide the data set into two parts: a training set and a test set.

The first is used to train the system, while the second is used to evaluate the learned or trained system. In practice, the training and test sets are disjoint: the most common split is to take 2/3 of your original data set as the training set, while the remaining 1/3 composes the test set.

One last look at the data set teaches you that, since the rows are ordered by species, splitting the data as is would give you a training set with all the “Setosa” and “Versicolor” instances, but none of “Virginica”. The model would therefore classify all unknown instances as either “Setosa” or “Versicolor”, as it would not be aware of the presence of a third species of flowers in the data.

In short, you would get incorrect predictions for the test set.

You thus need to make sure that all three classes of species are present in the training set. What’s more, the number of instances of all three species should be more or less equal, so that you do not favour one class over the others in your predictions.

To make your training and test sets, you first set a seed. This is a number that initializes R’s random number generator. The major advantage of setting a seed is that you get the same sequence of random numbers whenever you supply the same seed to the random number generator.

set.seed(1234)

Then, you want to make sure that your Iris data set is shuffled and that you have an equal amount of each species in your training and test sets.

You use the sample() function to take a sample with a size that is set to the number of rows of the Iris data set, or 150. You sample with replacement: you choose from a vector of 2 elements and assign either 1 or 2 to each of the 150 rows of the Iris data set. The assignment is subject to probability weights of 0.67 and 0.33.

ind <- sample(2, nrow(iris), replace=TRUE, prob=c(0.67, 0.33))

Note that the replace argument is set to TRUE: this means that after a 1 or a 2 is assigned to a certain row, the vector of 2 elements is reset to its original state, so each subsequent row can again receive either a 1 or a 2. Without the prob argument both outcomes would be equally likely, so you specify the probability weights explicitly. Note also that the seed has been set to 1234 just before, so the sample is reproducible.

Remember that you want your training set to be 2/3 of your original data set: that is why you assign “1” with a probability of 0.67 and “2” with a probability of 0.33 to the 150 sample rows.
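
As a quick sanity check (a small addition to the original steps), you can tabulate ind to confirm that the split is indeed roughly 2/3 versus 1/3:

# Check the split: roughly 2/3 ones and 1/3 twos
table(ind)
round(prop.table(table(ind)), 2)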

You can then use the sample that is stored in the variable ind to define your training and test sets:

# Compose training set
iris.training <- iris[ind == 1, 1:4]

# Inspect training set
head(iris.training)

# Compose test set
iris.test <- iris[ind == 2, 1:4]

# Inspect test set
head(iris.test)

Note that, in addition to the 2/3 and 1/3 proportions specified above, you don’t take all attributes into account when forming the training and test sets. Specifically, you only take Sepal.Length, Sepal.Width, Petal.Length and Petal.Width. This is because you actually want to predict the fifth attribute, Species: it is your target variable. However, you do still need to supply it to the KNN algorithm as the class labels, otherwise there can never be any prediction for it.

You therefore need to store the class labels in factor vectors and divide them over the training and test sets:

# Compose `iris` training labels
iris.trainLabels <- iris[ind == 1, 5]

# Inspect result
print(iris.trainLabels)

# Compose `iris` test labels
iris.testLabels <- iris[ind == 2, 5]

# Inspect result
print(iris.testLabels)
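
Again as an optional check that is not part of the original exercise, tabulating the training labels confirms that all three species are represented in roughly comparable numbers:

# All three species should appear in the training labels
table(iris.trainLabels)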

Step Six. The Actual KNN Model

Building your Classifier

After all these preparation steps, you have made sure that all your known (training) data is stored. No actual model or learning has been performed up until this moment. Now you want to find, for each instance of your test set, its k nearest neighbours in the training set.

An easy way to do these two steps is by using the knn() function, which uses the Euclidean distance measure to find the k nearest neighbours of your new, unknown instance. Here, k is a parameter that you set yourself.

As mentioned before, new instances are classified by a majority vote or weighted vote: the class with the highest score among the k neighbours wins, and the unknown instance receives that label. If there is a tie between classes, the classification happens randomly.

Note: the k parameter is often an odd number to avoid ties in the voting scores.

To build your classifier, you need to take the knn() function and simply add some arguments to it, just like in this example:

# Build the model
iris_pred <- knn(train = iris.training, test = iris.test, cl = iris.trainLabels, k = 3)

# Inspect `iris_pred`
iris_pred

You store in iris_pred the result of the knn() function, which takes as arguments the training set, the test set, the training labels and the number of neighbours you want to find with this algorithm. The result of this function is a factor vector with the predicted classes for each row of the test data.

Note that you don’t want to insert the test labels: these will be used to see if your model is good at predicting the actual classes of your instances!

When you inspect the result, iris_pred, you’ll see this factor vector of predicted classes printed out.

Step Seven. Evaluation of your Model

An essential next step in machine learning is the evaluation of your model’s performance. In other words, you want to analyze the degree of correctness of the model’s predictions.

As a first check, you can simply compare the results of iris_pred to the test labels that you defined earlier:

# Put `iris.testLabels` in a data frame
irisTestLabels <- data.frame(iris.testLabels)

# Merge `iris_pred` and `iris.testLabels`
merge <- data.frame(iris_pred, iris.testLabels)

# Specify column names for `merge`
names(merge) <- c("Predicted Species", "Observed Species")

# Inspect `merge`
merge

You see that the model makes reasonably accurate predictions, with the exception of one wrong classification in row 29, where “Versicolor” was predicted while the test label is “Virginica”.

This is already some indication of your model’s performance, but you might want to go even deeper into your analysis. For this purpose, you can import the package gmodels:

install.packages("package name")

However, if you have already installed this package, you can simply enter

library(gmodels)

Then you can make a cross tabulation or a contingency table. This type of table is often used to understand the relationship between two variables. In this case, you want to understand how the classes of your test data, stored in iris.testLabels, relate to the predictions that are stored in iris_pred:

CrossTable(x = iris.testLabels, y = iris_pred, prop.chisq=FALSE)

[CrossTable output: iris.testLabels versus iris_pred]

Note that the last argument prop.chisq indicates whether or not the chi-square contribution of each cell is included. The chi-square statistic is the sum of the contributions from each of the individual cells and is used to decide whether the difference between the observed and the expected values is significant.
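
If you want to go a step further and formally test that association, base R’s chisq.test() can be run on the same contingency table; note that this is an optional extra, and with such small cell counts R may warn that the approximation is inaccurate:

# Optional: chi-square test of independence on the same table
chisq.test(table(iris.testLabels, iris_pred))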

From this table, you can derive the number of correct and incorrect predictions: one instance from the test set was labeled Versicolor by the model, while it was actually a flower of species Virginica. You can see this in the row of the “Virginica” species in the iris.testLabels column. In all other cases, correct predictions were made. You can conclude that the model’s performance is good enough and that you don’t need to improve it!


Machine Learning in R with caret

In the previous sections, you got started with supervised learning in R via the KNN algorithm. As you might have seen above, machine learning in R can get really complex, as there are various algorithms with various syntaxes, different parameters, etc. Maybe you’ll agree when I say that remembering the different package names for each algorithm can get quite difficult, or that applying the syntax for each specific algorithm is just too much.

That’s where the caret package can come in handy: it’s short for “Classification and Regression Training” and offers everything you need to solve supervised machine learning problems: it provides a uniform interface to a ton of machine learning algorithms. If you’re a bit familiar with Python machine learning, you might see similarities with scikit-learn!

In the following, you’ll go through the steps as they have been outlined above, but this time, you’ll make use of caret to classify your data. Note that you have already done a lot of work if you’ve followed the steps as they were outlined above: you already have a hold on your data, you have explored it, prepared your workspace, etc. Now it’s time to preprocess your data with caret!

As you have done before, you can study the effect of the normalization, but you’ll see this later on in the tutorial.

You already know what’s next! Let’s split up the data in a training and test set. In this case, though, you handle things a little bit differently: you split up the data based on the labels that you find in iris$Species. Also, the ratio is in this case set at 75-25 for the training and test sets.

# Load the `caret` package
library(caret)

# Create index to split based on labels
index <- createDataPartition(iris$Species, p = 0.75, list = FALSE)

# Subset training set with index
iris.training <- iris[index, ]

# Subset test set with index
iris.test <- iris[-index, ]

You’re all set to go and train models now! But, as you might remember, caret is an extremely large project that includes a lot of algorithms. If you’re in doubt on what algorithms are included in the project, you can get a list of all of them. Pull up the list by running names(getModelInfo()), just like the code chunk below demonstrates. Next, pick an algorithm and train a model with the train() function:

# Overview of algorithms supported by caret
names(getModelInfo())

# Train a model
model_knn <- train(iris.training[, 1:4], iris.training[, 5], method = 'knn')

Note that making other models is extremely simple once you have gotten this far; you just have to change the method argument, just like in this example:

model_cart <- train(iris.training[, 1:4], iris.training[, 5], method='rpart2')

Now that you have trained your model, it’s time to predict the labels of the test set that you have just made and evaluate how the model has done on your data:

# Predict the labels of the test set
predictions <- predict(object = model_knn, iris.test[, 1:4])

# Evaluate the predictions
table(predictions)

# Confusion matrix
confusionMatrix(predictions, iris.test[, 5])

Additionally, you can try to perform the same test as before, to examine the effect of preprocessing, such as scaling and centering, on your model. Run the following code chunk:

# Train the model with preprocessing
model_knn <- train(iris.training[, 1:4], iris.training[, 5], method = 'knn', preProcess = c("center", "scale"))

# Predict values
predictions <- predict(object = model_knn, iris.test[, 1:4], type = "raw")

# Confusion matrix
confusionMatrix(predictions, iris.test[, 5])

Move on to Big Data

Congratulations! You’ve made it through this tutorial!

This tutorial was primarily concerned with performing the basic machine learning algorithm KNN with the help of R. The Iris data set that was used was small and easy to overview; not only did you see how you can perform all of the steps yourself, but you’ve also seen how you can easily make use of a uniform interface, such as the one that caret offers, to speed up your machine learning.

But you can do so much more!

If you have experimented enough with the basics presented in this tutorial and other machine learning algorithms, you might find it interesting to go further into R and data analysis.
