As with any fundamentals course, Introduction to Natural Language Processing in R is designed to equip you with the tools you need to begin analyzing text. Natural language processing (NLP) is a constantly growing field in data science, with some very exciting advancements over the last decade. This course covers the basics of these topics and prepares you to expand your analysis capabilities. We dive into regular expressions, topic modeling, named entity recognition, and more, all while providing thorough examples you can use to kick-start your future analysis.
Chapter 1 of Introduction to Natural Language Processing prepares you for running your first analysis on text. You will explore regular expressions and tokenization, two of the most common components of text analysis tasks. With regular expressions, you can search for any pattern you can think of, and with tokenization, you can prepare and clean text for more sophisticated analysis. This chapter is necessary for tackling the techniques covered in the remaining chapters of this course.
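As a taste of what this chapter covers, here is a minimal sketch of pattern matching and tokenization in R. It assumes the stringr, tidytext, and dplyr packages; the course itself may use different tools, and the sample sentences are illustrative.

```r
library(stringr)
library(tidytext)
library(dplyr)

text <- c("NLP is fun!", "Email me at someone@example.com")

# Regular expressions: extract every word that starts with a capital letter
str_extract_all(text, "\\b[A-Z]\\w*")

# Tokenization: split each document into lowercase word tokens
tibble(doc = 1:2, text = text) %>%
  unnest_tokens(word, text)
```

`unnest_tokens()` handles lowercasing and punctuation stripping for you, which is why tokenizing this way is a common first cleaning step.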
Representations of Text
In this chapter, you will learn the most common and well-studied ways of analyzing text. You will create a text corpus, expand a bag-of-words representation into a TFIDF matrix, and use cosine similarity to determine how similar two pieces of text are to each other. This chapter builds on your NLP foundations before you dive into the applications of NLP in chapters 3 and 4.

Lessons: Understanding an R corpus, Explore an R corpus, Creating a tibble from a corpus, Creating a corpus, The bag-of-words representation, Practice BoW, BoW Example, Sparse matrices, The TFIDF, Manual calculations, TFIDF Practice, Cosine Similarity, An example of failing at text analysis, Cosine similarity example.
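The pipeline this chapter describes can be sketched in a few lines of R. This assumes the tm package; the three tiny documents and the manual cosine computation are illustrative, not the course's exact code.

```r
library(tm)

docs   <- c("cats chase mice", "dogs chase cats", "mice eat cheese")
corpus <- VCorpus(VectorSource(docs))

# Bag-of-words counts as a document-term matrix, then TFIDF weighting
dtm   <- DocumentTermMatrix(corpus)
tfidf <- weightTfIdf(dtm)

# Cosine similarity between documents 1 and 2:
# dot product of the rows divided by the product of their norms
m <- as.matrix(tfidf)
sum(m[1, ] * m[2, ]) / (sqrt(sum(m[1, ]^2)) * sqrt(sum(m[2, ]^2)))
```

A similarity near 1 means the documents use very similar weighted vocabulary; near 0 means they share little.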
Applications: Classification and Topic Modeling
Chapter 3 focuses on two common text analysis approaches: classification modeling and topic modeling. If you work on text analysis projects, you will inevitably use one or both of these methods. This chapter teaches you how to perform both techniques and provides insight into how to approach them from a practical point of view.

Lessons: Preparing text for modeling, Data preparation, Removing sparse terms, Classification modeling, Classification modeling example, Confusion matrices, TFIDF tibble vs dtm, Introduction to topic modeling, LDA practice, Assigning topics to documents, LDA in practice, Testing perplexity, Reviewing LDA results.
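To preview the topic-modeling half of the chapter, here is a minimal LDA sketch assuming the tm and topicmodels packages; the documents, the choice of k = 2, and the seed are all illustrative.

```r
library(tm)
library(topicmodels)

docs <- c("stocks market trading prices",
          "soccer football goals match",
          "bonds market investing stocks")
dtm <- DocumentTermMatrix(VCorpus(VectorSource(docs)))

# Fit a two-topic LDA model
lda <- LDA(dtm, k = 2, control = list(seed = 42))

terms(lda, 3)         # top 3 terms in each topic
topics(lda)           # most likely topic for each document
perplexity(lda, dtm)  # lower is better when comparing candidate models
```

In practice you would fit models for several values of k and compare their perplexity on held-out documents, which is what the "Testing perplexity" lesson covers.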
In chapter 4 we cover two staples of natural language processing: sentiment analysis and word embeddings. These two analysis techniques are a must for anyone learning the fundamentals of text analysis. You will also briefly learn about BERT, part-of-speech tagging, and named entity recognition. Nearly 15 different analysis techniques are covered in this course, so chapter 4 ends by recapping everything you have learned.
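As a glimpse of the sentiment-analysis material, here is a brief lexicon-based sketch assuming the tidytext and dplyr packages and the "bing" sentiment lexicon; the two example sentences are made up for illustration.

```r
library(dplyr)
library(tidytext)

reviews <- tibble(id = 1:2,
                  text = c("This course is great and fun",
                           "That example was terrible and boring"))

# Tokenize, match tokens against the bing lexicon,
# and count positive vs. negative words per document
reviews %>%
  unnest_tokens(word, text) %>%
  inner_join(get_sentiments("bing"), by = "word") %>%
  count(id, sentiment)
```

Lexicon matching like this is the simplest form of sentiment analysis; the course also discusses more sophisticated approaches.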
Research Data Scientist
Kasey Jones is a research data scientist at RTI International. His work focuses primarily on agent-based model simulations and natural language processing analysis. He also enjoys creating unique visualizations using D3 and building R Shiny and Python Dash dashboards. Outside of RTI, he spends his time working through LeetCode problems, playing chess, and traveling all over the world.