Have you ever come across a website that displays a lot of data, such as statistics, product reviews, or prices, in a format that isn't ready for analysis? Authorities and other data providers often publish their data in neatly formatted tables, but not every site includes a download button. Don't despair: in this course, you'll learn how to efficiently collect and download data from websites using R. You'll learn how to automate the scraping and parsing of Wikipedia pages with the rvest and httr packages. Through hands-on exercises, you'll also deepen your understanding of HTML and CSS, the building blocks of web pages, as you make your data-harvesting workflows less error-prone and more efficient.
Introduction to HTML and Web Scraping
In this chapter, you'll be introduced to HyperText Markup Language (HTML), a declarative language used to structure modern websites. Using the rvest package, you'll learn how to query simple HTML elements and scrape your first table.
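The basic workflow can be sketched in a few lines of R. The HTML snippet below is invented for illustration; a real scraping session would pass a URL to read_html() instead:

```r
library(rvest)

# A tiny stand-in for a real web page (hypothetical data)
html <- minimal_html('
  <h1>World Population</h1>
  <table>
    <tr><th>Country</th><th>Population</th></tr>
    <tr><td>China</td><td>1,412 m</td></tr>
    <tr><td>India</td><td>1,408 m</td></tr>
  </table>')

# Query a simple HTML element by its tag name
html %>% html_element("h1") %>% html_text()
# "World Population"

# Scrape the table straight into a data frame
html %>% html_element("table") %>% html_table()
```

html_table() does the heavy lifting here: it turns any well-structured `<table>` node into a tibble with the header row as column names.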
Navigation and Selection with CSS
Cascading Style Sheets (CSS) describe how HTML elements are displayed on a web page, including colors, fonts, and general layout. In this chapter, you'll learn why CSS selectors and combinators are a crucial ingredient for web scraping.

Introduction to CSS (50 xp)
Select multiple HTML types (100 xp)
Order CSS selectors by the number of results (100 xp)
CSS classes and IDs (50 xp)
Identify the correct selector types (100 xp)
Leverage the uniqueness of IDs (100 xp)
Select the last child with a pseudo-class (100 xp)
CSS combinators (50 xp)
Select direct descendants with the child combinator (100 xp)
How many elements get returned? (50 xp)
Simply the best! (100 xp)
Not every sibling is the same (100 xp)
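To give a flavor of the chapter, here is a hedged sketch of the main selector types with rvest; the HTML and class/ID names are invented for illustration:

```r
library(rvest)

html <- minimal_html('
  <div id="main">
    <p class="intro">First paragraph</p>
    <p>Second paragraph</p>
  </div>
  <p>Outside the div</p>')

# Class selector: all elements with class "intro"
html %>% html_elements(".intro") %>% html_text()

# ID selector plus child combinator: <p> elements that are
# direct children of the element with ID "main"
html %>% html_elements("#main > p") %>% html_text()

# Pseudo-class: every <p> that is the last child of its parent
html %>% html_elements("p:last-child") %>% html_text()
```

Because IDs must be unique within a page, `#main` is often the most robust anchor for a selector; classes and pseudo-classes then narrow the selection further.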
Advanced Selection with XPATH
The CSS selectors you got to know in the last chapter are powerful but have their limitations: for example, they cannot select nodes based on the properties of their descendants. XPath to the rescue! Using this query language, you can navigate and scrape even the most hideous HTML.

Introduction to XPATH (50 xp)
Find the correct CSS equivalent (100 xp)
Select by class and ID with XPATH (100 xp)
Use predicates to select nodes based on their children (100 xp)
XPATH functions and advanced predicates (50 xp)
Find a more elegant XPATH alternative (50 xp)
Get to know the position() function (100 xp)
Extract nodes based on the number of their children (100 xp)
The XPATH text() function (50 xp)
The shortcomings of html_table() with badly structured tables (100 xp)
Select directly from a parent element with XPATH's text() (100 xp)
Combine extracted data into a data frame (100 xp)
Scrape an element based on its text (100 xp)
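A small sketch of what XPath adds over CSS, again on invented HTML: predicates let you select a node based on what it contains, something plain CSS selectors cannot do.

```r
library(rvest)

html <- minimal_html('
  <div><h2>With span</h2><span>A span</span></div>
  <div><h2>Without span</h2></div>')

# Predicate [span]: only <div>s that have a <span> child
html %>%
  html_elements(xpath = "//div[span]") %>%
  html_element("h2") %>%
  html_text()

# position() filters by document order; text() selects bare text nodes
html %>% html_elements(xpath = "//div[position() = 2]/h2/text()")
```

The text() function becomes especially handy with badly structured tables, where you need the loose text of a parent element without the text of its child nodes.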
Scraping Best Practices
Now that you know how to extract content from web pages, it's time to look behind the curtains. In this final chapter, you'll learn why HTTP requests are the foundation of every scraping action and how they can be customized to comply with best practices in web scraping.

The nature of HTTP requests (50 xp)
Which of these statements about HTTP is false? (50 xp)
Do it the httr way (100 xp)
Houston, we got a 404! (100 xp)
Telling who you are with custom user agents (50 xp)
Check out your user agent (100 xp)
Add a custom user agent (100 xp)
How to be gentle and slow down your requests (50 xp)
Custom arguments for throttled functions (50 xp)
Apply throttling to a multi-page crawler (100 xp)
Recap: Web Scraping in R (50 xp)
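The best practices covered here can be sketched with httr; the URL and user-agent string below are placeholders, not endpoints from the course:

```r
library(httr)
library(rvest)

# Identify yourself with a custom user agent so site owners
# can see who is requesting their pages
response <- GET("https://example.com",
                user_agent("polite-scraper; contact: me@example.com"))

# Always check the status code before parsing:
# a 404 (or any error) would otherwise fail silently downstream
if (status_code(response) == 200) {
  page <- read_html(response)
}

# Throttling: wrap GET so repeated calls in a crawler wait
# at least one second between requests (via purrr::slowly)
slow_get <- purrr::slowly(GET, rate = purrr::rate_delay(1))
```

Calling slow_get() inside a loop or map() over many page URLs then keeps the crawler gentle on the target server.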
In the following tracks: R Developer
Timo Grossenbacher
Head of Newsroom Automation at Tamedia
Timo Grossenbacher is Head of Newsroom Automation at the Swiss publisher Tamedia. Before that, he was a data journalist at the Swiss public broadcaster SRF, where he used scripting and databases for almost every data-driven story he published. He also teaches data journalism at the University of Zurich and is the creator of rddj.info, a collection of resources for doing data journalism with R. Follow him on Twitter at grssnbchr or visit his personal website.