Extract all the URLs from a Webpage Using R Language
In this article, we will learn how to scrape all the URLs from a webpage using the R programming language.
To scrape URLs we will be using the httr and XML libraries: httr to make HTTP requests, and XML to identify URLs via their HTML tags.
- The httr library is used to make HTTP requests in R, as it provides a wrapper around the curl package.
- The XML library is used for working with XML files and XML tags.
After installing the required packages, we import the httr and XML libraries and store the URL of the site in a variable. We then use GET() from the httr package to make the HTTP request. This gives us raw data, which we need to convert to HTML format using htmlParse().
We have now scraped the HTML data, but we only need the URLs. To extract them we use xpathSApply(), passing it the parsed HTML along with the tag we want, so that we get everything related to that tag. URLs are declared in the "href" attribute of anchor tags.
Note: You need not use install.packages() if you have already installed the package once.
Step 1: Installing libraries:
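A minimal sketch of this step (the two package names are the ones named above; this only needs to run once per machine):

```r
# Install the packages required for scraping (only needed once)
install.packages("httr")
install.packages("XML")
```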
Step 2: Import libraries:
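Loading the two libraries into the current session might look like this:

```r
# Load the libraries into the current R session
library(httr)
library(XML)
```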
Step 3: Making HTTP requests:
In this step, we pass our URL to GET() to request the site's data and store the returned response in the resource variable.
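A sketch of the request step; the URL below is only an example placeholder, substitute the page you actually want to scrape:

```r
library(httr)

# Example target URL (an assumption -- replace with your own site)
url <- "https://www.geeksforgeeks.org"

# GET() returns a response object holding the status code,
# headers, and raw body of the page
resource <- GET(url)
print(status_code(resource))
```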
Step 4: Parse site data in HTML format:
In this step, we parse the data into an HTML document using htmlParse().
Step 5: Identify URLs and print them:
In this step, we use xpathSApply() to locate the URLs.
We know the <a> tag is used to define a URL, and the link itself is stored in its href attribute.
So xpathSApply() will find all the <a> tags and scrape the link stored in each href attribute. We then store all the URLs in a variable and print them.
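The extraction step can be sketched as follows. To keep the example self-contained, a small inline HTML string stands in for the parsed page (an assumption; in the full script this would be the htmlParse() result from the previous step):

```r
library(XML)

# Stand-in HTML document with two anchor tags
html <- '<html><body>
  <a href="https://example.com/one">one</a>
  <a href="https://example.com/two">two</a>
</body></html>'
parse <- htmlParse(html, asText = TRUE)

# "//a/@href" selects the href attribute of every <a> tag
links <- xpathSApply(parse, "//a/@href")
print(links)
```

The same `xpathSApply(parse, "//a/@href")` call works unchanged on a document parsed from a live page.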