Extract all the URLs that are nested within <li> tags using BeautifulSoup
Last Updated :
07 Jun, 2023
Beautiful Soup is a Python library used for parsing HTML and XML documents. In this article we will understand how we can extract all the URLs from a web page that are nested within <li> tags.
Modules needed and installation:
- BeautifulSoup: our primary module; it parses the downloaded HTML and lets us search it by tag.
pip install bs4
- Requests: used to perform a GET request to the web page and retrieve its content.
Note: Install it separately if it is not already available in your environment.
pip install requests
Approach
- We will first import the required libraries.
- We will perform a GET request to the web page from which we want to extract the URLs.
- We will pass the response text to BeautifulSoup and convert it into a soup object.
- Using a for loop, we will look for all the <li> tags in the web page.
- If a <li> tag has an anchor tag in it, we will look for the href attribute and store its value in a list. This is the URL we were looking for.
- Finally, we print the list that contains all the URLs.
Let's have a look at the code and see what's happening at each significant step.
Step 1: Initialize the Python program by importing the required libraries and setting up the URL of the web page from which you want all the URLs contained in anchor tags.
In the following example, we will take another GeeksforGeeks article on implementing web scraping using BeautifulSoup and extract all the URLs stored in anchor tags nested within <li> tags.
Link of the article: https://www.geeksforgeeks.org/implementing-web-scraping-python-beautiful-soup/
Python3
import requests
from bs4 import BeautifulSoup

# URL of the article we want to scrape
URL = 'https://www.geeksforgeeks.org/implementing-web-scraping-python-beautiful-soup/'
Step 2: We will perform a GET request to the desired URL, pass the response text into BeautifulSoup, and convert it into a soup object. We will set the parser to html.parser. You can set it differently depending on the web page you are scraping.
Python3
reqs = requests.get(URL)
content = reqs.text
soup = BeautifulSoup(content, 'html.parser')
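If the default parser struggles with a page's markup, a third-party parser such as lxml or html5lib (each installed separately) can be swapped in as the second argument. A minimal sketch, parsing an inline snippet in place of a live page:

```python
from bs4 import BeautifulSoup

# Inline snippet standing in for downloaded page content
content = "<ul><li><a href='https://example.com/a'>A</a></li></ul>"

# 'html.parser' ships with Python; pass 'lxml' or 'html5lib'
# here instead if those libraries are installed
soup = BeautifulSoup(content, 'html.parser')
print(soup.find('a')['href'])  # https://example.com/a
```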
Step 3: Create an empty list to store all the URLs that you will receive as your desired output. Run a for loop that iterates over all the <li> tags in the web page. For each <li> tag, check if it has an anchor tag in it. If that anchor tag has an href attribute, store its value in the list that you created.
Python3
urls = []
for h in soup.find_all('li'):
    a = h.find('a')
    # h.find('a') returns None when the <li> has no anchor,
    # so guard the attribute access
    try:
        if 'href' in a.attrs:
            url = a.get('href')
            urls.append(url)
    except AttributeError:
        pass
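The same result can be reached more compactly with a CSS selector: `li a[href]` matches only anchors inside a list item that actually carry an href attribute, so no try/except is needed. A sketch against a small inline snippet standing in for the fetched page:

```python
from bs4 import BeautifulSoup

# Inline snippet with the three cases the loop has to handle
content = """
<ul>
  <li><a href="https://example.com/one">One</a></li>
  <li>plain item, no link</li>
  <li><a>anchor without href</a></li>
</ul>
"""

soup = BeautifulSoup(content, 'html.parser')

# select() returns only <a> tags nested in <li> that have an href
urls = [a['href'] for a in soup.select('li a[href]')]
print(urls)  # ['https://example.com/one']
```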
Step 4: We print the output by iterating over the list of URLs.
Python3
for url in urls:
    print(url)
Complete code:
Python3
import requests
from bs4 import BeautifulSoup

# URL of the article we want to scrape
URL = 'https://www.geeksforgeeks.org/implementing-web-scraping-python-beautiful-soup/'

reqs = requests.get(URL)
content = reqs.text
soup = BeautifulSoup(content, 'html.parser')

urls = []
for h in soup.find_all('li'):
    a = h.find('a')
    try:
        if 'href' in a.attrs:
            url = a.get('href')
            urls.append(url)
    except AttributeError:
        pass

for url in urls:
    print(url)
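One caveat: href values are often relative (for example /about/). If you need absolute URLs, urllib.parse.urljoin from the standard library can resolve each extracted value against the page URL. A sketch, using an inline snippet in place of a live request:

```python
from urllib.parse import urljoin
from bs4 import BeautifulSoup

# Page URL the relative links should be resolved against
BASE = 'https://www.geeksforgeeks.org/implementing-web-scraping-python-beautiful-soup/'

# Inline snippet with one relative and one absolute link
content = ('<ul><li><a href="/about/">About</a></li>'
           '<li><a href="https://example.com/x">X</a></li></ul>')

soup = BeautifulSoup(content, 'html.parser')

# urljoin leaves absolute URLs untouched and resolves relative ones
urls = [urljoin(BASE, a['href']) for a in soup.select('li a[href]')]
print(urls)
# ['https://www.geeksforgeeks.org/about/', 'https://example.com/x']
```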
Output: