
Python program to Recursively scrape all the URLs of the website

In this tutorial, we will see how we can recursively scrape all the URLs of a website.

Recursion in computer science is a method of solving a problem where the solution depends on solutions to smaller instances of the same problem. Such problems can generally also be solved by iteration, but that requires identifying and indexing the smaller instances at programming time.



Note: For more information, refer to Recursion
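
As a quick illustration of the idea (not part of the scraper itself), a classic recursive function computes a factorial by reducing the problem to a smaller instance of itself until it reaches a base case:

def factorial(n):

    # base case: 0! = 1
    if n == 0:
        return 1

    # recursive case: reduce to a smaller instance of the same problem
    return n * factorial(n - 1)

print(factorial(5))   # prints 120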

Modules required and Installation
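
bs4: Beautiful Soup is a Python library for pulling data out of HTML and XML files. It can be installed with pip:

pip install beautifulsoup4

requests: Requests allows you to send HTTP requests from Python. It can be installed with pip:

pip install requests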

Code:




from bs4 import BeautifulSoup
import requests

# list of URLs already visited
urls = []

# function to recursively scrape a page and every internal link found on it
def scrape(site):

    # request the page
    r = requests.get(site)

    # parse the HTML of the response
    s = BeautifulSoup(r.text, "html.parser")

    # iterate over every anchor tag on the page
    for i in s.find_all("a"):

        # skip anchors that have no href attribute
        href = i.attrs.get('href')
        if href is None:
            continue

        # follow only relative links belonging to the same site
        if href.startswith("/"):
            full_url = site + href

            # scrape each URL only once
            if full_url not in urls:
                urls.append(full_url)
                print(full_url)

                # recursive call on the newly found URL
                scrape(full_url)
   
# driver code
if __name__ == "__main__":

    # website to be scraped (example placeholder, replace with the target site)
    site = "https://www.example.com"

    # calling the function
    scrape(site)
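
Each newly found URL is appended to the urls list before the recursive call, so pages that link back to one another are scraped only once and the recursion eventually terminates. Note that very large sites can exceed Python's default recursion limit (1000 nested calls), and relative links on nested pages are resolved here by simple string concatenation; urllib.parse.urljoin can be used if more robust URL resolution is needed.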
    

Output:
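
Each newly discovered URL is printed on its own line as it is found; the exact list depends on the site passed to scrape().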

