
Python program to Recursively scrape all the URLs of the website

Last Updated : 26 Mar, 2020

In this tutorial we will see how we can recursively scrape all the URLs of a website.

Recursion, in computer science, is a method of solving a problem where the solution depends on solutions to smaller instances of the same problem. Such problems can generally also be solved by iteration, but iteration requires identifying and indexing the smaller instances at programming time.

Note: For more information, refer to Recursion
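As a quick illustration of the idea, a recursive function calls itself on a smaller instance of the problem until it reaches a base case that can be solved directly:

```python
def factorial(n):
    # base case: the smallest instance is solved directly
    if n <= 1:
        return 1
    # recursive case: reduce to a smaller instance of the same problem
    return n * factorial(n - 1)

print(factorial(5))  # 120
```

The recursive crawler below follows the same pattern: each newly discovered URL is a "smaller instance" handed back to the same function.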

Modules required and Installation

  • Requests :
    Requests allows you to send HTTP/1.1 requests extremely easily. There’s no need to manually add query strings to your URLs.

    pip install requests
  • Beautiful Soup:
    Beautiful Soup is a library that makes it easy to scrape information from web pages. It sits atop an HTML or XML parser, providing Pythonic idioms for iterating, searching, and modifying the parse tree.

    pip install beautifulsoup4
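Before the full crawler, here is a minimal sketch of how Beautiful Soup extracts links from HTML. The HTML snippet is made up for illustration; the main program applies the same `find_all("a")` call to a real page fetched with Requests:

```python
from bs4 import BeautifulSoup

# a tiny hand-written HTML fragment for demonstration
html = '<a href="/about">About</a> <a href="/contact">Contact</a>'
soup = BeautifulSoup(html, "html.parser")

# collect the href attribute of every anchor tag
links = [a.get("href") for a in soup.find_all("a")]
print(links)  # ['/about', '/contact']
```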

Code :




from bs4 import BeautifulSoup
from urllib.parse import urljoin
import requests

# list of URLs found so far
urls = []

# recursive scraping function
def scrape(site):

    # fetch the page
    r = requests.get(site)

    # parse the HTML
    s = BeautifulSoup(r.text, "html.parser")

    # consider only anchor tags that actually have an href
    for i in s.find_all("a", href=True):

        href = i["href"]

        # follow only site-relative links
        if href.startswith("/"):

            # resolve the relative link against the current page
            full_url = urljoin(site, href)

            if full_url not in urls:
                urls.append(full_url)
                print(full_url)
                # recursive call on the newly found URL
                scrape(full_url)

# main function
if __name__ == "__main__":

    # website to be scraped (placeholder -- replace with the target site)
    site = "https://www.example.com"

    # calling function
    scrape(site)
    


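Note that deeply linked sites can exhaust Python's default recursion limit (about 1000 frames). As an alternative, the same crawl can be written iteratively with an explicit queue; this is a sketch, with `start_url` as a placeholder for the target site:

```python
from collections import deque
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def scrape_iterative(start_url):
    """Crawl a site breadth-first using an explicit queue instead of recursion."""
    visited = set()
    queue = deque([start_url])
    while queue:
        url = queue.popleft()
        if url in visited:
            continue
        visited.add(url)
        try:
            r = requests.get(url, timeout=10)
        except requests.RequestException:
            continue  # skip unreachable pages
        soup = BeautifulSoup(r.text, "html.parser")
        for a in soup.find_all("a", href=True):
            href = a["href"]
            if href.startswith("/"):
                # resolve the relative link against the current page
                queue.append(urljoin(url, href))
    return visited
```

The queue replaces the call stack, so the crawl depth is limited only by memory, and the `visited` set prevents re-fetching the same URL.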
Output :

The scraped URLs are printed to the console as they are discovered.

