Python program to Recursively scrape all the URLs of the website

  • Difficulty Level: Hard
  • Last Updated: 26 Mar, 2020

In this tutorial, we will see how to recursively scrape all the URLs of a website.

Recursion in computer science is a method of solving a problem where the solution depends on solutions to smaller instances of the same problem. Such problems can generally be solved by iteration as well, but doing so requires identifying and indexing the smaller instances at programming time.

Note: For more information, refer to Recursion
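As a minimal illustration of the idea (unrelated to scraping), here is a sketch of a function that builds a countdown list by first solving the smaller instance n - 1:

```python
# Recursion sketch: the solution for n depends on the solution for n - 1.
def countdown(n):
    if n == 0:
        return [0]                     # base case: smallest instance
    return [n] + countdown(n - 1)      # recursive case: smaller instance

print(countdown(3))  # → [3, 2, 1, 0]
```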

Modules required and Installation

  • Requests :
    Requests allows you to send HTTP/1.1 requests extremely easily. There’s no need to manually add query strings to your URLs.

    pip install requests
  • Beautiful Soup:
    Beautiful Soup is a library that makes it easy to scrape information from web pages. It sits atop an HTML or XML parser, providing Pythonic idioms for iterating, searching, and modifying the parse tree.

    pip install beautifulsoup4
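As a quick sanity check of the parsing side (shown on an inline HTML string so it runs without a network connection), Beautiful Soup can pull the href of every anchor tag — the same pattern the scraper below relies on:

```python
from bs4 import BeautifulSoup

# Parse a small HTML snippet and collect each anchor's href attribute.
html = '<a href="/about">About</a> <a href="https://other.example">Ext</a>'
soup = BeautifulSoup(html, "html.parser")
hrefs = [a.attrs['href'] for a in soup.find_all("a")]
print(hrefs)  # → ['/about', 'https://other.example']
```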

Code :

from bs4 import BeautifulSoup
from urllib.parse import urljoin
import requests

# list of URLs already visited
urls = []

# function that scrapes one page and recurses into its internal links
def scrape(site):
    # getting the page at the url
    r = requests.get(site)
    # parsing the response text
    s = BeautifulSoup(r.text, "html.parser")
    for i in s.find_all("a"):
        href = i.attrs.get('href', '')
        # follow only site-relative links
        if href.startswith("/"):
            full = urljoin(site, href)
            if full not in urls:
                urls.append(full)
                print(full)
                # calling itself on the newly found URL
                scrape(full)

# main function
if __name__ == "__main__":
    # website to be scraped (placeholder; replace with the target site)
    site = "https://example.com/"
    # calling function
    scrape(site)

Output:

The program prints every URL found on the site, one per line, as the crawl proceeds.

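Since every discovered link adds a stack frame, the recursive version can hit Python's default recursion limit (roughly 1000 frames) on large sites. A breadth-first sketch with an explicit queue performs the same traversal iteratively; here fetch is a hypothetical parameter standing in for requests.get(url).text, which also lets the logic run without network access:

```python
from collections import deque
from urllib.parse import urljoin
from bs4 import BeautifulSoup

def crawl(start, fetch):
    # Breadth-first traversal with an explicit queue instead of recursion.
    # `fetch(url)` must return the HTML of the page as a string.
    seen = {start}
    queue = deque([start])
    while queue:
        site = queue.popleft()
        soup = BeautifulSoup(fetch(site), "html.parser")
        for a in soup.find_all("a"):
            href = a.attrs.get("href", "")
            if href.startswith("/"):          # internal links only
                full = urljoin(site, href)
                if full not in seen:
                    seen.add(full)
                    queue.append(full)
    return seen

# A two-page fake site exercises the crawler without any HTTP:
pages = {
    "https://example.com/": '<a href="/a">a</a>',
    "https://example.com/a": '<a href="/">home</a>',
}
print(crawl("https://example.com/", pages.get))
```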