Downloading PDFs with Python using Requests and BeautifulSoup

  • Last Updated : 13 Apr, 2021

The BeautifulSoup object is provided by Beautiful Soup, a Python library for parsing HTML and XML documents. Web scraping is the process of extracting data from websites using automated tools to make the process faster. The BeautifulSoup object represents the parsed document as a whole; for most purposes, you can treat it as a Tag object.
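For example, parsing even a small HTML snippet illustrates how the BeautifulSoup object exposes the document's tags (the snippet below is a minimal sketch with made-up HTML):

```python
from bs4 import BeautifulSoup

# A small, hypothetical HTML snippet to parse
html = """
<html><body>
  <a href="report.pdf">Report</a>
  <a href="index.html">Home</a>
</body></html>
"""

# The BeautifulSoup object represents the whole parsed document
soup = BeautifulSoup(html, "html.parser")

# find_all('a') returns every <a> tag in the document
for link in soup.find_all("a"):
    print(link.get("href"))
```

This prints `report.pdf` and `index.html`, one per line.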

The Requests library is the standard way to make HTTP requests from Python. Whether you are working with REST APIs or web scraping, Requests is worth learning before proceeding further with these technologies. When you make a request to a URL, it returns a response; Requests provides built-in functionality for managing both the request and the response.
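A minimal sketch of a request/response round trip (assuming network access; `example.com` is just a placeholder site):

```python
import requests

# Make a GET request; requests.get() returns a Response object
response = requests.get("https://www.example.com")

# The Response object exposes the status code, headers, and body
print(response.status_code)        # e.g. 200 on success
print(response.headers.get("Content-Type"))
print(len(response.text), "characters of HTML")
```

`response.text` holds the decoded body as a string, while `response.content` holds the raw bytes, which is what you want when saving binary files such as PDFs.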


This article deals with downloading PDFs using the BeautifulSoup and Requests libraries in Python. Together they make it easy to extract the required information from a webpage.



Approach:

To find and download the PDF files, we have to follow these steps:

  • Import beautifulsoup and requests library.
  • Request the URL and get the response object.
  • Find all the hyperlinks present on the webpage.
  • Check for the PDF file link in those links.
  • Get a PDF file using the response object.
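One detail worth noting for step 4: the `href` of a link may be relative (e.g. `report.pdf` or `/files/manual.pdf`), so it must be resolved against the page URL before it can be requested. The standard library's `urljoin` handles this (the URLs below are placeholders):

```python
from urllib.parse import urljoin

# Hypothetical page URL and hrefs scraped from it
page_url = "https://www.example.com/docs/"
hrefs = ["report.pdf", "/files/manual.pdf",
         "https://cdn.example.com/a.pdf", "about.html"]

# Keep only PDF links, resolving relative hrefs against the page URL
pdf_links = [urljoin(page_url, h) for h in hrefs
             if h and h.lower().endswith(".pdf")]
print(pdf_links)
```

This yields `['https://www.example.com/docs/report.pdf', 'https://www.example.com/files/manual.pdf', 'https://cdn.example.com/a.pdf']`; already-absolute URLs pass through unchanged.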

Implementation:

Python3




# Import libraries
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

# URL of the page that lists the PDFs
# (placeholder; replace with your own)
url = "https://www.example.com"

# Request the URL and get the response object
response = requests.get(url)

# Parse the HTML obtained
soup = BeautifulSoup(response.text, 'html.parser')

# Find all hyperlinks present on the webpage
links = soup.find_all('a')

i = 0

# From all links, check for PDF links and
# download the files that are found
for link in links:
    href = link.get('href', '')
    if '.pdf' in href:
        i += 1
        print("Downloading file: ", i)

        # Resolve relative links against the page URL,
        # then get the response object for the PDF link
        response = requests.get(urljoin(url, href))

        # Write the content to a PDF file
        with open("pdf" + str(i) + ".pdf", 'wb') as pdf:
            pdf.write(response.content)
        print("File ", i, " downloaded")

print("All PDF files downloaded")

Output:

Downloading file:  1
File  1  downloaded
All PDF files downloaded

The above program downloads the PDF files found at the provided URL, saving them as pdf1.pdf, pdf2.pdf, pdf3.pdf, and so on.
