Downloading files from the web using Python

Requests is a versatile HTTP library in Python with various applications. One of its applications is to download a file from the web using the file's URL.
Installation: First of all, you need to install the requests library. You can install it directly using pip by typing the following command:

pip install requests

Alternatively, you can download the source directly and install it manually.
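To confirm that the installation worked, you can print the installed version from Python (a quick sanity check, not part of the original article):

import requests

# requests exposes its version string as requests.__version__
print(requests.__version__)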

Downloading files


# import the requests library
import requests

# URL of the image to be downloaded
# (example URL pointing at the Python logo; substitute your own)
image_url = "https://www.python.org/static/community_logos/python-logo.png"

# send an HTTP request to the server and save
# the HTTP response in a response object called r
r = requests.get(image_url)

with open("python_logo.png", 'wb') as f:

    # write the contents of the response (r.content)
    # to a new file in binary mode, saving the received
    # content as a png file
    f.write(r.content)



This small piece of code will download the image from the web. Now check your local directory (the folder where this script resides), and you will find the downloaded image.

All we need is the URL of the image source. (You can get the URL of an image by right-clicking on it and selecting the View Image option.)
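The snippet above writes whatever the server returns, even if the request failed. As a minimal robustness check (my addition, not in the original snippet), you can make requests raise an exception on a failed request before anything is written to disk:

import requests

# example URL pointing at the Python logo; substitute your own
image_url = "https://www.python.org/static/community_logos/python-logo.png"

r = requests.get(image_url)

# raise_for_status() raises requests.HTTPError for 4xx/5xx
# responses, so an error page is never written to disk
r.raise_for_status()

with open("python_logo.png", 'wb') as f:
    f.write(r.content)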



Downloading large files

The HTTP response content (r.content) is nothing but a byte string storing the file data. So, it won't be possible to hold all the data in a single string in the case of large files. To overcome this problem, we make some changes to our program:

  • Since all the file data can't be stored in a single string, we use the r.iter_content method to load the data in chunks, specifying the chunk size:

    r = requests.get(URL, stream = True)

  • Setting the stream parameter to True will download the response headers only and keep the connection open. This avoids reading the whole content into memory at once for large responses. A fixed-size chunk is loaded each time r.iter_content is iterated over.

Here is an example:


import requests

# URL of the file to be downloaded
# (placeholder URL; replace it with the PDF you actually want)
file_url = "https://example.com/python.pdf"

r = requests.get(file_url, stream = True)

with open("python.pdf", "wb") as pdf:
    for chunk in r.iter_content(chunk_size = 1024):

        # writing one chunk at a time to the pdf file
        if chunk:
            pdf.write(chunk)
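
If you want feedback while a large file downloads, one option (a sketch of my own, not from the original article) is to read the Content-Length header, when the server provides it, and report progress as the chunks arrive:

import requests

# placeholder URL; replace it with the file you actually want
file_url = "https://example.com/python.pdf"

r = requests.get(file_url, stream = True)

# the server may advertise the total size in the Content-Length header
total = int(r.headers.get('content-length', 0))
downloaded = 0

with open("python.pdf", "wb") as pdf:
    for chunk in r.iter_content(chunk_size = 1024):
        if chunk:
            pdf.write(chunk)
            downloaded += len(chunk)
            if total:
                print("downloaded %d of %d bytes" % (downloaded, total))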


Downloading Videos

In this example, we are interested in downloading all the video lectures available on an archive web page (given as archive_url in the code below). So, we first scrape the web page to extract all the video links and then download the videos one by one.


import requests
from bs4 import BeautifulSoup

'''
URL of the archive web page which provides links to
all the video lectures. It would have been tiring to
download each video manually.
In this example, we first crawl the web page to extract
all the links and then download the videos.
'''

# specify the URL of the archive here
archive_url = "http://www-personal.umich.edu/~csev/books/py4inf/media/"

def get_video_links():

    # create response object
    r = requests.get(archive_url)

    # create beautiful-soup object (the html5lib parser
    # must be installed separately: pip install html5lib)
    soup = BeautifulSoup(r.content, 'html5lib')

    # find all links (anchors with an href) on the web page
    links = soup.find_all('a', href=True)

    # filter for links ending with .mp4
    video_links = [archive_url + link['href'] for link in links if link['href'].endswith('mp4')]

    return video_links


def download_video_series(video_links):

    for link in video_links:

        '''iterate through all links in video_links
        and download them one by one'''

        # obtain the filename by splitting the url and
        # taking the last piece
        file_name = link.split('/')[-1]

        print("Downloading file: %s" % file_name)

        # create response object
        r = requests.get(link, stream = True)

        # download started
        with open(file_name, 'wb') as f:
            for chunk in r.iter_content(chunk_size = 1024*1024):
                if chunk:
                    f.write(chunk)

        print("%s downloaded!\n" % file_name)

    print("All videos downloaded!")
    return


if __name__ == "__main__":

    # getting all video links
    video_links = get_video_links()

    # download all videos
    download_video_series(video_links)
         

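The script above builds each file URL by plain string concatenation (archive_url + link['href']), which works here because the archive page uses relative links. A more defensive variant of the link-extraction step (a sketch of my own, not from the original article) resolves the hrefs with urllib.parse.urljoin, so absolute links are handled too:

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

archive_url = "http://www-personal.umich.edu/~csev/books/py4inf/media/"

r = requests.get(archive_url)
soup = BeautifulSoup(r.content, 'html5lib')

# urljoin resolves relative hrefs against the page URL
# and leaves absolute hrefs untouched
video_links = [urljoin(archive_url, a['href'])
               for a in soup.find_all('a', href=True)
               if a['href'].endswith('.mp4')]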

    
    

Advantages of using the Requests library to download web files:

  • One can easily download entire web directories by iterating recursively through the website (see the sketch after this list).
  • It is a browser-independent method and much faster.
  • One can simply scrape a web page to get all the file URLs on it and hence download all the files in a single command:

      Implementing Web Scraping in Python with BeautifulSoup
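The first point above glosses over how the recursion would work. A rough sketch (my own illustration, not from the original article; it assumes an Apache-style directory listing where sub-directory links end with '/') could look like this:

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def crawl(url, seen=None):
    '''Recursively collect file URLs under a directory-style page.'''
    if seen is None:
        seen = set()
    if url in seen:
        return []
    seen.add(url)

    r = requests.get(url)
    soup = BeautifulSoup(r.content, 'html5lib')

    files = []
    for a in soup.find_all('a', href=True):
        target = urljoin(url, a['href'])

        # stay inside the starting directory and skip self-links
        if not target.startswith(url) or target == url:
            continue

        # skip query-string links (e.g. the column-sorting
        # links Apache adds to directory listings)
        if '?' in target:
            continue

        if target.endswith('/'):
            # a sub-directory: recurse into it
            files += crawl(target, seen)
        else:
            # a file: record its URL
            files.append(target)
    return files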

This blog is contributed by Nikhil Kumar.