
How to use Scrapy Items?

In this article, we will scrape quotes data using Scrapy Items, from the webpage https://quotes.toscrape.com/tag/reading/. The main objective of scraping is to prepare structured data from unstructured resources. Scrapy Items are wrappers around dictionary data structures. Code can be written such that the extracted data is returned as Item objects, in the format of “key-value” pairs. Using Scrapy Items is beneficial when –

- The scraped data should follow a consistent, predefined structure.
- Typos in field names should raise errors, instead of silently creating new keys.
- The data will be processed further, for example by Item Pipelines or Feed Exports.

Via the itemadapter library, Scrapy supports various item types, and one can choose the type that suits the project. The following item types are supported:

- Dictionaries: plain Python dicts, convenient and familiar.
- Item objects: subclasses of scrapy.Item, with fields declared via scrapy.Field().
- Dataclass objects: classes decorated with @dataclass, allowing field names and types to be declared.
- Attrs objects: classes defined with the attrs package.
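As a minimal sketch of the first three types (the class names QuoteItem and QuoteDataclass are illustrative, not part of this project), the same quote record could be modeled as follows:

import scrapy
from dataclasses import dataclass, field

# 1. A plain dictionary - no declaration needed
quote_dict = {'quotetitle': '...', 'author': '...', 'tags': []}

# 2. A scrapy.Item subclass - fields are declared up front
class QuoteItem(scrapy.Item):
    quotetitle = scrapy.Field()
    author = scrapy.Field()
    tags = scrapy.Field()

# 3. A dataclass item - plain Python with type hints
@dataclass
class QuoteDataclass:
    quotetitle: str = ''
    author: str = ''
    tags: list = field(default_factory=list)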



Installing Scrapy library

The Scrapy library requires a Python version of 3.6 or above. Install the Scrapy library by executing the following command at the terminal –

pip install Scrapy
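To verify that the installation succeeded, you can print the installed version at the terminal:

scrapy version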



This command will install the Scrapy library in the project environment. Now, we can create a Scrapy project to write the Spider code.

Create a Scrapy Project

Scrapy has an efficient command-line tool, also called the ‘Scrapy tool’. Commands accept a different set of arguments and options based on their purpose. To write the Spider code, we begin by creating a Scrapy project, by executing the following command at the terminal –

scrapy startproject <project_name>
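For this tutorial, the project is named ‘gfg_spiderreadingitems’ (as seen in the folder structure below), so the concrete command would be –

scrapy startproject gfg_spiderreadingitems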

Output:

Scrapy ‘startproject’ command to create Spider project

This should create a folder in your current directory. It contains a ‘scrapy.cfg’ file, which is the configuration file of the project. The folder structure is as shown below:

The folder structure of ‘gfg_spiderreadingitems’

The scrapy.cfg file is the project configuration file, and the folder that contains it is the root directory of the project. The structure of the created folder is as follows:
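For reference, a freshly generated project follows the standard Scrapy layout (pipelines.py and settings.py are generated as well, although this tutorial does not modify them):

gfg_spiderreadingitems/
    scrapy.cfg                  # project configuration file
    gfg_spiderreadingitems/
        __init__.py
        items.py                # item definitions go here
        middlewares.py
        pipelines.py
        settings.py
        spiders/                # spider code files go here
            __init__.py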

File ‘items.py’ inside the ‘gfg_spiderreadingitems’ folder

The folder contains items.py, middlewares.py and other settings files, along with the ‘spiders’ folder. The crawling code will be written in a spider Python file. We will alter the ‘items.py’ file to mention the data items to be extracted. For now, keep the contents of ‘items.py’ as they are.

Spider Code to Extract Data

The code for web scraping is written in the spider code file. To create the spider file, we will make use of the ‘genspider’ command. Please note that this command is executed at the same level where the scrapy.cfg file is present.

We are scraping the reading quotes present on the https://quotes.toscrape.com/tag/reading/ webpage. Hence, we will run the command as –

scrapy genspider spider_name url_to_be_scraped
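With the spider name and URL used in this tutorial, the concrete command would be –

scrapy genspider gfg_spiitemsread quotes.toscrape.com/tag/reading/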

Use ‘genspider’ command to create Spider file

The above command will create a spider file, ‘gfg_spiitemsread.py’, in the ‘spiders’ folder. The spider name will also be ‘gfg_spiitemsread’. The default code for the same is as follows:




# Import the required libraries
import scrapy
 
# Spider Class Created
 
 
class GfgSpiitemsreadSpider(scrapy.Spider):
    # Name of the spider
    name = 'gfg_spiitemsread'
    # The domain to be scraped
    allowed_domains = ['quotes.toscrape.com/tag/reading/']
    # The URLs from domain to scrape
    start_urls = ['http://quotes.toscrape.com/tag/reading//']
 
    # Spider default callback function
    def parse(self, response):
        pass

We will scrape the Quote title, Author and Tags from the webpage https://quotes.toscrape.com/tag/reading/. Scrapy provides us with Selectors, to “select” the desired parts of the webpage. Selectors are CSS or XPath expressions, written to extract data from HTML documents. In this tutorial, we will make use of XPath expressions to select the details we need. Let us understand the steps for writing the selector syntax in the spider code.

Right-click the first quote and check its CSS “class” attribute

Every quote on the page is enclosed in an element whose “class” attribute is “quote”. Based on this, the XPath expression for the quote blocks can be written as ‘//*[@class="quote"]’.
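A quick way to sanity-check such expressions, before placing them in the spider, is the interactive Scrapy shell:

scrapy shell https://quotes.toscrape.com/tag/reading/
>>> quotes = response.xpath('//*[@class="quote"]')
>>> len(quotes)    # number of quote blocks found on the page
>>> quotes[0].xpath('.//*[@class="text"]/text()').extract_first()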

The code is as follows:




# Import the required library
import scrapy
 
# The Spider class
class GfgSpiitemsreadSpider(scrapy.Spider):
    # Name of the spider
    name = 'gfg_spiitemsread'
     
    # The domain allowed to scrape
    allowed_domains = ['quotes.toscrape.com']
     
    # The URL to be scraped
    start_urls = ['http://quotes.toscrape.com/tag/reading/']
     
    # Default callback function
    def parse(self, response):
         
        # Fetch all quotes tags
        quotes = response.xpath('//*[@class="quote"]')
         
        # Loop through the Quote selector elements
        # to get details of each
        for quote in quotes:
             
            # XPath expression to fetch text of the Quote title
            title = quote.xpath('.//*[@class="text"]/text()').extract_first()
             
            # XPath expression to fetch author of the Quote
            authors = quote.xpath('.//*[@itemprop="author"]/text()').extract()
             
            # XPath expression to fetch Tags of the Quote
            tags = quote.xpath('.//*[@itemprop="keywords"]/@content').extract()
             
            # Yield all elements
            yield {"Quote Text ": title, "Authors ": authors, "Tags ": tags}

The crawl command is used to run the spider; mention the spider name in the crawl command. If we run the above code using the crawl command, the output at the terminal would be:

scrapy crawl gfg_spiitemsread

Output:

Quotes scraped as shown by the ‘yield’ statement

Here, the yield statement returns the data in Python dictionary objects.
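To persist the scraped items instead of only printing them at the terminal, the crawl command accepts the -o option, which writes the yielded items to a file (the file name here is arbitrary; the extension selects the export format):

scrapy crawl gfg_spiitemsread -o quotes.json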

Understanding Python Dictionary and Scrapy Item

The data yielded above are Python dictionary objects. They are convenient and familiar, but they have drawbacks: a typo in a key name silently creates a new key instead of raising an error, and nothing enforces a consistent set of fields across the scraped records. Scrapy Items address both issues by declaring the permitted fields up front.

For using Item objects, we will make changes in the following files –

- The ‘items.py’ file, to define the fields of our Item.
- The spider file, ‘gfg_spiitemsread.py’, to populate and yield the Item objects.

Use Scrapy Items to Collect Data

Now, we will learn the process of writing our Scrapy Item for Quotes. We will first define the fields we want to collect in the ‘items.py’ file, and then modify the spider file to populate and yield them. The modified ‘items.py’ file is as follows:




# Define here the models for your scraped
# items
# Import the required library
import scrapy
 
# Define the fields for Scrapy item here
# in class
class GfgSpiderreadingitemsItem(scrapy.Item):
     
    # Item key for Title of Quote
    quotetitle = scrapy.Field()
     
    # Item key for Author of Quote
    author = scrapy.Field()
     
    # Item key for Tags of Quote
    tags = scrapy.Field()

As seen in the file above, we have defined one Scrapy Item called ‘GfgSpiderreadingitemsItem’. This class is our blueprint for all elements we will scrape. It will persist three fields, namely the quote title, author name, and tags. Only the fields mentioned in the class can now be assigned on the item.

The Field() class is an alias for the built-in dict class. It allows a way to define all field metadata in one location. It does not provide any extra attributes.
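A useful consequence is that assigning to an undeclared field fails loudly. As a minimal sketch, run from a Python shell inside the project (‘birthdate’ is an arbitrary, undeclared name used only for illustration):

from gfg_spiderreadingitems.items import GfgSpiderreadingitemsItem

item = GfgSpiderreadingitemsItem()
item['quotetitle'] = 'A sample quote'   # declared field: works
item['birthdate'] = '1990-01-01'        # undeclared field: raises KeyError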

Now, modify the spider file to store the values in the Item class object, instead of yielding dictionaries directly. Please note that you need to import the Item class, as seen in the code below.




# Import the required library
import scrapy
 
# Import the Item class with fields
# mentioned in the items.py file
from ..items import GfgSpiderreadingitemsItem
 
 
class GfgSpiitemsreadSpider(scrapy.Spider):
    name = 'gfg_spiitemsread'
    allowed_domains = ['quotes.toscrape.com']
    start_urls = ['http://quotes.toscrape.com/tag/reading/']
 
    def parse(self, response):
       
        # Write XPath expression to loop through
        # all quotes
        quotes = response.xpath('//*[@class="quote"]')
         
        # Loop through all quotes
        for quote in quotes:
             
            # Create an object of Item class
            item = GfgSpiderreadingitemsItem()
             
            # XPath expression to fetch text of the
            # Quote title Store the title in the class
            # attribute in key-value pair
            item['quotetitle'] = quote.xpath(
                './/*[@class="text"]/text()').extract_first()
             
            # XPath expression to fetch author of the Quote
            # Store the author in the class attribute in
            # key-value pair
            item['author'] = quote.xpath(
                './/*[@itemprop="author"]/text()').extract()
             
            # XPath expression to fetch tags of the Quote title
            # Store the tags in the class attribute in key-value
            # pair
            item['tags'] = quote.xpath(
                './/*[@itemprop="keywords"]/@content').extract()
             
            # Yield the item object
            yield item

As seen above, the keys mentioned in the Item class can now be used to collect the data scraped by the XPath expressions. Make sure you mention the exact key names in both places. For example, use item[‘author’] when ‘author’ is the key defined in the items.py file.

 

The items yielded at the terminal are as shown below:

 

Data extracted from webpage using Scrapy Items

 

