How to use Scrapy Items?

In this article, we will scrape quotes data using Scrapy Items from the webpage https://quotes.toscrape.com/tag/reading/. The main objective of scraping is to prepare structured data from unstructured resources. Scrapy Items are wrappers around dictionary data structures, and code can be written so that the extracted data is returned as Item objects in the form of "key-value" pairs. Using Scrapy Items is beneficial when –

  • As the volume of scraped data increases, it becomes irregular to handle.
  • As your data gets complex, it becomes vulnerable to typos and may at times contain faulty values.
  • Formatting the scraped data is easier, as Item objects can be passed further to Item Pipelines.
  • Cleansing the data is easy if we scrape it as Items.
  • Validating data and handling missing data are easier with Scrapy Items.

Via the itemadapter library, Scrapy supports various Item types, and you can choose the one you want. The following Item types are supported (a minimal sketch of each follows the list):

  • Dictionaries – Items can be written in the form of dictionary objects. They are convenient to use.
  • Item objects – They provide a dictionary-like API, wherein we declare the fields needed for the Item as key-value pairs of Field objects in the Item class. In this tutorial, we are using Item objects.
  • Dataclass objects – They are used when you need to store the scraped values in JSON or CSV files. Here we need to define the datatype of each field needed.
  • attr.s – attr.s allows defining item classes with field names, so that scraped data can be exported to different file formats. They work similarly to Dataclass objects, except that the attr package needs to be installed.
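
As a rough, minimal sketch (the class and field names below, such as 'quotetitle', are purely illustrative), the four supported Item types can be declared as follows:

Python3

# Import the libraries needed for each Item type
import scrapy
import attr
from dataclasses import dataclass, field

# 1. Plain dictionary - can be yielded directly from a spider
quote_dict = {'quotetitle': '...', 'author': '...', 'tags': []}

# 2. Item object built from Field objects (the type used in this tutorial)
class QuoteItem(scrapy.Item):
    quotetitle = scrapy.Field()
    author = scrapy.Field()
    tags = scrapy.Field()

# 3. Dataclass object, with the datatype of each field declared
@dataclass
class QuoteDataclass:
    quotetitle: str
    author: str
    tags: list = field(default_factory=list)

# 4. attr.s class (requires the separate 'attrs' package)
@attr.s
class QuoteAttrs:
    quotetitle = attr.ib(type=str)
    author = attr.ib(type=str)
    tags = attr.ib(factory=list)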

Installing Scrapy library

The Scrapy library requires Python 3.6 or above. Install the Scrapy library by executing the following command at the terminal –

pip install Scrapy

This command will install the Scrapy library in the project environment. Now, we can create a Scrapy project to write the Spider code.
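
Optionally, you can confirm that the installation worked by printing the installed version:

scrapy version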

Create a Scrapy Project

Scrapy has an efficient command-line tool, also called the 'Scrapy tool'. Commands accept different sets of arguments and options based on their purpose. To write the Spider code, we begin by creating a Scrapy project by executing the following command at the terminal –

scrapy startproject <project_name>

Output:

Scrapy ‘startproject’ command to create Spider project
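
In this tutorial, the project is named 'gfg_spiderreadingitems', so the concrete command would have been:

scrapy startproject gfg_spiderreadingitems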

This should create a folder in your current directory. It contains 'scrapy.cfg', which is the configuration file of the project. The folder structure is as shown below:

The folder structure of ‘gfg_spiderreadingitems’

The scrapy.cfg file is the project configuration file, and the folder that contains it is the root directory. The structure of the created folder is as follows:

File ‘items.py’ inside the ‘gfg_spiderreadingitems’ folder

The folder contains items.py, middlewares.py and other settings files, along with the 'spiders' folder. The crawling code will be written in a spider Python file. We will later alter the 'items.py' file to mention our data items to be extracted. For now, keep the contents of 'items.py' as they are.
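
For reference, the generated project typically has the standard Scrapy skeleton shown below (here with this tutorial's project name):

gfg_spiderreadingitems/
    scrapy.cfg              # project configuration file
    gfg_spiderreadingitems/
        __init__.py
        items.py            # Item definitions (edited later in this tutorial)
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py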

Spider Code to Extract Data

The code for web scraping is written in the spider code file. To create the spider file, we will make use of the 'genspider' command. Please note that this command is executed at the same level where the scrapy.cfg file is present.

We are scraping the reading quotes present on the https://quotes.toscrape.com/tag/reading/ webpage. Hence, we will run the command as –

scrapy genspider spider_name url_to_be_scraped

Use ‘genspider’ command to create Spider file
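
For this tutorial, the concrete command would be along these lines:

scrapy genspider gfg_spiitemsread quotes.toscrape.com/tag/reading/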

The above command will create a spider file, "gfg_spiitemsread.py", in the 'spiders' folder. The spider name will also be 'gfg_spiitemsread'. The default code for the same is as follows:

Python3
# Import the required libraries
import scrapy
 
# Spider Class Created
 
 
class GfgSpiitemsreadSpider(scrapy.Spider):
    # Name of the spider
    name = 'gfg_spiitemsread'
    # The domain to be scraped
    allowed_domains = ['quotes.toscrape.com/tag/reading/']
    # The URLs from the domain to scrape
    start_urls = ['https://quotes.toscrape.com/tag/reading/']
 
    # Spider default callback function
    def parse(self, response):
        pass


We will scrape the Quote Title, Author and Tags from the webpage https://quotes.toscrape.com/tag/reading/. Scrapy provides us with Selectors to "select" the desired parts of the webpage. Selectors are CSS or XPath expressions written to extract data from HTML documents. In this tutorial, we will make use of XPath expressions to select the details we need. Let us understand the steps for writing the selector syntax in the spider code.

  • The default callback method present in the spider class, responsible for processing the response received, is the parse() method. We will write the selectors with XPath expressions, responsible for data extraction, here.
  • To select an element to be extracted on the webpage, right-click it and choose the Inspect option. This allows us to view its CSS attributes.
  • When we right-click on the first quote and choose Inspect, we can see that it has the CSS 'class' attribute "quote". Similarly, all the quotes on the webpage have the CSS 'class' attribute "quote", as seen below:

Right-click the first quote and check its CSS "class" attribute

Based on this, the XPath expressions can be written as follows –

  • quotes = response.xpath('//*[@class="quote"]'). This syntax will fetch all elements having "quote" as the CSS 'class' attribute.
  • We will fetch the Quote Title, Author and Tags of all the quotes, so we will write XPath expressions to extract them in a loop. For the quote title, the CSS 'class' attribute is "text". Hence, the XPath expression for it would be quote.xpath('.//*[@class="text"]/text()').extract_first(). The text() call extracts the text of the quote title, and the extract_first() method returns the first matching value. The dot operator '.' at the start indicates extracting data from within a single quote element.
  • Similarly, the CSS attributes "class" and "itemprop" for the author element are both "author". We can use either of them in the XPath expression. The syntax would be quote.xpath('.//*[@itemprop="author"]/text()').extract(). This will extract the author name, wherever the CSS 'itemprop' attribute is 'author'.
  • The CSS attributes "class" and "itemprop" for the tags element are both "keywords". We can use either of them in the XPath expression. Since there can be many tags for any quote, looping through them would be complex, so we extract the "content" attribute from every quote instead. The XPath expression for this is quote.xpath('.//*[@itemprop="keywords"]/@content').extract(). This will extract all the tag values from the "content" attribute of each quote.
  • We use the 'yield' keyword to return the data. With 'yield', we can collect the data and transfer it to CSV, JSON and other file formats.
  • With the code written so far, the spider will crawl the webpage and extract the data. You can also try these expressions interactively before running the spider, as shown below.
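
One way to do that is the Scrapy shell, which downloads the page and exposes a ready-made response object. A rough interactive session could look like this:

scrapy shell "https://quotes.toscrape.com/tag/reading/"

Python3

# Inside the shell, 'response' already holds the downloaded page
quotes = response.xpath('//*[@class="quote"]')

# Try the per-quote expressions on the first match
quotes[0].xpath('.//*[@class="text"]/text()').extract_first()
quotes[0].xpath('.//*[@itemprop="author"]/text()').extract()
quotes[0].xpath('.//*[@itemprop="keywords"]/@content').extract()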

The complete spider code is as follows:

Python3
# Import the required library
import scrapy
 
# The Spider class
class GfgSpiitemsreadSpider(scrapy.Spider):
    # Name of the spider
    name = 'gfg_spiitemsread'
     
    # The domain allowed to scrape
    allowed_domains = ['quotes.toscrape.com']
     
    # The URL to be scraped
    start_urls = ['https://quotes.toscrape.com/tag/reading/']
     
    # Default callback function
    def parse(self, response):
         
        # Fetch all quotes tags
        quotes = response.xpath('//*[@class="quote"]')
         
        # Loop through the Quote selector elements
        # to get details of each
        for quote in quotes:
             
            # XPath expression to fetch text of the Quote title
            title = quote.xpath('.//*[@class="text"]/text()').extract_first()
             
            # XPath expression to fetch author of the Quote
            authors = quote.xpath('.//*[@itemprop="author"]/text()').extract()
             
            # XPath expression to fetch Tags of the Quote
            tags = quote.xpath('.//*[@itemprop="keywords"]/@content').extract()
             
            # Yield all elements
            yield {"Quote Text ": title, "Authors ": authors, "Tags ": tags}


The crawl command is used to run the spider; mention the spider name in the crawl command. If we run the above code using the crawl command, the output at the terminal would be:

scrapy crawl gfg_spiitemsread

Output:

Quotes scraped as shown by the ‘yield’ statement

Here, the yield statement returns the data as Python dictionary objects.
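
Each yielded record is simply a dict keyed by the strings used in the yield statement; its rough shape (with the scraped values elided) is:

Python3

# extract_first() gives a single string, extract() gives lists
record = {"Quote Text ": "...", "Authors ": ["..."], "Tags ": ["..."]}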

Understanding Python Dictionary and Scrapy Item

The data yielded above are Python dictionary objects. The advantages of using them are –

  • They are convenient and easy-to-handle key-value structures when the data size is small.
  • Use them when no further processing or formatting of the scraped data is required.
  • Use a dictionary when the data you want to scrape is simple and complete.

To use Item objects, we will make changes in the following files –

  • The items.py file.
  • The generated spider file, gfg_spiitemsread.py.

Use Scrapy Items to Collect Data

Now, we will learn the process of writing a Scrapy Item for the quotes. To do so, we will follow the steps mentioned below –

  • Open the items.py file. It is present at the same level as the 'spiders' folder. Mention the fields we need to extract in the file, as shown below:

Python3
# Define here the models for your scraped
# items
# Import the required library
import scrapy
 
# Define the fields for Scrapy item here
# in class
class GfgSpiderreadingitemsItem(scrapy.Item):
     
    # Item key for Title of Quote
    quotetitle = scrapy.Field()
     
    # Item key for Author of Quote
    author = scrapy.Field()
     
    # Item key for Tags of Quote
    tags = scrapy.Field()


As seen in the file above, we have defined one Scrapy Item called 'GfgSpiderreadingitemsItem'. This class is our blueprint for all the elements we will scrape. It persists three fields, namely the quote title, author name, and tags. Only the fields we mention in the class can now be populated.
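
Because only declared fields are allowed, assigning an undeclared key fails loudly. A small illustration (the 'publisher' key is deliberately not declared in our Item, and the class is assumed to be imported from items.py):

Python3

item = GfgSpiderreadingitemsItem()

# Declared field - works
item['quotetitle'] = 'Some quote text'

# Undeclared field - raises a KeyError
# ("GfgSpiderreadingitemsItem does not support field: publisher")
item['publisher'] = 'Some publisher'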

The Field() class is an alias for the built-in dict class. It provides a way to define all field metadata in one location and does not provide any extra attributes.
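
Keyword arguments passed to Field() are simply stored as metadata on the field, which your own pipelines or the built-in exporters can read. A small, hypothetical sketch (the 'serializer' key is just one example of such metadata):

Python3

import scrapy

class QuoteItemWithMetadata(scrapy.Item):
    # The keyword argument is kept as field metadata, not as a value
    quotetitle = scrapy.Field(serializer=str)

# Field metadata is available through the class's 'fields' attribute
print(QuoteItemWithMetadata.fields['quotetitle'])  # {'serializer': <class 'str'>}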

Now modify the spider file to store the values in an object of the Item class, instead of yielding them directly. Please note that you need to import the Item class, as seen in the code below.

Python3
# Import the required library
import scrapy
 
# Import the Item class with fields
# mentioned in the items.py file
from ..items import GfgSpiderreadingitemsItem
 
 
class GfgSpiitemsreadSpider(scrapy.Spider):
    name = 'gfg_spiitemsread'
    allowed_domains = ['quotes.toscrape.com']
    start_urls = ['https://quotes.toscrape.com/tag/reading/']
 
    def parse(self, response):
       
        # Write XPath expression to loop through
        # all quotes
        quotes = response.xpath('//*[@class="quote"]')
         
        # Loop through all quotes
        for quote in quotes:
             
            # Create an object of Item class
            item = GfgSpiderreadingitemsItem()
             
            # XPath expression to fetch text of the
            # Quote title Store the title in the class
            # attribute in key-value pair
            item['quotetitle'] = quote.xpath(
                './/*[@class="text"]/text()').extract_first()
             
            # XPath expression to fetch author of the Quote
            # Store the author in the class attribute in
            # key-value pair
            item['author'] = quote.xpath(
                './/*[@itemprop="author"]/text()').extract()
             
            # XPath expression to fetch tags of the Quote title
            # Store the tags in the class attribute in key-value
            # pair
            item['tags'] = quote.xpath(
                './/*[@itemprop="keywords"]/@content').extract()
             
            # Yield the item object
            yield item


As seen above, the keys mentioned in the Item class can now be used to collect the data scraped by the XPath expressions. Make sure you mention the exact key names in both places; for example, use item['author'] when 'author' is the key defined in the items.py file.

The items yielded at the terminal are as shown below:

Data extracted from webpage using Scrapy Items
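
Since the spider now yields Item objects, Scrapy's feed exports can also write them straight to a file. For example, a command along these lines (the output file name is arbitrary) stores the items as JSON:

scrapy crawl gfg_spiitemsread -o reading_quotes.json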

Last Updated : 19 Sep, 2021