Link extraction is a very common task when dealing with HTML parsing, and for any general-purpose web crawler it is the most important function to perform. Of all the Python libraries available, lxml is one of the best to work with. As explained in this article, lxml provides a number of helper functions for extracting links.
lxml installation –
lxml is a Python binding for the C libraries libxml2 and libxslt. So, while keeping a Pythonic interface, it is a very fast HTML and XML parsing library. For it to work, the underlying C libraries also need to be installed; for installation instructions, refer to the official lxml documentation.
Command to install –
sudo apt-get install python-lxml or pip install lxml
What is lxml?
lxml is designed specifically for parsing HTML and therefore comes with an html module. An HTML string can be easily parsed with the help of the fromstring() function, which returns the root element of the document tree rather than a list of links; the links themselves are obtained by calling iterlinks() on that element.
The iterlinks() method yields one tuple of four items per link –
element : the parsed node the link was extracted from, e.g. the anchor tag. If only the link itself is of interest, this can be ignored.
attr : the attribute the link came from, which is usually simply 'href'.
link : the actual URL extracted from the anchor tag.
pos : the numeric index of the anchor tag within the document.
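Code #1 : Counting the extracted links – a minimal sketch, using a hypothetical one-link sample document:

```python
from lxml import html

# Hypothetical sample markup with a single anchor tag
html_string = '<html><body><a href="/world">News</a></body></html>'

# fromstring() parses the string and returns the root element of the tree
tree = html.fromstring(html_string)

# iterlinks() yields one (element, attribute, link, pos) tuple per link
links = list(tree.iterlinks())
print('Length of the link :', len(links))
```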
Output :
Length of the link : 1
Code #2 : Retrieving the attribute, link and position of each link tuple –
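A minimal sketch that unpacks each tuple yielded by iterlinks(), again using the hypothetical one-link document:

```python
from lxml import html

# Hypothetical sample markup with a single anchor tag
html_string = '<html><body><a href="/world">News</a></body></html>'
tree = html.fromstring(html_string)

# Unpack the (element, attribute, link, pos) tuple for each link
for element, attribute, link, pos in tree.iterlinks():
    print('attribute :', repr(attribute))
    print('link :', repr(link))
    print('position :', pos)
```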
Output :
attribute : 'href'
link : '/world'
position : 0
lxml builds an ElementTree as it parses the HTML. An ElementTree is a tree structure of parent and child nodes; each node represents an HTML tag and holds all of that tag's relevant attributes. Once the tree is created, it can be iterated to find elements, such as anchor or link tags. While the lxml.html module contains only HTML-specific functions for creating and iterating a tree, the lxml.etree module contains the core tree-handling code.
HTML parsing from files –
Instead of using the fromstring() function to parse an HTML string, the parse() function can be called with a filename or URL – like html.parse('/path/to/filename'). It produces the same result as loading the URL or file into a string and then calling fromstring().
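A minimal sketch, reusing the article's placeholder path (it must point to a real HTML file for the code to run):

```python
from lxml import html

# parse() accepts a filename, URL, or file-like object; the path below is
# the article's placeholder, not a real file
tree = html.parse('/path/to/filename')

# parse() returns an ElementTree; getroot() yields the root element,
# which supports the same iterlinks() call as fromstring()'s result
root = tree.getroot()
for element, attribute, link, pos in root.iterlinks():
    print(link)
```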
Code #3 : ElementTree working
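A minimal sketch of walking the tree; the inline markup is a hypothetical stand-in for the downloaded GeeksforGeeks homepage, so the serialized bytes may differ slightly from the output shown below.

```python
from lxml import html

# Hypothetical stand-in for the downloaded GeeksforGeeks homepage
html_string = '''<html>
<head>
<title>GeeksforGeeks | A computer science portal for geeks</title>
</head>
<body><a href="/world">News</a></body>
</html>'''

tree = html.fromstring(html_string)

# Locate the <title> node anywhere in the tree
title = tree.xpath('//title')[0]

print('Tag title :', title.tag)
print('Text title :', title.text)
# tostring(..., method='text') serializes just the text content as bytes
print('html title :', html.tostring(title, method='text'))
print('title tag:', title.tag)
# getparent() walks one level up to the enclosing node
print("Parent's tag title:", title.getparent().tag)
```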
Output :
Tag title : title
Text title : GeeksforGeeks | A computer science portal for geeks
html title : b' GeeksforGeeks | A computer science portal for geeks\r\n'
title tag: title
Parent's tag title: head
Using requests to scrape –
requests is a Python library used to scrape websites. It requests a URL from the web server using the get() method, with the URL as a parameter, and in return it gives a Response object. This object includes details about both the request and the response. To read the web content, the response.text attribute is used; this content is what the web server sends back for the request.
Code #4 : Requesting web server
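A minimal sketch; the GeeksforGeeks URL is assumed from the sample output below:

```python
import requests

# get() performs an HTTP GET and returns a Response object
response = requests.get('https://www.geeksforgeeks.org/')

# response.text holds the decoded body sent back by the web server
print('Response from web server :')
print(response.text)
```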
It will generate a huge amount of output, of which only a sample is added here.
Response from web server :
<!DOCTYPE html>
<!--[if IE 7]> <html class="ie ie7" lang="en-US" prefix="og: http://ogp.me/ns#"> <![endif]-->
<html lang="en-US" prefix="og: http://ogp.me/ns#" >
...
...
...