
Python – Chunking text using Enchant


Enchant is a Python module used to check the spelling of words and to suggest corrections for misspelled ones. It can also check whether a word exists in a given dictionary.

Enchant also provides the enchant.tokenize module to tokenize text, i.e. to split the individual words out of a body of text. At times, however, not all of the text should be tokenized. Suppose we have an HTML file: on tokenization, all the tags will be included as well. Since HTML tags usually do not contribute to the content of the article, we need to tokenize the text while excluding them.

Currently the only implemented chunker is HTMLChunker. A chunker for LaTeX documents is in the works.




# import the required modules
from enchant.tokenize import get_tokenizer
from enchant.tokenize import HTMLChunker

# the text to be tokenized
text = "<div> <h1> Geeks for Geeks </h1> <br> </div>"

# get a tokenizer for the en_US dictionary
tokenizer = get_tokenizer("en_US")

# print tokens without chunking
print("Printing tokens without chunking:")
token_list = [token for token in tokenizer(text)]
print(token_list)

# get a tokenizer that strips HTML markup before tokenizing
tokenizer_chunk = get_tokenizer("en_US", chunkers=(HTMLChunker,))

# print tokens after chunking
print("\nPrinting tokens after chunking:")
token_list_chunk = [token for token in tokenizer_chunk(text)]
print(token_list_chunk)


Output :

Printing tokens without chunking:
[('div', 1), ('h', 7), ('Geeks', 11), ('for', 17), ('Geeks', 21), ('h', 29), ('br', 34), ('div', 40)]

Printing tokens after chunking:
[('Geeks', 11), ('for', 17), ('Geeks', 21)]


Last Updated : 26 May, 2020